Previous similar efforts have fallen short of providing a complete evaluation of graph databases and of drawing a clear picture of how they compare to each other.
We introduce a micro-benchmarking framework for assessing the functionalities of existing systems and provide detailed insights into their performance.
We support a broader spectrum of test queries and conduct evaluations on both synthetic and real data, at scales much larger than previous studies.
We offer a systematic evaluation framework that we have materialized into an evaluation suite.
The framework is extensible, allowing easy inclusion of additional datasets, systems, or queries in the evaluation.
Graph databases are grounded in the concepts of graph theory: they abstract data in the form of nodes, edges, and properties.
Graph database models can be characterized as those where data structures for the schema and instances are modelled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors.
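The graph data model described above can be illustrated with a minimal sketch. Note that this is a toy illustration of the node/edge/property abstraction, not the API of any of the systems under test: the class and method names (`PropertyGraph`, `neighbors`, etc.) are our own.

```python
# A minimal property-graph sketch: data is abstracted as nodes, edges,
# and key-value properties on both; queries are graph-oriented operations.
# Illustrative only -- not the interface of any evaluated system.

class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # node id -> properties dict
        self.edges = []   # (src, label, dst, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst, **props):
        self.edges.append((src, label, dst, props))

    def neighbors(self, node_id, label=None):
        # A graph-oriented operation: follow outgoing edges,
        # optionally restricted to a given edge label.
        return [dst for (src, lbl, dst, _) in self.edges
                if src == node_id and (label is None or lbl == label)]

g = PropertyGraph()
g.add_node("a", name="Alice")
g.add_node("p", title="Graph DBs")
g.add_edge("a", "authored", "p", year=2018)
print(g.neighbors("a", "authored"))  # -> ['p']
```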
In designing the evaluation methodology we follow a principled micro-benchmarking approach.
Instead of considering queries with a complex structure, we opt for a set of primitive operators.
The primitive operators are derived by decomposing the complex queries found in existing benchmarks and application scenarios.
Their advantage is that they are often implemented by opaque components in the system; thus, by identifying the underperforming operators, one can pinpoint the exact components that underperform.
Furthermore, any complex query can typically be decomposed into a combination of primitive operations, so its performance can be explained by the performance of the components implementing them.
Query optimizers may reorder the basic operators or select among different implementations, but the performance of the primitive operators always remains a significant factor.
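The decomposition idea above can be sketched as follows. We break a complex query into three primitive operators and time each one in isolation; the operator names (`scan`, `traverse`, `filter_props`) and the in-memory graph are purely illustrative assumptions, not the suite's actual operators or data.

```python
# Sketch of the micro-benchmarking idea: a complex query is decomposed
# into primitive operators whose cost can be measured in isolation.
import time

graph = {
    "nodes": {1: {"label": "person", "age": 30},
              2: {"label": "person", "age": 40},
              3: {"label": "city"}},
    "adj": {1: [2], 2: [3], 3: []},
}

def scan(g, label):                        # primitive: node scan by label
    return [n for n, p in g["nodes"].items() if p.get("label") == label]

def traverse(g, nodes):                    # primitive: one-hop expansion
    return [m for n in nodes for m in g["adj"][n]]

def filter_props(g, nodes, key, value):    # primitive: property filter
    return [n for n in nodes if g["nodes"][n].get(key) == value]

def timed(op, *args):
    # Time a single primitive operator, as a micro-benchmark would.
    t0 = time.perf_counter()
    out = op(*args)
    return out, time.perf_counter() - t0

# "Persons aged 40 reachable in one hop from a person" decomposes into:
people, t1 = timed(scan, graph, "person")
hop, t2 = timed(traverse, graph, people)
result, t3 = timed(filter_props, graph, hop, "age", 40)
print(result)  # -> [2]
```

Per-operator timings (`t1`, `t2`, `t3`) expose which component dominates the cost of the overall query, which is exactly the insight a micro-benchmark provides and an end-to-end macro-benchmark hides.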
The results of this work have been published in:
«Beyond Macrobenchmarks: Microbenchmark-based Graph Database Evaluation.»
by Lissandrini, Matteo; Brugnara, Martin; and Velegrakis, Yannis.
In PVLDB, 12(4):390-403, 2018.
GDB Test-suite: Code for Testing
The suite currently contains 35 classes of operations, with both single queries and batch workloads, for a total of more than 70 different tests.
The GDB Test-suite uses Docker, and the code for running each system and the experiments is released open source as a
Git repository at this link
You may freely use this code for research purposes, provided that you properly acknowledge the authors.
GDB Test results sample
We provide a sample of the results of our suite on some popular systems.
See the results
We distribute here the datasets used in the tests.
You can download all the files, or only some of them; they are stored on Google Drive.
The datasets used in the tests are stored in GraphSON format for the engine versions that support TinkerPop 3.
Systems using TinkerPop 2 support GraphSON 1.0 instead.
Our datasets can easily be converted to a newer or older version.
For an example see our Docker image.
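As a rough illustration of what such a conversion involves: newer GraphSON wraps scalar values in `{"@type": ..., "@value": ...}` objects, while GraphSON 1.0 uses plain JSON values. The helper below strips those type wrappers recursively; it is a hedged sketch (`untype` is our own name), and real conversions should rely on the TinkerPop GraphSON readers/writers, as our Docker image does.

```python
# Illustrative sketch: flatten typed GraphSON values into plain JSON.
# Real conversions should use TinkerPop's own GraphSON readers/writers.
import json

def untype(value):
    if isinstance(value, dict):
        # A typed scalar looks like {"@type": "g:Int64", "@value": 1}.
        if set(value) == {"@type", "@value"}:
            return untype(value["@value"])
        return {k: untype(v) for k, v in value.items()}
    if isinstance(value, list):
        return [untype(v) for v in value]
    return value

line = '{"id": {"@type": "g:Int64", "@value": 1}, "label": "person"}'
print(untype(json.loads(line)))  # -> {'id': 1, 'label': 'person'}
```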
The MiCo Dataset comes from the authors of GraMi
For more details, you can read:
«GRAMI: Frequent Subgraph and Pattern Mining in a Single Large Graph»
by Mohammed Elseidy, Ehab Abdelhamid, Spiros Skiadopoulos, and Panos Kalnis.
In PVLDB, 7(7):517-528, 2014.
The Yeast Dataset has been converted from the Pajek-format version prepared by V. Batagelj.
The original dataset comes from
«Topological structure analysis of the protein-protein interaction network in budding yeast»
by Shiwei Sun, Lunjiang Ling, Nan Zhang, Guojie Li and Runsheng Chen.
In Nucleic Acids Research, 31(9):2443-2450, 2003.
Moreover, you can read
the details of our Freebase ExQ datasets, or use our Docker image to generate the LDBC synthetic dataset.
Details on file sizes

Graph Size (Nodes/Edges)
2.3K / 7.1K
0.1M / 1.1M
1.9M / 4.3M
0.5M / 0.3M
4M / 3.1M
28.4M / 31.2M
0.18M / 1.5M