Graph Benchmark

Comparing and Understanding Graph Databases

An Experimental Comparison of Graph Databases

We are witnessing increasing interest in graph data. The need for efficient and effective storage and querying of such data has led to the development of graph databases. Graph databases are a relatively new technology, and their requirements and specifications are not yet fully understood. As a result, high heterogeneity can be observed in the functionalities and performance of these systems. In this work we provide a comprehensive study of the existing systems in order to understand their capabilities and limitations.

Previous similar efforts have fallen short of providing a complete evaluation of graph databases and of drawing a clear picture of how they compare to each other. We introduce a micro-benchmarking framework for assessing the functionalities of the existing systems and provide detailed insights into their performance. We support a broader spectrum of test queries and conduct the evaluations on both synthetic and real data at scales much larger than those considered so far. We offer a systematic evaluation framework that we have materialized into an evaluation suite. The framework is extensible, allowing the easy inclusion of other datasets, systems, or queries in the evaluation.

Graph databases are grounded in the concepts of graph theory: they abstract data in the form of nodes, edges, and properties. Graph database models can be characterized as those where the data structures for the schema and instances are modelled as graphs, or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors.

Micro-benchmarking

In designing the evaluation methodology we follow a principled micro-benchmarking approach.

Instead of considering queries with a complex structure, we opt for a set of primitive operators. These operators are derived by decomposing the complex queries found in existing benchmarks and application scenarios. Their advantage is that they are often implemented by opaque components in the system; thus, by identifying the underperforming operators, one can pinpoint the exact components that underperform.

Furthermore, any complex query can typically be decomposed into a combination of primitive operations; thus, its performance can be explained by the performance of the components implementing them. Query optimizers may change the order of the basic operators, or select among different implementations, but primitive-operator performance is always a significant performance factor.
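To illustrate the idea, the following sketch shows how a complex query can be expressed as a composition of primitive operators. The operator names (`get_vertex`, `neighbors`, `filter_by_property`) and the toy graph layout are purely illustrative assumptions, not the API of any actual graph database or of our suite:

```python
# Illustrative sketch: a complex query decomposed into primitive operators.
# Operator names and the graph representation are hypothetical.

def get_vertex(graph, vid):
    """Primitive: vertex lookup by id."""
    return graph["vertices"][vid]

def neighbors(graph, vid):
    """Primitive: one-hop expansion along outgoing edges."""
    return [dst for (src, dst, _label) in graph["edges"] if src == vid]

def filter_by_property(graph, vids, key, value):
    """Primitive: property-based selection over a set of vertices."""
    return [v for v in vids if graph["vertices"][v].get(key) == value]

def friends_named(graph, vid, name):
    """Complex query = composition of primitives:
    'neighbours of vid whose name property equals `name`'."""
    return filter_by_property(graph, neighbors(graph, vid), "name", name)

graph = {
    "vertices": {0: {"name": "ann"}, 1: {"name": "bob"}, 2: {"name": "bob"}},
    "edges": [(0, 1, "knows"), (0, 2, "knows"), (1, 2, "knows")],
}
print(friends_named(graph, 0, "bob"))  # → [1, 2]
```

Timing each primitive in isolation, rather than only the composed query, is what lets the micro-benchmark attribute slowdowns to specific components.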

The results of this work have been published in:

«Beyond Macrobenchmarks: Microbenchmark-based Graph Database Evaluation.»
by Lissandrini, Matteo; Brugnara, Martin; and Velegrakis, Yannis.
In PVLDB, 12(4):390-403, 2018.

GDB Test-suite: Code for Testing

The suite currently contains 35 classes of operations, with both single queries and batch workloads, for a total of more than 70 different tests.

The GDB Test-suite uses Docker, and the code for running each system and the experiments is released open-source as a Git repository at this link

You may freely use this code for research purposes, provided that you properly acknowledge the authors.

Graph Data

We distribute here the datasets used in the tests.
You can download all the files, or a subset of them; they are stored on Google Drive.

The datasets used in the tests are stored in GraphSON format for the versions of the engines supporting TinkerPop 3. Systems based on TinkerPop 2 support GraphSON 1.0 instead. Our datasets can be easily converted to a newer or older version; for an example, see our Docker image.
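As a rough illustration of the kind of conversion involved, the sketch below collects a line-per-vertex layout (as used by newer TinkerPop I/O formats) into a single GraphSON 1.0-style document. The field names are simplified assumptions and omit most of the real GraphSON schemas; the actual conversion scripts live in our Docker image:

```python
import json

# Hypothetical sketch: wrap newline-delimited vertex records into a single
# GraphSON 1.0-style document. Field names are simplified illustrations,
# not the complete GraphSON schema.

def wrap_as_graphson1(lines):
    """Collect per-line JSON vertex records into one {'graph': ...} document."""
    vertices = [json.loads(line) for line in lines if line.strip()]
    return {"graph": {"mode": "NORMAL", "vertices": vertices}}

lines = ['{"_id": 1, "name": "a"}', '{"_id": 2, "name": "b"}']
doc = wrap_as_graphson1(lines)
print(len(doc["graph"]["vertices"]))  # → 2
```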

The MiCo dataset comes from the authors of GraMi.
For more details, you can read:

«GRAMI: Frequent Subgraph and Pattern Mining in a Single Large Graph»
by Mohammed Elseidy, Ehab Abdelhamid, Spiros Skiadopoulos, and Panos Kalnis.
In PVLDB, 7(7):517-528, 2014.

The Yeast dataset has been converted from the version transformed into Pajek format by V. Batagelj. The original dataset comes from:

«Topological structure analysis of the protein-protein interaction network in budding yeast»
by Shiwei Sun, Lunjiang Ling, Nan Zhang, Guojie Li and Runsheng Chen.
In Nucleic Acids Research, 31(9):2443-2450, 2003.

Moreover, you can read about the details of our Freebase ExQ datasets, or use our Docker image to generate the LDBC synthetic dataset.

Details on file sizes

Name    Files                    Size     Graph Size (Nodes/Edges)
Yeast   yeast.json               1.5M     2.3K / 7.1K
        yeast.json.gz            180K
MiCo    mico.json                84M      0.1M / 1.1M
        mico.json.gz             12M
Frb-O   freebase_org.json        584M     1.9M / 4.3M
        freebase_org.json.gz     81M
Frb-S   freebase_small.json      87M      0.5M / 0.3M
        freebase_small.json.gz   12M
Frb-M   freebase_medium.json     816M     4M / 3.1M
        freebase_medium.json.gz  117M
Frb-L   freebase_large.json      6.3G     28.4M / 31.2M
        freebase_large.json.gz   616M
LDBC    ldbc.json                144M     0.18M / 1.5M
        ldbc.json.gz             13M