AllegroGraph’s RDFS++ engine dynamically maintains the ontological entailments required for reasoning: it has no explicit materialization phase. Materialization is the pre-computation and storage of inferred triples so that future queries run more efficiently. The central problem with materialization is maintenance: changes to the triple store’s ontology or facts usually change the set of inferred triples. With static materialization, any change to the store requires complete re-processing before new queries can run. AllegroGraph’s dynamic materialization simplifies store maintenance and reduces the time between data changes and querying.
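
To make the distinction concrete, the sketch below shows static materialization in miniature. It is illustrative Python, not AllegroGraph code; the tiny ontology and the helper names (materialize, superclasses) are invented for the example. Inferred rdf:type triples are computed up front and stored, so any later change to the ontology or facts forces the whole inferred set to be recomputed.

    # Sketch of static materialization (illustrative, not AllegroGraph code):
    # inferred rdf:type triples are pre-computed and stored with asserted ones.

    ontology = {("Enzyme", "rdfs:subClassOf", "Protein"),
                ("Protein", "rdfs:subClassOf", "Molecule")}
    facts = {("hexokinase", "rdf:type", "Enzyme")}

    def superclasses(cls, subclass_of):
        """All direct and indirect superclasses of cls."""
        seen, stack = set(), [cls]
        while stack:
            for parent in subclass_of.get(stack.pop(), ()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    def materialize(ontology, facts):
        """Pre-compute every entailed rdf:type triple for storage."""
        subclass_of = {}
        for s, _, o in ontology:
            subclass_of.setdefault(s, set()).add(o)
        inferred = set()
        for s, p, o in facts:
            if p == "rdf:type":
                for sup in superclasses(o, subclass_of):
                    inferred.add((s, "rdf:type", sup))
        return inferred

    store = facts | materialize(ontology, facts)
    print(store)  # hexokinase is also stored as a Protein and a Molecule

    # The maintenance problem: an edit to the ontology can invalidate the
    # stored inferences, so the entire set must be recomputed before queries
    # can safely run again.
    ontology.add(("Protein", "rdfs:subClassOf", "Macromolecule"))
    store = facts | materialize(ontology, facts)   # full re-processing
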

Consider a real-world example: Life Science datasets can contain as many as 3.4 million subclass relationships, sometimes nested 10 levels deep, and reasoning over them via materialization is painfully slow, taking hours to complete. The process multiplies the number of triples, and any significant change to the ontology forces a full re-materialization. AllegroGraph does not materialize; instead it optimizes SPARQL and Prolog queries, indexing statistics-based predicates dynamically. This provides fast load and query performance.
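
By contrast, a dynamic approach resolves the entailments when a query is answered. The sketch below is again an illustrative Python toy, not AllegroGraph’s SPARQL/Prolog machinery or its statistics-based indexing; it only shows that walking the subclass hierarchy at query time lets an ontology edit take effect on the very next query, with nothing to re-materialize.

    # Sketch of dynamic (query-time) entailment; names are illustrative,
    # not AllegroGraph APIs.

    ontology = {("Enzyme", "rdfs:subClassOf", "Protein"),
                ("Protein", "rdfs:subClassOf", "Molecule")}
    facts = {("hexokinase", "rdf:type", "Enzyme"),
             ("collagen", "rdf:type", "Protein")}

    def subclasses(cls, ontology):
        """All direct and indirect subclasses of cls, found at query time."""
        children = {}
        for s, _, o in ontology:
            children.setdefault(o, set()).add(s)
        seen, stack = {cls}, [cls]
        while stack:
            for child in children.get(stack.pop(), ()):
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return seen

    def instances_of(cls, ontology, facts):
        """Answer 'which individuals have rdf:type cls?' under subclass
        entailment without storing any inferred triples."""
        classes = subclasses(cls, ontology)
        return {s for s, p, o in facts if p == "rdf:type" and o in classes}

    print(instances_of("Molecule", ontology, facts))   # both individuals

    # An ontology edit is visible to the very next query; there is no
    # re-materialization step between the change and the answer.
    ontology.add(("NucleicAcid", "rdfs:subClassOf", "Molecule"))
    facts.add(("tRNA", "rdf:type", "NucleicAcid"))
    print(instances_of("Molecule", ontology, facts))   # now includes tRNA
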

Industry-leading LUBM results without materialization

Other triple stores:

  • Load the data in bulk
  • Precompute all types and other inferences
  • Perform queries

AllegroGraph, the only dynamic, real-time triple store:

  • Loads triples in linear time
  • Queries and reasoning can be performed at any point during loading
  • AllegroGraph finishes loading the LUBM (8000) benchmark and completes all the queries while the other stores are still loading

AllegroGraph turns complex data into actionable business insights