Franz’s CEO, Jans Aasman, to Present at SemTech ’12

OAKLAND, Calif. — May 15, 2012 — Franz Inc.’s CEO, Dr. Jans Aasman, will deliver three presentations and lead a panel discussion at the June Semantic Technology Conference in San Francisco. The Semantic Technology & Business Conference (SemTechBiz) brings together today’s industry thought leaders and practitioners to explore the challenges and opportunities jointly impacting both corporate business leaders and technologists. Attendees benefit from this unique opportunity to explore how semantic solutions and linked data are being embraced by companies across a diverse range of business categories.

MongoGraph – MongoDB Meets the Semantic Web

AllegroGraph is a fully ACID, highly scalable RDF triplestore that can be programmed with compiled, server-side JavaScript. This allows programmers to easily manipulate individual triples and create their own intelligent graph or reasoning algorithms. However, many programmers have expressed the wish to work at the level of objects instead of individual triples, where an object is defined as all the triples with the same subject. So we created a MongoDB interface through which programmers can add, delete, and modify JSON objects directly in MongoDB-like substores. This gives us the best of both worlds: the beautiful simplicity of the MongoDB interface for working with objects, and all the properties of an advanced triplestore, in this case joins through SPARQL queries, automatic indexing of all attributes and values, and ACID transactions, all packaged to deliver a simple entry into the world of the Semantic Web.
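The idea can be sketched with a small, purely illustrative Python example using the open source rdflib package: a flat JSON object becomes a set of triples that share one subject, and SPARQL then joins across objects. This is a sketch of the concept, not the MongoGraph API itself; the example.org namespace and property names are invented.

```python
# Illustrative only -- not the MongoGraph API. A flat JSON object is
# stored as triples sharing a single subject; SPARQL then joins across
# the resulting "objects". Requires the rdflib package.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")  # hypothetical vocabulary
g = Graph()

def add_json_object(graph, obj_id, obj):
    """Store a flat JSON object as triples with a common subject."""
    subject = EX[obj_id]
    for key, value in obj.items():
        graph.add((subject, EX[key], Literal(value)))

add_json_object(g, "person1", {"name": "Alice", "city": "Oakland"})
add_json_object(g, "person2", {"name": "Bob", "city": "Oakland"})

# A join that a plain document store would not give us:
# who shares a city with Alice?
query = """
PREFIX ex: <http://example.org/>
SELECT ?name WHERE {
  ?alice ex:name "Alice" ; ex:city ?city .
  ?other ex:city ?city ; ex:name ?name .
  FILTER (?other != ?alice)
}
"""
for row in g.query(query):
    print(row.name)  # prints: Bob
```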

A Semantic Platform for Tracking Entities in Real Time

Having engaged several Fortune 500 companies with projects to develop Semantic Technology solutions, we have identified several consistent requirements that have become the foundation for successful deployments of Semantic Technologies.

The overarching pattern that we see in these companies can best be described as real-time entity tracking in order to perform real-time business analytics. Typical entities are students, telephone customers, credit cards, or insurance policies.

We identified and built out four components as the basis of our Semantic Technology projects.

Component one is an ETL system that takes data from various input streams and transforms the data into events, encoded as RDF triples, that go into a publish/subscribe queue. To facilitate this we created a number of plug-ins for the open source ETL tool Talend that provide an R2RML mapping from data into triples.

The second component is a forward-chaining/backward-chaining rule system that takes events off the queue, combines them with the already existing knowledge about a particular entity, and generates new knowledge. For some applications we see more than 10,000 triples per entity, and the rules need to be able to deal with a new event in a fraction of a second.

The third component is a machine learning system that is trained to generate predictions based on the features of a particular entity (for example: what is a customer going to call about when calling the call center?). These predictions are again encoded as individual triples.

Finally, the fourth component is a reporting system that allows us to do real-time analysis over all existing entities.
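As a rough illustration of the first two components, the sketch below encodes an event as RDF triples, publishes it to an in-process queue standing in for the publish/subscribe queue, and applies a single forward-chaining rule that combines the event with existing knowledge about the entity. It is a minimal sketch using the open source rdflib package with an invented example.org vocabulary, not our production implementation.

```python
# A minimal sketch, not the production system: an event arrives as RDF
# triples on a queue, and a forward-chaining rule derives new knowledge
# about the entity. Requires the rdflib package; all names are invented.
from queue import Queue
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")   # hypothetical vocabulary
knowledge = Graph()                      # existing knowledge about entities
knowledge.add((EX.cust42, EX.plan, Literal("prepaid")))

events = Queue()                         # stands in for the pub/sub queue

# Component one: the ETL side emits an event encoded as triples.
events.put([(EX.cust42, EX.droppedCalls, Literal(5))])

# Component two: the rule system consumes the event and fires rules.
def apply_rules(graph, event_triples):
    for triple in event_triples:
        graph.add(triple)
    # Rule: a prepaid customer with several dropped calls is a churn risk.
    rule = """
    PREFIX ex: <http://example.org/>
    SELECT ?c WHERE {
      ?c ex:plan "prepaid" ; ex:droppedCalls ?n .
      FILTER (?n > 3)
    }
    """
    for row in graph.query(rule):
        graph.add((row.c, EX.churnRisk, Literal(True)))

apply_rules(knowledge, events.get())
print((EX.cust42, EX.churnRisk, Literal(True)) in knowledge)  # True
```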

Implementations of R2RML – Panel

W3C’s R2RML (Relational to RDF Mapping Language) is poised to become a recommendation sometime during the first half of 2012. This opens up a huge opportunity for Semantic Technology vendors and consultants to demonstrate the potential benefits for the large volumes of relational data typically owned by enterprise customers.

This panel session will provide a brief overview of the thinking that went into the design of R2RML and how to use the R2RML constructs for mapping. Then each of the participants will explain how they have adopted R2RML in their own products so the audience can compare the different approaches.
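For readers unfamiliar with the language, the fragment below shows what a minimal R2RML mapping looks like for a hypothetical CUSTOMERS table; the table, column, and example.org names are invented. The mapping itself is Turtle, embedded here in a short Python snippet that merely parses it with the open source rdflib package; an actual R2RML processor would execute it against the database.

```python
# A minimal R2RML mapping for a hypothetical CUSTOMERS table, showing
# the core constructs: rr:logicalTable, rr:subjectMap with a URI
# template, and one rr:predicateObjectMap per column. Parsing it with
# rdflib only checks that the Turtle is well formed.
from rdflib import Graph

mapping = """
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.org/> .

<#CustomerMap>
    rr:logicalTable [ rr:tableName "CUSTOMERS" ] ;
    rr:subjectMap [
        rr:template "http://example.org/customer/{ID}" ;
        rr:class ex:Customer
    ] ;
    rr:predicateObjectMap [
        rr:predicate ex:name ;
        rr:objectMap [ rr:column "NAME" ]
    ] .
"""

g = Graph()
g.parse(data=mapping, format="turtle", publicID="http://example.org/map")
print(len(g), "triples in the mapping")
```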

Big Data, Fast Data, and Complex Data – Covering it all with RDF

This presentation will discuss the capabilities and requirements for using RDF in Big Data solutions and in applications that require a Fast Data component, along with best practices for highly complex data.