JSON-LD: A Method of Encoding Linked Data That Adds Meaning to JSON Objects

Hosting Advice – February 2019

Franz CEO Dr. Jans Aasman Explains JSON-LD: A Method of Encoding Linked Data That Adds Meaning to JSON Objects.

JSON-LD, a method of presenting structured Schema.org data to search engines and other parties, helps organize and connect information online. As Dr. Jans Aasman, CEO of Franz Inc., told us, the data-interchange format has far-reaching implications, from standardizing the ecommerce and healthcare industries to building knowledge graphs. With technologies like AllegroGraph helping to convert complex data into insights, JSON-LD is being put to use in a number of ways.
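For readers curious what such markup looks like in practice, here is a minimal sketch in Python. The organization details are hypothetical placeholders; the Schema.org vocabulary itself is real.

import json

# A hypothetical Schema.org object expressed as JSON-LD. The @context
# maps the plain keys below to Schema.org terms; @type declares what
# kind of thing is being described.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Health Co",
    "url": "https://example.com",
    "sameAs": ["https://twitter.com/examplehealth"],
}

# Serialized, this is the JSON-LD a search engine would read from a
# <script type="application/ld+json"> tag embedded in a web page.
print(json.dumps(org, indent=2))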

Read the full article at Hosting Advice.

Semantic Web and Semantic Technology Trends in 2019

Dataversity – January 2019

What to expect of Semantic Web and other Semantic Technologies in 2019? Quite a bit. DATAVERSITY engaged with leaders in the space to get their thoughts on how Semantic Technologies will have an impact on multiple areas.

Dr. Jans Aasman, CEO of Franz Inc., was quoted several times in the article:

Among next year’s semantic-driven AI ventures will be those in the healthcare space, says Dr. Jans Aasman, CEO of Semantic Web technology company Franz Inc.:

“In the last two years some of the technologies were starting to get used in production,” he says. “In 2019 we will see a ramp-up of the number of AI applications that will help save lives by providing early warning signs for impending diseases. Some diseases will be predicted years in advance by using genetic patient data to understand future biological issues, like the likelihood of cancerous mutations — and start preventive therapies before the disease takes hold.”


If that’s not enough, how about digital immortality via AI Knowledge Graphs, where an interactive voice system will bring public figures in contact with anyone in the real world? “We’ll see the first examples of Digital Immortality in 2019 in the form of AI Digital Personas for public figures,” says Aasman, whose company is a partner in the Noam Chomsky Knowledge Graph:

“The combination of Artificial Intelligence and Semantic Knowledge Graphs will be used to transform the works of scientists, technologists, politicians, and scholars like Noam Chomsky into an interactive response system that uses the person’s actual voice to answer questions,” he comments.

“AI Digital Personas will dynamically link information from various sources — such as books, research papers, notes and media interviews — and turn the disparate information into a knowledge system that people can interact with digitally.” These AI Digital Personas could also be used while the person is still alive to broaden the accessibility of their expertise.


On the future of graph visualization applications, Aasman notes:

“Most graph visualization applications show network diagrams in only two dimensions, but it is unnatural to manipulate graphs on a flat computer screen in 2D. Modern virtual reality will add at least two dimensions to graph visualization, which will create a more natural way to manipulate complex graphs by incorporating more depth and temporal unfolding to understand information within a time perspective.”


Read the full article at Dataversity.




What is the most interesting use of a graph database you have ever seen? PwC responds.

From a Quora post by Alan Morrison – Sr. Research Fellow at PricewaterhouseCoopers – November 2018

The most interesting use is the most powerful: standard RDF graphs for large-scale knowledge graph integration.

From my notes on a talk Parsa Mirhaji of Montefiore Health System gave in 2017: Montefiore uses Franz AllegroGraph, a distributed RDF graph database. He modeled a core patient-centric hospital knowledge need using a simple standard ontology with 1,000 or so concepts in total.

That model integrated data from lots of different kinds of heterogeneous sources so that doctors could query the knowledge graph from tablets or phones at a patient’s bedside and get contextualized, patient-specific answers to questions for diagnostic purposes.
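As a rough illustration of the pattern Mirhaji described (integrate heterogeneous facts as RDF triples against a small ontology, then answer patient-specific questions with a query), here is a minimal sketch. It uses the open-source rdflib library rather than AllegroGraph itself, and the ontology terms are hypothetical.

from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/hospital/")
g = Graph()

# Facts that could arrive from several heterogeneous source systems.
g.add((EX.patient42, RDF.type, EX.Patient))
g.add((EX.patient42, EX.hasDiagnosis, EX.Type2Diabetes))
g.add((EX.patient42, EX.takesMedication, EX.Metformin))

# A doctor-style, patient-specific question posed in SPARQL.
results = g.query("""
    PREFIX ex: <http://example.org/hospital/>
    SELECT ?dx WHERE { ex:patient42 ex:hasDiagnosis ?dx }
""")
for row in results:
    print(row.dx)  # -> http://example.org/hospital/Type2Diabetes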

Fast forward to 2018, and nine of the ten most value-creating companies in the world are using standard knowledge graphs in a comparable fashion, either as a base for multi-domain intelligent assistants a la Siri or Alibot or Alexa, or to integrate and contextualize business domains cross-enterprise, or both. The method is preparatory to what John Launchbury of DARPA described as the Third Wave of AI.

Read the full article over at Quora.




AI Requires More Than Machine Learning

From Forbes Technology Council – October 2018

This article discusses the facets of machine learning and AI:

Lauded primarily for its automation and decision support, machine learning is undoubtedly a vital component of artificial intelligence. However, a small but growing number of thought leaders throughout the industry are acknowledging that the breadth of AI’s upper cognitive capabilities involves more than just machine learning.

Machine learning is all about sophisticated pattern recognition. It’s virtually unsurpassable at determining relevant, predictive outputs from a series of data-driven inputs. Nevertheless, there is a plethora of everyday, practical business problems that cannot be solved with input/output reasoning alone. These problems also require the multistep, symbolic reasoning of rules-based systems.

Whereas machine learning is rooted in a statistical approach, symbolic reasoning is predicated on the symbolic representation of a problem, usually rooted in a knowledge base. Most rules-based systems involve multistep reasoning, including those powered by programming languages such as Prolog.
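To make the contrast concrete, here is a minimal sketch of multistep, rules-based symbolic reasoning, written as naive forward chaining in Python. The facts and the single rule are hypothetical; a production system would use a rule engine or a language such as Prolog.

# Hypothetical facts as (predicate, subject, object) triples.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparent_rule(facts):
    """Derive grandparent facts by chaining two parent facts."""
    parents = {f for f in facts if f[0] == "parent"}
    derived = set()
    for (_, a, b) in parents:
        for (_, c, d) in parents:
            if b == c:
                derived.add(("grandparent", a, d))
    return derived

# Forward chaining: apply the rule until no new facts appear.
while True:
    new = grandparent_rule(facts) - facts
    if not new:
        break
    facts |= new

print([f for f in facts if f[0] == "grandparent"])
# -> [('grandparent', 'alice', 'carol')]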

Read the full article over at Forbes.




Transmuting Machine Learning into Verifiable Knowledge

From AI Business – August 2018

This article covers machine learning and AI:

According to Franz CEO Jans Aasman, machine learning deployments not only maximize organizational investments by driving business value, but also optimize the most prominent aspects of the data systems supporting them.

“You start with the raw data…do analytics on it, get interesting results, then you put the results of the machine learning back in the database, and suddenly you have a far more powerful database,” Aasman said.

Dr. Aasman is further quoted:

For internal applications, organizations can use machine learning concepts (such as co-occurrence—how often defined concepts occur together) alongside other analytics to monitor employee behavior, efficiency, and success with customers or certain types of customers. Aasman mentioned a project management use case for a consultancy company in which these analytics were used to “compute for every person, or every combination of persons, whether or not the project was successful: meaning, done on time to the satisfaction of the customer.”

Organizations can use whichever metrics are relevant to their businesses to quantify success. This approach is useful for determining a numerical rating for employees “and you could put that rating back in the database,” Aasman said. “Now you can do a follow-up query where you say how much money did I make on the top 10 successful people; how much money did I lose on the top 10 people I don’t make a profit on.”
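A minimal sketch of the co-occurrence idea described above, assuming a toy set of project records: score each person by how often they appear on successful projects, and the resulting rating could then be written back to the database for follow-up queries. All names and records are hypothetical.

from collections import defaultdict

# Hypothetical project records: who worked together, and the outcome.
projects = [
    {"team": {"ana", "ben"}, "successful": True},
    {"team": {"ana", "cho"}, "successful": True},
    {"team": {"ben", "cho"}, "successful": False},
]

wins, totals = defaultdict(int), defaultdict(int)
for p in projects:
    for person in p["team"]:
        totals[person] += 1
        wins[person] += p["successful"]  # True counts as 1

# Success rate per person; in the scenario Aasman describes, this
# rating would be stored back in the database as a new property.
ratings = {person: wins[person] / totals[person] for person in totals}
print(ratings)  # ana: 1.0, ben: 0.5, cho: 0.5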


Read the full article over at AI Business.





Optimizing Fraud Management with AI Knowledge Graphs

From Global Banking and Finance Review – July 12, 2018

This article discusses Knowledge Graphs for Anti-Money Laundering (AML), Suspicious Activity Reports (SAR), counterfeiting and social engineering falsities, as well as synthetic, first-party, and card-not-present fraud.

By compiling fraud-related data into an AI knowledge graph, risk management personnel can also triage those alerts for the right action at the right time. They also get the added benefit of reusing this graph to decrease other risks for security, loans, or additional financial purposes.

Dr. Aasman goes on to note:

By incorporating AI, these threat maps yield a plethora of information for actually preventing fraud. Supervised learning methods can readily identify which events constitute fraud and which don’t; many of these involve classic machine learning. Unsupervised learning capabilities are influential in determining normal user behavior and then pinpointing anomalies contributing to fraud. Perhaps the most effective way AI underpins risk management knowledge graphs is in predicting the likelihood—and when—a specific fraud instance will take place. Once organizations have data for customers, events, and fraud types over a length of time (which could be as little as a month in the rapidly evolving financial crimes space), they can compute the co-occurrence between events and fraud types.
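As a rough sketch of that last step, computing co-occurrence between events and fraud types, here is a toy version in Python. The event records are hypothetical; a real deployment would pull them from the knowledge graph.

from collections import Counter

# Hypothetical account activity, labeled with the fraud type (if any).
records = [
    {"events": {"new_device", "large_transfer"}, "fraud": "account_takeover"},
    {"events": {"large_transfer"}, "fraud": None},
    {"events": {"new_device", "password_reset"}, "fraud": "account_takeover"},
]

# Count how often each event appears together with each fraud type.
cooccurrence = Counter()
for r in records:
    if r["fraud"]:
        for event in r["events"]:
            cooccurrence[(event, r["fraud"])] += 1

# (event, fraud_type) pairs ranked by how often they occur together.
for pair, count in cooccurrence.most_common():
    print(pair, count)  # e.g. ('new_device', 'account_takeover') 2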

Read the full article over at Global Banking and Finance Review.





Why dynamically visualizing relationships in data matters

Franz’s CEO, Jans Aasman, recently wrote the following article for InfoWorld.

The ability to visualize data, their relationships to one another and connections to business objectives gives organizations the power to uncover insights that would otherwise elude them.

Data visualizations have pervaded nearly every aspect of today’s data landscape. Initially conceived of as a means of best presenting analytics results, data visualizations have evolved to affect everything from drag-and-drop approaches to data preparation to visual mechanisms for issuing queries. The ability to visualize data, their relationships to one another and connections to business objectives is central to the notion of data exploration, in which users manipulate these graphical representations for greater understanding of data’s overall meaning. Data visualizations are vital for exploring knowledge graphs, which determine relationships between even seemingly unrelated datasets to indicate their relevance to specific tasks.
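To ground the idea, here is a minimal sketch of graphical data exploration using the open-source networkx and matplotlib libraries (not any particular product mentioned in the article); the nodes and edges are hypothetical.

import matplotlib.pyplot as plt
import networkx as nx

# A tiny hypothetical knowledge graph linking business entities.
G = nx.Graph()
G.add_edges_from([
    ("Customer", "Order"),
    ("Order", "Product"),
    ("Product", "Supplier"),
    ("Customer", "Support Ticket"),
])

# A force-directed layout places related nodes near each other,
# which is the basic idea behind interactive graph exploration.
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos=pos, with_labels=True, node_color="lightblue")
plt.show()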

Read the Full Article




Semantic Computing, Predictive Analytics Need Reliable Metadata

Our healthcare partners at Montefiore were interviewed at Health Analytics:

Reliable metadata is the key to leveraging semantic computing and predictive analytics for healthcare applications, such as population health management and crisis care.

As the healthcare industry reaches the saturation point of electronic health record adoption, and slowly moves past the pain of the implementation process, it may seem like the right time to stop thinking so much about hammering home basic data governance principles for staff members and start looking at the next phase of health IT implementation: the big data analytics environment.

After all, most providers are now sitting on an enormous nest egg of patient data, which may be just clean, complete, and standardized enough to start experimenting with population health management, operational analytics, or even a bit of predictive risk stratification. Many healthcare organizations are experimenting with these advanced analytics projects in an effort to prepare themselves for the financial storm that is approaching with the advent of value-based care.

The immense pressure to cut costs, meet quality benchmarks, shoulder financial risk, and improve patient outcomes is causing no small degree of anxiety for providers, who are racing to batten down the hatches before the typhoon overtakes them.

While it may be tempting to jump into quick-win analytics that use “good enough” datasets to solve a specific pressing use case, providers may be at risk of repeating the same mistakes they made with slapdash EHR implementations: creating data silos, orphaned reports, and poor-quality datasets that cannot be used in a reliable, repeatable way for meaningful quality improvements.


Read the full article at Health Analytics





Making Big Data More Meaningful through Data Visualization

We’ve all heard the saying, “a picture is worth a thousand words.” With today’s millisecond attention spans, communicating a complex topic to any audience – business professional, consumer, doctor, investor, policy-maker, voter – has become more challenging than ever. Some industries are now taking this seriously and investing in new data visualization techniques.

Data visualization is a fundamental part of scientific research. In a scientific journal, pictures certainly do seem to be worth a thousand words, with graphs translating large amounts of data into insightful, visual representations.

Read the full article at insideBIGDATA




Semantic Big Data Lakes Can Support Better Population Health

From HealthIT Analytics –

As healthcare providers navigate the treacherous transitional waters of Stage 2 of meaningful use and try to predict how future regulations will shape their actions, the need to lay the groundwork for advanced population health management and accountable care is only becoming clearer.

No matter what the outcome of debates about the future course of the EHR Incentive Programs, one thing remains abundantly clear for organizations of all shapes and sizes: advancements in healthcare big data analytics will not be driven solely by rules and mandates, but by the pressing financial need to collect, corral, understand, and leverage information in order to refine and expand population health management techniques.

Developing the underlying architecture for value-based reimbursement, namely a strong framework for population health management, data governance, and big data analytics, is becoming a top priority for a growing number of providers looking to get a head start on the new realities of healthcare reform.

These organizations, like Montefiore Medical Center, are looking for cutting edge analytics tools which won’t just help them meet the clinical and financial stresses of today’s environment, but will also prepare them for the uncertain paths ahead.

Read the Full Article