AllegroGraph Named “2023 – Trend Setting Product” by Database Trends and Applications

Franz Inc. is proud to announce that it has been named a “2023 – Trend Setting Product” by Database Trends and Applications.

According to Database Trends and Applications, today’s data environments are highly diverse—residing on many platforms and requiring a variety of approaches to ensure data resiliency and availability. Delivering technology alone will not be enough in 2023. According to Gartner, investing in sustainable technology will give companies a leading edge as we move into the new year.

Sustainable technology is a framework of solutions that increases the energy and material efficiency of IT services; enables enterprise sustainability through technologies like traceability, analytics, emissions management software, and AI; and helps customers achieve their own sustainability objectives.

Investments in sustainable technology also have the potential to create greater operational resiliency and financial performance, while providing new avenues for growth.

“Today’s data environments are highly diverse—residing on many platforms and requiring a variety of approaches to ensure data resiliency and availability,” said Tom Hogan, Group Publisher, Database Trends and Applications. “To help make the process of identifying useful products and services easier, each year, DBTA presents a list of ‘Trend-Setting Products.’ These products, platforms, and services range from long-established offerings that are evolving to meet the needs of their loyal constituents, to breakthrough technologies that may only be in the early stages of adoption.”




The Hype Around Semantic Layers: How Important Are Standards?

There are several reasons why the notion of semantic layers has reached the forefront of today’s data management conversations. The analyst community is championing the data fabric tenet. The data mesh and data lakehouse architectures are gaining traction. Data lakes are widely deployed. Even architecture-agnostic business intelligence tooling seeks to harmonize data across sources.

Each of these frameworks requires a semantic layer to ascribe business meaning to data – via metadata – so end users can understand data for their purposes and streamline data integration. This layer sits between users and sources, so the former can comprehend data without knowing the underlying data formats.

Additionally, a semantic layer must incorporate a digital asset knowledge graph for a unified description of data assets in all sources – like those feeding data lakes and data lakehouses. This catalog is especially important for identifying what data is in unstructured data sources, relational databases, streaming data, document stores, and other sources for data fabric or data mesh deployments.

Some “semantic layers” use non-standard, proprietary technologies to store metadata. This approach prevents the use of industry-wide ontologies like FIBO (financial services), SNOMED (medical), SCONTO (supply chain), OBML (life sciences), CDM-Core (manufacturing), GoodRelations (e-commerce), or SWIM (aviation). It also complicates future data integration and reinforces vendor lock-in.

Conversely, semantic layers implemented with W3C’s Semantic Technologies are based on open standards that complement an organization’s existing IT infrastructure. They future-proof the enterprise, prevent vendor lock-in, and provide a uniform view of all data (regardless of differences in formatting, types, and structure) that’s optimal for data integration, data governance, and monetization opportunities.

Read the Full Article at Dataversity.




AllegroGraph Named “2022 Best Knowledge Graph” by KMWorld Readers’ Choice

Franz Inc. is proud to announce that it has been named the “Best Knowledge Graph” in the 2022 KMWorld Readers’ Choice Award voting.

According to KMWorld, global enterprises are making substantial investments in developing innovative approaches and strategies for competing successfully in a knowledge-based market. Such innovative practices, resulting in the development of knowledge-intensive products and services, are prevalent among enterprises in North America and Europe.

In its November 2022 issue, KMWorld magazine announced the winners of the 2022 KMWorld Readers’ Choice Awards. The categories for competition were wide-ranging: in all, there were 13 areas in which products and technologies could be nominated and ultimately voted upon, including business process management, cognitive computing and AI, customer service and support, e-discovery, knowledge graphs, text analytics, and NLP.

With the diverse array of knowledge management products, services, and technologies to consider, and the stakes getting higher for information-driven success, it can be challenging to make the right choices. There are many ways to learn more about what is available, including white papers, research reports, and webinars, as well as consulting with experts and peers. We hope the KMWorld Readers’ Choice Awards list provides an additional resource to help make the job of identifying solutions to investigate easier.

 




Using Ansible for AllegroGraph multi-server installation

Introduction

Visit our GitHub example page for more details on creating Ansible playbooks for installing, starting, and stopping an AllegroGraph server on one or more machines.

You must edit three files to personalize the configuration; then you can use the Makefile to install, start, and stop the AllegroGraph servers on one or more machines.
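In outline, the workflow described in the rest of this section is (comments added here only for orientation):

(edit inventory.txt, vars.yaml, and agraph.cfg-template as described below)

% make            # check that the machines in inventory.txt are reachable by Ansible
% make install    # install the AllegroGraph server on each machine
% make start      # start the servers
% make stop       # stop the servers (do this before re-running make install)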

Configuration

AllegroGraph has a vast number of server configuration parameters, far more than you would want to pass as arguments to a configuration function.

In this directory there is a file agraph.cfg-template that you should edit to add or modify the configuration options you wish to set (see the server settings documentation for the full list).

The only options you should not specify in agraph.cfg-template are

 Port
 SSLPort

as these will be added to the final agraph.cfg file based on values you put in vars.yaml.

basedir

One important variable in vars.yaml is basedir. The server will be installed in a newly created directory that is the value of basedir. Also a

BaseDir

directive will be put in the generated agraph.cfg specifying this value. This means that inside agraph.cfg you can (and should) use relative pathnames to refer to directories and files inside this directory tree.
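For illustration only: if basedir in vars.yaml were /home/agraph (a made-up value), then a relative directive such as

LogDir log

in agraph.cfg would refer to /home/agraph/log.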

settings

This line is always present in the generated agraph.cfg:

SettingsDirectory settings

It places the settings directory as a subdirectory of the basedir. Do not change this line, as the settings directory has to be here in order for the superuser password to be installed correctly.
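As a rough sketch only (the template shipped in the repository may differ in its exact contents), an edited agraph.cfg-template could look like this; note that Port and SSLPort are deliberately absent and that all pathnames are relative:

# agraph.cfg-template (illustrative sketch; follow the shipped template)
SettingsDirectory settings
LogDir log
# Port, SSLPort, and BaseDir are filled in from vars.yaml at install time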

Installation

Before starting the installation edit the following files:

inventory.txt

Insert the names of the machines on which you want the AllegroGraph server to be installed, replacing the sample machine names already in the file. After you edit inventory.txt you can type

% make

to check that the machines you specified are reachable by Ansible.
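As a hypothetical sketch (the sample file in the repository shows the exact format expected, including any group headers), inventory.txt simply lists the target machines:

aghost1.example.com
aghost2.example.com
aghost3.example.com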

vars.yaml

This file contains descriptions of the variables to be set, along with sample values that you will need to change.
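As an illustrative sketch only (use the variable names and layout of the vars.yaml shipped in the repository; the names port and sslport below are assumptions), the kinds of values involved are:

# vars.yaml -- illustrative values only
basedir: /home/agraph/server    # server is installed here; becomes the BaseDir directive
port: 10035                     # becomes the Port directive in agraph.cfg
sslport: 10036                  # becomes the SSLPort directive in agraph.cfg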

agraph.cfg-template

This is the template from which the agraph.cfg installed with AllegroGraph is generated. Review the server settings documentation to see which additional configuration parameters you wish to specify.

make install

The command

% make install

will run through the installation steps to install the server. It performs a superseding install, meaning it overwrites the server executables but does not remove any repositories. Even so, it is best to back up your installation before installing, in case something unexpected happens and repositories are lost.
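For example, one simple way to make such a backup (assuming the servers are stopped and basedir is /home/agraph/server, a made-up path) is to archive the installation directory:

% make stop
% tar czf agraph-backup-$(date +%Y%m%d).tar.gz /home/agraph/server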

make clean-install

If you wish to completely remove an installed AllegroGraph server so that make install gives you a totally fresh directory, then

% make clean-install

will delete everything, including any repositories found in subdirectories of the installation.

make start

To start the server on all machines, run

% make start

make stop

To stop the server on all machines, run

% make stop

If you are going to run make install again, be sure to stop all servers before doing so.




IEEE – Entity Event Knowledge Graph for Powerful Health Informatics

As part of Franz’s participation in the IEEE ICHI conference, our paper has been published and is available from the IEEE website.

ICHI 2022 is a premier community forum concerned with the application of computer science, information science, data science, and informatics principles, as well as information technology and communication science and technology, to address problems and support research in healthcare, medicine, life science, public health, and everyday wellness.

Franz Inc. presented “Entity Event Knowledge Graph for Powerful Health Informatics” on June 14th.

Download Franz’s IEEE Publication – Entity Event Knowledge Graph for Powerful Health Informatics.

Conference Website



Franz Inc. Named a Big Data 50 Innovator

Franz Inc. has been named to the “Big Data 50: Companies Driving Innovation in 2022” by Database Trends and Applications.

AllegroGraph provides organizations with essential Knowledge Graph solutions, including Graph Neural Networks, Graph Virtualization, GraphQL, Apache Spark graph analytics, and Kafka streaming graph pipelines. These capabilities exemplify AllegroGraph’s leadership in empowering data analytics professionals to derive business value out of Knowledge Graphs.

“Data has only become more important as organizations look ahead to what a post-pandemic world could look like,” said Tom Hogan, Group Publisher, Big Data Quarterly. “To support organizations in navigating through new challenges and a rapidly evolving big data ecosystem, Big Data Quarterly presents the 2022 ‘Big Data 50,’ a list of companies driving innovation and expanding what is possible in terms of collecting, storing, and extracting value from data.”

Some of the new approaches being embraced to help drive greater benefit from data are DevOps and DataOps, data quality and governance initiatives, hybrid and multi-cloud architectures, IoT and edge computing, and a range of next-gen databases.

According to ResearchAndMarkets.com, big data in business intelligence apps will reach $54.9B by 2027, data integration and quality tools are projected to reach $10.2B globally by 2027, and enterprise performance analytics will reach $31.4B globally by 2027.

Industry verticals of various types have challenges in capturing, organizing, storing, searching, sharing, transferring, analyzing, and using data to improve business. Big data is making a big impact in certain industries such as the healthcare, industrial, and retail sectors.

Another report by Quest Software titled, “The 2022 State of Data Governance and Empowerment Report,” found that data quality has overtaken data security as the top driver of data governance initiatives, with 41% of those surveyed agreeing that their business decision-making relies fundamentally on trustworthy, quality data.

At the same time, however, 45% of IT leaders say that data quality is the biggest detractor from ROI in data governance efforts. While they recognize its importance, they are struggling to improve the quality of their data, and thus their ability to leverage data strategically in practice.

While the challenges of data visibility and observability differ across industries, DataOps was overwhelmingly recognized as the primary solution to drive forward data empowerment. Nine in 10 people surveyed agreed that strengthening DataOps capabilities improves data quality, visibility, and access issues across their businesses. The biggest opportunities to improve DataOps accuracy and efficiency lie in investing in automated technologies and deployment of time-saving tools, such as metadata management.





Dr. Jans Aasman, Named Keynote Speaker for SEMANTiCS Conference 2022

Dr. Jans Aasman, CEO of Franz Inc., will deliver a keynote presentation at the 2022 SEMANTiCS conference in Vienna, Austria.

Dr. Aasman’s presentation, “The Role of Graphs in AI and Quantum Computing,” will describe three emerging technology trends that will impact the graph community, along with thought leadership opportunities around these technologies. His talk will cover the role of Knowledge Graphs in Natural Language Understanding, Graph Neural Networks (GNNs) for predictive AI applications, and the convergence of graph technologies and Quantum Computing.

Jans Aasman is a Ph.D. psychologist and expert in Cognitive Science, as well as CEO of Franz Inc. As both a scientist and CEO, Dr. Aasman continues to break ground in the areas of Artificial Intelligence and Knowledge Graphs as he works hand-in-hand with numerous Fortune 500 organizations as well as government entities worldwide.

The SEMANTiCS conference is an annual gathering of technology professionals, industry experts, researchers, and decision makers who share and learn about new technologies, innovations, and enterprise implementations in the fields of Linked Data and Semantic AI. Since 2005, the conference series has focused on semantic and graph technologies, which today, together with other methodologies such as NLP and machine learning, form the core of intelligent systems.




AI50 – Companies Empowering Intelligent Knowledge Management

Franz Inc. has been named to the AI 50: The Companies Empowering Intelligent Knowledge Management.

Read our View from the Top.

AI continues to rise in importance as forward-thinking organizations strive to elevate services, enhance information access, reduce costs, respond faster to opportunities and threats, and create better products. AI and a host of related technologies such as augmented intelligence, machine learning, deep learning, process automation, and natural language processing are being deployed in areas as diverse as supply chain management, manufacturing, healthcare, medical research, and financial services.

Although definitions of AI and what it can provide organizations vary, a widely cited description developed by Gartner is that AI applies “advanced analysis and logic-based techniques, including machine learning (ML) to interpret events, support and automate decisions and to take actions.” While leaving room for differences of opinion, Gartner points out that in order to capture the opportunity of AI, it is important for an organization to articulate and agree upon a generally accepted definition focused on what it wants AI to accomplish.

AllegroGraph provides organizations with essential Knowledge Graph solutions, including Graph Neural Networks, Graph Virtualization, GraphQL, Apache Spark graph analytics, and Kafka streaming graph pipelines. These capabilities exemplify AllegroGraph’s leadership in empowering data analytics professionals to derive business value out of Knowledge Graphs.

“AI and a host of related technologies such as augmented intelligence, machine learning, deep learning, process automation, and natural language processing are being deployed in areas as diverse as supply chain management, manufacturing, healthcare, medical research, and financial services,” said Tom Hogan, Group Publisher, KMWorld. “With organizations recognizing the great potential of AI, it is not surprising that the market size is also expected to increase dramatically. As part of our efforts to focus attention on the innovative knowledge management vendors that are imbuing their offerings with AI and automation, in the July issue, KMWorld presents the KMWorld AI 50: The Companies Empowering Intelligent Knowledge Management.”

Read the Full Press Release.



Innovative knowledge-sharing tools elevate the modern workplace – Article

The most meaningful developments in the knowledge-sharing space—and, by extension, in knowledge management as a whole—do not pertain to specific tools, platforms, or technologies.

Instead, they pertain to the goals of knowledge sharing, which have been irrevocably reshaped by forces in the modern workplace, from distributed paradigms for remote work to the increasingly low-latency responses characteristic of the digital age in which we live.

As such, today’s knowledge-sharing tools are designed for collaboration, engagement, interactivity, and crowdsourcing. The tools themselves have changed little over the past couple of years and still involve facets of data catalogs, taxonomies, search, text analytics, data discovery, and data governance.

What’s evolved, however, is their features, which have been updated for the sort of real-time interactions that make knowledge more accessible, reliable, and utilitarian than ever before.

The point of cataloging enterprise knowledge is to provide a central place to steer users to information relevant to their particular needs. “If you have a metadata graph that links the domain objects in your enterprise to the data catalog, then you can start doing recommendations,” said Jans Aasman, CEO of Franz. “Like, here’s all the databases that are used the most for when people want to do something like this.”

Read the Full Article at KMWorld.




Semantics for Data Lakehouses

By Dr. Jans Aasman, CEO, Franz Inc.

Without a semantic layer, data lakes become data swamps. With semantics, users access a host of benefits from the data lake architecture.

Data lakehouses would not exist — especially not at enterprise scale — without semantic consistency. The provisioning of a universal semantic layer is not only one of the key attributes of this emergent data architecture, but also one of its cardinal enablers.

In fact, the critical distinction between a data lake and a data lakehouse is that the latter supplies a vital semantic understanding of data so users can view and comprehend these enterprise assets. It paves the way for data governance, metadata management, role-based access, and data quality.

Without this semantic layer, data lakes are just proverbial data swamps.

With semantics, however, users access a host of benefits from the data lake architecture. Users can help themselves to scalable cloud storage and processing platforms, store all data for both transactional and analytics/BI use cases, and comprehensively query data to support modern machine learning and Artificial Intelligence applications.

Consequently, some of the most respected vendors in the data sphere — including Google and Amazon Web Services — are embracing this concept and delivering consumable options to their respective user bases.

The linked data approach of knowledge graphs is predicated on technologies that provide granular semantic understanding of data. These technologies excel at delivering a uniform semantic layer that makes the data lakehouse a reality — and one of the best choices for managing data in the AI age.

Read the full article at DZone.