Today, conventional data warehouse and BI analytics efforts are being complemented with AI and Big Data analytics to gain competitive standing in virtually every industry vertical. AI and Big Data are now firmly positioned as practical and invaluable techniques for leading enterprises to decisions that are truly data-driven. Beyond typical AI and ML approaches to automating data-driven decision making in the enterprise, the focus is now on building applications powered by 'knowledge'.

To remain competitive, businesses are building their 'Enterprise Knowledge Graphs'. Existing data stores in the enterprise, such as relational databases and NoSQL data stores, are typically the first sources of knowledge in this build-out process, alongside custom projects. But most organizations are overlooking 'text' documents as a source of knowledge. Text is inherently difficult to process, and most organizations stop at building document search stores, document categorization applications, or nominal information extraction applications, which still leaves out the most valuable content within the text: knowledge. Text is a valuable, ubiquitous source of knowledge in the enterprise.

Our advanced language processing platform, TextDistil, consists of NLP pipelines that process unstructured data (raw text) to extract and assign structure, using neural language models to assign meaning to parts of sentences and paragraphs. TextDistil employs novel methods from deep learning and linguistic theory to extract candidate word tokens and phrases, generate high-probability predicate matches to a target ontology, create 'knowledge triples' in RDF, and load them into a target Semantic Knowledge Graph.
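
To illustrate the final step of that pipeline, the short Python sketch below (using rdflib) shows how an extracted subject/predicate/object candidate, once matched to a target ontology predicate, becomes an RDF knowledge triple ready for loading. This is only an illustrative sketch, not TextDistil code; the namespaces, entity URIs, and the 'acquired' predicate are hypothetical placeholders.

from rdflib import Graph, Namespace

# Hypothetical target ontology and entity namespaces (placeholders)
ONT = Namespace("http://example.org/ontology/")
ENT = Namespace("http://example.org/entity/")

g = Graph()
g.bind("ont", ONT)

# Suppose the NLP pipeline extracted this candidate from the sentence
# "Acme Corp acquired Widget Inc." and matched 'acquired' to the ontology.
subject   = ENT.AcmeCorp
predicate = ONT.acquired          # high-probability predicate match
obj       = ENT.WidgetInc

# Create the knowledge triple and serialize it as RDF (Turtle)
g.add((subject, predicate, obj))
print(g.serialize(format="turtle"))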

TextDistil is a distributed, parallel software system that routinely scales to high volumes (tens of thousands) of documents in typical production configurations, and scales to higher volumes with the compute cluster as the only constraint. TextDistil works with all W3C-compliant RDF triple stores as target knowledge graph stores.
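
As a rough sketch of what working with a W3C-compliant triple store can look like in practice, the snippet below posts Turtle data to a named graph over HTTP using the standard SPARQL 1.1 Graph Store Protocol. The endpoint URL and graph IRI are placeholders, not a TextDistil or vendor-specific configuration.

import requests

TURTLE = """
@prefix ont: <http://example.org/ontology/> .
<http://example.org/entity/AcmeCorp> ont:acquired <http://example.org/entity/WidgetInc> .
"""

# POST the triples into a named graph via the SPARQL 1.1 Graph Store Protocol.
# The endpoint and graph IRI below are placeholders.
resp = requests.post(
    "http://localhost:8080/rdf-graph-store",
    params={"graph": "http://example.org/graphs/enterprise-knowledge"},
    data=TURTLE.encode("utf-8"),
    headers={"Content-Type": "text/turtle"},
)
resp.raise_for_status()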

Knowledge Graphs promote a new breed of enterprise applications, such as chatbots, which in turn enable business decisions that are transparent, automatic, and supported by curated data whose lineage and context are well understood. We believe Knowledge Graphs are lasting and valuable outcomes of BI and AI pipelines in an enterprise.


Collaboration with Franz

OAKLAND, Calif. — January 31, 2017 — Franz Inc., an early innovator in Artificial Intelligence (AI) and leading supplier of Semantic Graph Database technology, and Lead Semantics, a Big Data Analytics start-up delivering cloud-based Advanced Analytics and Data Science, today announced their partnership to deliver Smart-Data Integrated Data Science.

"The integration of Lead Semantics' Hiddime and AllegroGraph delivers new types of analytic outcomes and insights to provide 'Smart Data' for the Enterprise", said Dr. Jans Aasman, CEO, Franz Inc. "AllegroGraph will bring knowledge integration to the Hiddime platform for one of a kind data science capabilities that will deliver unique value for each user."




AllegroGraph’s Technology Featured by Gartner for Driving Business Value in Data



Emerging Startups





Lead Semantics, a new-generation AI company integrating knowledge bases and machine learning, develops products and services targeting the area of 'Semantics Integrated Data Science' for both Enterprise and Cloud environments. Hiddime.com is a first-of-its-kind Semantic Cloud-BI tool that enables advanced analytics on the cloud. An 'Interactive Discovery and Exploratory Analytics' (IDEA) tool in the cloud, hiddime.com enables end business users with little IT knowledge to deliver routine to sophisticated BI and advanced analytics with just point-and-click interactions in the browser.

Their data science teams deliver NLP, Graph, Machine Learning, and Semantic Technology projects that also include the integration of complex Big Data engineering pipelines feeding into BI data warehouses and Smart Data Lakes. Their pedigree and experience uniquely position them to take advantage of the recent surge in interest in Smart Data and to deliver the cutting-edge data science that enterprises are striving for globally.