Forge.AI: Technical Overview

Feb 21, 2018

Overview

The human brain is a remarkable instrument with highly evolved regions for understanding, reasoning, and decision making. When humans communicate, they typically speak or write to directly convey information, transmitting it to its intended recipients through email, text messages, phone calls, web pages, social media, and other channels. Effective communication means that the semantics of what is being conveyed are properly received and interpreted. Our brains have highly developed regions, such as the angular gyrus, Wernicke’s area, and Broca’s area, dedicated to reading and speech comprehension and to applying that information to reasoning and decision-making processes.

A large proportion of the world’s information is unstructured (news, social media, radio, television, podcasts, and so on) and designed for human consumption. Given the growing velocity and volume of unstructured data generation, there has been significant interest in developing technology that helps computers make sense of human-to-human communication. The capability of machines to interpret human communication in a form suitable for machine reasoning and decision making is maturing, but there is still significant work to be done. How can unstructured data be transformed so that its information can be efficiently incorporated by an analytical process or an intelligent machine?

Forge.AI’s raison d’être is to instantly capture and represent the principal events being conveyed in human language in a manner suitable for machine learning, decision making, and other algorithmic tasks.

Let’s explore some of the key technical challenges in capturing language in a manner suitable for computational processing (data usability, knowledge enrichment, and temporal meaning) and examine why topic classification, sentiment analysis, and other natural language processing (NLP) techniques have not yet solved them.

Data Usability

When developing a new algorithm or exploring a new technique, the representation of the information used by the algorithm is essential. Garbage in, garbage out. Numeric data or data over which a metric may be imposed is ideal. Examples of such data include:

  • Money ($5.00)
  • Time (10 seconds)
  • Weight (20 grams)
  • Power (60 watts)

Discrete data items, such as clicks, parking lots, and pencils, are best incorporated into machine learning processes by transforming these quantities into rates and frequencies over time and space (see the sketch after this list), such as:

  • Clicks per hour
  • Parking lots per square kilometer
  • Pencils per student
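
As a minimal sketch of this kind of normalization (the counts and denominators below are invented for illustration):

```python
# A minimal sketch of turning raw discrete counts into rates a model can
# compare on a common scale; all numbers here are invented for illustration.
from dataclasses import dataclass


@dataclass
class Observation:
    count: float        # raw discrete quantity (clicks, parking lots, pencils)
    denominator: float  # normalizing quantity (hours, square kilometers, students)

    def rate(self) -> float:
        """Express the count as a rate over its denominator."""
        return self.count / self.denominator


observations = {
    "clicks_per_hour": Observation(count=1_240, denominator=24),
    "parking_lots_per_sq_km": Observation(count=85, denominator=12.5),
    "pencils_per_student": Observation(count=300, denominator=28),
}

for name, obs in observations.items():
    print(f"{name}: {obs.rate():.2f}")
```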

The imposition of a metric supports the optimization techniques integral to most machine learning algorithms. Processing unstructured data requires a different approach, because language does not translate directly into a representation suitable for inclusion in machine learning, automated reasoning, or decision support applications.

Lack of Knowledge Enrichment

Techniques such as sentiment analysis, entity extraction, and document classification generate measurements that lend themselves to analytical processes. These processes can identify what may be trending, enable information retrieval, and drive alerts, but much of the crucial information content expressed in the source communication is lost.
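
As a contrived illustration (the outputs below are hand-written, not produced by any real NLP library), compare what a typical topic/sentiment/entity pipeline yields with the event the sentence actually conveys:

```python
# Hand-written, illustrative outputs only; no real NLP system produced them.
sentence = "Acme Corp agreed to acquire Globex for $2.3 billion, pending approval."

# What a typical classification/sentiment/entity pipeline reports: useful for
# trend detection, retrieval, and alerting, but flat.
shallow_view = {
    "topic": "mergers_and_acquisitions",
    "sentiment": 0.12,                      # mildly positive
    "entities": ["Acme Corp", "Globex"],
}

# The event actually being conveyed, which the shallow view cannot express.
event_view = {
    "event_type": "acquisition",
    "acquirer": "Acme Corp",
    "target": "Globex",
    "price_usd": 2_300_000_000,
    "status": "pending_approval",
}
```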

Today, language is mostly represented for machines as sets of entities and objects with directed relationships between them. An embedding space would provide a natural means of measurement, but embeddings are not yet broadly used for this purpose.

Words, sentences, and documents can be embedded into spaces that preserve a sense of usage similarity, but work still needs to be done to integrate ontological meaning, which is necessary to support reasoning. The lack of a “measurable space” in which to computationally represent the assertions and events expressed in natural language makes it challenging to incorporate speech and text into computational models.
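
As a minimal sketch of what such a measurable space provides, the toy four-dimensional word vectors below (values invented for illustration; real systems learn embeddings with hundreds of dimensions) can be compared with cosine similarity:

```python
# Toy word vectors compared by cosine similarity; the values are invented
# purely to illustrate the idea of a space that preserves usage similarity.
import math

embeddings = {
    "acquire":  [0.81, 0.10, 0.32, 0.05],
    "purchase": [0.78, 0.14, 0.29, 0.09],
    "banana":   [0.02, 0.91, 0.11, 0.40],
}


def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


print(cosine_similarity(embeddings["acquire"], embeddings["purchase"]))  # high
print(cosine_similarity(embeddings["acquire"], embeddings["banana"]))    # low
```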

Limited Representation of Temporal Meaning

As with text classification and sentiment measurement, “NLP processed” information is often time stamped with the time it was collected or published, while dates extracted by NLP are frequently left as sequences of unresolved text tokens. The document date does support “document level” time series analysis (e.g., are more people talking about IBM today than yesterday?), but it is only a proxy for what the author is conveying: the author may be referring to an upcoming activity or to something that happened in the past. Significant information loss, or outright misinformation, can result from using document “meta dates” instead of extracting the dates expressed by the author, relative dates (e.g., next Monday), and time periods (e.g., the third quarter), transforming them from sequences of tokens into a computational format such as a time_t or a std::chrono::duration, and associating them with the actions and events being conveyed.
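
As a minimal, standard-library-only sketch of this kind of resolution (the helper below is hypothetical, and a production temporal resolver would cover far more expressions), a relative expression such as “next Monday” can be resolved against a reference date:

```python
# A minimal sketch of resolving "next Monday" against a reference date.
# The helper is hypothetical; a real resolver also handles quarters, ranges,
# and fuzzy dates, and attaches the result to the extracted event.
from datetime import date, timedelta


def resolve_next_weekday(reference: date, weekday: int) -> date:
    """Return the next occurrence of `weekday` (Monday=0) strictly after `reference`."""
    days_ahead = (weekday - reference.weekday() + 7) % 7
    if days_ahead == 0:  # "next Monday" uttered on a Monday means a week out
        days_ahead = 7
    return reference + timedelta(days=days_ahead)


publication_date = date(2018, 2, 21)                  # the document-level "meta date"
resolved = resolve_next_weekday(publication_date, 0)  # "next Monday"
print(resolved.isoformat())                           # 2018-02-26
```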

The limited temporal extraction and resolution capabilities of most NLP systems are one of the reasons they offer limited utility for mission-critical solutions and are instead used to filter and route information, help identify trends, or transform information into an alternative, solution-specific format for human interpretation, such as news readers and alerting systems.

Forge.AI’s Platform

To provide a holistic solution that addresses these gaps, Forge is developing a set of capabilities that transforms the real-time monitoring of global information into integrated, machine-readable event streams.

The core capabilities — semantic information processing, event extraction and enrichment — are powered by our innovative neural architectures, probabilistic graphical models, and a novel semantic knowledge base that incorporates temporal and probabilistic reasoning capabilities.

Forge’s platform can be divided into four principal sections (a rough sketch of how they compose follows the list):

  1. The automated collection of (low latency) source information.
  2. The transformation of the unstructured information into a structured representation.
  3. The semantic resolution of the events, entities and concepts in the structured information.
  4. The transmission of the resolved, structured information to recipients.
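
As an illustrative sketch only (the function names and event fields below are invented for this post and are not Forge’s actual interfaces), the four stages might compose along these lines:

```python
# Invented names and fields; an illustration of the four-stage flow, not
# Forge's actual interfaces.
import json
from typing import Iterable, List


def collect(sources: Iterable[str]) -> List[str]:
    """Stage 1: gather raw, low-latency source documents (stubbed here)."""
    return [f"raw text from {source}" for source in sources]


def transform(raw_document: str) -> dict:
    """Stage 2: turn unstructured text into a structured candidate event."""
    return {"text": raw_document, "event_type": "unresolved", "entities": []}


def resolve(candidate: dict) -> dict:
    """Stage 3: resolve events, entities, and concepts (stubbed knowledge base lookup)."""
    candidate["event_type"] = "announcement"
    candidate["entities"] = [{"mention": "Acme Corp", "id": "KB:example-0001"}]
    return candidate


def transmit(event: dict) -> str:
    """Stage 4: serialize the resolved event for downstream consumers."""
    return json.dumps(event)


for document in collect(["feed://example-news"]):
    print(transmit(resolve(transform(document))))
```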

At Forge we are creating technologies to enable continuous real-time transformation of the world’s communications into a format that is specifically designed to fuel an AI world. This post provides a broad overview of the science and technologies we are applying to this space. Over the upcoming weeks and months we will be providing technical posts that more deeply explore each of these topic areas.

Please feel free to reach us at info@forge.ai with any questions.

Note: This post was originally published on our blog: https://www.forge.ai/blog/forge.ai-technical-overview
