How Natural Language Processing Is Solving For Modern Data Overload

In an increasingly digital world, with an exponentially growing amount of unstructured data from all facets of our digital existence, the need for more natural ways of navigating complex questions has never been greater.

Will Dunlop
Resultid Blog
4 min read · Sep 22, 2021

--

Transformers have been making headlines in the natural language processing (NLP) and artificial intelligence (AI) space for the last couple of years. Nearly everyone in that field, and many people in adjacent ones, have heard of BERT and its derivatives as those advances picked up national headlines.

These advances in machine learning have clear implications across all fields: BERT powers some of the major functionality of Google’s search platform, among other broad roles. In this article, we’ll take a closer look at how such models and methods have impacted fields that rely on humans processing large amounts of data quickly. First, it’s important to understand how these models fundamentally work.

Transformers are a broad class of neural network models commonly used to create representations of text (though there are other applications). They represent text as vectors of numbers, built as part of a “task” they are assigned to complete, like filling in a blocked-out word in a sentence. These representations are developed by designing a training objective that forces the transformer to gradually improve the vectors in order to perform better on the task. These tasks can be very “hard” for a computer, “hard” in the sense that they require a lot of information that we as humans understand intuitively but that is difficult to capture from a machine perspective, such as identifying the correct word(s) to fill in the gap(s) in a sentence.
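To make the “fill in a blocked-out word” task concrete, here is a minimal sketch using the open-source Hugging Face transformers library and a pretrained BERT model. The specific library and model are illustrative choices, not something the article prescribes.

```python
# A minimal sketch of the masked-word task described above, using the
# Hugging Face `transformers` library and a pretrained BERT model
# (library and model choice are assumptions made for illustration).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the hidden token marked by [MASK] and returns its top guesses.
for prediction in fill_mask("I am going to the [MASK] to buy groceries."):
    print(f"{prediction['token_str']:>12}  (score: {prediction['score']:.3f})")
# Likely completions include words such as "store" or "market".
```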

Transformers are often trained for hundreds (sometimes thousands) of hours. Eventually, after millions of training examples, the models produce very robust representations that can be reused in other machine learning tasks and applications. What makes these models good is the amount of training that occurs, as well as the sheer volume of input text they see during training. Simply put, these models create very “good” representations of natural language sentences. With these representations, it’s easy to use other methods to solve problems that would otherwise have been very difficult to approach.
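As a rough sketch of what reusing these representations can look like in practice, the snippet below pulls a fixed-size sentence vector out of a pretrained BERT model. The model name and the mean-pooling step are assumptions made for illustration; they are not the only way to do this.

```python
# Sketch of reusing a pretrained model's representation in a downstream task:
# extract one fixed-size vector per sentence (names here are illustrative).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("I am going to the store.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token-level hidden states into one sentence-level vector,
# which can then feed a classifier, a clustering step, or a search index.
sentence_vector = outputs.last_hidden_state.mean(dim=1)
print(sentence_vector.shape)  # torch.Size([1, 768])
```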

One big intermediate outcome of these representations is that sentences that are similar in meaning have more similar vectors, which can be said to have a high cosine similarity or, equivalently, a low cosine distance. To illustrate very simply what this entails, consider the sentences:

  1. I am going to the store.
  2. I am about to go shopping.
  3. I had an onion for breakfast.

Sentences 1 and 2 might have very similar vector representations, while sentence 3’s would be quite different from both.
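A hedged sketch of that comparison, assuming the sentence-transformers library and an off-the-shelf embedding model (neither is prescribed by the article), might look like this:

```python
# Embed the three example sentences and compare their vectors with
# cosine similarity (library and model choice are illustrative assumptions).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "I am going to the store.",
    "I am about to go shopping.",
    "I had an onion for breakfast.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity ranges from -1 to 1; higher means closer in meaning.
scores = util.cos_sim(embeddings, embeddings)
print(f"1 vs 2: {scores[0][1].item():.2f}")  # expected to be relatively high
print(f"1 vs 3: {scores[0][2].item():.2f}")  # expected to be noticeably lower
```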

Across fields, combing through large amounts of data is challenging and time-consuming. This can be proprietary data, or it can be publicly available patents, financial filings, and other long, dense documents. The representations that BERT and other models create offer a simple way of searching for sentences that match the desired input. Whereas earlier search methods (and some present-day ones!) rely on keyword-based inputs, many modern platforms allow semantic-style searches that compare the semantic content of documents using representations like those generated by transformers.

What this means for users of platforms powered by transformers is that searches don’t need to be verbatim matches. Most platforms implement this by letting the user search through some fixed body of text or some user-supplied text, often calling the feature “semantic search” or “intelligent search”. This functionality has huge time-saving impacts on any field that relies on quickly identifying which parts of a dataset are important: a user doesn’t have to sift through every line, or try different sets of keywords until they find what they’re looking for.
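A minimal sketch of this kind of semantic search, again assuming the sentence-transformers library and a made-up toy corpus, could look like the following: embed the passages once, embed the query, and rank by cosine similarity rather than keyword overlap.

```python
# Hedged sketch of a semantic search workflow: rank passages by how close
# their embeddings are to the query embedding (corpus and model are
# illustrative assumptions, not a specific platform's implementation).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

passages = [
    "The company reported a 12% increase in quarterly revenue.",
    "Our patent covers a novel battery electrode coating process.",
    "Employees are encouraged to take their full vacation allowance.",
]
passage_embeddings = model.encode(passages, convert_to_tensor=True)

query = "filings that mention revenue growth"
query_embedding = model.encode(query, convert_to_tensor=True)

# Passages come back ranked by semantic closeness -- no verbatim match needed.
hits = util.semantic_search(query_embedding, passage_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.2f}  {passages[hit['corpus_id']]}")
```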

Broadly, semantic search is only one example of how advances from the expanding field of neural natural language processing directly improve productivity and performance in other fields. By allowing users to interact with a large set of data more intuitively, questions get answered and decisions get made faster. In an increasingly digital world, with an exponentially growing amount of unstructured data from all facets of our digital existence, the need for more natural ways of navigating complex questions has never been greater.

At Resultid, we are always trying to identify and track the latest market trends. Interested in more articles like this one? Join our beta list to stay connected.
