AI for Patents: From Idea to Reality

How our startup is using AI to improve patent search & analysis


This is a very informal overview of our new AI product and how we tested it, so please comment below or reach out to us with any questions.

- Yaroslav Riabinin, CEO, Legalicity


NLPatent is an online software tool that does a few things really well.

It’s designed to be a platform for anyone who has an idea for a new invention, like the engineer tinkering in his garage or the scientist working in her lab.

The current version is especially tailored for professionals — those familiar with the patent process, such as lawyers and researchers. In fact, we developed NLPatent with patent examiners in mind; our goal was to automate the patent examination process as much as possible using Artificial Intelligence.

To that end, NLPatent excels at the following tasks:

(1) CPC classification; (2) Prior art search; (3) Prediction of patentability.


Before we get into that, let’s pause to consider the role of AI in all of this.

Isn’t it just a fad?

If it is, then national patent offices didn’t get the memo.

The World Intellectual Property Organization (WIPO) has compiled a long list of AI initiatives in intellectual property offices around the globe.

You can also watch the WIPO Director General’s speech on the importance of AI to the future administration of IP, in which he states that:

“[W]e are not going to be able to deal, as a world, with this volume of data and this complexity of data without applications based on Artificial Intelligence.”

The US Patent & Trademark Office (USPTO) went even further by issuing a Request for Information (RFI) titled “Challenge to Improve Patent Search With Artificial Intelligence” in September 2018.

It did a much better job than our marketing team at outlining why an effective AI patent search tool is necessary:

“Examiners are challenged with searching all of an ever increasingly complex and vast corpus of human knowledge in a limited amount of time. They must read and understand the patent application, perform an extensive prior art search, citing references to determine new contributions made or to explain the basis for rejecting the application.”

The RFI characterized prior art search as trying to confirm that a needle does not already exist in an ever-growing haystack.

It also stressed that language matters, that terms evolve, and that patents often contain new words and phrases beyond common vocabulary.


Coincidentally, we’ve been working on a solution to these problems for several years.

We’ve taken a data-driven approach, leveraging the abundance and availability of patent documents and feeding that data into the latest Natural Language Processing (NLP) algorithms.

Our goal was to have the AI learn the semantics of patent-specific language from millions of examples.

The result is a tool that can read full-text descriptions of inventions and extract their “inventive concept” to find similar patents.

This is a major improvement over traditional keyword search because it eliminates the need to craft complicated queries using Boolean operators.
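
For the technically curious, here’s a rough sketch of what concept-based retrieval looks like in principle. The embed function below is a crude stand-in (hashed bag-of-words), not NLPatent’s actual model, which learns patent-specific semantics from millions of documents:

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Stand-in embedding: hashed bag-of-words. The real model maps a full-text
    description of an invention to a vector capturing its inventive concept."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def semantic_search(query: str, corpus: dict[str, str], top_k: int = 50) -> list[tuple[str, float]]:
    """Rank patents by similarity to a plain-language description of an invention,
    with no Boolean operators or hand-crafted keyword queries."""
    q = embed(query)
    scored = [(doc_id, cosine(q, embed(text))) for doc_id, text in corpus.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]
```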

But how well does it actually work?


The first test of our semantic engine — and often the first step in the prior art search process — is classification by CPC category.

The Cooperative Patent Classification (CPC) system provides a framework for organizing patents by technology area. This allows you to significantly narrow your search space — if you know which category your invention belongs to, you can spend your time looking in the right place.

The CPC is arranged as a tree structure with a total of five levels:

section → class → subclass → group → subgroup

There are only eight sections at the top but tens of thousands of subgroups at the bottom, so we’ve found that the optimal level for search is the one in the middle: the subclass.
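
To make the hierarchy concrete, a full CPC symbol such as A47J 31/02 encodes all five levels. The little helper below is purely illustrative (it isn’t part of NLPatent), but it shows how the levels nest:

```python
def parse_cpc(symbol: str) -> dict[str, str]:
    """Split a CPC symbol (e.g. 'A47J 31/02') into its five hierarchical levels."""
    subclass, _, group_part = symbol.partition(" ")
    levels = {
        "section": subclass[0],    # 'A'    (one of eight sections, A to H)
        "class": subclass[:3],     # 'A47'
        "subclass": subclass,      # 'A47J' (the level we found optimal for search)
    }
    if group_part:
        main_group, _, subgroup = group_part.partition("/")
        levels["group"] = f"{subclass} {main_group}/00"             # 'A47J 31/00'
        levels["subgroup"] = f"{subclass} {main_group}/{subgroup}"  # 'A47J 31/02'
    return levels

print(parse_cpc("A47J 31/02"))
```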

But CPC classification isn’t an exact science.

Patent applications are often assigned to multiple overlapping subclasses, sometimes from different sections.

So we ran a set of general experiments to get a sense of how our AI performs on this task.

We looked at a very wide range of data — over seven million US patents and published patent applications from 1976 to 2017 — and grouped all the documents by their main CPC subclass, which we got from the CPC Master Classification files provided by the USPTO.

Then, we randomly selected 1,000 documents from each of the eight sections, A to H.

For example, if a patent’s main CPC subclass is A47J, it falls under section A.

[Image: a pour-over coffee maker, an example of an invention classified under subclass A47J]

NLPatent reads in snippets of text from each document — such as the Abstract and Claims — and produces a ranked list of CPC subclasses, ordered by likelihood that the document belongs there.

The measure of success is whether the document’s actual main CPC subclass is among the top 5 or top 10 subclasses in NLPatent’s list. If it is, then we count it as a “match” and move on to the next document; at the end, we look at how many matches there are out of 1,000.
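
In code, that metric is nothing more exotic than a top-k match rate. The sketch below assumes a hypothetical classify function that returns NLPatent’s ranked list of subclasses for a document:

```python
def top_k_match_rate(samples, classify, k: int = 5) -> float:
    """Fraction of documents whose actual main CPC subclass appears among the
    top-k subclasses in the model's ranked prediction list."""
    matches = 0
    for text, true_subclass in samples:
        ranked = classify(text)  # e.g. ['A47J', 'A23L', ...], most likely first
        if true_subclass in ranked[:k]:
            matches += 1
    return matches / len(samples)

# Hypothetical usage: top_k_match_rate(section_a_sample, nlpatent_classify, k=10)
```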

We didn’t control for anything else and simply ran the experiments several times, with the results being consistent across all runs:

[Table: CPC classification results by section]

As you can see, there’s some variation between technology areas.

But that may be explained by the total number of CPC subclasses per section, which is shown at the bottom of the table.

Notice that section E has the fewest subclasses (30) and the highest performance (94%), while section B has the most subclasses (166) and the lowest performance (79%).

This makes sense because the more subclasses there are, the harder it is to tell them apart. And some subclasses become increasingly rare, which is a problem because the AI needs real-world examples to learn from.

It’s also interesting that while sections A and C have a similar number of subclasses, NLPatent performs much better on section A.

This is probably because section A covers human necessities like clothing, cleaning supplies, kitchenware, etc., which are typically described in concrete terms; and section C covers chemical compounds and metallurgy, which are harder to describe and include formulas that aren’t represented linguistically.


The bottom line is that NLPatent was able to distinguish between hundreds of CPC subclasses relatively accurately.

However, the true test of our semantic engine is prior art search.

Can it really identify relevant documents just by concept-matching?

We knew the answer was ‘yes’ when we built the first prototype. We even started using NLPatent in our own work as IP lawyers and convinced our colleagues at other law firms to try it.

It turns out people like having NLPatent as a “sanity check” to make sure they didn’t miss anything, and to streamline new searches because it gets them 50 to 80% of the way there with minimal effort.

In other words, the prior art results generated by NLPatent are sufficiently on point that you don’t have to look much further.

And there’s a way to prove this empirically as well.

We can compare NLPatent’s output with prior art cited by a trained human patent examiner to see if there’s any overlap.

To be clear though — this is a far more difficult task than CPC classification.

As per the USPTO’s RFI, it’s like looking for a needle in a giant, ever-expanding technological haystack.

Even human experts don’t always get it right. They miss important prior art, which leads to unpleasant surprises down the road, like patents being invalidated after they were issued.

So what chance does a machine have?

To find out, we ran another set of experiments, this time more rigorous.

We focused on US patents issued in 2017 and randomly selected 100 documents from each CPC section, for a total of 800 samples. We made sure they came from a variety of technology areas, with at least five different CPC classes and 10+ subclasses represented in the data for each section.

We began by looking at all the prior art cited by the Examiner.

This included a mix of documents, such as those cited in support of section 102 (novelty) and section 103 (obviousness) rejections.

The trouble with this approach is that only the s. 102 documents are “guaranteed” to be relevant, in the sense of being the closest prior art.

That’s because s. 103 documents don’t actually need to describe similar subject matter, as long as they disclose even a single feature in common with your invention.

In other words, if all the prior art cited by the Examiner is lumped together, it’s impossible to tell which documents are actually relevant.

With that in mind, here are the results:

[Table: Prior art search results (all cited documents)]

We measured how many of the Examiner’s references were found by NLPatent within the top 50, top 100, or top 200 matches. Additionally, we noted how often NLPatent found at least one cited document and also all of the cited documents.
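
Per document, the scoring boils down to a simple set comparison between the Examiner’s citations and our top-N results. Here’s a rough sketch (a hypothetical helper, not our production pipeline):

```python
def score_prior_art(cited: set[str], results: list[str], n: int = 50) -> dict:
    """Compare the prior art cited by the Examiner against the top-n search hits."""
    top_n = set(results[:n])
    found = cited & top_n
    return {
        "fraction_found": len(found) / len(cited) if cited else 0.0,  # % of citations recovered
        "at_least_one": bool(found),                                  # found at least one citation
        "all_found": bool(cited) and found == cited,                  # found every citation
    }
```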

As you can see, the numbers tell an interesting story.

NLPatent finds, on average, 20–32% of the cited prior art. It finds at least one cited document over 50% of the time (sometimes over 80%); and every single document 8–13% of the time.

Given the limitations discussed above, this is already a win for AI.

It proves that NLPatent finds at least some of the same prior art that likely took a human expert hours to come across.

But what if we looked exclusively at prior art cited by the Examiner for s. 102?

After all, these are the only documents we know for sure are similar, so it makes sense to use them as a benchmark.

In theory, we should expect to see:

(1) a higher avg. % of prior art found, because NLPatent finds documents based on semantic similarity;

(2) a lower % of at least one document found, because now there are fewer documents to be found; and

(3) a higher % of all documents found, for the same reasons as (1) and (2).

We tested these ideas using the Office Actions data released by the USPTO, which breaks down exactly which prior art was cited for which rejection.
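
Extracting the s. 102 citations from that dataset is conceptually simple. The sketch below uses placeholder field names (the actual Office Actions schema is different):

```python
def s102_citations(office_actions: list[dict]) -> dict[str, set[str]]:
    """Collect, per application, the prior art cited in s. 102 (novelty) rejections.
    Field names here are placeholders, not the USPTO's actual column names."""
    cited: dict[str, set[str]] = {}
    for action in office_actions:
        for rejection in action.get("rejections", []):
            if rejection.get("section") == "102":
                cited.setdefault(action["app_id"], set()).update(
                    rejection.get("cited_documents", [])
                )
    return cited
```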

We focused on s. 102 documents and ran the same experiment, with the following results:

[Table: Prior art search results (documents cited for s. 102)]

Not only were our assumptions correct, but the new numbers are staggering.

NLPatent finds, on average, 30–50% (!) of the Examiner’s s. 102 prior art. It finds at least one of the cited documents 34–52% of the time; and every single document 25–40% (!) of the time.

That means NLPatent can give you all the prior art you need to assess the novelty of an invention, instantly, within the first 50 hits, one out of every four times.

And that’s if we accept the Examiner’s prior art list as the gold standard — completely accurate and exhaustive — which isn’t always the case.

It’s very possible that other matches identified by NLPatent as highly relevant were simply overlooked by a human searcher.

Empirically that’s harder to prove, but anecdotally it’s exactly what our users have found, so we plan to shed more light on this very soon.


To wrap up, the implications of our work extend beyond patent search.

If we can get AI to find prior art, what else can we get it to do?

Our own answer to this question is to predict the patentability of new ideas.

It may sound far-fetched — machines helping evaluate whether something can be patented — but we’re already making progress.

In an upcoming piece we’ll talk about our collaboration with researchers at the University of Toronto and how we got our AI to predict the likelihood of the dreaded section 101 rejection with up to 89% accuracy.


In the meantime, please like, subscribe, follow us on Twitter, add me and Stephanie on LinkedIn, and let us know your thoughts!