Knowledge Graph Creation: Part II

How to construct a knowledge graph from text?

Selen Parlar
Analytics Vidhya
5 min read · Dec 31, 2019


From the previous stories, we know what a knowledge graph is and how to extract the information needed to build one. In this story, we will combine those two pieces and create our own knowledge graph!

Introduction

Even folks who are not interested in geography or history have heard about the Balkans. Here is the Wikipedia page:

As you can see, there is a lot of information there, not only in the form of text but also in hyperlinks and pictures.

Most of the information is relevant and useful for research about the Balkans. However, we cannot directly use this data source in our programs. In order to make this data readable for our machines and also interpretable by us, we will transform it into a knowledge graph!

Before we start building our knowledge graph, let’s see how information is embedded in these graphs. As in most graphs, we have entities represented as nodes and the connections between them represented as edges.

If we directly map the first sentence of the Wikipedia page on the Balkans, “The Balkans, also known as the Balkan Peninsula”, into our graph, we get a simple graph: two nodes, The Balkans and the Balkan Peninsula, connected by a “known as” edge.

We built this example by hand; however, it is neither feasible nor scalable to build a whole knowledge graph manually, so we need machines to extract the entities and relations for us. Here comes the challenge: machines cannot interpret natural language on their own. To help our machines understand our texts, we will make use of Natural Language Processing (NLP) techniques such as sentence segmentation, dependency parsing, part-of-speech tagging, and entity recognition. We discussed and experimented with these techniques in the previous story. Let’s use them here!
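As a quick reminder from the previous story, here is a minimal sketch of how spaCy exposes these techniques, assuming the en_core_web_sm model is installed:

```python
import spacy

# Load a small English pipeline; it includes sentence segmentation,
# part-of-speech tagging, dependency parsing and named-entity recognition.
nlp = spacy.load("en_core_web_sm")

doc = nlp("The Balkans, also known as the Balkan Peninsula, "
          "is a geographical area in Southeast Europe.")

# Sentence segmentation
for sent in doc.sents:
    print(sent.text)

# Part-of-speech tags and dependency relations
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Named entities
for ent in doc.ents:
    print(ent.text, ent.label_)
```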

Knowledge Graph Creation

A knowledge graph consists of facts based on the relationships that connect entities. The facts are in the form of subject-predicate-object triples. For example:

“The Balkans is known as the Balkan Peninsula.”

As a triple, the above fact can be represented as isKnownAs(The Balkans, the Balkan Peninsula), where:

  • Subject: The Balkans
  • Predicate: isKnownAs
  • Object: the Balkan Peninsula.
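In code, such a fact is simply a (subject, predicate, object) tuple, which is also the shape of the tuples we will extract below:

```python
# A fact represented as a (subject, predicate, object) triple
triple = ("The Balkans", "isKnownAs", "the Balkan Peninsula")
subject, predicate, obj = triple
```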

There are several possible ways of extracting triples from text. One can create their own set of rules for a specific data source. In this story, we will use an existing library, Krzysiekfonal’s textpipeliner, which was created for advanced text mining. Let’s start by creating a spaCy document from our text.
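A minimal sketch of this step, again assuming the en_core_web_sm model is available; the text below is only a short placeholder for the much longer article text you would gather about the Balkans:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# Placeholder raw text -- in practice this would be the full text
# collected from the Wikipedia page about the Balkans.
text = (
    "The Balkans, also known as the Balkan Peninsula, is a geographical "
    "area in Southeast Europe. The Ottoman Empire conquered the region in "
    "the fourteenth century. Austria-Hungary annexed Bosnia in 1908."
)

doc = nlp(text)
```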

Now we can feed the sentences produced by spaCy into textpipeliner, which provides an easy way of extracting parts of sentences as structured tuples from unstructured text. textpipeliner has two main parts: pipes and the PipelineEngine. From pipes, you create a structure that will be used to extract parts from every sentence in the document. The engine then applies this pipes structure to every sentence in the provided document and returns a list of extracted tuples.
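A pipes structure of the kind described above could look roughly like the sketch below, adapted from textpipeliner’s documented example: it extracts a named-entity subject, the verb, and a location or person entity as the object. Here doc is the spaCy document created earlier, and the exact pipe composition is an assumption rather than the post’s original code:

```python
from textpipeliner import PipelineEngine, Context
from textpipeliner.pipes import (
    AggregatePipe, AnyPipe, FindTokensPipe,
    NamedEntityExtractorPipe, NamedEntityFilterPipe, SequencePipe,
)

pipes_structure = [
    # Subject: a named entity acting as the nominal subject of the verb
    SequencePipe([
        FindTokensPipe("VERB/nsubj/*"),
        NamedEntityFilterPipe(),
        NamedEntityExtractorPipe(),
    ]),
    # Predicate: the verb itself
    FindTokensPipe("VERB"),
    # Object: a location (GPE) or person entity reached via dobj or pobj
    AnyPipe([
        SequencePipe([
            FindTokensPipe("VBD/dobj/NNP"),
            AggregatePipe([NamedEntityFilterPipe("GPE"),
                           NamedEntityFilterPipe("PERSON")]),
            NamedEntityExtractorPipe(),
        ]),
        SequencePipe([
            FindTokensPipe("VBD/**/*/pobj/NNP"),
            AggregatePipe([NamedEntityFilterPipe("GPE"),
                           NamedEntityFilterPipe("PERSON")]),
            NamedEntityExtractorPipe(),
        ]),
    ]),
]

# Apply the pipes structure to every sentence and collect the tuples.
engine = PipelineEngine(pipes_structure, Context(doc), [0, 1, 2])
triples = engine.process()
```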

The extracted tuples are:

We can change the parameters according to the entity types listed in spaCy. Let’s use these extracted tuples to create our knowledge graph. To do so, we first need to identify the source nodes, target nodes, and relations. Using simple Python operations, you can extract these lists directly and store them in a DataFrame:
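One hedged way to do that, assuming triples is the list returned by engine.process() above and that each slot in a tuple is a list of spaCy tokens:

```python
import pandas as pd

sources, relations, targets = [], [], []
for subj, pred, obj in triples:
    # Each slot is a list of tokens, so join them into a plain string.
    sources.append(" ".join(str(t) for t in subj))
    relations.append(" ".join(str(t) for t in pred))
    targets.append(" ".join(str(t) for t in obj))

kg_df = pd.DataFrame({"source": sources, "target": targets, "edge": relations})
print(kg_df)
```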

After extracting the lists and creating a DataFrame, we can use this DataFrame in the NetworkX package to draw our knowledge graph:
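A minimal drawing sketch with NetworkX and matplotlib, using the kg_df DataFrame from the previous step:

```python
import matplotlib.pyplot as plt
import networkx as nx

# Build a directed graph whose edges carry the extracted relation as an attribute.
G = nx.from_pandas_edgelist(kg_df, source="source", target="target",
                            edge_attr="edge", create_using=nx.DiGraph())

pos = nx.spring_layout(G)
nx.draw(G, pos, with_labels=True, node_color="skyblue", node_size=2000)
nx.draw_networkx_edge_labels(G, pos,
                             edge_labels=nx.get_edge_attributes(G, "edge"))
plt.show()
```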

And in the end, we can see the final graph:

Here, our resulting graph is quite small, since we used a single pipes structure that only considers named entities of type location and person. If you enrich your pipeline as well as the raw text, you’ll get a larger graph on which you could also perform inference!

All in all…

In this series of stories, we learned how to use NLP techniques to extract information from a given text in the form of triples and then build a knowledge graph from it. Even though we only used a small dataset and created a very limited knowledge graph, the same approach lets us build quite informative knowledge graphs. Knowledge graphs are one of the most fascinating concepts in data science, and I encourage you to explore this field of information extraction further.
