A comprehensive introduction to GNNs — Part 1

Nicolas Raymond
Published in Analytics Vidhya · 7 min read · Jul 9, 2021

Why should I read this?

As you are reading this, you might wonder what Graph Neural Networks (GNNs) are, how they work, or in what contexts these deep learning models naturally fit. If that is not the case, I don’t know how you ended up here, but you are still invited to read the whole thing! This series of publications is intended to be a complete introduction to GNNs in an illustrated and comprehensive manner.

What do I need to know beforehand?

This introduction will start from the definition of a simple graph and slowly work its way up to the functioning of GNNs. However, I expect you to understand matrix multiplication in order to grasp the essence of the mathematics behind GNN architectures, and to have a basic knowledge of Feedforward Neural Networks (FNNs), since I will use them to make a few comparisons. Do not feel overwhelmed by the mathematical notation in some of the figures you will encounter; I will make sure to summarize the idea behind it in plain language.

Content of the whole introduction

This series of publications aims to cover the following topics:

However, this part will only elaborate on the first 3 topics.

Graph

Since the G in GNN stands for graph, a dip into the subject is worth a little of your reading time. Not to mention that it will give me an opportunity to introduce some notation that will be used throughout the introduction. I will remind you about it from time to time, do not worry!

As shown in the figure below, in its simplest mathematical form, a graph (or simple undirected graph, to be precise) is simply a pair of two sets. The first set contains elements known as vertices (or nodes). The second one contains edges, which are unordered pairs of vertices.

Definition of a graph.
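To make the definition concrete, here is a minimal sketch in Python of a small graph written as a pair of sets. The vertex labels 1 to 4 and the edges are made up for illustration, not taken from the figure.

```python
# A simple undirected graph G = (V, E) written as a pair of two sets.
V = {1, 2, 3, 4}

# Edges are unordered pairs of vertices, so frozensets are a natural fit.
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({2, 4}), frozenset({3, 4})}

G = (V, E)
```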

But, let me be honest, graphs are rarely seen in this form wandering in their natural habitat. There are more common and convenient ways of representing them. For example, when a graph has only a few vertices and edges, a little drawing does the trick!

Illustration of a graph.

Another, more computer-friendly way of representing a graph is with its adjacency matrix (A). To build the adjacency matrix of a graph with n vertices, what you essentially have to do is the following (a small code sketch follows the figure below):

  • Identify each vertex with a distinct number from 1 to n.
  • Create an n x n matrix filled with zeros.
  • Replace the 0 at the i-th row and the j-th column with a 1 if an edge exists between the i-th vertex and the j-th vertex. Here, i and j are arbitrary numbers that were assigned to the vertices at the first step.
Adjacency matrix of a graph.
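Here is a minimal sketch, in Python with NumPy, of those three steps applied to the small made-up graph from earlier. Note that the code uses 0-based indexing, while the text numbers the vertices from 1 to n.

```python
import numpy as np

n = 4                                     # step 1: vertices numbered 1 to n
A = np.zeros((n, n), dtype=int)           # step 2: an n x n matrix of zeros

# step 3: put a 1 at (i, j) and (j, i) for every edge {i, j}
edges = [(1, 2), (2, 3), (2, 4), (3, 4)]  # same made-up graph as before
for i, j in edges:
    A[i - 1, j - 1] = 1                   # 0-based indices in the code
    A[j - 1, i - 1] = 1                   # undirected graph: A is symmetric

print(A)
```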

One fascinating fact about the adjacency matrix is that, if you raise it to the k-th power, the number you will find at the i-th row and the j-th column of the resulting matrix is the number of different walks (i.e., paths that are allowed to revisit vertices) of length exactly k leading from the i-th vertex to the j-th vertex.

Powers of the adjacency matrix.
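A quick way to convince yourself of this is to compute a power of the adjacency matrix directly. The sketch below reuses the made-up 4-vertex graph from above.

```python
import numpy as np

# Adjacency matrix of the made-up 4-vertex graph used earlier.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])

k = 2
A_k = np.linalg.matrix_power(A, k)

# A_k[i, j] is the number of walks of length k between vertex i+1 and vertex j+1.
print(A_k)
```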

Finally, if you have already encountered the subject of graphs in the past, you have probably heard or read about nodes’ degrees. The degree of a node simply refers to the number of edges connected to it. In some graph applications, such as Graph Convolutional Networks (GCNs), it is useful to store the nodes’ degrees in a matrix called the degree matrix (D). As defined in the figure below, it is simply a matrix whose only non-zero terms are on the diagonal.

Degrees of nodes in a graph.
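In matrix terms, the degrees are just the row sums of the adjacency matrix, and the degree matrix places them on the diagonal. A minimal sketch, again with the made-up 4-vertex graph:

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])

degrees = A.sum(axis=1)   # row sums -> array([1, 3, 2, 2])
D = np.diag(degrees)      # degree matrix: degrees on the diagonal, zeros elsewhere
print(D)
```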

Directed Graph

You have met the graph; now let me introduce you to its closest but sometimes “edgy” relative, the directed graph (or simple directed graph to be more precise). Both are lookalikes, but they stand on their own. As shown in the following picture, a directed graph is also composed of a pair of two sets. Nonetheless, the second set is, in this case, made of directed edges, which are ordered pairs of vertices. The first and second vertices in a pair can hence be respectively interpreted as the starting and ending points of an arrow connecting both vertices.

Definition of a directed graph.

As you might already expect, the representation methods discussed earlier for graphs are still valid for directed graphs. There is just a small difference in the way the edges are drawn.

Representations of a directed graph.
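For instance, here is a minimal sketch of the adjacency matrix of a small made-up directed graph. Since each directed edge (i, j) only sets the entry at row i and column j, the matrix is generally not symmetric.

```python
import numpy as np

n = 4
A = np.zeros((n, n), dtype=int)
directed_edges = [(1, 2), (2, 3), (3, 4), (4, 2)]   # made-up arrows (start, end)
for i, j in directed_edges:
    A[i - 1, j - 1] = 1      # only one entry per edge: A need not be symmetric

print(A)
```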

Concerning the notion of nodes’ degrees, there is a minor adjustment to make in the case of directed graphs, since the orientation of the edges now matters. Instead of talking about a node’s degree, it is more common to talk about its in-degree (the number of arrows pointing at it) and its out-degree (the number of arrows leaving it).

Degrees of nodes in a directed graph.
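With the adjacency matrix of a directed graph, the out-degrees are simply the row sums and the in-degrees the column sums. A small sketch with the made-up directed graph from above:

```python
import numpy as np

# Directed adjacency matrix of the made-up graph above.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 0]])

out_degrees = A.sum(axis=1)   # arrows leaving each node    -> [1, 1, 1, 1]
in_degrees = A.sum(axis=0)    # arrows pointing at each node -> [0, 2, 1, 1]
```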

Note that the results concerning the powers of the adjacency matrix and the k-hop neighbors still hold for directed graphs!

Information networks

Now that we know more about directed graphs, let us build upon them to make them not only beautiful, but useful! Various types of data can naturally be expressed via graphs.

Data expressed in the form of graphs.

The proper way of expressing real-world data with graphs is through information networks. Information networks are simply graphs provided with an object mapping function and a link mapping function, which respectively describe the types of the vertices (objects) and the types of the edges (links). In other words, when each vertex and each edge of a graph is associated with a particular meaning, it is an information network. In the following figures, for each information network, the set of possible object types is represented with a cursive “A”, while the set of possible edge types is represented with a cursive “R”.

Definition of an information network.
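Here is a minimal sketch of what the two mapping functions could look like in Python. The node names and the “writes” and “published_in” link types are hypothetical, not taken from the figures.

```python
# A graph plus the two mapping functions below forms an information network.
object_type = {                    # object mapping: vertex -> object type
    "a1": "Author",
    "p1": "Paper",
    "v1": "Venue",
}
link_type = {                      # link mapping: edge -> link type
    ("a1", "p1"): "writes",
    ("p1", "v1"): "published_in",
}
```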

When an information network has more than one type of vertex or more than one type of edge, it is referred to as a heterogeneous information network. Otherwise, it is called a homogeneous information network. Both types of networks are illustrated below.

Example of homogeneous information network.
Example of heterogeneous information network.
Heterogeneous graph expressed in multi-layered way.
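Continuing the sketch above, deciding whether an information network is heterogeneous or homogeneous amounts to counting the distinct object and link types:

```python
# Heterogeneous if there is more than one object type or more than one link type.
num_object_types = len(set(object_type.values()))   # the set "A" in the figures
num_link_types = len(set(link_type.values()))       # the set "R" in the figures
is_heterogeneous = num_object_types > 1 or num_link_types > 1   # True here
```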

Using the different types of links in an information network, it is possible to build more complex relations between objects. These more complex relations are identified as meta paths. A meta path is simply defined by a sequence of node and edge types. A meta path has a length of n if the sequence is composed of n edge types, which do not need to be distinct.

Definition of a meta path.
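As a minimal sketch, a meta path can be stored as an alternating sequence of object and link types; the type names below are hypothetical.

```python
# A meta path as an alternating sequence of object and link types.
# Its length is the number of link types it contains.
meta_path = ["Author", "writes", "Paper", "written_by", "Author"]
length = (len(meta_path) - 1) // 2   # -> 2
```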

Since the concept of a meta path might be difficult to grasp at first, I will show you a few examples! If we look back at the bibliographic network shown earlier, we can see that authors are not directly connected together. However, we can still build relations between them using meta paths. For example, using the green and red directed edges, we can connect two authors through the following sequence of node and edge types:

  • Author -> Paper -> Author (meta path of length 2)

When the meta path described is obvious, it is common to use only the first letters of the node types in the sequence to identify it (e.g., APA).

Meta path example with bibliographic network.
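One convenient way to count instances of the APA meta path is with matrix multiplication: if AP is the author-to-paper adjacency matrix, then AP multiplied by its transpose gives the number of papers shared by each pair of authors. A small sketch with made-up data:

```python
import numpy as np

# AP[i, j] = 1 if author i+1 wrote paper j+1 (made-up bipartite data).
AP = np.array([[1, 1, 0],    # author 1 wrote papers 1 and 2
               [0, 1, 0],    # author 2 wrote paper 2
               [0, 0, 1]])   # author 3 wrote paper 3

# (AP @ AP.T)[i, j] counts the papers co-written by authors i+1 and j+1,
# i.e. the number of APA meta path instances connecting them.
APA = AP @ AP.T
print(APA)
```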

Here is another example with a more complex meta path.

More complex meta path example.

Where are we going with all of this?

“I wanted to learn about GNNs and all you are showing me is some colorful figures and random stuff about graphs; where are we going?” you are probably thinking. Trust me, we are not wasting our time here. We can see GNNs on the horizon, and I will surely bring them to you. I just feel like it would be a crime to start talking about a graph convolution before even introducing graphs properly.

However, in the next figure, I give you a few hints about what will come in the next publications and how it is linked to what you read today! See you soon!

Before you go

Thank you for reading! Feel free to visit my LinkedIn page.

References

Lee, Bohyun, et al. “Heterogeneous Multi-Layered Network Model for Omics Data Integration and Analysis.” Frontiers in Genetics, vol. 10, Jan. 2020, doi:10.3389/fgene.2019.01381.

Shi, Chuan, et al. “A Survey of Heterogeneous Information Network Analysis.” ArXiv:1511.04854 [Physics], Nov. 2015, http://arxiv.org/abs/1511.04854.
