Getting the Intuition of Graph Neural Networks

Inneke Mayachita
Published in Analytics Vidhya · May 5, 2020


Graph Neural Networks (GNNs) have caught my attention lately. I have encountered several Machine Learning/Deep Learning problems that led me to papers and articles about GNNs. While trying to implement GNNs using Keras, I came across Spektral, a Python library for Graph Neural Networks built on Keras and TensorFlow 2, developed by Daniele Grattarola.

This library really sped up my understanding of GNNs, which is why I want to share some of my findings with everyone! In this article I will mainly touch on some basic theory, how to translate graphs into features that neural networks can use, and some other applications of GNNs.

Basic Graph Theory

Let's start from the beginning: basic graph theory. Nowadays, a lot of information is represented as graphs. Examples include Google's Knowledge Graph, which helps with Search Engine Optimization (SEO); chemical molecular structures; document citation networks (document A cites document B); and social media networks (who is connected to whom?). A graph consists of two main elements: nodes (also called vertices or points) and edges (links or lines) that connect pairs of nodes.
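To make this concrete, here is a minimal sketch (mine, not from the article; the graph and its node feature values are made up for illustration) of how a small undirected graph can be written down numerically: an adjacency matrix that records which nodes are connected, plus a node feature matrix with one row per node. A pair of matrices like this is the kind of input that GNN libraries such as Spektral typically work with.

# Minimal sketch: a tiny undirected graph as numbers.
# The edges and node features below are invented for illustration only.
import numpy as np

# Four nodes (0..3) and the edges that connect them.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Adjacency matrix A: A[i, j] = 1 if nodes i and j share an edge.
n_nodes = 4
A = np.zeros((n_nodes, n_nodes), dtype=np.float32)
for i, j in edges:
    A[i, j] = 1.0
    A[j, i] = 1.0  # undirected graph, so the matrix is symmetric

# Node feature matrix X: one row of (made-up) features per node.
X = np.array([[0.1, 1.0],
              [0.7, 0.2],
              [0.5, 0.5],
              [0.9, 0.3]], dtype=np.float32)

print(A)          # 4 x 4 matrix of 0s and 1s
print(X.shape)    # (4, 2): 4 nodes, 2 features each

Together, A describes the structure of the graph and X describes the nodes themselves; this is the basic translation of a graph into features that the rest of the article builds on.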

Now comes the next question: which parts of the data are the nodes, and which are the edges? There is no strict answer to this, as we should define nodes and…
