GRL Series

Understanding the Building Blocks of Graph Neural Networks (Intro)

Intuitions (with running code) on the neural framework for analyzing and learning from graph data

Giuseppe Futia
Towards Data Science
8 min read · May 14, 2020

This post is an introduction to a series of articles on Graph Neural Networks (GNNs). The goal of this series is to provide a detailed description, with intuitions and examples, of the building blocks of GNNs.

In this series, I will also share running code, using NumPy, PyTorch, and the most prominent libraries adopted in this field, such as the Deep Graph Library (DGL) and PyTorch Geometric. By the end of this series, you will be able to combine these building blocks and create a neural architecture to perform analysis and learning tasks on graph data.

This series will analyze the main components needed to set up a GNN, including (i) the input layer, (ii) the GNN layer(s), and (iii) the Multilayer Perceptron (MLP) prediction layer(s).
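As a preview, these three blocks can be sketched in a few lines of plain NumPy. This is a toy illustration under simplifying assumptions (a 4-node graph, random weights, mean-style neighborhood aggregation), not the actual code of the series:

```python
import numpy as np

# Toy undirected graph over 4 nodes, encoded as an adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

rng = np.random.default_rng(0)

# (i) Input layer: initial node features (here, random 8-dim embeddings).
H = rng.normal(size=(4, 8))

# (ii) One GNN layer: aggregate each node's neighborhood (with self-loops
# and degree normalization), then apply a linear transform and a ReLU.
A_hat = A + np.eye(4)                      # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # inverse degree matrix
W = rng.normal(size=(8, 8))                # layer weights (untrained)
H = np.maximum(0, D_inv @ A_hat @ H @ W)   # mean aggregation + ReLU

# (iii) MLP prediction layer: map node embeddings to per-node class scores.
W_out = rng.normal(size=(8, 3))            # 3 hypothetical classes
scores = H @ W_out

print(scores.shape)  # one score vector per node: (4, 3)
```

Real architectures stack several GNN layers and train the weights end to end, but the pipeline keeps this shape: features in, message passing in the middle, a prediction head at the end.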

The framework used to analyze and decompose the standard GNN architectures is based on the recent paper entitled “Benchmarking Graph Neural Networks,” cited below:

Dwivedi, V. P., Joshi, C. K., Laurent, T., Bengio, Y., & Bresson, X. (2020). Benchmarking Graph Neural Networks. arXiv preprint arXiv:2003.00982.
Source: https://arxiv.org/abs/2003.00982

This post does not cover the fundamentals of graph theory and neural networks. For an…
