A Comprehensive Survey on Graph Neural Networks (Part 1): Types of Graph Neural Network

Anak Wannaphaschaiyong
5 min read · May 12, 2019

Deep learning has shown its potential in solving problems that lie in Euclidean space, such as image classification, where each pixel always has the same number of neighbors carrying the same type of information: color.

However, many applications use data generated from non-Euclidean domains. For example, in chemistry, molecules are modeled as graphs, and their bioactivity needs to be identified for drug discovery.

The survey generalizes the term “Graph Neural Networks” to cover all deep learning approaches for graph data.

The types of Graph Neural Networks (GNNs) I will be discussing are as follows:

  1. Graph Convolutional Network
  2. Graph Attention Network
  3. Graph Autoencoder
  4. Graph Generative Network
  5. Graph Spatial-temporal Network

This is an introduction to the types of GNN. I will briefly explain what they are without getting too deep into mathematical detail. Later in the series, I will go into more depth on each type of GNN.

Network Embedding vs Graph Neural Network

Network embedding and Graph Neural Networks are closely related and share similarities.

Network embedding aims to represent network vertices in a low-dimensional vector space while preserving both the network topology (the shape of the graph) and the content of each node (the information that each node carries). The vectors produced by network embedding are then fed to downstream tasks, such as classification, clustering, and recommendation, using any available machine learning technique.

Network embedding algorithms are typically unsupervised. They can be categorized into three groups:

  1. matrix factorization
  2. random walks
  3. deep learning approaches (this is what we focus on in this discussion)
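To make the first category concrete, here is a minimal sketch (my own illustration, not from the survey) of network embedding via matrix factorization: a truncated SVD of a tiny adjacency matrix yields one low-dimensional vector per node, which could then be fed to any downstream classifier or clustering method.

```python
import numpy as np

# Adjacency matrix of a toy 4-node graph (two connected pairs).
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Truncated SVD: keep the top-d singular directions as node embeddings.
d = 2
U, S, _ = np.linalg.svd(A)
embeddings = U[:, :d] * S[:d]   # one d-dimensional vector per node

print(embeddings.shape)         # (4, 2)
```

The same `embeddings` matrix is what the deep-learning approaches below also produce; only the way it is computed differs.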

The deep learning methods used for network embedding are, as we would expect, GNNs. For now, you can think of a GNN as one technique for doing network embedding.

The types of GNN used for network embedding are as follows:

  1. Graph autoencoder-based algorithms
  2. Graph convolutional networks with unsupervised training

The picture above shows how network embedding and GNNs are related.

For now, I will skip the notation and just explain the high-level idea of each type of GNN.

Taxonomy of GNN

Graph Convolution Networks (GCN)

GCNs generalize the operation of convolution from grid-like data, such as images, to graph data.

What is convolution? Since we will be hearing a lot about it, it is fair to get the definition out of the way.

This is the definition of convolution from WolframAlpha:

A convolution is an integral that expresses the amount of overlap of one function as it is shifted over another function

You can think of convolution as sliding one function over another and, at each shift, multiplying the overlapping components and summing them up.
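A quick one-dimensional example, using numpy's built-in `np.convolve` (the signal and kernel values here are just toy numbers):

```python
import numpy as np

signal = np.array([1, 2, 3, 4], dtype=float)
kernel = np.array([1, 0, -1], dtype=float)

# Slide the kernel over the signal; at each shift, multiply the
# overlapping values and sum them. (np.convolve flips the kernel,
# per the mathematical definition of convolution.)
out = np.convolve(signal, kernel, mode="valid")
print(out)   # [2. 2.]
```

`mode="valid"` keeps only the positions where the kernel fully overlaps the signal. A GCN does the analogous thing, except the "shift" moves from node to node rather than along a grid.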

What is graph data? Graph data consists of nodes and edges, where the nodes and edges may or may not carry content of their own.

Now let me explain what a GCN is.

The key idea of GCN is to learn a function f that generates a node v_i's representation by aggregating its own features X_i and its neighbors' features. (Ignore the notation if you prefer.)

In simpler words, a GCN learns to represent a node by aggregating features from its neighborhood. Done. That is it!
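To make "aggregating features from its neighborhood" concrete, here is a minimal numpy sketch of one GCN-style layer (mean aggregation over neighbors plus self-loops, then a learned linear map and ReLU; the graph, features, and weights are all toy placeholders I made up):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: average each node's neighborhood
    (including the node itself) and apply a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # normalize by degree
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)        # a tiny 3-node graph
H = np.eye(3)                                 # one-hot node features
W = np.random.default_rng(0).normal(size=(3, 2))  # stand-in for learned weights

H1 = gcn_layer(A, H, W)
print(H1.shape)   # (3, 2): a new 2-dim representation per node
```

In a real GCN, `W` is trained by gradient descent and several such layers are stacked, so information propagates multiple hops through the graph.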

Graph Attention Networks

An upgraded version of GCN.

Graph Attention Networks are similar to GCNs, but they use a different mechanism for aggregating information from a node's neighbors. They use an end-to-end neural network architecture so that more important nodes receive larger weights. In other words, they pay more attention to the more important nodes.
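A stripped-down sketch of the idea (loosely modeled on GAT-style attention; the scoring function and all the values here are illustrative placeholders, not the actual GAT formulation): each neighbor gets a learned score, the scores are softmax-normalized, and the neighbor features are combined using those weights.

```python
import numpy as np

def attention_aggregate(h_self, h_neighbors, a):
    """Score each neighbor against the center node, softmax-normalize,
    and take the weighted sum, so 'important' neighbors count more."""
    scores = np.array([a @ np.concatenate([h_self, h_n])
                       for h_n in h_neighbors])
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax
    return (weights[:, None] * h_neighbors).sum(axis=0)

rng = np.random.default_rng(1)
h_self = rng.normal(size=4)             # center node's features
h_neighbors = rng.normal(size=(3, 4))   # three neighbors' features
a = rng.normal(size=8)                  # toy attention parameters

out = attention_aggregate(h_self, h_neighbors, a)
print(out.shape)   # (4,)
```

Contrast this with the GCN layer above: there, every neighbor contributed with a fixed weight (1/degree); here, the weights are learned per edge.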

Graph Auto-encoders

As I mentioned above, this is a technique widely used for network embedding.

Graph auto-encoders are unsupervised. They aim to learn low-dimensional node vectors via an encoder, and then reconstruct the graph data via a decoder.

Graph auto-encoders are popular for learning plain graphs without attribute information.

For attributed graphs, they tend to employ a GCN as the building block of the encoder (constructing the vector representation with the GCN), and then reconstruct the structural information via a link-prediction decoder.
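The encoder–decoder pair can be sketched in a few lines (a toy illustration, assuming a GCN-style encoder and an inner-product decoder; the graph and weights are made-up placeholders, and no actual training happens here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy 4-node graph

# Encoder: one GCN-style layer maps nodes to low-dim vectors Z.
A_hat = A + np.eye(4)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
W = np.random.default_rng(2).normal(size=(4, 2))
Z = np.maximum(D_inv @ A_hat @ np.eye(4) @ W, 0.0)   # node embeddings

# Decoder: inner product of embeddings predicts edge probabilities
# (link prediction); training would push this toward the true A.
A_reconstructed = sigmoid(Z @ Z.T)
print(A_reconstructed.shape)   # (4, 4)
```

Training minimizes the difference between `A_reconstructed` and the true adjacency matrix, which forces `Z` to encode the graph's structure.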

Graph Generative Network

Graph Generative Networks aim to generate plausible structures from data.

Graph Generative Networks differ from graph auto-encoders in that generative networks learn about the input data in order to create new data, while auto-encoders learn about the input data in order to create a low-dimensional representation of it.

They generate graphs given a graph empirical distribution. However, this is a difficult task because graphs are complex data structures, and the graph distribution is hard to obtain.

One promising application domain is chemical compound synthesis. This is done by treating atoms as nodes and chemical bonds as edges. The task is to discover new, synthesizable molecules.
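A deliberately crude sketch of "generating graphs from an empirical distribution" (this is a plain random-graph baseline of my own, far simpler than the neural models the survey covers): estimate the average edge density from observed graphs, then sample a new graph with that density.

```python
import numpy as np

rng = np.random.default_rng(3)

# Pretend edge densities measured from a few observed graphs.
observed_densities = [0.40, 0.50, 0.45]
p = float(np.mean(observed_densities))

# Generate a new undirected 4-node graph: sample each possible edge
# independently with probability p (no self-loops).
n = 4
upper = np.triu(rng.random((n, n)) < p, k=1)
A_new = (upper | upper.T).astype(int)   # symmetrize

print(A_new.shape)   # (4, 4)
```

Real graph generative networks replace the single density parameter `p` with a learned neural model, so that generated graphs capture much richer structure (e.g., chemically valid molecules) than this baseline can.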

Graph Spatial-temporal Network

Graph Spatial-temporal Networks aim to learn unseen future patterns from spatial-temporal graphs (patterns of the same graph in the past). In simpler words, they learn to predict a graph's future patterns from its past patterns.

The key idea of graph spatial-temporal networks is to consider spatial dependency and temporal dependency at the same time. A GCN is used to capture the spatial dependency, while an RNN or CNN is used to capture the temporal dependency.
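The "GCN for space, recurrence for time" idea can be sketched as follows (a toy illustration with made-up weights: a GCN-style step processes each snapshot of node features, and a simple recurrent update carries information across timesteps):

```python
import numpy as np

def gcn_step(A, H, W):
    """Spatial aggregation: average over neighbors + self, linear map."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return D_inv @ A_hat @ H @ W

rng = np.random.default_rng(4)
A = np.array([[0, 1], [1, 0]], dtype=float)   # fixed 2-node graph
W_spatial = rng.normal(size=(3, 3))
W_hidden = rng.normal(size=(3, 3))

# A sequence of feature snapshots: 5 timesteps, 2 nodes, 3 features each.
X_seq = rng.normal(size=(5, 2, 3))

# Temporal recurrence (a crude RNN): the hidden state h mixes the
# current snapshot's spatial aggregate with the previous state.
h = np.zeros((2, 3))
for X_t in X_seq:
    h = np.tanh(gcn_step(A, X_t, W_spatial) + h @ W_hidden)

print(h.shape)   # (2, 3): per-node state summarizing space and time
```

After processing the sequence, `h` would feed a prediction head for the next snapshot; a traffic-forecasting model, for instance, follows this same pattern with sensors as nodes and road connections as edges.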

That is the end for now. Later I will explain each model in more detail, including the math behind it.

Thank you for reading.
