Graph Neural Networks Part 3. How GraphSAGE Handles Changing Graph Structure
And how you can use it for large graphs
In the previous parts of this series, we looked at Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs). Both architectures work fine, but they also have some limitations! A big one is that for large graphs, GCNs and GATs become v-e-r-y slow, because every forward pass performs message passing over the entire graph at once. Another limitation is that they don't generalize when the graph structure changes: if new nodes are added to the graph, a GCN or GAT cannot make predictions for them without being retrained. Luckily, these issues can be solved!
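To make the scalability issue a bit more concrete: a standard GCN (or GAT) layer aggregates over all nodes and all edges in a single forward pass. The snippet below is a minimal sketch of that, assuming the PyTorch Geometric setup from post 1; the graph is random and the sizes are arbitrary, chosen only to show that memory and compute grow with the full node and edge count.

```python
# Minimal sketch (random graph, arbitrary sizes) of why full-batch
# GCN/GAT layers struggle on large graphs: one forward pass touches
# ALL nodes and ALL edges.
import torch
from torch_geometric.nn import GCNConv

num_nodes, num_feats = 100_000, 128
x = torch.randn(num_nodes, num_feats)                     # features for every node
edge_index = torch.randint(0, num_nodes, (2, 1_000_000))  # 1M random edges

conv = GCNConv(num_feats, 64)
out = conv(x, edge_index)  # a single layer already processes the whole graph
print(out.shape)           # torch.Size([100000, 64])
```

This full-graph cost is exactly what GraphSAGE sidesteps by working with sampled neighborhoods instead of the entire graph, as we'll see later in this post.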
In this post, I will explain GraphSAGE and how it solves these common problems of GCNs and GATs. We will train a GraphSAGE model, use it to make predictions on a graph, and compare its performance with GCNs and GATs.
New to GNNs? You can start with post 1 about GCNs (which also contains the initial setup for running the code samples) and post 2 about GATs.
Two Key Problems with GCNs and GATs
I briefly touched on this in the introduction, but let's dive a bit deeper. What exactly are the problems with the previous GNN models?