Intuitive Relational Reasoning for Deep Learning

Carlos E. Perez
Intuition Machine
Oct 19, 2018

In June of this year (2018), a position paper was published by a group of 27 researchers from DeepMind, Google Brain, MIT, and the University of Edinburgh. The title of this position paper was “Relational inductive biases, deep learning, and graph networks.” Honestly, the title doesn’t stand out from the thousands of other papers that are published every year on Deep Learning. A majority of the authors belong to DeepMind.

DeepMind, as you may already be aware, is the Google subsidiary in London with a massive research budget (i.e., $433m in 2017) focused on Deep Learning methods. The company had previously released a position paper in November 2017, with 17 authors, arguing that machines should be designed to learn and think for themselves (i.e., model-free). These position papers make very insightful reading for understanding the motivations of the various research teams at DeepMind.

I suspect that a majority of Deep Learning researchers have overlooked this paper. I believe the reason is that the term selected for this new paradigm, “graph networks,” is all too ambiguous and easily confused with other computer science work involving the study of graphs. I would perhaps have used a more distinct phrase, for example, “Relational Neural Network.” The gist of a graph network isn’t really its input and output structures; rather, the focus is on learning the relationships between objects. This is in contrast with other, more conventional Deep Learning networks that focus on recognizing the attributes of an object. This, one may argue, is too subtle a distinction, but it is important enough to emphasize. Quoted from the paper:

Recently, a class of models has arisen at the intersection of deep learning and structured approaches, which focuses on approaches for reasoning about explicitly structured data, in particular graphs…. What these approaches all have in common is a capacity for performing computation over discrete entities and the relations between them.

Graph networks, or more specifically relational neural networks, are actually quite new. Although there have been many attempts to perform inference on graphs using conventional architectures like CNNs and RNNs, in the last year or so there has been surprising progress in creating systems that can extract semantic relationships from observations. “Interaction Networks for Learning about Objects, Relations and Physics,” published in December 2016, explored reasoning about objects that interact with each other. To give credit where it is due, other groups (e.g., Thomas Kipf and collaborators) have also been busy exploring neural networks on graphs.
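To make the idea concrete, here is a minimal NumPy sketch of the kind of pairwise relational reasoning these papers perform: a shared function scores every ordered pair of objects, and the results are aggregated into a single relational representation. All function and variable names are illustrative, not from any published codebase; real systems use learned MLPs trained end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    """A toy one-layer 'MLP': a fixed random linear map plus ReLU."""
    W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return lambda x: np.maximum(x @ W, 0.0)

n_objects, obj_dim, rel_dim = 4, 8, 16
objects = rng.normal(size=(n_objects, obj_dim))  # one feature row per object

g = mlp(2 * obj_dim, rel_dim)   # relational function: reasons over a PAIR
f = mlp(rel_dim, rel_dim)       # readout over the aggregated relations

# Apply the shared relational function to every ordered pair of objects,
# then aggregate the pairwise "effects" by summing.
pair_effects = [
    g(np.concatenate([objects[i], objects[j]]))
    for i in range(n_objects) for j in range(n_objects) if i != j
]
relational_summary = f(np.sum(pair_effects, axis=0))

print(relational_summary.shape)  # (16,) -- a representation of the relations
```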

We may argue that the advantages of this new architecture are its flexibility and its richer representations. These are the motivations for these architectures. In most presentations on this subject, it is the universality of the graph data structure as a representation that is given as the underlying motivation for the new method. A lot of research has also been done previously on graph embeddings.

Figure: The universality of graph representations (source: https://arxiv.org/pdf/1806.01261.pdf)

DeepMind argues further that the universality of graphs leads to more modular architectures:

Figure: Universality in networks (source: https://arxiv.org/pdf/1806.01261.pdf)

I’m still on the fence as to the utility of this idea; it appears to be a restatement of a generic computational graph. I’ve advocated previously for the need for more ‘modular deep learning’; however, I’m unsure of how this relates to my ideas. Perhaps the universality of the graph representation supports ease of composability. I do see the design utility of having a common vocabulary to express different kinds of networks.
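To see what that common vocabulary buys you, here is a minimal NumPy sketch of the paper’s core “GN block” (Algorithm 1 in https://arxiv.org/abs/1806.01261): update the edges, then the nodes, then the global attribute. The update functions below are toy stand-ins of my own; in the paper, each update function is a learned neural network.

```python
import numpy as np

def gn_block(nodes, edges, senders, receivers, u, phi_e, phi_v, phi_u):
    """One pass of a GN block: edge -> node -> global updates."""
    n_nodes = nodes.shape[0]

    # 1. Edge update: each edge sees its own features, both endpoint
    #    nodes, and the global attribute u.
    new_edges = np.stack([
        phi_e(edges[k], nodes[senders[k]], nodes[receivers[k]], u)
        for k in range(edges.shape[0])
    ])

    # 2. Node update: sum the updated incoming edges per receiver node,
    #    then combine with the node's own features and u.
    agg = np.zeros((n_nodes, new_edges.shape[1]))
    np.add.at(agg, receivers, new_edges)
    new_nodes = np.stack([
        phi_v(agg[i], nodes[i], u) for i in range(n_nodes)
    ])

    # 3. Global update: aggregate all edges and all nodes, then update u.
    new_u = phi_u(new_edges.sum(axis=0), new_nodes.sum(axis=0), u)
    return new_nodes, new_edges, new_u

# Toy usage: each "update function" just averages its inputs.
avg = lambda *xs: np.mean(xs, axis=0)
nodes = np.ones((3, 2))                       # 3 nodes, 2 features each
edges = np.ones((2, 2))                       # 2 edges, 2 features each
senders, receivers = np.array([0, 1]), np.array([1, 2])
u = np.zeros(2)                               # global attribute
print(gn_block(nodes, edges, senders, receivers, u, avg, avg, avg))
```

The modularity the paper argues for is visible here: swapping in different update or aggregation functions recovers message-passing networks, relation networks, and other variants from the same block.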

I do think that there is more to this than just graphs. Previously, I wrote about a capability maturity model. Level 3 of that model I called “intuitive causal reasoning,” where I argue for the importance of models that have an awareness of the causal relationships between the objects they observe. General intelligence absolutely requires the awareness of two objects and the relationship between them. In fact, autonomy in intelligence requires at the very least an awareness of the self and its relationship with each of the objects it observes. There is simply no conceiving of a reality that is absent of relationships. Therefore, Deep Learning methods with an emphasis on discovering relationships are critical to moving forward.

This is what excites me about ‘Graph Networks.’ That is why the use of the word ‘Graph’ is not only ambiguous but also a distraction. The key term and most important word is ‘Relations.’ Architectures that are cognizant of relations are key to thought. Douglas Hofstadter would describe this as analogy-making. He argues that the more common knowledge-structuring mechanism known as categorization (or classification) is the same thing as analogy-making.

In Douglas Hofstadter’s latest book, “Surfaces and Essences: Analogies as the Fuel and Fire of Thinking,” he makes his arguments endlessly and repetitively, using analogies about analogies. It is an extremely long and laborious read that, unfortunately, I have to recommend to anyone exploring the nature of thinking. It’s one of those books that you have to treat like a meditation. If you’ve ever done raja yoga, transcendental meditation, or prayed the rosary, you’ll understand the excruciating difficulty of having something repeated over and over again for 500-plus pages. In the end, Hofstadter is absolutely right: thought is about analogy-making.

What, then, are analogies? Analogies are relationships between concepts. Plain and simple, there’s just no way to do any thinking without at least three concepts: two concepts plus their relationship (which is itself a concept). Deep Learning networks deal only with single concepts; to make any serious progress, you need to at least deal with relationships. That’s why graph networks, as DeepMind has chosen to christen them, are extremely important.

DeepMind has been gracious enough to release (October 2018) its Graph Networks framework as open source. We are thus no longer relegated to the sidelines of this unfolding development; rather, we can become full participants in creating the next innovation (Level 3) required for Artificial General Intelligence. To achieve artificial ingenuity, you have to at least understand how concepts relate to one another.
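Here is a sketch of what basic usage looks like, following the library’s README at release (https://github.com/deepmind/graph_nets). The values in data_dict are toy numbers of my own, and the API may have evolved since, so treat this as illustrative rather than definitive.

```python
import graph_nets as gn
import sonnet as snt

# A graph is specified as a dict of global, per-node, and per-edge
# features, plus sender/receiver indices that define the edges.
data_dict = {
    "globals": [0.0],
    "nodes": [[1.0], [2.0], [3.0]],   # 3 nodes, 1 feature each
    "edges": [[10.0], [20.0]],        # 2 edges, 1 feature each
    "senders": [0, 1],                # edge k goes senders[k] -> receivers[k]
    "receivers": [1, 2],
}
input_graphs = gn.utils_tf.data_dicts_to_graphs_tuple([data_dict])

# A full GN block with learned edge, node, and global update functions.
graph_net_module = gn.modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([32, 32]),
    node_model_fn=lambda: snt.nets.MLP([32, 32]),
    global_model_fn=lambda: snt.nets.MLP([32, 32]))

# The output is a graph with the same structure but updated features.
output_graphs = graph_net_module(input_graphs)
```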

Further Reading

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution


Exploit Deep Learning: The Deep Learning AI Playbook
