DeepMind et al. Paper Trumpets Graph Networks

Synced · Published in SyncedReview · Jun 15, 2018

The paper Relational inductive biases, deep learning, and graph networks, published last week on arXiv by researchers from DeepMind, Google Brain, MIT, and the University of Edinburgh, has stimulated discussion in the artificial intelligence community. The paper introduces a new machine learning framework called Graph Networks, which some believe holds huge potential for approaching the holy grail of artificial general intelligence.

Thanks to the development of big data and increasingly powerful computational resources over the past few years, modern AI technology — primarily deep learning — has shown its prowess and even outsmarted humans in tasks such as image recognition and speech recognition. However, AI remains challenged by tasks that involve complex learning and reasoning with limited experience and knowledge, which is exactly what humans are good at. Although “a word to the wise is sufficient,” machines require much more.

The paper argues that Graph Networks can effectively support two critical human-like capabilities: relational reasoning, i.e. drawing logical conclusions about how different objects and entities relate to one another; and combinatorial generalization, i.e. constructing new inferences, predictions, and behaviors from known building blocks.

Graph Networks generalize and extend the various types of neural networks that perform computations on graphs, and they implement a relational inductive bias: a built-in capacity for reasoning about relations between objects.

The GN framework is built from Graph Network blocks, also referred to as “graph-to-graph” modules. Each graph’s features are represented in three forms: nodes as entities, edges as relations, and global attributes as system-level properties.

A Graph Network block takes a graph as input, performs computations that update first the edge attributes, then the node attributes, and finally the global attributes, and produces a new graph as output.
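To make this three-step, graph-to-graph computation concrete, below is a minimal NumPy sketch of one GN block. It follows the paper’s structure of per-edge, per-node, and global update functions with summation as the aggregation, but the names and signatures here (gn_block, phi_e, phi_v, phi_u) are illustrative assumptions for this article, not the API of DeepMind’s implementation.

```python
import numpy as np

def gn_block(nodes, edges, senders, receivers, globals_, phi_e, phi_v, phi_u):
    """One Graph Network ("graph-to-graph") block.

    nodes:      (n_nodes, node_dim) array of node attributes
    edges:      (n_edges, edge_dim) array of edge attributes
    senders:    (n_edges,) int array; senders[k] is edge k's source node
    receivers:  (n_edges,) int array; receivers[k] is edge k's target node
    globals_:   (global_dim,) array of graph-level attributes
    phi_e, phi_v, phi_u: per-edge, per-node, and global update functions
    """
    # Step 1: edge update. Each edge sees its own attribute, its two
    # endpoint nodes, and the global attribute.
    new_edges = np.stack([
        phi_e(edges[k], nodes[senders[k]], nodes[receivers[k]], globals_)
        for k in range(len(edges))
    ])

    # Step 2: node update. Each node aggregates (here, by summing) the
    # updated edges that point at it, then updates its own attribute.
    new_nodes = []
    for i in range(len(nodes)):
        incoming = new_edges[receivers == i]
        agg = incoming.sum(axis=0) if len(incoming) else np.zeros(new_edges.shape[1])
        new_nodes.append(phi_v(agg, nodes[i], globals_))
    new_nodes = np.stack(new_nodes)

    # Step 3: global update. Aggregate all updated edges and nodes, then
    # update the graph-level attribute.
    new_globals = phi_u(new_edges.sum(axis=0), new_nodes.sum(axis=0), globals_)
    return new_nodes, new_edges, new_globals


# Toy graph: 3 nodes, 2 directed edges (0 -> 1, 1 -> 2), 1-D attributes.
nodes = np.array([[1.0], [2.0], [3.0]])
edges = np.array([[0.1], [0.2]])
senders, receivers = np.array([0, 1]), np.array([1, 2])
u = np.array([0.0])

# Placeholder updates; in the paper these are typically neural networks.
phi_e = lambda e, v_s, v_r, u: e + v_s + v_r
phi_v = lambda agg_e, v, u: v + agg_e
phi_u = lambda agg_e, agg_v, u: u + agg_e + agg_v

print(gn_block(nodes, edges, senders, receivers, u, phi_e, phi_v, phi_u))
```

In the paper, the update functions are typically small neural networks, and the summation can be swapped for any permutation-invariant aggregation such as a mean or max; composing several such blocks yields multi-step relational message passing over the graph.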

The 38-page paper has been favorably received by many AI researchers, who praised the authors’ efforts. Christopher Gray, founder of AI chip unicorn Graphcore, tweeted that “this paper…will kickstart what seems to be a far more fruitful basis for AI than DL alone.” Oriol Vinyals, a renowned research scientist at DeepMind, praised the paper as “a pretty comprehensive review.”

Meanwhile, some questioned whether Graph Networks will live up to the hype. Because this is a review paper, it does not offer convincing experimental results, and Graph Networks remain for now an early-stage research direction that requires further empirical validation.

The Graph Network concept draws on ideas not only from AI research, but also from computer science and cognitive science. The paper emphasizes that “just as biology does not choose between nature versus nurture — it uses nature and nurture jointly, to build wholes which are greater than the sums of their parts — we, too, reject the notion that structure and flexibility are somehow at odds or incompatible, and embrace both with the aim of reaping their complementary strengths.”

* * *

Journalist: Tony Peng | Editor: Michael Sarazen

* * *

Follow us on Twitter @Synced_Global for more AI updates!

* * *

Subscribe here to get insightful tech news, reviews and analysis!

* * *

Synced and TalkingData will jointly hold DTalk Episode One: Deploying AI in Mobile-First Customer-facing Financial Products: A Tale of Two Cycles. Jike Chong will share his ideas on employing AI techniques in FinTech business models. Scan the QR code to register! See you on June 21st in Silicon Valley.
