Calls for research

David Mack
Octavian
Mar 30, 2019 · 4 min read

During our work at Octavian, we’ve come across a range of problems that we’d love to see more work focussed on. We’re sharing this list here to hopefully help direct those looking for interesting problems.

We’re happy to help support people working on these problems, as well as publicize your solutions through our Medium and Twitter channels.

  • QA Reasoning on Knowledge Graphs: This is a topic close to home for us. We believe that an RNN-based approach incorporating graph attention and convolutions could be very powerful, and we have been solving our own dataset challenge to prove this out. A good place to start is MacGraph, our general solution to the CLEVR-Graph dataset.
  • The algorithmic capabilities of Transformers: The Transformer architecture has rapidly become the de facto sequence-processing network. Whilst it is incredibly powerful at mining sequence correlations, and looks to have plenty more to gain from scaling up, we believe that Transformers are neither efficient nor sufficient for many types of reasoning/algorithmic operations that AGI requires. It is likely they’ll need to be augmented with other facilities. Prove this by generating translation pairs of formulas/algorithms and their outputs, training a Transformer translation model on them, and seeing where the translations fail (see the dataset-generation sketch after this list).
  • How to write to a graph: Using attention is an easy way to read from a graph in a neural network; how to write to one is much less obvious. Devise a network (either supervised or reinforcement-learning based) that both reads from and writes to a graph during its execution to achieve something useful. For example, take one of the solved MacGraph tasks and make a version where some of the important graph facts are fed in as input separate from the graph (there is a read/write sketch after this list).
  • A better quality score for GANs: The most popular metrics for GANs are based on using a trained Inception network to help measure how well formed the generated images are. This has a major shortcoming: it’s only as powerful as the Inception network, and convolutional networks seem to rely mostly on local textures. Current state-of-the-art GANs coordinate detail badly (see the photo in the Appendix), and our current quality metrics are unlikely to detect this. This research has two parts: (1) prove the deficiency in current quality metrics, (2) propose a better metric (see the patch-shuffle sketch after this list for one way to probe part 1).
  • Automatic curricula: Algorithm-learning networks generally need learning curricula (e.g. initially easier training examples that gradually get harder as the network trains) to learn their tasks. These curricula are often created by hand. Devise a scheme to automatically deploy a curriculum (e.g. to a DNC learning sorting). For inspiration, Generative Adversarial Networks use a second network that learns alongside the primary network, providing a curriculum of sorts. Furthermore, a network’s loss could be a good signal of when the training set can be made harder (see the scheduler sketch after this list).
  • Language as a control structure for reasoning: The voice inside our heads is a crucial part of how we reason and achieve our goals. Can you build a Transformer model that takes a question, talks to itself for a while, then provides the answer?
  • Internet-scale GAN training: OpenAI have shown that crawling the internet to build a massive training dataset, then training a Transformer model on it, can return valuable results. Do the same for GANs. This has a few notable challenges: (1) the computational resources to train a model big enough to exploit the dataset, and (2) whether you can produce a helpful conditioning signal for the GAN from the images’ context.
  • Sentence to Knowledge Graph auto-encoder: Graphs are a great way to represent related concepts, and they open up many ways to traverse and extract knowledge. Text, meanwhile, has been used to represent much of human knowledge. Build some form of auto-encoder structure that takes a set of input sentences, transforms them into a graph, then transforms that graph into a set of sentences that has some level of equivalence to the input set. Note that the interim graph need not be in a format humans can understand (just as word, image and sentence embeddings usually aren’t).
  • Growing architecture during training: Progressive GANs showed that adding neurons to a network over time whilst it trains can be successful. This training approach has the advantage that the earlier, smaller networks are faster to train since they have fewer parameters. It also opens up the possibility of adding just enough neurons to reach the desired performance, and it even allows for split-testing different architectures in parallel (like Population Based Training). Try adopting a progressive growing approach in a different setting (feedforward, sequential, image classification) and see if it works (see the layer-growing sketch after this list).
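
For the Transformer item, here is a minimal dataset-generation sketch. The operators, operand range and nesting depth are arbitrary choices of ours, not part of any existing benchmark; the point is simply to produce formula-to-output “translation” pairs so you can train on shallow expressions and test on deeper ones to see where generalisation breaks.

```python
import random

OPS = ["+", "-", "*"]

def random_expression(depth):
    """Build a random arithmetic expression string of the given nesting depth."""
    if depth == 0:
        return str(random.randint(0, 9))
    left = random_expression(depth - 1)
    right = random_expression(depth - 1)
    return f"({left} {random.choice(OPS)} {right})"

def generate_pairs(n, depth):
    """Yield (source, target) translation pairs: an expression and its evaluated value."""
    for _ in range(n):
        expr = random_expression(depth)
        yield expr, str(eval(expr))  # eval is safe here: we built the string ourselves

if __name__ == "__main__":
    # Train a Transformer on shallow expressions, then evaluate on deeper ones
    # to find the depth at which the learned "translation" stops working.
    for src, tgt in generate_pairs(5, depth=2):
        print(src, "->", tgt)
```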
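
For the graph read/write item, this is a minimal NumPy sketch of the kind of mechanism we have in mind; the gating scheme, dimensions and the reuse of the read attention for the write are illustrative assumptions, not a prescription. Attention over node states produces a read vector, and the same attention distribution gates a write back into those states.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attention_read(nodes, query):
    """Read from a graph: weight node states by their similarity to a query vector."""
    scores = nodes @ query                 # (num_nodes,)
    weights = softmax(scores)
    return weights @ nodes, weights        # read vector, attention distribution

def attention_write(nodes, weights, value, gate=0.5):
    """Write to a graph: blend a value into each node in proportion to its attention weight."""
    update = np.outer(weights, value)      # per-node contribution of the written value
    return (1 - gate * weights[:, None]) * nodes + gate * update

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nodes = rng.normal(size=(5, 8))        # 5 node states of width 8
    query = rng.normal(size=8)
    read, weights = attention_read(nodes, query)
    nodes = attention_write(nodes, weights, value=read * 2.0)
    print(read.shape, nodes.shape)
```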
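
For the GAN quality-score item, one way to probe part (1): shuffle the patches of an image so its global structure is destroyed while its local textures are preserved, then check how much your metric of choice changes. A minimal NumPy sketch, where the patch size and grid handling are our assumptions:

```python
import numpy as np

def shuffle_patches(image, patch=32, seed=0):
    """Destroy global structure but keep local texture by permuting square patches.

    `image` is an (H, W, C) array whose height and width are multiples of `patch`.
    """
    h, w, c = image.shape
    gh, gw = h // patch, w // patch
    patches = (image
               .reshape(gh, patch, gw, patch, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(gh * gw, patch, patch, c))
    rng = np.random.default_rng(seed)
    patches = patches[rng.permutation(len(patches))]
    return (patches
            .reshape(gh, gw, patch, patch, c)
            .transpose(0, 2, 1, 3, 4)
            .reshape(h, w, c))

if __name__ == "__main__":
    img = np.random.rand(128, 128, 3)
    scrambled = shuffle_patches(img)
    # Score both `img` and `scrambled` with an Inception-based metric;
    # a score that barely moves suggests the metric is texture-bound.
    print(scrambled.shape)
```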
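
For the automatic-curricula item, a minimal sketch of the loss-driven version of the idea; the threshold, smoothing factor and step size are assumptions. Difficulty is raised whenever the smoothed training loss drops below a target.

```python
class LossDrivenCurriculum:
    """Raise task difficulty whenever the smoothed training loss falls below a threshold."""

    def __init__(self, start_difficulty=1, max_difficulty=10,
                 loss_threshold=0.1, smoothing=0.9):
        self.difficulty = start_difficulty
        self.max_difficulty = max_difficulty
        self.loss_threshold = loss_threshold
        self.smoothing = smoothing
        self.smoothed_loss = None

    def update(self, loss):
        """Call once per training step with the current loss; returns the difficulty to use."""
        if self.smoothed_loss is None:
            self.smoothed_loss = loss
        else:
            self.smoothed_loss = (self.smoothing * self.smoothed_loss
                                  + (1 - self.smoothing) * loss)
        if (self.smoothed_loss < self.loss_threshold
                and self.difficulty < self.max_difficulty):
            self.difficulty += 1        # e.g. sort longer lists, evaluate deeper formulas
            self.smoothed_loss = None   # reset: the harder task will start off lossier
        return self.difficulty
```

Plugged into e.g. a DNC learning to sort, `difficulty` could control the length of the lists it is asked to sort.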
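
For the growing-architecture item, a minimal NumPy sketch of the core layer-growing operation. Initialising the new outgoing weights to zero (so the enlarged network initially computes exactly the same function) is one option we assume here; Progressive GANs instead fade new layers in gradually.

```python
import numpy as np

def grow_hidden_layer(w_in, w_out, extra_units=16):
    """Add `extra_units` neurons to a hidden layer without changing the network's output.

    w_in:  (input_dim, hidden_dim) weights into the layer.
    w_out: (hidden_dim, output_dim) weights out of the layer.
    New incoming weights are small and random; new outgoing weights are zero,
    so the grown network behaves identically until training updates them.
    """
    input_dim, _ = w_in.shape
    _, output_dim = w_out.shape
    new_in = np.random.normal(scale=0.01, size=(input_dim, extra_units))
    new_out = np.zeros((extra_units, output_dim))
    return (np.concatenate([w_in, new_in], axis=1),
            np.concatenate([w_out, new_out], axis=0))

if __name__ == "__main__":
    w1 = np.random.normal(size=(4, 8))
    w2 = np.random.normal(size=(8, 2))
    w1, w2 = grow_hidden_layer(w1, w2, extra_units=4)
    print(w1.shape, w2.shape)   # (4, 12) (12, 2)
```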

If you’re interested in these, come talk with our community!

Appendix

[Image] BigGAN struggles with texture coordination (generated using the Google BigGAN demo).
