This Week in AI, January 11th 2018

GANs in the New York Times, a new version of TensorFlow, projections for the future of AI, and more!

Mat Leonard
Udacity Inc
3 min read · Jan 11, 2018


GANs in the Wild

Generative Adversarial Networks (GANs) were recently highlighted in the New York Times.

This type of architecture has made amazing progress in generating realistic but completely synthetic data such as images. Work like this is a huge step towards artificial general intelligence.

A large part of human intelligence is being able to accurately represent the past, present, and future in our minds.

What makes humans exceptional is our ability to imagine, to generate completely synthetic data. We work through math in our heads, visualize the design of websites and machinery, predict how customers will use our products, and consider how our decisions will affect the people around us.

Through our internal representation of the universe — our memories, awareness, and imagination — we have connected the world through the Internet, sent humans to the Moon, and measured gravitational waves from colliding black holes.

Artificial general intelligence will need this kind of internal representation and the ability to generate new possibilities. GANs are one of our best attempts at this so far: the network learns an internal representation from the images it is given, then produces new images from that representation, in effect an imagination. There is still a lot of work to be done, and GANs might not be the final answer, but we're getting closer to machines that understand the world.
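To make the adversarial setup concrete, here is a minimal from-scratch sketch in NumPy. All names, distributions, and hyperparameters are mine, not from the article: a tiny generator learns to imitate one-dimensional data while a logistic discriminator learns to tell its samples from the real thing.

```python
import numpy as np

# Minimal 1-D GAN sketch: a generator maps noise to samples, and a
# discriminator learns to tell real data from generated data. Everything
# here (shapes, learning rate, data distribution) is illustrative.

rng = np.random.default_rng(42)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# Real data: samples from N(3, 0.5) that the generator must learn to imitate.
def real_batch(n):
    return rng.normal(3.0, 0.5, size=n)

# Generator G(z) = mu + sigma * z, with learnable mu and log(sigma).
mu, log_sigma = 0.0, 0.0
# Discriminator D(x) = sigmoid(w * x + b), a logistic classifier.
w, b = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.standard_normal(batch)
    x_fake = mu + np.exp(log_sigma) * z
    x_real = real_batch(batch)

    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0. ---
    p_real = sigmoid(w * x_real + b)
    p_fake = sigmoid(w * x_fake + b)
    ds_real = p_real - 1.0            # dLoss/dlogit for label 1
    ds_fake = p_fake                  # dLoss/dlogit for label 0
    w -= lr * (np.mean(ds_real * x_real) + np.mean(ds_fake * x_fake))
    b -= lr * (np.mean(ds_real) + np.mean(ds_fake))

    # --- Generator update: push D(fake) -> 1 (non-saturating loss). ---
    p_fake = sigmoid(w * x_fake + b)
    ds = (p_fake - 1.0) * w           # backprop through the discriminator
    mu -= lr * np.mean(ds)
    log_sigma -= lr * np.mean(ds * np.exp(log_sigma) * z)

print(round(mu, 2))  # drifts from 0.0 toward the data mean of 3.0
```

The two updates pull against each other: the discriminator sharpens its real-vs-fake boundary, and the generator shifts its output distribution to cross it, which is what drags the generated samples toward the real data.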

TensorFlow 1.5

A new version of TensorFlow was released with some interesting improvements.

First up is eager execution, available in this release as a preview, which lets you see the results of operations immediately as you write the code. TensorFlow has a reputation for being difficult to develop with because of its build-the-graph-first, run-data-through-it-later workflow. If you've used PyTorch before, this style of execution will be familiar. Adding eager execution is a good move for TensorFlow and should cut development time with the framework.
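The difference between the two styles can be shown in miniature with plain Python standing in for TensorFlow — the class and method names below mimic the TF 1.x workflow but are not real TensorFlow APIs:

```python
# Toy contrast between graph-style (deferred) and eager execution.
# `Placeholder` and `Mul` are stand-ins for TF 1.x graph ops; nothing
# here uses TensorFlow itself.

class Placeholder:
    """An input slot whose value is supplied only at run time."""
    def __init__(self, name):
        self.name = name
    def eval(self, feed):
        return feed[self.name]

class Mul:
    """A deferred multiply node; building it computes nothing."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def eval(self, feed):
        return self.x.eval(feed) * self.y.eval(feed)

# Graph mode: describe the computation first...
x = Placeholder("x")
y = Mul(x, Placeholder("w"))
# ...then run data through it, TF-1.x "session" style. Until this call
# there is no value to print or inspect, which is what makes debugging hard.
result = y.eval({"x": 3.0, "w": 4.0})   # 12.0

# Eager mode: no separate build/run split; every line yields a concrete
# value you can print or step through immediately, as in PyTorch.
eager_result = 3.0 * 4.0                # 12.0
```

The graph style pays off when you want to optimize or deploy the whole computation at once, but for day-to-day development the immediate feedback of eager mode is much friendlier.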

Second, there are now pre-built binaries with CUDA 9 and AVX support, which means we should start seeing better runtime performance for some networks on GPUs and CPUs.

A look into where AI is and where it’s going

Some interesting points:

  • We’re seeing the most gains from hardware. Hardware built specifically for deep learning networks will be in phones in 1–2 years. This moves AI systems such as Siri out of the cloud and onto local devices, greatly increasing the usefulness of these applications.
  • Recurrent networks will be supplanted by attention networks. In many cases convolutional networks also work better than RNNs on NLP problems. RNNs will likely fall out of favor for most situations within the next couple of years. As a side note, TensorFlow is notoriously bad at attention networks.
  • Deep learning networks today do very well at very constrained tasks. They can’t learn continuously, can’t transfer learning between domains, and for the most part can’t interact with the environment. We also need unsupervised and/or semi-supervised learning; we can’t keep feeding networks labeled data. For true artificial intelligence, neural networks will need to learn from the environment and interact with it.
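The attention mechanism mentioned above is, at its core, just a weighted average of values where the weights come from query–key similarity, with no recurrence over time steps. Here is a minimal NumPy sketch of scaled dot-product attention (function and variable names are mine):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention.

    Q: (n_queries, d)   K: (n_keys, d)   V: (n_keys, d_v)
    Every query attends to every key in one matrix multiply -- no
    recurrence, which is why attention layers parallelize so much
    better than RNNs.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted average of the values

# Tiny usage example: 2 queries attending over 3 keys/values of dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)   # (2, 4)
```

Because the whole computation is a couple of matrix multiplies, it maps cleanly onto GPUs, unlike an RNN's step-by-step loop over the sequence.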

Stay tuned for new updates as we continue to review all that’s new in the world of AI! And if you’re interested in mastering these transformational skills, and building a rewarding career in this amazing space, consider one of our Nanodegree programs:

Mat Leonard

Teaching all things machine learning and AI at Udacity. Loves cats. @MatDrinksTea on Twitter