NIPS 2016: Cake, Rocket AI, GANs and the Style Transfer Debate

Luba Elliott
Published in Intuition Machine · 7 min read · Dec 14, 2016

Or, if you put them all together, my experience of the NIPS conference can be summarised as the image below. Read along if you prefer words, details and creative applications of AI.

Thanks Prisma for the style transfer and my stepdad for the somewhat GANy cake

Mentioned in Yann LeCun’s invited talk on Monday, the cake became the meme of NIPS 2016, appearing in presentations throughout the conference and workshops. Turns out, machine learning researchers must really like cake, sometimes with extra cherries (oh RL!).

Source: Yann LeCun’s presentation slides
Source: attentive NIPS attendees active on Yann LeCun’s Facebook page

The real cherry on the NIPS cake was of course the launch of Rocket AI. I was there and I’m still not sure what they do, but: impressive team, a private residence in the embassy district and an on-call police force for when the parties get too rowdy (the team need their beauty sleep!). Yup, next year I’m betting on Temporally Recurrent Optimal Learning becoming the new GAN.

And now more seriously… This was my first NIPS and it was probably the best conference I’ve ever been to. So much inspiration, energy and positivity from the AI community amid the chaotic frenzy of invited talks from industry titans, hundreds of posters, artistic demos, company parties, workshops and satellite events for the almost 6,000 machine learning researchers assembled from all over the world. Read on for my NIPS week highlights.

Music knowledge extraction using machine learning seminar

Organised by Xavier Serra at Universitat Pompeu Fabra on the Sunday before NIPS, the seminar was a great music-focussed start to the week. I only managed to catch the final few presentations, but it was a delight to hear what was happening on the industry side, with presentations from Aäron van den Oord on the WaveNet audio generation model, Oriol Nieto on music recommendation models at Pandora and Colin Raffel on using the Lakh MIDI dataset.

Lines, grids and reflections at the CCIB conference venue

Women in Machine Learning Workshop 2016

In its 11th year, the WiML workshop welcomed 570 participants to its event on Monday, featuring invited speakers, contributed talks and mentorship roundtables on academic and career topics. I was invited to mentor on music applications and on building your professional brand; we had some fascinating discussions about separating professional and personal lives online, as well as about what our dream applications of AI in music would be (music tutors, music-to-image and improved recommendation systems were all mentioned).

Although WiML numbers doubled from the previous year, women still made up only 15% of the almost 6,000 attendees. Judging by the number of women-specific sponsor events hosted on the Sunday before the workshop, companies are clearly trying to address the gender balance. I do wonder if they’re going about it in the right way, but that is a topic for another article.

Real-life sea demo found just outside the conference centre

The demos

Monday, Tuesday and Wednesday evenings were reserved for posters, with 200 on display each day and only 3.5 hours to see them all. Alongside the posters, there were 10 demos on each of Tuesday and Wednesday, letting you play music, figure out your personality, generate text interactively and turn frowns into smiles. They brought the research to life and were great fun to play around with. Some favourites are listed below.

Memo Akten’s Real-time interactive sequence generation and control with Recurrent Neural Network ensembles: a system to gesturally ‘conduct’ the generation of text. What was particularly entertaining here was watching the generated text change as you moved your hand across the screen to increase the presence of the style of the Bible, Trump or the Linux source code.

Tom White’s Neural Puppet: How can we understand and use the structured latent space of generative networks? This uses a neural network to remove or add smiles to a photograph.

Magenta, the music generation project from Google Brain, with Interactive Musical Improvisation with Magenta. The short video shows Magenta playing the bassline while Sageev Oore plays over it. Winner of the best demo award.

Anh Nguyen et al.’s Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space. Like a never-ending film, generating image after image in response to a word prompt.

The GANs

Generative Adversarial Networks (GANs) were one of the hottest trends this year, with a dedicated tutorial from Ian Goodfellow on Monday, a workshop on Adversarial Training on Friday and pop-up appearances pretty much anywhere and everywhere else.

I’m still in awe of the artistic potential of these models. Below are some slides from Ian Goodfellow’s tutorial, with some results described as “problems” because of incorrect perspective, global structure, and counting. What is a bug in the quest for realistic image generation can be a feature in creative explorations. With the amount of interest in the field, I can’t wait to see how the technical models will develop and how they will be used creatively. Get in touch if you spot some cool creative uses of GANs.
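For anyone new to the model class: a GAN trains a generator to fool a discriminator in a minimax game, while the discriminator learns to tell real data from generated data. Below is a minimal sketch in PyTorch on a toy 1-D Gaussian; the architectures, hyperparameters and variable names are my own illustrative choices, not anything from the tutorial.

```python
import torch
import torch.nn as nn

# generator: noise vector -> sample; discriminator: sample -> real/fake logit
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # toy "data": N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # discriminator step: push real towards label 1, fake towards label 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: try to get fakes classified as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# the generated samples should drift towards the data mean of 3.0
print(G(torch.randn(1000, 8)).mean().item())
```

The “bugs as features” point above is about exactly this tension: the loss only rewards fooling the discriminator, so perspective, global structure and counting can all come out wrong, and wrong in visually interesting ways.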

Public engagement interlude

During Friday lunch, I was pleasantly surprised to find a workshop on public engagement. Titled ‘People and machines: Public views on machine learning, and what this means for machine learning researchers’, it featured presentations from Sabine Hauert, Zoubin Ghahramani and Katherine Gorman.

Sabine Hauert gave an overview of the public opinion insights gained from the survey conducted by the Royal Society earlier this year. It turns out that only 9% of people know the term ‘machine learning’, but 76% recognise NLP and 75% driverless cars as possible applications.

Meanwhile, Zoubin Ghahramani presented an overview of the main research and public opinion challenges facing the machine learning community.

Katherine Gorman of Talking Machines talked about storytelling best practices, presenting her ideas as ‘story algorithms’. She sure knows how to translate her thoughts into the appropriate format for a predominantly scientific audience (I had never heard the term ‘story algorithm’ during my arts education!).

Constructive Machine Learning Workshop

My Saturday was spent at the Constructive Machine Learning Workshop; ‘constructive machine learning’ was a term I was previously unfamiliar with. The workshop website explains it as:

Constructive machine learning describes a class of related machine learning problems where the ultimate goal of learning is not to find a good model of the data but instead to find one or more particular instances of the domain which are likely to exhibit desired properties. While traditional approaches choose these domain instances from a given set/databases of unlabeled domain instances, constructive machine learning is typically iterative and searches an infinite or exponentially large instance space.

The workshop accordingly covered everything from music and recipe generation to drug modelling. Here is a selection of the posters presented at the event, from Jasmina Smailovic, Tetsuro Kitahara and Memo Akten:

Ross Goodwin was one of the invited speakers, and I can’t help but include this definition of love generated by his lexiconjure bot, which was trained on the OED. Ross is also the ‘writer of writer’ (best job title ever?) for the science fiction short film Sunspring, which was screened at the workshop.

Style transfer is only cool to computer scientists?!

Ever since the Leon Gatys et al. paper A Neural Algorithm of Artistic Style in August 2015, so-called neural style transfer has been growing in popularity, and it exploded this summer with the launch of the mobile app Prisma. Here is Magenta’s version of live video style transfer, with the option of combining multiple styles and tweaking their individual strengths. Pretty cool, right?

Turns out, only if you’re a computer scientist (or journalist!). Simon Colton from Goldsmiths delivered a controversial talk on computational creativity, heavily criticising the style transfer experiments currently carried out by the computer science community for their excessive focus on so-called ‘pastiche generation’. He also suggested some additional uses of style transfer that would be more acceptable to the art world in his view, including style exploration and style invention. The discussion continued on Twitter.
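For those wondering what is under the hood of those pastiches: in the Gatys et al. formulation, ‘style’ is captured by Gram matrices of CNN feature maps, and multi-style blending of the kind shown in the Magenta demo amounts to a weighted sum of per-style losses. A rough sketch follows, with random tensors standing in for real VGG activations; the function names and weights are illustrative assumptions, not anyone’s actual code.

```python
import torch

def gram(f):
    # f: (channels, height*width) feature map from one CNN layer
    c, n = f.shape
    return (f @ f.t()) / (c * n)

def style_loss(gen_feats, style_feats):
    # match Gram matrices layer by layer: the Gatys et al. style term
    return sum(((gram(g) - gram(s)) ** 2).sum()
               for g, s in zip(gen_feats, style_feats))

def blended_style_loss(gen_feats, styles, weights):
    # several styles at once, each with a tweakable strength
    return sum(w * style_loss(gen_feats, s) for s, w in zip(styles, weights))

# toy usage: two layers of fake activations, two styles blended 70/30
feats = [torch.rand(64, 1024), torch.rand(128, 256)]
style_a = [torch.rand(64, 1024), torch.rand(128, 256)]
style_b = [torch.rand(64, 1024), torch.rand(128, 256)]
print(blended_style_loss(feats, [style_a, style_b], [0.7, 0.3]))
```

Minimising a loss like this (together with a content term) against the pixels of the output image is what produces the pastiches Colton objects to.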

With that, I sign off. What was your experience of NIPS? What are the most creative and unusual applications, papers and projects that you saw? Anything major I missed? Tweet me @elluba or comment below.
