Machine understanding of the visual arts (credit: Author).

What was AI in 2017?

Joseph Reisinger
Jan 18, 2018 · 5 min read

When I started grad school in 2005, AI was still a taboo word, an anachronism used only by centuries-old emeriti and some of the more colorful cranks hanging around the edges of the academic system (the proper nomenclature was “machine learning”). Folks on the admissions committee told me in no uncertain terms that Systems was much more important — UT was on a hiring tear in Systems at the time — and that if I wanted to be competitive on the job market I should study Systems and stay away from AI.


Fast forward to 2018: just last month, Jeff Dean and collaborators at Google Brain released mind-blowing work on learned index structures — that is, machine learning replacing core heuristics used in computer systems design. The tables have turned, and now AI is eating software.
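The core trick behind learned indexes can be sketched in a few lines: fit a cheap model from key to position over sorted data, record its worst-case error, and then replace a full binary search with a search over just the error window. The toy below uses a single linear fit (the paper explores richer recursive model hierarchies); all names and parameters here are illustrative.

```python
import numpy as np

# Toy learned index: predict a key's position in a sorted array with a
# linear model, then binary-search only within the model's worst-case
# error window. (A simplified sketch of the idea, not the paper's
# actual recursive model architecture.)
keys = np.sort(np.random.default_rng(0).integers(0, 10**6, size=10_000))
positions = np.arange(len(keys))

slope, intercept = np.polyfit(keys, positions, deg=1)  # key -> position
pred = np.clip(np.round(slope * keys + intercept), 0, len(keys) - 1)
max_err = int(np.max(np.abs(pred - positions)))        # worst-case miss

def lookup(key):
    """Return an index of `key` in `keys`, or -1 if absent."""
    guess = int(np.clip(round(slope * key + intercept), 0, len(keys) - 1))
    lo, hi = max(0, guess - max_err), min(len(keys), guess + max_err + 1)
    i = lo + int(np.searchsorted(keys[lo:hi], key))
    return int(i) if i < len(keys) and keys[i] == key else -1
```

For roughly uniform keys the error window is on the order of √n entries, so each lookup touches a small slice of the array rather than all of it — the "heuristic" (binary search) survives only as a fallback inside the model's error bound.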

The set of algorithms and technologies that we colloquially call “AI” is rapidly changing what is possible with software. This is already profoundly affecting people’s lives in unforeseen ways as such algorithms become incorporated into core social institutions like healthcare, criminal justice, and education (as Kate Crawford notes in her NIPS 2017 keynote). Looking forward, as AI technology continues to advance, it is clear that it will leave large swaths of people, organizations, and institutions behind (Erik Brynjolfsson, “The Second Machine Age”). This has led to blunt calls for regulation of AI by Elon Musk, MIRI, and others.

Last year, Ruchi Sanghvi (founder of South Park Commons) and I wanted to better understand how the specific technologies driving the AI revolution work, and precisely what changes might be coming. In fall 2017, we produced a technical speaker series charting some of its breadth and impact. The series sought to answer three questions:

  • What fields and disciplines will be the most critically impacted in the next 5–10 years?
  • What are the specific technical advances that will drive this transformation?
  • What ethical challenges and risk factors must be addressed?

The 2017 AI Series at South Park Commons


Below are a few highlights from what we learned.

The second golden age of computer architecture

Dave Patterson believes that we are at the beginning of a second “golden age of computer architecture”, with a Cambrian explosion of new and old companies focused on making matrix multiplication primitives incredibly fast. Hardware titans like NVIDIA and Intel, new entrants like Google’s TPU, and smaller startups like Cerebras are driving investment in a variety of new hardware models: GPUs, ASICs, TPUs, and more. Hardware capabilities are rapidly co-evolving with software needs, and it’s uncertain which architecture(s) will ultimately win out.


AI is eating software

With the rise of machine learning frameworks, the clean abstractions and modular design patterns inherent to the practice of software engineering are being replaced by high-dimensional floating-point tensors and efficient matrix multiplication. As this trend continues, it will necessitate entirely new engineering paradigms.

At SPC, we welcomed Zak Stone, the PM for TensorFlow and Cloud TPUs at Google Brain, to give a talk where he wove together several challenging threads shaping future machine learning progress, including the rise of scalable frameworks for differentiable programming such as TensorFlow, and the race to build cross-platform “linear algebra” compilers to keep up with the proliferation of new hardware (read more here).
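To make “differentiable programming” concrete, here is a minimal sketch of the idea stripped down to NumPy: a program whose logic lives in floating-point tensors composed by matrix multiplications, whose behavior is set by gradient descent on a loss rather than by hand-written rules. The backward pass is derived by hand here — automating exactly that step is what TensorFlow-style frameworks provide. The architecture and hyperparameters below are arbitrary choices for illustration.

```python
import numpy as np

# A two-layer network as a tiny "differentiable program": just tensors
# and matmuls, trained by gradient descent instead of hand-coded rules.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 1))
y = np.abs(X)                            # a kink no linear model can fit

W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)     # relu(X W1 + b1)
    return h, h @ W2 + b2

lr, n = 0.1, len(X)
for _ in range(2000):
    h, out = forward(X)
    err = out - y                        # d(loss)/d(out) for loss = MSE/2
    gW2, gb2 = h.T @ err / n, err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)          # chain rule through the relu
    gW1, gb1 = X.T @ dh / n, dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((forward(X)[1] - y) ** 2))
```

Notice there is no `if`/`else` logic anywhere: the “program” that computes |x| is recovered entirely from data, which is precisely the abstraction shift the paragraph above describes.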

Unlocking human creativity

Finally, David Ha (@hardmaru) and Adam Roberts joined us to discuss how AI is making strides in augmenting human creative work, highlighting the Google Magenta project’s efforts to build ML-powered creative tools for artists and musicians. They covered recent advances in the space, including vector drawing generation with Sketch-RNN and music generation with the Magenta platform, which won Best Demo for the second year in a row at NIPS (check out their in-browser demo here).

AI has the potential to unlock true creative collaboration, “understanding” the visual or musical world not just in terms of pixels or notes, but in terms of the underlying concepts and themes. That is, AI can be seen as a bridge between the powerful, but rote automation inherent to classical computing, and the softer, more intuitive world of human concept understanding.

What’s next for AI?

This year saw tremendous advances in AI, but as Josh Tenenbaum put it in his CCN talk this year: “All of these ‘AI’ systems we see, none of them is ‘real’ AI”. That is, we’re able to interpolate pretty well between known examples, but we’re still nowhere close to having algorithms that can learn novel concepts from scratch. Furthermore, our algorithms learn nowhere near as efficiently as animals or humans (for example, requiring hundreds of thousands or millions of examples of cats in order to learn to classify cats).

I’m personally excited by research in transfer learning and one-shot concept learning that aims to reduce the amount of training data required. Not only will this make our models more efficient, it will effectively democratize model training — you’ll no longer need the data and infrastructure scale of a Google or Facebook to learn effectively (Yann LeCun recently addressed this).
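One way to picture this (my illustration, in the spirit of prototypical-network-style few-shot methods, not an example from the talks): reuse a pretrained embedding and classify a new example by its nearest class “prototype” — the mean embedding of a handful of labeled examples per class. The random projection below is a stand-in for the pretrained feature extractor; every name and number is illustrative.

```python
import numpy as np

# Few-shot classification by nearest class prototype: no gradient
# training on the new classes at all, just five labeled "support"
# examples per class. The random projection stands in for a pretrained
# feature extractor (the component transfer learning reuses).
rng = np.random.default_rng(1)
proj = rng.normal(size=(64, 16))         # stand-in "pretrained" embedding

def embed(x):
    z = x @ proj
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Two synthetic classes: tight clusters around random 64-d centers.
centers = rng.normal(size=(2, 64))
support = np.stack([c + 0.1 * rng.normal(size=(5, 64)) for c in centers])
query = centers[1] + 0.1 * rng.normal(size=64)   # an unseen class-1 item

prototypes = embed(support).mean(axis=1)         # one mean vector per class
prototypes /= np.linalg.norm(prototypes, axis=-1, keepdims=True)
pred = int(np.argmax(embed(query) @ prototypes.T))  # cosine similarity
```

The data-hungry part (learning the embedding) is done once and amortized; adapting to new concepts then takes only a handful of examples — which is the efficiency gap with human learning that the paragraph above describes.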

Looking forward to 2018

In 2018, we’re excited to continue the series, homing in on two core themes:

  • The transformative effects AI will have on digital media, and
  • The ethical challenges of this and other new developments in AI.

Note that these two areas have non-trivial overlap! For example, AI coupled with the rendering and special-effects pipelines of Pixar or ILM has profound implications for what constitutes evidence in criminal cases. There is incredible potential for abuse in systems that can, e.g., synthesize photo-realistic video of Obama synced to any audio clip. What will the next 5 to 10 years look like as these technologies mature? How will our existing institutions have to change to cope?

To stay in the loop on the discussion in 2018, join the South Park Commons email newsletter.
