A.I. Researchers Synthesized Fake Obama

Artur Kiulian
Published in Algorology
Jul 14, 2017

Algorology Digest #10: AI teaches itself parkour, New Funds, Giant Icebergs

Researchers at the University of Washington have achieved astounding results in synthesizing fake video footage. Their approach builds on earlier talking-head techniques like Face2Face, though Face2Face produces far worse visual results because it transfers the mouth from another video sequence instead of generating it within the given footage. The results are quite impressive; you can dive into the actual research paper here.

Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.
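At its core, the pipeline is a sequence model that regresses mouth-shape coefficients from audio features frame by frame. Here is a minimal sketch of that idea; the feature sizes, layer sizes, and the use of PyTorch are my own illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of the audio-to-mouth-shape idea: a recurrent network
# maps a sequence of audio features to a sequence of mouth-shape
# coefficients. All sizes below are illustrative assumptions.
AUDIO_FEATURES = 28   # e.g. MFCC-style features per audio frame (assumption)
MOUTH_COEFFS = 18     # e.g. PCA coefficients of lip landmarks (assumption)

class AudioToMouth(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        # LSTM consumes the audio feature sequence frame by frame
        self.rnn = nn.LSTM(AUDIO_FEATURES, hidden_size, batch_first=True)
        # Linear head regresses mouth-shape coefficients per time step
        self.head = nn.Linear(hidden_size, MOUTH_COEFFS)

    def forward(self, audio_seq):
        # audio_seq: (batch, time, AUDIO_FEATURES)
        hidden_seq, _ = self.rnn(audio_seq)
        return self.head(hidden_seq)  # (batch, time, MOUTH_COEFFS)

model = AudioToMouth()
dummy_audio = torch.randn(1, 200, AUDIO_FEATURES)  # ~200 audio frames
mouth_shapes = model(dummy_audio)
print(mouth_shapes.shape)  # torch.Size([1, 200, 18])
```

In the actual system, the predicted mouth shapes then drive the mouth-texture synthesis and 3D pose-matched compositing described above, which is where most of the photorealism comes from.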

DeepMind’s AI is Teaching Itself Parkour

The video above shows one of many attempts to induce desired behavior in a software agent through reinforcement learning. Everything the stick figure does in this video is self-taught: the jumping, the limboing, the leaping are all behaviors the computer devised on its own as the best way to reach the final destination.

That also explains why it looks so weird: the agent has none of the internal guidance about posture that we humans have. All it has are its sensors and an incentivizing reward that guides it toward a certain destination in virtual space.

This research paper shows that reinforcement learning can be quite efficient at teaching complex movements in environments the agent is not familiar with.
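To make the "sensors plus reward" point concrete, here is a minimal REINFORCE-style sketch of reward-driven learning on a toy 1-D "reach the target" task. The real DeepMind work trains far richer simulated bodies with a distributed policy-gradient method; everything below (the toy environment, network sizes, hyperparameters) is an illustrative assumption.

```python
import torch
import torch.nn as nn

# Toy illustration of reward-driven learning: the agent only observes its
# position and a reward, and must discover on its own how to reach a target.

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny network: position in, logits over two actions out
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

policy = Policy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
TARGET = 5.0

for episode in range(200):
    position, log_probs, rewards = 0.0, [], []
    for _ in range(20):                          # 20 steps per episode
        obs = torch.tensor([[position]])
        dist = policy(obs)
        action = dist.sample()                   # 0 = step left, 1 = step right
        log_probs.append(dist.log_prob(action))
        position += 1.0 if action.item() == 1 else -1.0
        rewards.append(-abs(TARGET - position))  # closer to target = better
    # REINFORCE update: make actions from high-reward episodes more likely
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    loss = -(torch.cat(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The agent is never told what a good gait or posture looks like; it only discovers, episode by episode, which actions earn more reward.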

Google Launches AI Venture Fund

Google parent company Alphabet just launched a new venture arm to support artificial intelligence innovation.

“If we’re really going to help AI happen faster, we needed to be more involved in the community,” said Patterson, who’s spent about a decade at Google working on Android, search, advertising and AI. “That’s why we decided to do this — to spur innovation in the AI space.”

The idea behind the venture fund is not only to provide funding (which the recent hype has already made abundant) but also to support startups with internal resources.

These resources are not limited to AI training and advising; the fund also proposes actual talent rotation across its portfolio. Google engineers may take productive breaks to work for one of the portfolio companies, or even several at the same time.

Can Machine Learning Help Those Giant Floating Icebergs?

Microsoft just announced its AI for Earth program during an event today in London. The program will assist organizations using AI for environmental protection, innovation and research, particularly those addressing issues in water conservation, agriculture, biodiversity and climate change.

Along with the announcement, Microsoft is investing $2 million in the program, which will manifest as research grants enabling access to its cloud and AI tools, as well as technical training on its various platforms.

It's not clear what to do about the giant trillion-ton iceberg that recently splintered off western Antarctica and is now floating at sea, but AI for Earth is an amazing initiative that researchers can benefit from right away. I'm sure researchers could have predicted an event of such significance in time if they had had the appropriate resources.

Originally published through the Algorology email newsletter. If you would like to receive it before anyone else, sign up below.
