AI News Roundup — August 2020

by Gabriella Runnels and Macon McLean

Opex Analytics
The Opex Analytics Blog
5 min read · Aug 31, 2020


The AI News Roundup provides you with our take on the coolest and most interesting Artificial Intelligence (AI) news and developments each month. Stay tuned and feel free to comment with any stories you think we missed!

_________________________________________________________________

Bayes-ic Understanding

Photo by Markus Winkler on Unsplash

Since the beginning of the pandemic, world governments and the general public alike have sought definitive answers to questions about the coronavirus. How infectious is it? How long should you quarantine after a potential exposure? Are masks effective? It can sometimes feel like the best available answers keep changing — masks were initially deemed ineffective, for example, but official CDC guidelines now recommend them. While it is understandable to search for concrete, unimpeachable facts, this way of thinking may not be that useful in times like these.

According to University of Florida biostatistics professor Natalie Dean, “We should be less focused on finding the single ‘truth’ and more focused on establishing a reasonable range.” As new research emerges, we need to update our prior beliefs and assumptions to refine our knowledge and work towards greater understanding. This kind of thinking is called “Bayesian reasoning” — a process that Harvard epidemiologist Marc Lipsitch describes as “a principled way to integrate what you previously thought with what you have learned and come to a conclusion that incorporates them both.” For more on a Bayesian approach to understanding the epidemic, check out this article from The New York Times (and for more intuition on Bayes’ rule in general, see this blog post from our 101 series).
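Bayes' rule itself is simple enough to sketch in a few lines of Python. The snippet below is purely illustrative — the numbers are invented, not drawn from any coronavirus study — but it shows the mechanic Lipsitch describes: each new piece of evidence turns yesterday's posterior into today's prior.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|not H)."""
    joint_h = p_e_given_h * prior
    joint_not_h = p_e_given_not_h * (1.0 - prior)
    return joint_h / (joint_h + joint_not_h)

# Made-up illustration: start only 1% confident in a hypothesis, then observe
# evidence that is 18x more likely if the hypothesis is true (0.9 vs 0.05).
belief = 0.01
belief = bayes_update(belief, 0.9, 0.05)   # jumps to roughly 0.154
# A second, independent study with the same likelihoods shifts it further.
belief = bayes_update(belief, 0.9, 0.05)   # roughly 0.766
```

Note that two studies took us from near-dismissal to strong belief, yet at no point did we declare a single "truth" — exactly the "reasonable range" mindset Dean advocates.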

Fake Faces, Ooh La La!

Photo by James Barr on Unsplash

The generative adversarial network (GAN) always seemed like a uniquely confusing but interesting idea — pit two networks against each other, with one creating fake images and the other attempting to distinguish them from real ones. Iron sharpens iron, and each improves at its task until one network is a master detective and the other a master deceiver.
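To make the adversarial setup concrete, here's a toy sketch of our own (not from any paper mentioned here): a one-parameter "generator" learns to mimic a 1-D Gaussian while a logistic-regression "discriminator" tries to tell its output from the real thing. Real GANs use deep networks and images, but the alternating training loop has exactly this shape.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

def real_batch(n):
    # "Real" data the generator must learn to mimic: samples from N(4, 0.5)
    return rng.normal(4.0, 0.5, n)

w_g, b_g = 1.0, 0.0   # generator: x_fake = w_g * z + b_g, noise z ~ N(0, 1)
w_d, b_d = 0.1, 0.0   # discriminator: D(x) = sigmoid(w_d * x + b_d)

lr, n = 0.05, 64
for step in range(2000):
    # --- discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    z = rng.normal(0.0, 1.0, n)
    x_r, x_f = real_batch(n), w_g * z + b_g
    d_r, d_f = sigmoid(w_d * x_r + b_d), sigmoid(w_d * x_f + b_d)
    w_d -= lr * (np.mean((d_r - 1.0) * x_r) + np.mean(d_f * x_f))
    b_d -= lr * (np.mean(d_r - 1.0) + np.mean(d_f))

    # --- generator update: push D(fake) toward 1 (non-saturating GAN loss) ---
    z = rng.normal(0.0, 1.0, n)
    x_f = w_g * z + b_g
    d_f = sigmoid(w_d * x_f + b_d)
    grad_s = (d_f - 1.0) * w_d          # loss gradient w.r.t. each fake sample
    w_g -= lr * np.mean(grad_s * z)
    b_g -= lr * np.mean(grad_s)

fake_mean = b_g  # E[w_g * z + b_g] = b_g, since E[z] = 0
```

After training, the generator's output mean drifts toward the real data's mean of 4 — the "deceiver" has learned where the "detective" stops being able to object.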

Now keep all that and double it. Then you have CycleGAN, which uses four networks to translate images from one domain to another. While this seems akin to the harmless art-oriented experiments of neural style transfer, it’s actually far more distressing in terms of its potential impact. McAfee Security recently showed how airport-grade facial recognition systems could be fooled by CycleGAN-created images of faces that look like one person to a human observer but get classified as another person by the machine. Armed with such adversarial examples, Person A could waltz through airport security, the compromised system having waved them through as Person B.

Read the whole McAfee blog post here.

Birds of a Feather Are Classified Together

Photo by Zdeněk Macháček on Unsplash

Studying birds has always been a difficult task. Tracking them with sensors omits critical visual information, while having humans identify birds in pictures and video is usually very accurate but takes enormous amounts of time. Some combination of the two can work, but not without significant technological effort and experts tagging records by hand. Enter AI: convolutional neural networks (CNNs) can now identify birds automatically, increasing both the size and quality of data samples, and the approach works in controlled settings and wild populations alike.

Deepfakes For the Common Good

Photo by Jonathan Daniels on Unsplash

Chinese AI giant Tencent recently released a white paper detailing their future plans for AI in their organization. Part of this comprehensive document was actually a section on potential positive uses for deepfakes, usually the subject of negative reactions ranging from deep concern to outright revulsion.

Rebranding deepfakes as “deep synthesis,” Tencent discusses several relatively anodyne applications for the controversial tech: “realistic ‘body double’ performances” (citing movies from the Fast and Furious and Star Wars franchises), facial and image synthesis (think Snapchat filters), e-commerce experience improvement (e.g., digital models trying on clothes), and more involved representations of “digital humans” (though not directly mentioned, Tupac’s hologram obviously looms large over this passage).

Check out the translated paper here.

When Galaxies Collide

Photo by Greg Rakozy on Unsplash

In a few billion years, our beloved galaxy, the Milky Way, will likely collide with its nearest major galactic neighbor, Andromeda. But it won’t be the first time we’ve collided with other galaxies — in fact, many of the stars that are now in the Milky Way weren’t actually born here. A long, long time ago (in a galaxy not so far away), a dwarf galaxy collided with the Milky Way, adding foreign stars to our night sky. Scientists have been able to identify these newcomers, often by profiling their chemical composition or using other tried-and-true techniques.

Like clockwork, AI appears: using a neural net and a data set of roughly a billion stars, researchers from Caltech have discovered a cluster of 250 stars not originally from the Milky Way. Further research will determine exactly where these stars came from, and hopefully AI can continue to help us explore the furthest reaches of our galaxy.

That’s it for this month! In case you missed it, here’s last month’s roundup with even more cool AI news. Check back in September for more of the most interesting developments in the AI community (from our point of view, of course).

_________________________________________________________________

If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars.
