AI News Roundup — October 2020

by Gabriella Runnels and Macon McLean

Opex Analytics
The Opex Analytics Blog
5 min read · Nov 6, 2020


The AI News Roundup provides you with our take on the coolest and most interesting Artificial Intelligence (AI) news and developments each month. Stay tuned and feel free to comment with any stories you think we missed!

_________________________________________________________________

Forget Me Not


Researchers have long studied the human brain for clues on how to improve AI. From the time we are born, humans absorb information from our environment and make inferences that allow us to learn and adapt quickly. Machine learning algorithms are structured in a similar way — they “learn” from data and respond to new data based on what they’ve learned. However, most AI systems can focus on only one task at a time: to learn a new task, they typically have to overwrite much of what they already know and start afresh. New research from the University of Massachusetts Amherst and the Baylor College of Medicine attempts to overcome this problem of “catastrophic forgetting” with a technique the researchers call “memory replay.” Check out this article for more on this intriguing breakthrough.
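
To make the idea concrete, here is a minimal sketch of replay-style continual learning in PyTorch: a small buffer of examples from earlier tasks is mixed into each batch of the new task, so the network keeps rehearsing what it already knows. This is only an illustration of the general replay idea, not the researchers’ actual method, and the names and sizes (`train_on_task`, `replay_buffer`, the buffer limit) are placeholders.

```python
# Illustrative replay-based continual learning (not the paper's method):
# interleave stored examples from earlier tasks with each new batch.
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

replay_buffer = []   # (x, y) pairs remembered from earlier tasks
BUFFER_SIZE = 500

def train_on_task(task_loader):
    """Train on one task while rehearsing examples stored from old tasks."""
    for x_new, y_new in task_loader:
        batch = list(zip(x_new, y_new))
        # mix in a handful of replayed examples so old tasks aren't forgotten
        batch += random.sample(replay_buffer, min(len(replay_buffer), 16))
        xs = torch.stack([b[0] for b in batch])
        ys = torch.stack([b[1] for b in batch])
        optimizer.zero_grad()
        loss_fn(model(xs), ys).backward()
        optimizer.step()
        # remember a few of the new examples for future replay
        for xi, yi in zip(x_new, y_new):
            if len(replay_buffer) < BUFFER_SIZE:
                replay_buffer.append((xi.detach(), yi))
```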

One and Done


Besides the problem of “catastrophic forgetting” discussed above, another major difference between an AI and a human brain is the amount of data needed to learn something new. A human child can recognize something new — a cat, a dog, or a truck — after seeing only a few examples. AI algorithms, on the other hand, typically need to be fed many, many images before they can recognize and classify a new example. This hurdle, like the catastrophic forgetting issue, may also soon be overcome. Researchers from the University of Waterloo in Ontario have devised a method called “less than one”-shot learning, wherein an algorithm “should be able to accurately recognize more objects than the number of examples it was trained on.” For more on this novel technique, check out this article from MIT Technology Review.
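
The trick, roughly, is soft labels: each training example carries a probability distribution over classes rather than a single hard label, so a handful of carefully placed examples can define more classes than there are examples. The toy sketch below uses a distance-weighted soft-label nearest-neighbor rule; it is our own illustration of the idea, not the authors’ code, and the numbers are made up.

```python
# Toy illustration of the soft-label idea behind "less than one"-shot
# learning: TWO training points carry distributions over THREE classes,
# so a third class emerges with no example of its own.
import numpy as np

# two prototypes on a line, each labeled with a distribution over 3 classes
prototypes = np.array([[0.0], [1.0]])
soft_labels = np.array([
    [0.6, 0.0, 0.4],   # mostly class 0, partly class 2
    [0.0, 0.6, 0.4],   # mostly class 1, partly class 2
])

def predict(x):
    # inverse-distance weights over the prototypes
    d = np.abs(prototypes[:, 0] - x) + 1e-9
    w = (1.0 / d) / np.sum(1.0 / d)
    return int(np.argmax(w @ soft_labels))

print(predict(0.0))   # -> 0
print(predict(1.0))   # -> 1
print(predict(0.5))   # -> 2, a class neither training point "belongs" to
```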

Who Watches the Watchmen?


We’ve written about ethics in AI before, and no doubt we will many times more in the future. As exciting and potentially beneficial as the field of AI is, it can also be incredibly dangerous and harmful if not kept in check. We’ve discussed in the past the importance of implementing policies and audits to eliminate bias and prevent adverse or unfair outcomes in AI applications, but what happens when the policies and audits are themselves sources of bias? In this opinion piece from MIT Technology Review, the authors argue that the global companies, advisory boards, and research efforts focused on AI ethics are all disproportionately controlled by representatives of countries and populations that have historically been privileged on the world stage. If this disparity is allowed to persist, it will likely perpetuate the very biases these organizations are trying to combat.

Battery Innovation


It’s almost a cliché to say that batteries power everything, because they do, but it’s worth remembering that better battery technology is essential if electric vehicles are to become affordable and practical for the population at large. Charging time, in particular, remains a thorn in the side of battery researchers: filling up a gas tank is still far faster than charging a battery.

A Stanford lab is using AI to predict a battery’s performance far earlier in its life cycle than traditional methods allow. By charging and discharging scores of batteries over and over, day after day, researchers log the data that feeds a predictive model. That model helps distinguish high-performance battery charging techniques from lower-quality ones much more quickly, making experimentation and iteration far more efficient.
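
For a rough sense of what such a model can look like, here is a hedged sketch (our own illustration, not the Stanford group’s actual pipeline): a regularized regression that maps features logged during a cell’s first charge/discharge cycles to its eventual cycle life, run here on synthetic stand-in data.

```python
# Hedged sketch: predict eventual cycle life from hypothetical early-cycle
# features, so poor charging techniques can be screened out early.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# synthetic stand-in data: rows = cells, columns = made-up early-cycle
# features (capacity-fade slope, discharge-curve variance, resistance, ...)
X = rng.normal(size=(200, 6))
true_w = np.array([300.0, -150.0, 80.0, 0.0, 0.0, 0.0])
y = 800 + X @ true_w + rng.normal(scale=50.0, size=200)   # "cycle life"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = ElasticNetCV(cv=5).fit(X_train, y_train)

print("held-out R^2:", round(model.score(X_test, y_test), 3))
# rank a few held-out cells by predicted cycle life
print("predicted cycle life:", model.predict(X_test[:3]).round())
```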

Adversarial Assistance


I’m sure you’ve heard of adversarial attacks, wherein a seemingly innocuous signal (like a static pattern) can trigger a bizarre response (like that static being classified as a llama). This is a rich area of computer vision research, both for preserving privacy and for combating bad actors who might use such attacks maliciously. In this tutorial from PyImageSearch, you’ll learn the history of adversarial attacks, how to set up a boilerplate image recognition network, and how to put a basic adversarial attack into practice.
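
If you want a taste before diving into the tutorial, the fast gradient sign method (FGSM) is one of the simplest adversarial attacks. The sketch below is our own minimal PyTorch illustration (the function name and epsilon value are arbitrary), not the tutorial’s code: it nudges every pixel a small step in the direction that increases the model’s loss.

```python
# Minimal FGSM sketch: perturb an image so a classifier is pushed away
# from the correct label, while the change stays nearly invisible.
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.01):
    """Return a perturbed copy of `x` that nudges `model` away from `label`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # step each pixel a tiny amount in the direction that increases the loss
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# usage with any image classifier `model`, a batch of images `x` in [0, 1],
# and integer class labels `y`:
#   x_adv = fgsm_attack(model, x, y, epsilon=0.03)
#   preds = model(x_adv).argmax(dim=1)   # often no longer matches y
```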

That’s it for this month! In case you missed it, here’s last month’s roundup with even more cool AI news. Check back next month for more of the most interesting developments in the AI community (from our point of view, of course).

_________________________________________________________________

If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars.
