AI News Roundup — September 2020

by Gabriella Runnels and Macon McLean

Opex Analytics
5 min read · Sep 30, 2020


The AI News Roundup provides you with our take on the coolest and most interesting Artificial Intelligence (AI) news and developments each month. Stay tuned and feel free to comment with any stories you think we missed!

_________________________________________________________________

Impractical Applications

Are AI researchers ignoring important real-world applications of machine learning? In this opinion piece from MIT Technology Review, Hannah Kerner argues that researchers in the AI community have a strong bias towards completely new algorithms that lead to “marginal or incremental improvements on benchmark data sets,” to the near-total exclusion of useful applications of machine learning to important real-world problems. She says that “even a hint of the word ‘application’ seems to spoil the paper for reviewers.”

The issue with this bias, she argues, is that it saps researchers’ motivation to work on potentially groundbreaking applications of AI that could improve lives, advance science, and generally benefit society. She says that the purpose of AI is “to push forward the frontier of machine intelligence,” but if the “frontier” is defined only in terms of algorithmic novelty, ignoring the impact on human beings and the world at large, we risk missing out on the full range of what AI can offer.

Making Strides in AI Ethics


Because AI as a field evolves so quickly, AI ethics naturally has to change at a fast pace as well. That is harder than it sounds: researchers can push AI capabilities forward at will, leaving ethicists scrambling to keep up. As time goes on, AI ethics will only become more important. Read the latest from futurist Andra Keay, founder of Silicon Valley Robotics.

AI-Driven Climate Change Risk


In 2020, the United States has dealt with a significant number of extreme weather events, from the fires in California to the hurricanes on the Gulf Coast. These natural disasters are not anomalies; they are part of a pattern of increasingly devastating climate change effects.

Using data from a company that models risks to financial markets due to the changing climate, the New York Times has created an interactive map of the U.S. that shows the greatest “climate threats” by county. You might not be shocked to see “wildfire” as the highest risk in California or “sea level rise” as the biggest concern for South Floridians, but you may be surprised at the areas of the country that will be most affected by extreme heat and extreme rainfall. Seeing so clearly the dangers your own hometown may face can be scary, but we cannot change what we don’t understand; if you aren’t sure how you personally will be affected by climate change, this data viz is a good place to start.
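If you’re curious how a summary like that gets built, here’s a minimal sketch of computing a “greatest threat by county” table in Python. The file name and column names are hypothetical stand-ins for illustration, not the actual schema behind the NYT map or its data provider:

```python
import pandas as pd

# Hypothetical input: one row per (county, hazard) pair with a numeric
# risk score. The file and columns are illustrative, not the real data.
risks = pd.read_csv("county_climate_risk.csv")  # columns: county, hazard, score

# For each county, keep the hazard with the highest score -- the same
# "greatest climate threat" summary the interactive map displays.
top_threats = (
    risks.loc[risks.groupby("county")["score"].idxmax()]
    .rename(columns={"hazard": "greatest_threat"})
    .reset_index(drop=True)
)

print(top_threats.head())
```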

A Brief History of AI


To physicists, everything is a black box waiting to be illuminated bit by dark bit. The physicist’s quest is to understand, often by applying conceptual and mathematical frameworks to natural phenomena.

Consequently, the idea of cracking into the black box of an AI is a perfectly fitting challenge. With AI’s increasing penetration into physics research, some physicists now believe they should investigate AI’s inconsistencies more deeply, in an attempt to make its models more reproducible and less biased. Read more from Wired below.
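To make the reproducibility worry concrete, here’s a small sketch of one simple probe: retrain the same model under several random seeds and count how often its predictions disagree. The toy dataset and model are our stand-ins, not anything from the research Wired describes:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data standing in for a real physics dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Retrain the identical model under different random seeds; disagreement
# between runs is one simple, measurable form of "inconsistency."
predictions = []
for seed in range(10):
    model = RandomForestClassifier(n_estimators=50, random_state=seed)
    predictions.append(model.fit(X_train, y_train).predict(X_test))

predictions = np.array(predictions)
# Fraction of test points on which at least two seeds disagree.
disagreement = np.mean(predictions.min(axis=0) != predictions.max(axis=0))
print(f"Seed-to-seed disagreement rate: {disagreement:.2%}")
```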

Creepy Crawler


OpenAI’s GPT-2 text generation model shocked the world in early 2019… or would have, if its creators hadn’t declined to release it in full out of fear that its lifelike writing would be misused. They eventually relented, and even developed a successor, GPT-3.

But although these language models consistently display impressively human style, they often get basic facts wrong. Diffbot takes a different approach, instead using extensive web scraping to form complex, knotty knowledge graphs that allow it to nail facts, not just style. Diffbot is a continuous web crawler, “rebuild[ing] its knowledge graph every four to five days.” Read more from the MIT Technology Review below.
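For a feel of why a knowledge graph nails facts where a language model only imitates style, here’s a tiny sketch: facts stored as explicit (subject, predicate, object) triples, where a factual query becomes a lookup rather than a plausible-sounding guess. The triples and query helper below are purely illustrative, not Diffbot’s actual data or API:

```python
# Facts as explicit (subject, predicate, object) triples: answering a
# factual question is a lookup, not free-form text generation.
# These triples are illustrative only, not Diffbot's data or API.
triples = {
    ("GPT-3", "created_by", "OpenAI"),
    ("GPT-2", "created_by", "OpenAI"),
    ("Diffbot", "builds", "knowledge graph"),
}

def query(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [obj for subj, pred, obj in triples
            if subj == subject and pred == predicate]

print(query("GPT-3", "created_by"))  # ['OpenAI']
```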

That’s it for this month! In case you missed it, here’s last month’s roundup with even more cool AI news. Check back in October for more of the most interesting developments in the AI community (from our point of view, of course).

_________________________________________________________________

If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars.
