AI News Roundup — June 2020

by Gabriella Runnels and Macon McLean

Opex Analytics
5 min read · Jun 30, 2020


The AI News Roundup provides you with our take on the coolest and most interesting Artificial Intelligence (AI) news and developments each month. Stay tuned and feel free to comment with any stories you think we missed!

_________________________________________________________________

Apps to the Rescue… Or Not

Contact tracing, the process of identifying people who have come into contact with someone infected by the virus, is an important method of tracking and containing the spread of COVID-19. In general, contact tracing is a labor-intensive and time-consuming effort, so when Google and Apple announced in April that they had jointly created a technological toolkit that state governments could use to track the virus, it seemed pretty promising.

Unfortunately, contact tracing apps haven’t had the impact that tech companies and public health officials had hoped. Even if every U.S. state planned to use an app (and a significant number have said they won’t), a majority of the population would need to download it for the app to be effective. So far, no country has managed to exceed a 40% adoption rate for any such app. It seems that most Americans have concerns about downloading an app like this, and people who believe that they have been infected or exposed are even less inclined to trust this technology.
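To see why adoption rates matter so much, a rough back-of-envelope calculation (our own illustration, not from the reporting) helps: an exposure can only be logged if both people involved have the app installed, so if downloads were independent and spread uniformly through the population, the share of contact events the app can see would scale with the square of the adoption rate.

```python
# Back-of-envelope sketch: fraction of contact events a tracing app could "see"
# if both parties must have it installed (assumes independent, uniform adoption).
for adoption_rate in (0.2, 0.4, 0.6):
    coverage = adoption_rate ** 2
    print(f"{adoption_rate:.0%} adoption -> roughly {coverage:.0%} of contacts covered")
```

Even at the 40% ceiling that no country has cleared, this simple model suggests only about one in six contact events would actually be captured.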

It’s Just You and A.I.

Photo by Franck V. on Unsplash

University of Maryland computer scientist Dr. Ben Shneiderman is a leader in the field of human-computer interaction, and he has warned against ill-considered and irresponsible uses of automation and A.I. since the early ’80s. Dr. Shneiderman is wary of fully automating certain tasks, including using self-driving cars for transportation.

Lack of control by humans over automated systems not only makes users anxious and uneasy, but it also has the dangerous potential to absolve human beings of the ethical responsibility for harmful outcomes. Dr. Shneiderman has argued that scientists and technologists should seek not to replace humans fully, but to “extend the abilities of human users.” Check out this piece from the New York Times for more on why humans and robots should work together.

Stimulating Simulation

Photo by ThisIsEngineering from Pexels

Many readers probably remember the viral Washington Post article that showed simulations of the coronavirus spread and argued for the importance of “flattening the curve” through social distancing and sheltering in place. The easy-to-understand yet information-dense animations captured people’s attention and explained complex mathematical concepts to a wide audience.

Simulation as a method of emergency management and response isn’t new; for example, when a hurricane is coming, disaster response teams use data simulation to show the potential impacts of power outages on critical infrastructure. Games and simulations are a valuable crisis management tool because they not only convey information but also “cultivate empathy” by fully involving the viewer in the progression of events and ultimate outcomes. In fact, data simulation might be even more effective than alternatives for situations like the coronavirus pandemic. Harry Stevens, the author of the viral Washington Post article, says that “[s]imulations help readers build up their intuition about how diseases work in a way that words and even static charts cannot.”
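The Post’s animations were driven by a much richer agent-based model than anything reproduced here, but a minimal SIR-style sketch (our own toy; parameters such as contacts_per_day and p_transmit are invented for illustration) shows the basic mechanism behind “flattening the curve”: reduce daily contacts, and the peak number of simultaneous infections drops.

```python
import random

def simulate(population=1000, contacts_per_day=8, p_transmit=0.05,
             days_infectious=10, days=120, seed=1):
    """Toy SIR-style simulation. Returns the number of infectious people per day.
    Lowering contacts_per_day mimics social distancing and flattens the curve."""
    random.seed(seed)
    susceptible = population - 1
    timers = [days_infectious]            # remaining infectious days per sick person
    history = []
    for _ in range(days):
        new_infections = 0
        for _ in timers:                  # each infectious person meets people today
            for _ in range(contacts_per_day):
                # chance the contact is susceptible AND transmission occurs
                if random.random() < (susceptible / population) * p_transmit:
                    new_infections += 1
        new_infections = min(new_infections, susceptible)
        susceptible -= new_infections
        timers = [t - 1 for t in timers if t > 1] + [days_infectious] * new_infections
        history.append(len(timers))
    return history

print("peak infections, normal contact:   ", max(simulate(contacts_per_day=8)))
print("peak infections, distanced contact:", max(simulate(contacts_per_day=3)))
```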

Anti-Theft Measures Don’t Measure Up

Photo by Tobias Tullius on Unsplash

Anonymous sources within Walmart’s home office have raised concerns with the retail giant’s partnership with anti-theft software shop Everseen. A small firm in Ireland, Everseen makes computer vision software designed for real-time theft detection in retail stores, software that’s been in use at Walmart locations since 2017. However, detractors within Walmart claim that the software is less than top-notch.

The internal sources created a video showing a fairly routine checkout being misclassified as theft after a shopper places a cell phone on the scanner, and another checkout in which theft goes undetected simply because two items are stacked on top of each other. Though it’s unclear how often the software errs overall, its periodic misclassifications make for a slower checkout process, frustrating customers at Walmart’s expense.

Check out the original article here.

All Brains Need Sleep

While neural networks are usually touted as being inspired by real neurology, there are always ways to hew closer to the source design. One such variant is the “spiking neural network,” in which output signals are only elicited after the network receives multiple sets of input signals over time, rather than the standard approach in which every input immediately produces an output, as in a mathematical function. While many characteristics of classic neural networks translate to spiking ones, there’s one notable new problem: with uninterrupted training, the network will eventually fire continuously, independent of whatever the input data actually represents.

Researchers have, however, identified a fix: put the network to sleep. Much like human brains, these spiking neural networks need to rest periodically to avoid this constant-spiking fate. “Rest” in this context means “cycles of oscillating noise” that are reminiscent of deep-sleep brain waves, preventing the networks from becoming overstimulated to the point of uselessness.
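The researchers’ actual models are considerably more sophisticated, but a minimal sketch of the integrate-and-fire idea (our own illustration; names like LIFNeuron, leak, and threshold are made up for the example) shows how a spiking unit differs from a standard neuron that maps every input straight to an output, and what an oscillating-noise “rest” signal might look like.

```python
import numpy as np

class LIFNeuron:
    """Leaky integrate-and-fire neuron: a common building block of spiking networks.
    It only emits a spike after enough input has accumulated over time, unlike a
    standard artificial neuron that turns every input directly into an output."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak          # fraction of membrane potential retained each step
        self.potential = 0.0

    def step(self, input_current):
        self.potential = self.leak * self.potential + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1              # spike
        return 0

neuron = LIFNeuron()
# A steady weak input only produces a spike every few steps, once enough accumulates.
print("spikes:", [neuron.step(0.3) for _ in range(10)])

# "Rest" in the article's sense means driving the network with cycles of oscillating
# noise (reminiscent of deep-sleep brain waves) instead of training data. A stand-in
# for that kind of noisy input signal might look like this:
rng = np.random.default_rng(0)
t = np.arange(100)
sleep_input = 0.2 * np.sin(2 * np.pi * t / 25) + rng.normal(0, 0.05, size=t.size)
print("sample of noisy 'rest' input:", np.round(sleep_input[:5], 3))
```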

Read more on these sleepy deep neural networks here.

That’s it for this month! In case you missed it, here’s last month’s roundup with even more cool AI news. Check back in July for more of the most interesting developments in the AI community (from our point of view, of course).

_________________________________________________________________

If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars.
