Deep Learning Could Reveal Why the World Works the Way It Does

At a major AI research conference, one researcher laid out how existing AI techniques might be used to analyze causal relationships in data

MIT Technology Review


By Karen Hao

This week, the AI research community has gathered in New Orleans for the International Conference on Learning Representations (ICLR, pronounced “eye-clear”), one of its major annual conferences. With more than 3,000 attendees and 1,500 paper submissions, it is one of the most important forums for exchanging new ideas within the field.

This year the talks and accepted papers are heavily focused on tackling four major challenges in deep learning: fairness, security, generalizability, and causality. If you’ve been following along with MIT Technology Review’s coverage, you’ll recognize the first three. We’ve talked about how machine-learning algorithms in their current state are biased, susceptible to adversarial attacks, and incredibly limited in their ability to generalize the patterns they learn from a training data set to new situations. Now the research community is busy trying to make the technology sophisticated enough to mitigate these weaknesses.
