After a Year of Tech Scandals, Our 10 Recommendations for AI

Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI

AI Now Institute
Dec 6, 2018

Today, the AI Now Institute publishes our third annual report on the state of AI in 2018, including 10 recommendations for governments, researchers, and industry practitioners.

It has been a dramatic year in AI. From Facebook potentially inciting ethnic cleansing in Myanmar, to Cambridge Analytica seeking to manipulate elections, to Google building a secret censored search engine for the Chinese market, to anger over Microsoft contracts with ICE, to multiple worker uprisings over conditions in Amazon’s algorithmically managed warehouses — the headlines haven’t stopped. And these are just a few examples among hundreds.

At the core of these cascading AI scandals are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and existing regulatory frameworks fall well short of what’s needed. As the pervasiveness, complexity, and scale of these systems grow, this lack of meaningful accountability and oversight — including basic safeguards of responsibility, liability, and due process — is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.


Recommendations

1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.

2. Facial recognition and affect recognition need stringent regulation to protect the public interest.

3. The AI industry urgently needs new approaches to governance.

4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.

5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.

6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.

7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.

8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”

9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.

10. University AI programs should expand beyond computer science and engineering disciplines.


You can read the full report here.
