2020 has been a year of hard truths and tragedy, as interlocking crises put the failures, inadequacies, and structural limitations of our core institutions in the spotlight. At the same time, we see the AI industry rushing to profit in the space left by an absent social safety net, bolstered by governments’ increasing turn to tech solutions. AI companies are ramping up surveillance of our workplaces, schools, and communities; cracking down on worker organizing and ethical research; and bankrolling the passage of bills that gut worker protections for millions — while growing richer and more powerful in the process.
This moment raises urgent and difficult questions about the role of research like ours, and the place of this work within a broader set of disciplines and social movements. Indeed, what does it mean to study AI’s social implications when AI’s operational logics can be traced to industrial capitalist forces driving commercialization, on one hand, and deep and complex histories of racialized inequality spanning centuries and contexts, on the other? What is our role in movements for justice and accountability that stretch well beyond a given set of technologies? And how can we honor the complex and interlocking forces at work in AI, and more effectively push back on narratives that paint AI as a sole deterministic agent?
These aren’t questions with easy answers. We believe this moment calls on us to step back, take time, and reflect carefully on the role of research and critique, especially when it’s produced within an elite university in the global north.
In this spirit, we won’t be publishing an annual report summarizing the year in AI, at least not right now. It would be impossible to fit 2020 into our traditional “view from above” framework, and likely unproductive even if we expended the considerable energy required.
So instead of spending our time there, we are spending it planning for the future, and reflecting not only on what needs doing, but how it should be done — how to conduct research and advocacy in line with our values, how to ensure that our work is truly in solidarity with the people who are disproportionately hurt by AI, and how to meaningfully acknowledge and contend with the forces and histories entangled in AI’s design and use. We believe this work is urgent and necessary, and we’re committed to getting it right.
While this is a short statement, it benefited from the participation of the full AI Now team, and reflects a slow and deliberate decision-making process that belies its brevity. In our work moving forward, we will make the labor of deliberation, of sitting with complexity, and of the full research process visible and valued in our public-facing artifacts — including ensuring that everyone involved in research, writing, editing, and operational work receives credit for their contributions.