AI Now Institute
Researching the social implications of artificial intelligence now to ensure a more equitable future
Mississippi River floodwaters, July 20, 1951, Gerald R. Massie Photograph, box 54, folder 280, Division of Commerce and Industrial Development, RG104.1; Missouri State Archives, Jefferson City.¹

By Dr. Theodora Dryer

Dr. Theodora Dryer, who leads AI Now’s Climate and Water research, testified before the European Parliament on February 3rd, 2021, covering the complex relationship between AI, climate policy, and the possibilities for a just and liveable future. Her testimony responds to two of the European Commission’s key priorities for the coming years: to “accelerate innovation and digitalisation” while at the same time “reaching climate neutrality and high environmental standards.”

The European Commission posed the overarching question: How can AI be deployed to benefit society, advance research, and accelerate our climate transition without its impact hampering our…

A guest post by Lucy Suchman

Strategic Defense Initiative (SDI), aka the “Star Wars” Missile Defense Program

On March 1st, the National Security Commission on Artificial Intelligence (NSCAI) released its Final Report and Recommendations to the President and Congress. While the Report is the outcome of an extended period of discussion and consultation, the Commission’s recommendations rest upon a set of unexamined, and highly questionable, assumptions.

1. National security comes through military advantage, which comes through technological (specifically AI) dominance.

The Commissioners counsel that accelerated adoption of AI-enabled weapon systems is necessary to maintain US military advantage. They promise that AI can enable the achievement of a fully integrated, interoperable command and control system. For the Intelligence Community, Commissioner Jason Matheny states: “Decision-makers should be…

Image via Milwaukee Independent

The AI Now Institute, Ada Lovelace Institute, and the Open Government Partnership (OGP) are partnering to launch the first global study evaluating this initial wave of algorithmic accountability policy.

Governments are increasingly turning to algorithms to automate decision-making for public services. Algorithms might, for example, be used to predict future criminals, make decisions about welfare entitlements, detect unemployment fraud, decide where to send police, or assist urban planning. Yet growing evidence suggests that these systems can cause harm and frequently lack transparency in their implementation, including opacity around the decisions about whether and why to use them. Many algorithmic systems…

Join us and Whistleblower Aid on March 25th!

Tech companies wield vast power, but face limited accountability, and are able to hide socially significant policies and harmful working conditions behind corporate secrecy. Whistleblowers disclosing misconduct have had astonishing impact — but the process remains fraught.

Join Ifeoma Ozoma, Veena Dubal, the AI Now Institute and Whistleblower Aid for a tech-worker focused webinar, covering the basics of safe whistleblowing and your rights as a worker. You can join securely and anonymously from a personal device:

Thursday, March 25, 2021

3:30pm Pacific / 6:30pm Eastern

Livestreamed on YouTube, Chat & Questions on Slido

Use Tor Browser for anonymous access


Below are some of the most common questions we’ve received; we’ll continue to update this list.

Illustration detail by Somnath Bhatt

Is co-authorship allowed?

  • Co-authorship is allowed. Feel free to submit essay ideas written with two or more people.
  • For payment and taxation purposes we will require one person to be the lead contributor. Co-contributors may share the compensation at their own discretion. Please bear in mind that contributor fees/honoraria are taxable and the person receiving payment will be responsible for paying taxes on it.

When is the deadline for submission?

  • We are accepting contributions on a rolling basis from January until March 31st, and aim to publish…

Call for Contributors

Illustrations by Somnath Bhatt

We need to generate narratives that can both offer perspectives from other places and provide crucial anticipatory knowledge and strategy — helping to ensure that the incursion of AI does not follow the path of social control and consolidation of decision-making power that is marking its proliferation in the West.

Critical thinking in AI has moved beyond examining specific features and biases of discrete AI models and technical components to recognize the importance of the racial, political, gendered, and institutional legacies that shape real-world AI systems, as well as the material contexts and communities that are most vulnerable…

2020 has been a year of hard truths and tragedy, as interlocking crises put the failures, inadequacies, and structural limitations of our core institutions in the spotlight. At the same time, we see the AI industry rushing to profit in the space left by an absent social safety net, bolstered by governments’ increasing turn to tech solutions. AI companies are ramping up surveillance of our workplaces, schools and communities; cracking down on worker organizing and ethical research; and bankrolling the passage of bills that gut worker protections for millions — while growing richer and more powerful in the process.


Gloria Conyers Hewitt

by Joy Lisi Rankin

In the 1930s, Dr. Gertrude Blanch led the important Mathematical Tables Project, a nearly 450-person effort to compute logarithmic, exponential, and other tables essential to American government, military, finance, and science. After earning her doctorate in mathematics at Cornell, she led new approaches to computation and published volumes of tables and calculations in scientific journals. Despite her contributions, Blanch did not appear as the author of the papers she wrote. For the majority of her time on the project, her male supervisor Arnold Lowan instead received credit.

This is a lasting injustice: women, Black, Brown…

This post reflects on and excerpts from our most recent report: Regulating Biometrics: Global Approaches and Urgent Questions.

Image from How Do You See Me? by Heather Dewey-Hagborg

The proliferation of biometric surveillance technology in schools, protests, criminal trials, and as a condition to access welfare has led to widespread calls to introduce new laws, update existing ones, pause these systems, or outright ban their use. The list of regulatory developments this year alone is large and growing:

By Erin McElroy, Meredith Whittaker, Genevieve Fried

The following is excerpted from a piece we wrote in the Boston Review on how property technology (proptech) is leading to new forms of housing injustice in ways that increase the power of landlords and further disempower tenants and those seeking shelter. To learn more about how real estate technology companies are expanding surveillance, data accumulation, and algorithmic means testing, read the full article here.

As millions worry about sickness, layoffs, and paying rent on the first of the month, tech companies are positioning themselves to profit. Some are rapidly spinning up…
