How New York City is Advancing Equity in AI

by Clayton McDonald, 2L Santa Clara Law JD/MBA student

On April 25th, 2022, the AI, Equity, and Law Series, hosted by Santa Clara Law and organized by Professor Colleen Chien, welcomed John Paul Farmer. Farmer is the current president of WeLink Cities, former CTO of the City of New York, and former managing director of Microsoft Cities. He discussed two seminal documents that his office produced while he served as CTO under the leadership of Mayor Bill de Blasio: the NYC AI Primer and the NYC AI Strategy.

With the strategies laid out in these publications, along with the city's passage of an AI bias audit requirement for employment tools that takes effect in 2023, the Big Apple has become the first U.S. city to take the regulation of AI seriously, proposing a series of steps a municipality can take to maximize the benefits of AI while ensuring equitable outcomes for all of its citizens. A recording of the talk can be found here.

A Need for a Unified, Local Approach

When Farmer took over as CTO of New York City, he knew there was a need to address artificial intelligence. Farmer and the rest of his team at the Mayor’s Office of the Chief Technology Officer (NYC CTO) embarked on a literature review of AI strategies employed by governments worldwide. They quickly discovered that significant public-sector engagement with AI had occurred only at the national and state levels, not the city level. Much of the discussion revolved around national security issues, military applications, and how to win AI-related business opportunities on the international stage. Given the lack of frameworks for regulating AI at the municipal level, the NYC CTO decided it would develop a unified approach to AI for NYC. The team hoped to lay the foundation for using AI to its fullest potential while protecting people from harm and building a better society for all in the process.

The team’s first order of business was to identify the target audience for their policy papers. They began by gathering information and consulting groups within and outside the United States, spanning government, nonprofits, academia, and the private sector. “It is really important to understand how those who are building AI tools are doing so and how those using AI tools are doing so,” Farmer stated.

“We did take a very initial 30,000-foot view to consider all of the options as we figured out where would we add the most value, where would we do something unique, and where would we actually influence the decisions that are going to affect people in the years ahead.”

Ultimately, the NYC CTO decided to tailor its message to decision-makers in the public and private sectors.

Challenges in Regulating AI at the Local Level

An immediate challenge facing Farmer and his team’s efforts to create a basis for regulating AI at the municipal level was the sheer size and archaic structure of the New York City government, which was founded in 1625 and employs 330,000 people, more than any other city in the country and more than all but three state governments. Moreover, the only place it all comes together is with the Mayor of NYC. As Farmer attests, “the way decisions get made is by getting time with the Mayor.” This hulking bureaucratic machinery makes it challenging for the city to be nimble and exceedingly difficult for different city agencies to create a unifying set of rules, regulations, and even best practices. Hence it was essential for the NYC CTO to shepherd the city toward a unified approach to AI.

Source: New York City AI Strategy

The NYC AI Primer

Following their discussions with various groups and stakeholders, the team decided to create two distinct-but-related documents: an AI Primer and an AI Strategy. Relatively few people today are knowledgeable about AI beyond what they read in newspaper articles and salacious headlines. Therefore, an important goal of the AI Primer was to establish a common “understanding of the technology itself that was not swayed by the agenda of the writer,” as Farmer put it.

The AI Primer addressed foundational topics such as classical programming versus machine learning, which led to discussions of why the data used in machine learning is so critical. The AI Primer also provided simple examples of AI, like binary classification. Through these examples and discussions, it demonstrated the failure modes of AI systems and discussed methods for eliminating errors. The goal, Farmer said, “was to help people better understand how machine learning is making the recommendations that it makes.”
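
To make that framing concrete, here is a minimal, illustrative sketch of a binary classifier and one common failure mode: a model trained on imbalanced data can look accurate overall while performing poorly on the rare class. This example is not drawn from the AI Primer itself; the dataset and model choice are assumptions for illustration only.

```python
# Illustrative sketch only: a simple binary classifier and one failure mode
# (class imbalance). Not taken from the NYC AI Primer.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic data where only ~5% of examples belong to the positive class.
X, y = make_classification(
    n_samples=5000, n_features=10, weights=[0.95, 0.05], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = model.predict(X_test)

# Overall accuracy can look strong while recall on the rare class is poor:
# the gap between a headline metric and real-world impact that the Primer
# encourages decision-makers to probe.
print("accuracy:", accuracy_score(y_test, preds))
print("recall (minority class):", recall_score(y_test, preds))
```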

Through its AI Primer, the NYC CTO provided a solid foundation for understanding AI and its risks for technical, policy, and other decision-makers in or adjacent to the NYC government.

The NYC AI Strategy

Next, Farmer turned to the NYC AI Strategy, which was structured into five chapters:

  1. Data Infrastructure
  2. Applications
  3. Governance
  4. Partnerships
  5. Business, Education, and Workforce

With respect to the final chapter involving the private sector, Farmer noted, “if you are only operating inside the walls of government, you are missing significant opportunities to make improvements that affect people’s lives.”

“For anyone addressing these issues, who really wants to drive equity, who really wants to make sure that AI is deployed [equitably] in people’s daily lives, you have to have some sort of cross-sector approach.”

Source: Illustration by NYC CTO; photos sourced from Unsplash

Farmer then provided a few examples from the AI Strategy of how AI is being deployed in NYC. One example is the use of AI by the Mayor’s Office of Criminal Justice in performing “pretrial risk assessment,” which refers to the process of determining a defendant’s risk of failing to appear in court or committing other offenses before trial. “This is a power that has existed with judges, with human beings for a long, long time, and we know that the data showed that certain groups absolutely were negatively impacted by the bias that these human beings have… The fact of the matter is Black and Brown people were being kept in jail and White people were being released,” Farmer said.

The Mayor’s Office of Criminal Justice then engaged with multiple modelers and conducted a triple-blind study, taking the data that existed and trying to model the likelihood that someone would commit a pretrial offense or flee. They ended up with a recommendation engine that indicates a defendant’s pretrial risks, an objective measurement that judges can use to help check their own existing biases.

“I think this is an example that was done transparently, publicly, and with academic rigor and review, that allows the Mayor’s Office of Criminal Justice to develop a tool that really does make things better than they were before,” Farmer said.

Another example highlighted in the AI Strategy is the Sounds of New York City project, or SONYC, which measures noise nuisances and differentiates between many sounds to identify the particular source of the noise at issue, such as a jackhammer, a dog barking, or people yelling after a party. This system is used to supplement the city’s 311 call log, which allows residents to make noise complaints. Farmer remarked, “we know the wealthier neighborhoods make more calls and if you simply look at the 311 call log, you would think we need to spend more time and money on these particular neighborhoods. But when you actually measure the data itself [through SONYC], you see that other communities — lower-income, immigrant communities, communities that don’t speak English as their first language — are also impacted by many of these noises but don’t necessarily log these complaints.” Thus, by relying on AI via SONYC, the city can allocate resources appropriately and respond to everyone’s needs, not just those who speak the loudest in calling the 311 hotline.
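
The underlying logic Farmer describes, that complaint counts alone can understate need in some communities, can be sketched in a few lines. The snippet below is purely illustrative; the function, field names, and threshold are hypothetical assumptions and are not drawn from SONYC or the city’s actual systems.

```python
# Illustrative sketch only: comparing complaint volume with measured noise to
# flag neighborhoods that are impacted but under-reporting. All names and
# thresholds here are hypothetical, not SONYC's actual data schema.
from typing import Dict, List

def underreported_areas(
    complaints_311: Dict[str, int],   # neighborhood -> 311 noise complaints
    sensor_events: Dict[str, int],    # neighborhood -> sensor-detected noise events
    ratio_threshold: float = 0.5,
) -> List[str]:
    """Return neighborhoods whose complaints per measured noise event fall
    well below the citywide average, suggesting unmet need."""
    total_complaints = sum(complaints_311.values())
    total_events = sum(sensor_events.values())
    citywide_rate = total_complaints / max(total_events, 1)

    flagged = []
    for area, events in sensor_events.items():
        if events == 0:
            continue
        local_rate = complaints_311.get(area, 0) / events
        if local_rate < ratio_threshold * citywide_rate:
            flagged.append(area)
    return flagged
```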

These two examples demonstrate how NYC is using AI to ensure more equitable outcomes for its citizens.

Conclusion

Together, NYC’s AI Primer and AI Strategy represent the first attempt by a city to form a unified action plan around artificial intelligence. The AI Primer delivers a base-level understanding of the technology of AI, and the AI Strategy lays out the steps to make the most of it while ensuring equitable outcomes for all. These documents illustrate the areas where AI can improve the current situation and point out where AI can lead to inequitable outcomes. As Farmer aptly noted: “our view was that the AI Primer and AI Strategy were first steps in an ongoing conversation that needs to happen.”

The AI, Equity, and Law Speaker and Blog Series covers developments in AI regulation at the local, state, national, and international levels and is curated by Professor Colleen Chien. Blog summaries in the series are written by Santa Clara Law students in Professor Chien’s AI class and include links to recordings of the public talks. For updates, follow @colleen_chien or @iethics.
