Resilient AI Systems

Safety and Security in the Future of AI

Elizabeth Anne Watkins
Data & Society: Points
4 min read · Oct 8, 2019


By Data & Society Research Analyst Elizabeth Anne Watkins and Research Lead Madeleine Clare Elish

This is the third blog post in our series on AI & security. As AI becomes increasingly integrated into our everyday lives, we need to start reconceptualizing our notions of security. Read the other posts in the series here and here.

Novel applications of artificial intelligence can endanger people in new ways. As AI is integrated into new parts of our lives, we must keep safety and security top of mind in the development and maintenance of AI systems. Last year at Data & Society, we convened experts from a number of fields, including cybersecurity, machine learning, computer science, political science, national security, activism, and advocacy, to conceptualize the greatest opportunities and challenges in building safe and secure socio-technical systems.

Among our most significant findings was a rift between what it means to make something “safe” and what it means to make something “secure.” Safety and security have different valences for different communities. It’s important to take these discrepancies into account because the future of safe and secure AI involves all of us.

Safety and security have different valences for different communities.

Many discussion participants said they valued the language of security for its rhetorical force. Invoking “security” in development can be an effective way to garner organizational priority and resources. After all, the idea of security is, in its most basic sense, the state of being free from danger. It calls attention to what must be protected and kept safe, as in “food security” or “economic security.” In the words of one participant, “security is a way to legitimate discussions about threats.” Another observed, “People can argue about whether software ought to be fair or inclusive. But no one argues whether it should be secure.”

On the other hand, the language of security can be intrinsically divisive and privileged, linked to the military and the use of force. Human rights advocates reminded the group that the goals of “security,” and its attendant infrastructures of weaponization, have been used to perpetuate violence against vulnerable communities. “Security,” one activist said, “means securing the status quo of power.” He explicitly rejected the frame of “security” in favor of the less-militarized “safety.”

A principal challenge in bringing different kinds of expertise to bear on the relational, socio-technical nature of machine learning vulnerabilities is finding consensus across the values of respective communities. We’ve learned that what precisely constitutes “being secure” or “being safe” is context- and community-specific. But, in our minds, this need not be the end of the conversation. Rather, the flexibility of both “safety” and “security” could enable new framings to ensure the safety of diverse and differently vulnerable populations.

Many existing civil society and activist organizations have long histories of articulating and planning for the potential failures and harms of systems. Regardless of terminology, such organizations will be invaluable in helping to articulate how to protect vulnerable and marginalized communities. When computational systems become more interconnected—and attacks themselves cross boundaries of any particular sector, organization, or legal framework—threats become an increasingly shared responsibility.

Our colleagues have called for socio-technical thinking to be integrated into engineering best practices, in a step they call “heterogeneous engineering.” Others in our orbit posit that socio-technical thinking could provide a more inclusive and community-oriented framework for incident analysis in cases of online harassment. In another example of more widely scoped engineering practices, former Federal Trade Commission Chief Technologist Ashkan Soltani calls for developers to conduct research beyond customary “usability” testing and instead perform what he terms “abusability” testing: “There are hundreds of examples of people finding ways to use technology to harm themselves or other people, and the response from so many tech CEOs has been, ‘We didn’t expect our technology to be used this way.’ We need to try to think about the ways things can go wrong. Not just in ways that harm us as a company, but in ways that harm those using our platforms, and other groups, and society.”

Now is a critical inflection point for designing better defenses and modes of accountability.

As AI permeates more of what we must come to understand as socio-technical systems, where humans and tools work together and encounter each other in intertwined and often novel ways, we must decide how to build structures that are resilient to humanity’s endless drive towards exploitation. Now is a critical inflection point for designing better defenses and modes of accountability within our technologies and institutions.

Elizabeth Anne Watkins and Madeleine Clare Elish are members of the AI on the Ground Initiative at Data & Society.
