
We Need a New Approach to Designing for AI, and Human Rights Should Be at the Center
Designers need a methodology that helps them weigh the benefits of using a new technology against its potential harm
By Caroline Sinders
AI is going to radically change society. It will do so in exciting and even life-saving ways, as we've seen in early projects that translate languages (in your own voice!), build assistant chatbots, create new works of art, and detect and analyze cancer more accurately.
But AI will also alter society in ways that are harmful, as evidenced by experiments in predictive policing that reinforce bias and disproportionately affect poor communities, and by AI systems that fail to recognize darker skin tones. The potential of these biases to harm vulnerable populations creates an entirely new category of human rights concerns. As legislation that attempts to curb these dangers moves forward, design will be integral in reflecting those changes.
Indeed, many civil society organizations, nonprofits, think tanks, and companies already understand AI's effect on society and have been working to create ethical standards for this burgeoning field. But designers working with AI need something that goes further than general guidelines, something that speaks directly to how design decisions shape and perpetuate bias in technology.
We need a new framework for working with AI, one that goes beyond data accountability and creation. We need Human Rights Centered Design.

Here's why we need this: AI is technology, and technology is never neutral. How we make technology, how we conceptualize it, where we imagine it fits into culture, and which problems we expect it to solve once it becomes a product: these are design choices that can have a deep impact on society.