Drift Mapping and Software Design

Marianne Bellotti
Published in Software Safety
8 min read · Jul 13, 2020


Computers can’t avoid automating decisions if software engineers can’t figure out what decisions are being made.


Human-in-the-loop has become a critical part of the conversation around AI and machine learning. Many of the catastrophically negative outcomes associated with AI — or really any software that automates — come from either removing human oversight or designing a user experience that encourages humans to neglect their oversight role. Software should be designed as a tool that humans manipulate to make decisions more efficiently. Many of the emerging AI ethics guidelines emphasize both human agency and transparency: computers should neither take decision-making power away from humans nor hide decisions from them.
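To make that design principle concrete, here is a minimal sketch of a human-in-the-loop gate. Everything in it is invented for illustration — the `Transaction` shape, the toy `fraud_score` model, and the `0.2` threshold are assumptions, not a real fraud API. The point is the structure: only clearly safe cases are automated, and anything ambiguous is surfaced to a person instead of being silently decided.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    # Hypothetical fields, chosen only for the example.
    amount: float
    country: str

def fraud_score(tx: Transaction) -> float:
    """Toy stand-in for a real anomaly-detection model."""
    score = 0.0
    if tx.amount > 10_000:
        score += 0.5
    if tx.country not in {"US", "CA"}:
        score += 0.3
    return min(score, 1.0)

def review(
    tx: Transaction,
    model: Callable[[Transaction], float],
    ask_human: Callable[[Transaction, float], bool],
    auto_threshold: float = 0.2,
) -> bool:
    """Approve or reject a transaction while keeping a human in the loop.

    Low-risk cases are approved automatically; everything else is
    handed to a human reviewer along with the model's score, so the
    decision is neither taken away from people nor hidden from them.
    """
    score = model(tx)
    if score < auto_threshold:
        return True  # clearly low risk: safe to automate the approval
    return ask_human(tx, score)  # ambiguous: the human decides
```

A real system would replace `ask_human` with a review queue or dashboard; the design choice being illustrated is that the threshold separates "automate" from "defer", rather than the model deciding everything end to end.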

Which all sounds great until you try to figure out when something is a decision. When computer vision labels an object as a cat or a truck … is that a decision? When anomaly detection blocks a transaction on a credit card or a login form … was that a decision? What about auto-formatting?

One of the most problematic elements of the conversation around ethics in software is that most of the action is happening away from the people who build software professionally. The research is in universities, policy groups and think tanks. Even in the rare case that a company has in-house ethics people, they are often separate from the…


Author of Kill It with Fire: Manage Aging Computer Systems (and Future Proof Modern Ones)