The Value of Digital Harm Reduction
I got into a Twitter thread, and it got big enough to post.
It started with https://twitter.com/QuinnyPig/status/1344467144008863744 — musing that they’re not convinced that anyone in tech has ever been asked “How would you design our products to reduce the potential they’re abused?”
Tech organizations have definitely been asked this question about designing to reduce digital harms. Some of them - @ushahidi, and other organizations in the network of work that connects at @rightscon, @EngnRoom, and similar - have worked on answering it. And I’ve been in the room with very large tech organizations trying to work out how to reduce their own digital harms.
But designing to reduce digital harms isn’t the main problem. It’s non-trivial work, but the larger problem is that tech organizations have competing priorities. They can try to help humanity by reducing digital harms, but they also have to satisfy shareholders by optimizing returns on investment, and fit that work into existing tech, schedules, and so on.
The real hack is to make harm reduction a top-level value. That way it can compete with shareholder pressure for financial value, and with engineering pressure to improve the product for users. We can put internal and external pressure on an organization’s leaders to raise the priority of harm reduction, and that work is valuable and shouldn’t stop. But realistically, harms usually become a top-level value for a tech organization only when they bring financial or regulatory risk to the organization.
Thinking about how to highlight those risks (e.g. the Nazi bar argument), and how to embed harm reduction right from the start of designing and building tech, is valuable work. For inspiration, I’d say it’s worth looking at how organizations started to build in security from the design stages rather than just tacking it on at the end of a build, but that’s still a work in progress too.