Building Ethical Software
Good engineering is about more than writing code. It also requires thinking about the larger impact of what you build.
By Michael Schuller, Senior Engineer, BCG Digital Ventures
In 1935, Eastman Kodak introduced a mass-market color film called Kodachrome. To drive adoption of the new film, Kodak designed a system that made it easy for amateur photographers to get high-quality prints: customers simply loaded the film into their existing cameras, shot a roll, and mailed it back to Kodak, which processed it in a rigorously standardized way to ensure the color was precisely correct.
By introducing this process, Kodak found a way to scale out a new and more complex technology quickly, without having to train existing photo labs in a more expensive and more difficult technique. Everyone from amateur photographers to large photo studios now had access to high-quality color photo development.
However, there was one large segment of the user base for whom this process didn’t work well. When calibrating the development process, Kodak had used a set of test cards on which all of the models were white. From its inception, Kodak’s film was never designed to photograph people with darker skin. As Syreeta McFadden describes in her essay ‘Teaching the Camera to See My Skin’, many black users of Kodak’s film simply assumed that the problem was with their photography skills, not with the product they were using. “Perhaps we didn’t understand the principles of photography,” she writes. “It is a science, after all.”
For me, this is an important lesson not only in how technology operates in society, but also in how we, as technologists, build and present the products we put out into the world. It’s also a lesson I take to heart about what it means to present your technology as ‘scientific’ or ‘logically exact’: such assertions rest on assumptions you don’t necessarily know you’ve made, and they shape how the people who use your products treat the results they see.
This was a technical choice made by people building a product, not a limitation of the technology of the time. Kodak only began correcting the issue in the 1970s, driven by advertising photographers who were having difficulty capturing the detail in dark wooden furniture.
We Are What We Build
Melvin Kranzberg’s First Law of Technology proposes that ‘technology is neither good nor bad; nor is it neutral.’ Another way of thinking about this idea is that every piece of technology we build has the potential to be enormously helpful, but also to ruin someone’s day. How do we maximize the former and minimize the latter?
All of the technology we build is, to a large extent, a reflection of assumptions: assumptions we hold, or assumptions we make in the process of modelling the world in the programs we write. Whatever the inherent indifference of the computer or the programming language (which is itself debatable), these assumptions mean that nothing we build can be accepted as perfectly neutral, or as ‘simply’ and purely logical.
The classic example of this in recent years has been the rise of neural network-based machine learning, where both the chosen parameters and the initial labelled dataset represent a set of decisions by individuals about how the resulting model will operate, despite the fact that we tend to talk about the training process as the computer ‘learning on its own’.
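The point can be shown in miniature without any neural network at all. The sketch below (pure Python, with made-up numbers) fits a one-parameter ‘model’, an auto-exposure gain, on a calibration set drawn from only one kind of subject, echoing Kodak’s all-white test cards. The model then works well for inputs resembling its calibration data and systematically fails for the rest; nothing here is a real imaging pipeline, it is only an illustration of how a skewed dataset becomes a skewed model.

```python
# A toy auto-exposure "model" with a single learned parameter (gain).
# All numbers are hypothetical; this only illustrates how a skewed
# calibration set silently becomes a skewed model.

TARGET = 0.5  # the brightness the system is calibrated to reproduce


def calibrate(samples):
    """Fit one gain so the average calibration sample maps to TARGET."""
    return TARGET / (sum(samples) / len(samples))


def error(measured, gain):
    """Absolute deviation of the corrected output from TARGET."""
    return abs(measured * gain - TARGET)


# Calibration set drawn only from high-reflectance subjects --
# the analogue of test cards on which every model was white.
gain = calibrate([0.62, 0.58, 0.60])

# The system performs well on subjects resembling its calibration data,
# and systematically worse on darker subjects it never saw:
print(error(0.60, gain))  # near zero
print(error(0.25, gain))  # a large, systematic error
```

No one in this sketch chose to mistreat the second kind of input; the bias entered entirely through the choice of calibration data, which is exactly why it has to be looked for deliberately rather than assumed absent.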
But you don’t need an ML model in your software to encode bias or create attack vectors within a product. In February 2018, a study from the University of Warwick found a strong correlation between the number of anti-immigrant messages posted on social media and the number of crimes against refugees in Germany. The effect was so strong, even after rigorously accounting for other variables, that when internet connectivity was slow or down altogether in a town or region, the number of crimes against refugees dropped.
It’s tempting to leave these ethical issues to non-engineers. There’s a tendency to shift ethical responsibilities onto designers, product managers and other more business-facing colleagues. But as the people building the products that go out into the world, we as engineers must also take our share of responsibility. We have to treat social and cultural attack vectors as seriously as cryptographic and logical code vulnerabilities. That requires taking a broader view of what we build, and sometimes even pushing back against features or ideas that would otherwise go into development. As an industry, we have learned to do this with traditional security topics, so why not with ethical ones?
Technology for Good
The hacker ethics of old aren’t enough anymore. And in the current cultural and technological climate, that old Silicon Valley saying about moving fast and breaking things risks seeming reckless, or at least insufficiently qualified: it may be fine for your testing pipeline, but not for everything else.
The amazing possibilities of software to change the world are still very real, of course, and we can and should push the boundaries of what is possible.
In the world of startups, we often talk about ‘disruption’: of industries, of businesses, of ways of working. We also talk about ‘changing the world’.
History gives us many examples of disruptive technology, of people and companies that have changed the world. But I don’t think it’s safe to assume that just because something is disruptive or world-changing, it is therefore a good thing.
I think it’s time to replace the word ‘disrupt’ with something more specific: ‘radical improvement’. That’s a harder challenge to rise to, and it requires us as engineers to take a broad view of not only the products we work on, but the social and cultural contexts into which those products are deployed. It requires us to stay educated on more topics than the latest framework or language, and to educate our colleagues.
Not everything we build will have the chance to radically improve a business, an industry or society. There will be days of mundane login form implementation, or yet another payment integration. But if we keep our eyes on the goal of making sure that what we build always improves the world, at least a little; if we pay attention to the ways in which our products can be misused to cause harm, or fail in ways that make people’s lives harder; and if we are willing to accept our part of the responsibility, and therefore the power that comes with such acceptance to iterate and improve, then at least we know that we can push towards that change in the world for good, step by step.
This is a harder challenge than that which we have faced before now. But at BCG Digital Ventures, we like hard challenges.