Ethics and the Algorithm

MIT Sloan Management Review
Jul 13, 2016 · 4 min read

By Bidhan L. Parmar and R. Edward Freeman

Behind every piece of code that drives our decisions is a human making human judgments about what matters and what does not.

Editor’s Note: This is the fourth in a special series of commissioned essays MIT Sloan Management Review will be publishing in Frontiers over the Spring and Summer of 2016. Each essay gives the author’s response to this question:

“Within the next five years, how will technology change the practice of management in a way we have not yet witnessed?”

Are we designing algorithms, or are algorithms designing us? How sure are you that you are directing your own behavior? Or are your actions a product of a context that has been carefully shaped by data, analysis, and code?

Advances in information technology certainly create benefits for how we live. We can access more customized services and recommendations; we can outsource non-essential tasks like driving, vacuuming floors, buying groceries, and picking up food. But there are potential costs as well. Concerns over the future of jobs have led to discussions about a universal basic income, or a salary just for being human. Concerns over the changing nature of social interaction range from how to put your phone down and have a conversation with someone to the power dynamics of a society where most people are plugged into their virtual reality headsets. Underlying these issues is a concern for our own agency: How will we shape our futures? What kind of world will information technology help us create?

Advances in information technology have made the use of data — principally data about our own behaviors — ubiquitous in the online experience. Companies tailor their offerings based on the technology we employ — as when Orbitz was called out for steering Mac users to higher-priced travel services than PC users. Dating sites like eHarmony and Tinder suggest partners based on both our stated and inferred preferences. News stories are suggested based on our previous reading habits and our social network activities. Yahoo, Facebook, and Google tailor the order, display, and ease of choices to influence us to spend more time on their platforms, so they can collect even more data and further intermediate our daily transactions. A recent study demonstrated that only four unique spatio-temporal data points are needed to accurately identify an individual — because our movements are both predictable and unique.

Increasingly, our physical world is also being influenced by data. Consider self-driving cars or virtual assistants like Siri and Amazon’s Echo. There are even children’s toys, like Hello Barbie, that listen to, record, and analyze your child’s speech and then customize the interaction to fit your child.

As our lives become more deeply influenced by algorithms, what kind of effect will that influence have?

First, it’s important to note that the code used to make judgments about us, based on our preferences for shoes or for how we get to work, is written by human beings, who are making choices about what that data means and how it should shape our behavior. That code is not value neutral — it contains many judgments about who we are, who we should become, and how we should live. Should we have access to many choices, or should we be subtly influenced to buy from a particular online vendor?

As that code becomes more broadly implemented, it has the potential to shape the behavior of large numbers of people. Recently, Facebook has been put on the defensive by allegations that the algorithm it uses to display stories in your newsfeed is biased — specifically, that it reduces the likelihood of certain politically conservative stories being displayed. The values of one coder shaped the experience of millions of people and subtly influenced how they think — and therefore how they behave.

Similarly, think of the ethical challenges of coding the algorithm for a self-driving car. Under certain unfortunate circumstances, where an accident cannot be avoided, the algorithm that runs the car will have to choose whether to sacrifice its occupants or risk harming — perhaps fatally — pedestrians or the passengers of other vehicles. How should developers write this code? Despite our advances in information technology, data collection, and analysis, our judgments about morality and ethics are just as important as ever — maybe even more so.

We need to figure out how to have better conversations about the role of purpose, ethics, and values in this technological world, rather than simply assume that these issues have been “solved” or that they don’t exist because “it’s just an algorithm.” Questions about the judgments implicit in machine-driven decisions are more important than ever if we are to choose how to live a good life. Understanding how ethics affects the algorithms and how these algorithms affect our ethics is one of the biggest challenges of our times.

About the Authors

Bidhan L. Parmar is an assistant professor of business administration at the University of Virginia Darden School of Business.

R. Edward Freeman is University Professor and Olsson Professor of Business Administration at the University of Virginia Darden School of Business.

Originally published at sloanreview.mit.edu on July 13, 2016.
