ICTC’S TECH & HUMAN RIGHTS SERIES

Designing the Good Life

An Interview with Andrew Sears

ICTC-CTIC

--

On May 14, 2020, the Information and Communications Technology Council (ICTC) spoke with Mr. Andrew Sears. Mr. Sears is a tech ethicist and design-minded product strategist who is passionate about emerging technologies and the future we’re building with them. He is the founder of Technovirtuism, a project dedicated to exploring fundamental questions about virtue, justice, and beauty in the context of our relationship with technology. He is also an advisor to All Tech is Human, a hub for responsible tech that brings together technologists, academics, artists, designers, students, and policymakers to promote knowledge-sharing and collaboration. Mr. Sears is a product manager at IBM, working with AI and Cloud teams. He has previously driven innovation for companies like IDEO and Wyre. He received a classical education at Biola University and holds an MBA from Duke University.

“I’ll share a quote by a designer named Cennydd Bowles, who I spoke with recently, which I think is perfect. He says: ‘All design involves futures, and all futures involve ethics.’”

Photo by Louis Paulin on Unsplash

Kiera: Thank you so much for speaking with me today, Mr. Sears. To start with, can you tell me a little bit about your background and how you ended up doing the work you are doing today?

Andrew: Thanks Kiera. It’s a pleasure to be here. I’ve taken something of an iterative approach to my career, to put it charitably. I have always been interested primarily in two things: social impact work and emerging technologies. Early in my career, I didn’t see a clear way to bring those interests together. So initially, I did political advocacy and social-impact work, with a bit of tech work on the side. Then I flipped those around and was primarily focusing on tech consulting while doing a little bit of non-profit work on the side. Finally, around 2017, I was consulting for blockchain companies during the big ICO craze and for the first time understood, personally, the dramatic impact that a technology can have on our social, political, and economic systems. Over the following year or so, I spent some time at IDEO CoLab, which is an experimental lab for imagining future technology from a human-centered perspective, and got in touch with David at All Tech is Human and joined the advisory board there. Since about 2018, my work has focused more and more on tech ethics.

Kiera: It’s so often the case that people with a history of exploring a range of different interests eventually bring them together to do such interesting work! To dive right in, Technovirtuism is a project of yours that is focused on promoting good decisions around technology for the future. It is rooted in the idea that the future is a product of the decisions that we make every day — where we spend our money, the systems we design, how we spend our time. Making these decisions well can shape a good future. What are some of the main kinds of decisions that you think are so important right now, which need more thought or reflection than we currently give them?

Andrew: Technovirtuism is dedicated to the idea that the small, mundane decisions we make each day about the role that we grant technology in our lives — for example, which technological habits and systems we choose to participate in — actually matter a lot. Ultimately, each of us is largely the sum of small decisions like these. There’s a well-known quote within tech ethics circles by John Culkin, summarizing the ideas of Marshall McLuhan: “We shape our tools, and thereafter our tools shape us.” I think that sums up why our small decisions are so important. When we grant a certain technology or device a place in our lives, we’re choosing to allow that technology to shape us, to shape our virtues and vices, to shape our hopes and the way we think about the world, the future, and ourselves.

This is related to the idea of virtue that was pioneered by Aristotle and other ancient philosophers. One of the core ideas of virtue ethics is that our behaviors shape our habits, and our habits shape who we are. Today, more and more often, our devices and our technologies are shaping our habits. So, through the lens of virtue ethics, the habits that we form with our technologies turn out to be really, really important in shaping who we are. This isn’t about moralizing or dry academic talk. For philosophers within the virtue ethics tradition, virtue has always been about human flourishing. The way to become happy — the way to achieve the “good life,” as they called it — was to cultivate virtue. So really, at the end of the day, making these little decisions well serves the aim of human flourishing and happiness.

Kiera: It’s interesting to see how ancient ideas are so relevant for modern technology — what a melding of old and new. Could you give an example of a decision of the type you are talking about? A decision that perhaps we don’t give enough thought to?

Andrew: Sure. A lot of people talk about how much time we spend on our smartphones, which is definitely a problem. But when it comes to smartphone use, I think we should be less concerned with the quantity of that use and more concerned with the quality: in what contexts and in what ways are we choosing to interact with our phones? For example, I used to pull out my iPhone and browse the internet whenever I was waiting in line anywhere. I found that this habit tended to make me feel anxious and frazzled, which wouldn’t be a surprise to anyone who studies the human brain. Neuroscience tells us that periods of mental downtime — which we often call boredom — are very important for our neurological and psychological wellbeing. If we’re using our smartphones to avoid any possible instance of boredom throughout the day, that’s actually going to do us real harm over time. In the end, I had to eliminate my web browser from my phone in order to break this habit.

Kiera: The way that our technologies are designed often makes it hard to break compulsive habits like this, even down to small things like the red colour of notifications.

Andrew: Right, that’s a great point. It’s important to realize that many technologies are designed to be addictive. But it’s also important to realize that we can exercise a lot of agency over our wellbeing based on which technologies we choose to participate in. For example, my wife used to use Instagram to keep in touch with her friends in other parts of the world. Even though she had good intentions in using the app, she found that it almost always made her feel bad; the app played into vices like envy and comparison, which is exactly what it’s meant to do by design. She recently switched to an app called Marco Polo, whose creators have made very different design decisions: videos and photos are exchanged privately between friends rather than displayed publicly, which incentivizes genuine interaction rather than performance. Now, with social distancing in place, she uses Marco Polo more than she ever used Instagram but feels that it makes her a better friend rather than a worse one. Friendship, by the way, was an important virtue to people like Aristotle and Cicero.

Kiera: What are the biggest challenges or barriers you see to changing the way we make decisions about our technology habits?

Andrew: I think the biggest challenge is getting outside of the systems, patterns, and habits of the status quo and instead imagining a better way of living with technology. This is challenging because there are a lot of powerful interests that benefit from the status quo; many companies design products, advertising campaigns, and narratives that condition us to accept a certain narrow vision of consumption and of the good life. When we’re inundated with that every day in the devices we use and the systems we’re participating in, it’s really, really hard to open our eyes to the fact that the systems around us aren’t inevitable, that the system isn’t this way by default but was actually designed by certain people with certain interests. It’s really a challenge to get outside of all of that and imagine a better future, an alternative way of living, and then to fight to manifest that future in our day-to-day decisions.

Kiera: It’s very much like being a fish in water: it’s hard to clearly see what is around you and what system you are swimming in.

Andrew: Right. David Foster Wallace makes great use of that analogy in his commencement address at Kenyon College, which is a wonderful speech about exercising agency and intentionality in small, mundane decisions. I’m actually reading his novel Infinite Jest right now, which has a really memorable conversation between two characters about choosing what you love. There’s this idea that the most important choice a person can make is the choice of what their ultimate love is because that choice will direct all other decisions. I think this is worth thinking about with regard to technology; what loves shape our technology use? Love of convenience, love of consumption, love of social affirmation? Or love of our community, of discovery, of knowledge or even wisdom?

Kiera: I find it really interesting that you bring up the idea of systems because it’s a point you often hear in certain strands of economics: economic systems are not inevitable but rather created by humans, and we have choices in how we shape them. It’s important to apply that to our technology as well.

Andrew: I think it’s really interesting to point that out because, as the tech ethics movement matures, we’re seeing just how much tech ethics and economic justice come together and relate to each other. It actually shouldn’t be surprising because these technologies manifest within the context of our economic system and the incentives that it produces. A lot of these tech ethics problems that we talk about actually have their roots and causes within aspects of the economic system.

Kiera: Here again, it’s interesting to see how old questions apply to new technologies — another fusion of old and new. This leads me perfectly to the next question. Tech ethics has become somewhat of a movement in our era. How would you define “tech ethics,” and what ethics are the most important? What ethical questions do we need to be asking?

Andrew: This is a great question. To cast it most broadly, “tech ethics” helps us understand whether a given technology is “good” or “bad.” But all of the interesting stuff happens when we try to define what we mean by good or bad, and here’s where we get into different schools of ethical thought.

The most common approach these days is to think about ethics in terms of consequences: to decide that a technology is ethically good if it creates ethically good outcomes in the world or that it’s bad if it creates harmful outcomes and externalities. This comes from the ethical tradition of utilitarianism, which has become the favorite lens for companies and for the ethical-tech movement at large because of our collective obsession with empiricism and measurement. It’s important to consider consequences, but it’s also just one way of looking at ethics. There’s a really important weakness with utilitarian thinking, which is that consequences always take place in the future; usually, they can only be observed once a technology has already been implemented at scale. A lot of the really bad consequences of technologies don’t actually come up in early testing because they don’t occur when one or even a hundred people click a button; they occur when a million people click that button. And by that point, it might be too late to roll back the technology.

So when talking about tech ethics, I think there’s a real need to broaden our understanding of “ethics” within the movement and look at other schools of thought. One would be deontological ethics, which says that a technology is ethical if it conforms with accepted norms or rules about how the world ought to be. That is a more values-driven approach, which pays attention to things like human rights, freedom of expression, equity, and justice. Just this week, I was talking to the chief ethics officer at a company called Hypergiant, who has created a companywide ethical framework around Immanuel Kant’s deontological ethics. I think this is amazing, and we need to see more of this kind of creativity. Finally, getting back to Technovirtuism, I also think we need a revival of virtue ethics, of thinking about whether a technology is or is not conducive to human flourishing and the good life.

So when we talk about tech ethics, we really need to be pushing technologies to meet all three of these criteria: to bring about good outcomes rather than bad ones, to align with accepted values and norms, and to be conducive to human flourishing.

Kiera: What immediately comes to mind here for me is the problem of defining what virtues we want, or what the good life would look like, which are very tricky questions. Those are huge, collective questions. How might we go about coming up with the values, standards and/or virtues we want to pursue?

Andrew: This is such a great question. This is where tech ethics coalesces with not just economic questions but political questions and social questions. The modern, post-enlightenment liberal order has brought us a lot of really great things, but it has also undermined or even rejected entirely the notion that we can agree on certain objective values or shared visions of the good life. This might prove to be one of the shortcomings of liberalism that we’re now having to confront. We might need to learn to recover a shared sense of what we value, a shared sense of what we deem good and worth living for, in order to direct our technologies towards those ends.

To give a concrete example: William Davies, a professor at the University of London, has talked about how capitalism itself assumes the impossibility of this kind of deliberation. Twentieth-century advocates of capitalism like Mises and Hayek argued that both economic and moral value are subjective and personal, and that we need the mechanism of the free market to mediate between this plurality of values. This logic sets up a system where entrepreneurs and businesses sell competing visions of the future on the open market. The future we end up with, rather than being a product of thoughtful reflection and agreement, is just whichever future sells best. I think we have to find a better way, and being more intentional about directing our technological habits toward a positive vision of the future — of the good life — is a good place to start.

Kiera: What do you think the role of designers is in tech ethics?

Andrew: I’ll share a quote by a designer named Cennydd Bowles, who I spoke with recently, which I think is perfect. He says: “All design involves futures, and all futures involve ethics.” I think this captures the role that designers play in the tech ethics movement. Designers are the ones who are trained and given the mindsets, tools, and responsibility to imagine possible futures. They are given the mandate to courageously, creatively, and boldly build the artifacts — physical and digital — that manifest those futures and move us towards one future as opposed to another. And when you cast the work of design in that light, you really see that every design decision is inherently ethical because every design decision involves making an intentional choice between one or another vision of the future.

Kiera: Looking ahead, on a final note, what do you think will be the biggest surprises in the realm of technology and human rights going forward?

Andrew: At the risk of being redundant, I think we’ll be most surprised to discover just how inadequate our current systems and institutions are for dealing with the challenges that we’re going to be facing in the near future. I think we’re going to be surprised when technologies like blockchain, quantum computing, brain-computer interfaces, and AR/VR really do become ubiquitous and create fundamentally new challenges that our current educational, economic, and political systems were not designed to deal with. I think we will find that a lot of those problems are, in fact, products of our current systems, just as a lot of today’s ethical problems are.

But on the bright side, I think that we’ll also be surprised to discover just how un-inevitable our current systems and institutions are. They were designed by humans in a particular context, for a particular purpose, and worked well in some ways and poorly in other ways. But ultimately, they can always be changed and reimagined. I think we’ll be surprised to feel liberated to imagine futures that aren’t just extrapolations of our current systems or continuations of the track that we’re on. Rather, when our current institutions and systems fail in larger and larger ways, I think we’ll be surprised by the creativity that we discover we have for envisioning better ones. It’s going to be a very exciting time.

Andrew Sears is a tech ethicist and design-minded product strategist who is passionate about emerging technologies and the future we’re building with them. He’s driven innovation at companies like IBM, IDEO, & Wyre, with a focus on AI and cloud products. A thought leader in the tech ethics space, Andrew is the founder of Technovirtuism and serves as an Advisor at All Tech is Human. He has written about the intersection of technology, ethics, and society for YouTheData and the Center for the Study of Ethics and Technology, and has spoken to audiences of up to 800 on subjects like speculative design, social change, and technological virtue. You can keep up with his work at andrew-sears.com.

Kiera Schuller is a Research & Policy Analyst at ICTC, with a background in human rights and global governance. Kiera holds an MSc in Global Governance from the University of Oxford, and launched ICTC’s Human Rights Series in 2020 to explore the emerging ethical and human rights implications of new technologies, such as AI and robotics, in Canada and globally, particularly on issues such as privacy, equality, and freedom of expression.

ICTC’s Tech & Human Rights Series:

Our Tech & Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the implications of new technologies such as AI on a variety of issues like equality, privacy, and freedom of expression, whether positive, neutral, or negative. The series also explores questions of governance, participation, and the use of technology for social good.


Information and Communications Technology Council (ICTC) - Conseil des technologies de l’information et des communications (CTIC)