How Design Designs Us: Part 3 | The Ethics of Design

The human brain is a complex mass of energy designed to respond to the world around it, and more specifically, to make effective decisions about how to survive. Our species has evolved entirely on this simple premise: from the limited resources of the hunter-gatherer era to today, when obscene abundance is available (to some, anyway), the brain has carried the steadfast responsibility of moving humans forward by working to escape pain and move toward pleasure, or, still true in many cases, to simply survive.

With all this rapid post-industrial revolution growth and technological advancement, we are beginning to see the fall-out of avoiding a singular question: how does what we design, design us? I’m thoroughly fascinated by the cognitive implications of what we create. There’s no doubt that design and technology have radically changed humanity in extremely positive ways — we live in one of the safest times in human history. But is the rapid rate of technological growth superseding our ability to cognitively understand its implications?

In part three of this series of reflections on how design designs us, I am exploring some brain-provoking arenas of design and technology ethics as well as exposing the conundrum that we currently face as a species.

In case you missed parts one and two, you may want to circle back: read (1) The Silent Social Scriptor and (2) Cognitive Activation Design.

We make decisions every day based on our personal and professional ethical frameworks, whether consciously or not. Do you know what conditions you use to distinguish between what is right and what is wrong? Consider, for a moment, where you draw your ethical knowledge from, and what life experiences and conditions have created the scaffolding that makes you — you. There is no collective moral compass, and the dichotomy of right/wrong differs dramatically across generations, cultures, nationalities, and professions. I’m fascinated by the ways in which the human brain has evolved to avoid the complex and often messy arena of ethical debate, as well as how we increasingly opt to deflect responsibility for the ramifications of our actions onto external forces.

Humans have an incredible resilience in adapting to new ways of avoiding the “ontological anxiety” caused by the unknown.

A few years ago, I was doing PhD research and interviewing designers around the world to identify the factors limiting designers from integrating sustainability into their work. Nearly everyone I interviewed had, at some point, learnt about the systemic implications of rapid innovation and how to make better decisions; yet, most of them still passed off the responsibility of ‘right’ decision making to someone else. It was the boss’s, client’s, manufacturer’s, government’s, or consumer’s choice that would solve the problems their production contributed to. When everyone within a system plays this hands-off, ‘that’s not my problem’ game, the system is very quickly riddled with externalities… and a shit load of problems! This appears to be the case with the complex debate around the ethics of design and technology.

Who is taking responsibility for the outcomes, externalities, and downright damaging impacts of our hyper-consumer, ever-changing landscape of new gadgets and virtual arenas that are coming on board at lightning speed?

Everything that is created requires something else to be changed, destroyed, or depleted. Under the current linear systems of design and production, much of what we consume comes at the expense of something or someone else — the cost of an ecosystem, a culture, or a human’s well-being. The ramifications are no secret; we see the dynamic feedback loops of our individual and collective decisions everywhere (see here, here, here, and here). But to be completely clear: I’m not interested in shaming or blaming (a total waste of time and energy!).

I’m interested in debate, dialogue, and activated decision making that helps to resolve these obvious byproducts that modern technological advancement has given us.

And don’t get me wrong — I’m no luddite, and I’m not advocating a reversal of progress. I am 100% insisting on a deeper, more intentional, industry-wide and collective dialogue about the role that the ‘creators’ play in dictating where we all end up as a society of consumers. I’m agitating for the personal responsibility set to be more refined, and for the deflection of agency to stop — especially for those in the producer roles. You want to live in this world, create stuff, have fun, and live a healthy happy life? Then you can’t avoid ethics! It is fundamental to how we got here, to this point in time as a species.

Ethics is not an option; it’s the imperative that has made society possible, and we can’t lose the framework that keeps this ship sailing.

In 1965, when Intel co-founder Gordon Moore first proposed that the number of transistors per square inch would double every year, individual computers cost around $28K and took up entire rooms. Data density now doubles roughly every 18 months, about as often as people replace their cell phones (interestingly, people recently started to keep phones around 21 months — check this curious graphic). Gone are the days of simple pixelated games like Snake and Solitaire; we live in an incredible time in human history, where we are literally redesigning the way the world works.
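To make the compounding concrete, here is a small sketch (my own illustration, not from the article) of what a doubling every 18 months adds up to; the function names are hypothetical:

```python
# Illustrative sketch: compounding under a Moore's-law-style doubling.
# If density doubles every 18 months, how much growth over a decade?

def doublings(months: float, doubling_period: float = 18.0) -> float:
    """Number of doubling periods that elapse in `months`."""
    return months / doubling_period

def growth_factor(months: float, doubling_period: float = 18.0) -> float:
    """Multiplicative growth after `months`, one doubling per period."""
    return 2 ** doublings(months, doubling_period)

# Over 10 years (120 months): 120 / 18 ≈ 6.7 doublings,
# which works out to roughly a 100x increase in density.
decade_growth = growth_factor(120)
```

The point of the arithmetic is that exponential doubling is unintuitive: a decade of 18-month doublings yields around a hundredfold increase, which is part of why our collective ethical frameworks struggle to keep pace.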

In our hyper-globalized world, the lines of what is ethical and what is not are becoming increasingly blurred — if not disappearing altogether. Cultural conditions, social factors, and, increasingly, political climates are dramatically shifting the status quo of what is deemed ethical and what has simply become normalized. For example, consider what you think is ethical and what is not; I bet you have different ethical frameworks now than you did 10 or 20 years ago.

But, who gets to choose our collective ethical frameworks of the day? The answer should be — we all do!

Ethical conventions are the product of social dialogue and debate, resulting in a normalized collective agreement on approaches that benefit the most people. As societies shift and evolve, the ethical frameworks that form and regulate the agents within them require constant readdressing, to check that they are in line with shifts in technology and changing social conditions. It was not too long ago that sexism in the workplace was not even a concept being discussed; women having their asses pinched by a male boss or co-worker was normal, accepted, and even applauded. Now, thanks to a series of public cases, debates, and a collective agreement that women should not be sexually exploited or discriminated against, ass-pinching is no longer a socially acceptable act. We would call it unethical for a co-worker to do such a thing and speak out against it. And certainly, that person could be sued and legally penalised for the act. We wouldn’t be here without ethics; that is why it is so critical that we continue to evaluate, discuss, and construct ethical frameworks that fit the rapidly changing pace of social progress and technological advancement.

But here is the complex thing about ethics — there is no universal truth or governing paradigm that dictates what is ethical and what is not. Instead, ethics is bred out of the act of making conscious, deliberate decisions that take into account many of the conditions at play in the arena one is deciding upon. Take the death penalty or abortion — different cultural conditions have bred collective opinions that one is right and the other morally wrong; the foundations on which these decisions are based, though, are often diametrically different.

As we become more complacent about the type of technological interventions we accept into our lives, we become more placid about what is normal and what should be debated in wider context. And, it is true that ethics is having a hard time keeping up with the hyper-speed development of technological advancement. While Google, for example, has an ethics committee that provides guidelines around its experimentations into Artificial Intelligence (AI), they won’t publicly declare who is on the committee or what they are working on. History has shown us that unchecked corporate actions can have detrimental impacts on the rest of us.


Have you seen the infamous trolley car dilemma? If not, take 2 minutes to watch it before reading on.

Here’s the short of it: the trolley car dilemma is an ethical thought experiment in which you must decide whether to pull a lever that diverts a runaway trolley away from five people tied to the tracks and onto another track where it will kill one person instead; in other words, whether to intervene to save five lives by willingly killing one in exchange. In consequentialist ethics, the same type of decision-making conundrum is proposed like this: you are a doctor, and with the rations of medicine you have, you can save five lives. Or, you can give all of the medicine to save just one life — that of a person considered more socially important than the other five. What do you do? A consequentialist approach says one should maximise the best outcome for the most people, whereas deontological ethics says you must do the right thing, using whatever ‘rightness’ is considered to be on that day.
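The contrast between the two frameworks can be reduced to toy decision rules. This is a hedged sketch of my own, not from the article; the `Option` class and the rule functions are hypothetical names for illustration:

```python
# Toy decision rules for the two ethical frameworks the text contrasts.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    lives_saved: int
    violates_duty: bool  # e.g. actively killing someone, breaking a rule

def consequentialist_choice(options):
    """Maximise the good outcome, regardless of how it is achieved."""
    return max(options, key=lambda o: o.lives_saved)

def deontological_choice(options):
    """Choose only among options that don't violate a duty."""
    permissible = [o for o in options if not o.violates_duty]
    return max(permissible, key=lambda o: o.lives_saved)

trolley = [
    Option("pull the lever", lives_saved=5, violates_duty=True),
    Option("do nothing", lives_saved=1, violates_duty=False),
]
# The two rules disagree on the same facts:
# consequentialist_choice(trolley).name -> "pull the lever"
# deontological_choice(trolley).name    -> "do nothing"
```

The sketch makes the article’s point visible in miniature: the same situation yields opposite answers depending on which framework you encode, which is exactly why no static code of conduct can make the decision for you.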

So herein lies the conundrum. Ethics is not a given; it has to be constructed by the participation of society. What we get and what we don’t get is decided by implicit and explicit actions to agree or disagree with the status quo of the day. Not participating in the debate is just as much an implicit agreement to maintain the status quo as actively challenging it is a rejection of it. So, if you don’t vote, if you say “that’s just the way it is,” if you accept the idea that the future is inevitable, if you deflect responsibility to someone else “out there,” then you are, by default, participating in the maintenance of the system — even if you don’t agree with it!

And here we are: in an enigma of rising voter apathy, self-censorship, and a softening of the debates and good old arguments that help create the cognitive conditions for forming your own opinion. This also allows, albeit less pronounced, the code of moral conditions that we must all collectively abide by for a functional society to take a back seat to the ever-constant technological offerings of giant ecosystems called… corporations.

When it comes to designing the material and technological world around us, we must demand public ethical considerations from the shapers of our lived experience. If you graduate with a computer science degree in the US and decide to become a member of the Association for Computing Machinery (ACM), then you are bound by its ethical code of conduct; the same goes for members of the national body for designers in the US, the AIGA (disclosure: they are a client of mine). In fact, many other design industry associations (IDSA, AGDA, IGDA, IDIBC) have ethical codes of conduct that members must abide by; they entail doing no harm, being good to your client, protecting rights such as privacy, etc. These are cornerstones of most industry ethical frameworks, and rightfully so: making sure that other members of your profession don’t mess it up for the rest of you makes good business sense. However, they are not good enough. Cognitively, agreeing to something you didn’t decide upon has very little weight in broader decision-making contexts.

Something like the Hippocratic oath taken by doctors has a different weight to it than a code of professional conduct, and Google’s ex-Design Ethicist, Tristan Harris, agrees (there are many calls for ethics in design and tech, such as the designers oath, and see here, here, here and here for a few samples of the many). But no number of ethical guidelines can make the decision for you in the moment. As humans, we develop complex neurological conditions and social experiences that form the foundations for how we see and behave in the world. Ethics is such a foundational element of the individual person that it’s hard to distinguish between the external narratives of right and wrong and the internal justifications of what we end up doing. (The Stanford Prison Experiment and the torture at Abu Ghraib give us sobering perspectives on human nature and one’s ethical decisions.)

We have come to realize that humans are irrational, emotional beings, riddled with cognitive biases and challenged by our own cognitive dissonance. This realization raises the question: how can one actually be ethical if they don’t know how they will respond to a situation until they are challenged by it?


When I was giving my TED Talk, I had the pleasure of meeting the famed ethicist Peter Singer (who gave this incredible talk on effective altruism whilst I was sitting in the front row, mouth agape at the mental provocations he provided). Singer is no stranger to pushing for complex and hard debates; from animal rights to stem-cell research and euthanasia, he takes his ethical lens and shows us all the giant holes we have created through our collective avoidance of the arenas of ethics.

I asked Peter about design ethics — actually, I asked him to write about it after ranting at him about how designers create the world but have limited responsibility matrixes outside of making money and not killing people (intentionally). He very politely told me to write it myself. So, three years later, I am. It’s taken me many dives into mining the complexity, and by no means am I there yet, but this arena desperately needs attention, conversation, dialogue, and discourse. This is a provocation for debate as much as it is a reflection on our industry and the opportunities that we all possess, within our creative and technologically brilliant minds, to proactively participate in creating the kind of world we want to live in.


As I discuss in the previous articles in this series, design is an incredibly powerful social scriptor. The designed world creates us; so if we create designed artifacts, cities, communities, and the technologies that guide it all (in increasingly artificial ways), then why aren’t we having bigger, louder, stronger, and more significant public debates about the ethics of our collective choices? Who gets to decide what we end up with and what is not a good collective decision to invest in? I mean, do we even want AI?

I recently experienced Virtual Reality for the first time. I was challenged to escape from a room at a trade show as a magician and failed, without knowing what the consequences would be. After two minutes, an axe was thrown at my head in full 3D, making blood trickle in front of my VR headset in a very real way. The cognitive reaction was incredibly visceral — I screamed and was instantly filled with an extreme rush of cortisol and adrenaline as my body tried to run from the danger! Only I couldn’t, because I had been loosely chained in to prevent me from pulling the headset out of its connector. The experience was so real — a testament to the incredible technological advances we have made — but also worrying for me, as I considered the neurological impacts of such realistic but completely made-up scenarios. My brain could not distinguish between the two experiences: the real world I actually exist within and the virtual world designed to simulate it, in which anything can happen, including having an axe thrown at your head.


This provocation for a wider conversation on the ethics of the things we design is framed through the worldview that I constantly approach the world with — one of sustainability and the social, economic, and environmental choices we can make in supporting the life-giving systems which maintain our species on this planet. So, my provocation is very much about agitating for a collective conversation that is greater than the purely economic gains that result from many of the technological advancements we currently have and are investing in launching into the world.

Ethics is not an option; the conversations need to transpire out in the open, not behind locked doors or protected by the privileged and perverse legal structures of proprietary information. Things that will dramatically impact the planet, and all of us on it, need to be collectively agreed upon; otherwise, we will lose much of the incredible beauty our species has created over the last millennium of complex evolution, development, and negotiation around what it means to be a human living on this planet.

— — — —

I’m a designer and sociologist fascinated by the way the world works and how we can activate creative capacity to redesign it so it works better for all of us. I founded the UnSchool of Disruptive Design, a global experimental knowledge lab that pops up around the world to activate and agitate for positive social change through design.