Are you Post Normal? A primer for designing ethically
2018 was a crazy year for tech. Whether it was Facebook’s role in propagating the violence that led to genocide in Myanmar, Cambridge Analytica leveraging the ad industry to undermine US and British democracy, or the displacement effects of Airbnb on local communities, to name a few, we began to realise the extent of the impact technology has, and can have, on the world we live in.
I don’t believe that technology is bad, nor that the vast majority of the people (myself included) who design and develop technology based solutions have bad intentions. So what’s going on?
A lot of the social and societal problems we’re seeing in tech at the moment stem from a growing realisation that we either cannot foresee, or don’t always fully understand and react to, the secondary or tertiary impacts (or, as economists call them, externalities) of what we’re designing.
New regulations, such as GDPR, are shining a spotlight onto the issues and helping to bring how we treat people and their privacy to the forefront of the conversation.
When any discipline reaches a certain level of sophistication, the subject of ethics becomes more relevant. There is certainly a lot of discussion at the moment around ethics and how we might design products and services more ethically, which is great to see.
When you start to dig into the how of designing ethically, it becomes complex quite quickly. Traditional methods of ethical evaluation (such as review boards in academia) take a long time and aren’t compatible with the timescales that designers and digital teams are working to.
Additionally, this is not purely an ethical problem. Focusing solely on ethics, without diagnosing the real issue only gets us so far. Ethical considerations are only as good as the knowledge you have from which to consider them.
Ethical considerations are only as good as the knowledge you have from which to consider them
If we‘re not taking more active steps to understand the complexity of the systems and people we’re designing for, then any kind of ethical consideration applied will be lacking the depth needed to better assess the potential futures we’re creating.
It’s also worth noting that within design itself there are multiple disciplines, each of which is concerned with a different level of a given problem. For example, an architect thinks on a city or societal level, whereas an interaction designer is concerned with specific interactions within a product or service. I think this is important, because you need to recognise what is and isn’t within your control, and understand that there are limits to what you can do within your chosen discipline (for your own sanity; that doesn’t mean you’re stuck there if you don’t want to be). The amazing Dan Hill (who should be on your reading list) recently wrote a great piece about the different levels of thought involved in solving complex issues.
As with everything there are no silver bullets. But that doesn’t mean it’s impossible. Since founding Ethics Kit, I’ve been curating and creating tools and methods to help teams work in this way.
I believe doing something is better than doing nothing. Tiny incremental changes are extremely powerful drivers for change.
Here are some of the key learnings that underpin my personal approach so far, and that I think might be useful for others. Let’s start with how we might address complexity by adapting our culture and values.
Culture and values
So, how do we understand the mind-bending complexity of the impact of our work within a global economy in the internet era? It turns out there are limits to the level of complexity that our processes for acquiring knowledge can adequately deal with.
Applied science is great at dealing with extremely focused problems, but within a highly controlled environment where variables are set and assumed to remain constant. In reality, we know this is not true and all actors within a system are constantly in flux.
Professional consultancy emerged in recognition of this fact but, for our purposes, still falls short. Too heavy a reliance on data in decision making means you’re looking backwards; although past human behaviour is a good barometer of future behaviour, it tells us little about futures that have not yet existed.
But what does this have to do with culture and values? Enter, stage left, the wonderfully named “Post-Normal Science”. Coined by Silvio Funtowicz and Jerome Ravetz, the term describes an approach for situations where facts are uncertain, values are in dispute, stakes are high and decisions are urgent; rather than relying on experts alone, it extends the peer community to include everyone affected by an issue.
This is a fundamental shift in the way most of us work. People and organisations have been evangelising this way of working for some time, in particular: Service Design and Co-Design or Participatory Design are very good at working in this way.
So, if changing our working culture and values to be more post normal is going to help us understand and respond to the levels of complexity we’re dealing with; what else could potentially trip us up in our humble quest for more responsible innovation?
Let’s take a step back and look at the (massively simplified) process a team or project might go through when working on a problem. I’ve loosely based this on Dan Nessler’s explanation of the double diamond.
As mentioned earlier, one of the foundations of this process is a post-normal culture and values. This is the way we work, and it reflects the necessary shift in thinking about value and where and how value is created; in particular, doing things with people and not to or for them. Underpinning that are our unconscious biases; they are harder to recognise, but they affect absolutely everything.
As we’re all human (apart from the bots 👋) we’re also subject to unconscious biases. Thankfully, there are ways to help identify or mitigate some of these. Working with the people that you’re designing for and increasing the variance of actors you introduce to the project will go a long way to addressing issues of unconscious bias.
Additionally, having diverse teams has been shown to mitigate issues with unconscious biases, although the case for increasing diversity in teams certainly does not stop there.
It’s also worth noting that we have these biases for good reason. They are our own built in mechanism for managing complexity when navigating the world around us and making decisions within it. Without them, we would be incapable of making decisions at all.
While it is impossible to get rid of them, being aware that they are there, familiarising yourself with the biases most applicable to what you’re trying to do (the biases that typically surface in a meeting, for example) and then watching out for them will serve you well.
A simple barometer for the risk that unconscious biases pose to your work is this: think about how the make-up of your team differs from that of the people you’re designing for. Are there any gaps? If so, they represent a risk you need to understand. You and the people in your team will likely have different world views and capabilities, and think in very different ways, compared with the people you’re designing for. These differences will catch up with you.
Understanding the context
We started out by talking about the complexity of the world we’re designing for and how traditional approaches fall short on understanding and managing that complexity.
Human-Centred Design and Design Thinking have focused our attention on the “user”. What are their needs? Or, in the case of Jobs to be Done, what jobs are they trying to do? While powerful, when thinking about the wider impact of our work this is quite a reductionist perspective to take: it flattens the distinction between individuals and society, and reduces people to groups of goals, needs and problems.
Shifting our focus onto needs and behaviour has been incredibly powerful, but it raises an interesting deontological question: are we treating people as means to our ends, or as ends in their own right?
Only knowing what people need, or what it is they are trying to do, doesn’t really help us to understand how they might benefit or suffer from the myriad potential futures we could offer. Amartya Sen and Martha Nussbaum’s Capabilities Approach is one approach we could adopt to consider people and their welfare beyond needs. It was adopted by the UN in their Human Development Index, but how it could be applied to augment Design Thinking approaches has (as far as I know) yet to be determined.
Personally, it’s on my radar and is an approach I’d like to prototype and test. There is also a risk associated with front-loading so much information that needs to be considered. One thing that constantly echoes in my mind is a conversation I had recently with a product owner, who argued that too much information had led to “analysis paralysis”, leaving their designers unable to decide how to act.
Thinking back to the post-normal explanation, maybe the best thing is to launch small solutions and constantly watch and learn? I’d be interested to hear more case studies about working with this amount of complexity and find out how people deal with decision making at this level.
However we approach understanding the context, the outcome should be this: Do we understand enough about what is possible for people, what they need, what they value and what they depend on to make reasonable considerations about how the changes we make could affect them?
Although being mindful throughout the design process is obviously important, I think it’s helpful to be a bit more specific. Building on our simplified process outlined earlier, there are a few places where it seems natural to pause and consider our actions.
First stage considerations
Before we go out and involve any third parties, there are several things that we’d normally consider. Namely:
- Getting a shared understanding of what it is we’re doing and why
- Identifying the people we need to learn from and how best to approach them
- Creating and managing the process for getting their consent
Ethics Kit has some tools that are helpful for guiding individuals and teams through these considerations. The Team Canvas is a powerful way to quickly get everyone on the same page and establish how you’re going to be working together. Also, using a combination of the Ethics Cards and the Kick-off Canvas to record the conversation can help define your project and approach.
Bear in mind that the context and the scope of your project will determine the extent of the considerations you should be making. The cards in particular are intended to be flexible enough that you can adjust them to your context as you see fit.
Informed consent is the next thing you need to get right. Doing so goes way beyond compliance and data protection — it is your participants’ first impression of you and your project. Informed consent is fundamental in developing trust with the people you’re trying to learn from, and it empowers them in the design process. Empowered participants who trust you = significantly better insights. Doing the right thing in the right way is a virtuous cycle and not something to be taken lightly 💪
Empowered participants who trust you = significantly better insights.
I wrote an article a while ago about how I design informed consent; GDS published a great article on the subject, and the lovely ProjectsByIf have open sourced a repo on GitHub, both of which I recommend checking out.
Second stage considerations
After we’ve done our initial rounds of research we start to get a clearer picture of what is happening and typically begin some form of ideation; focusing on “How Might We” questions gleaned from opportunity areas and insights from our research.
At this point, we should have a broader understanding of the people we’re designing for and their capabilities, and be beginning to see the potential directions the project might take.
Once you’ve begun ideation and started to narrow down your potential solutions, you’ll start to have a more specific idea of the ‘thing’ that you might want to explore further.
Jerome Glenn of The Millennium Project has created a suite of tools called the Futures Research Methodology. These tools were developed to help us consider potential futures and the consequences they might create, and there is a wealth of practical knowledge in there. In particular, I’ve found that the Futures Wheel is a great way to frame and explore this complexity within a single workshop.
If you’re building something in technology, and your idea is sufficiently mature, the Data Ethics Canvas from the Open Data Institute (ODI) is a comprehensive way to explore how you handle and treat people’s information. It really helps you to spot where the potential issues might be ahead of time and design into them rather than around them — reassuring in the GDPR era.
Measuring the impact
Metrics are key drivers for change within business and powerful tools for observing behaviours across large populations or systems. Alongside this, they can also create new behaviours in us; the singular pursuit of ‘moving the needle’ on a particular number and being solely driven by quantitative data can lead us to forget what’s best for the people we’re serving with our product or service.
Whenever I present this as an idea, I’m often met with a Quantitative vs Qualitative argument. I’m not saying that quant is bad, that would be ridiculous. What I am saying is that quant without qual is like one of those comedy puddles that’s actually a sinkhole.
Qualitative data adds depth to the numbers and to your understanding of what is happening, or might be about to happen. This is important. Thinking back to the shift towards a post-normal approach, the speed at which we can understand and then react will ultimately determine the scale of any problems we might encounter.
Getting qualitative data into the hands of key stakeholders and making it an essential part of the decision-making process is itself fraught with challenges. Communities such as ResearchOps (led by many outstanding individuals, such as Kate Towsey et al) are making great strides towards establishing design research within organisations (and Ethics Kit are currently building something that might help with this too! Follow me on here or get in touch on Twitter if you’re interested.)
In his excellent book Future Ethics, Cennydd Bowles talks about the idea of the ‘mutually destructive’ target — metrics chosen in pairs such that one will suffer if we simply game the other. The idea is simple, but brilliant. Mutually destructive metrics could give us early insights into the consequences of redlining one metric at the expense of our users. For example: “Dark patterns may extract more revenue per user, but they’ll also harm retention if users feel duped. Choosing both revenue and retention as mutually destructive targets provides a minor safeguard against abuse.”
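To make the idea concrete, here is a minimal sketch of what a mutually destructive metric check might look like in practice. All of the metric names, numbers and thresholds below are hypothetical illustrations, not anything from Bowles’ book: the point is simply that each metric in a pair acts as a tripwire for gaming the other.

```python
# A minimal sketch of a "mutually destructive" metric check.
# Metric names, values and the tolerance threshold are all invented
# for illustration; adapt them to whatever your team actually measures.

def check_paired_metrics(before, after, pairs, tolerance=0.02):
    """Flag any pair where one metric improved while its counterpart
    fell by more than `tolerance` (as a relative change)."""
    warnings = []
    for a, b in pairs:
        delta_a = (after[a] - before[a]) / before[a]
        delta_b = (after[b] - before[b]) / before[b]
        if delta_a > 0 and delta_b < -tolerance:
            warnings.append(f"{a} rose {delta_a:+.1%} while {b} fell {delta_b:+.1%}")
        if delta_b > 0 and delta_a < -tolerance:
            warnings.append(f"{b} rose {delta_b:+.1%} while {a} fell {delta_a:+.1%}")
    return warnings

before = {"revenue_per_user": 4.00, "retention_30d": 0.50}
after = {"revenue_per_user": 4.40, "retention_30d": 0.44}

# Revenue is up 10%, but retention fell 12%: the pairing surfaces
# the trade-off instead of letting the revenue win stand alone.
warnings = check_paired_metrics(
    before, after, pairs=[("revenue_per_user", "retention_30d")]
)
print(warnings)
```

A check like this could run alongside an analytics dashboard or an A/B test readout, so the conversation becomes “we moved one needle at the expense of the other” rather than a single number celebrated in isolation.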
We’ve covered quite a lot of ground here, but there is still more that we could talk about. I’m still on this journey myself, so I have no doubts that there’s a whole bunch of things I’ve not heard about yet. Hopefully some of this is useful or helpful in some way and that you can see how some of these tools could be implemented into how you’re already working.
What’s mentioned here isn’t conclusive by any means; like I said earlier, there are no silver bullets. This isn’t easy to do and we won’t get it right first time. What’s important is that we recognise that, do the best we can to identify issues before they arise, and respond quickly and appropriately if or when they do.
We are adding and developing this set of tools all of the time and will continue to share them openly. If you have something that you think is interesting or appropriate, feel free to share it. If you don’t understand how something on there works or you have an idea for how a workshop could be improved then tell us! We’d love to hear from you, and work with you!
There are more tools to discover already over at Ethics Kit; fundamental team development skills like reflection, active listening and developing an understanding of yourself and others haven’t been mentioned here, but will help us overcome a lot of these challenges and navigate this journey together.
We want to share this platform and work collaboratively with the people we’re designing for. That is, after all, the most Post Normal thing to do.