Data ethics in tech: here's why it's so hard

Nathan Kinch
Greater Than Experience Design
6 min read · Jan 3, 2019

Late last year our team was running a workshop for a client. They’re working on something new. Something that’s currently unprecedented in their industry and geography. Because of this they have hard decisions to make. They need to be explicit about what they will and won’t do. They need to consistently execute verifiably ethical behaviour. They need to re-earn the trust of the market.

Among other things, they hired us to help design a Data Ethics Framework that could be operationalised collaboratively and effectively. This framework enables:

  1. The client to give evidence of how proposed data processing activities satisfy ethical requirements
  2. The evidence to be checked and assured by an independent third party, and
  3. Assured proposals to be ‘trust-marked’ and released to market

There’s a bit more depth to it, like:

  • Diverse and inclusive stakeholder collaboration
  • Social preferability testing
  • Experiments within a controlled environment
  • Direct impacts on training and incentive structures
  • Policies and procedures
  • Systems architecture and engineering
  • Vendor assessments, and
  • Customer experience design.

That’s the basic theory. We never really know until we put it to the test. That’s happening now.
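If it helps to picture those three steps as a pipeline, here’s a minimal sketch in Python. The names and states are hypothetical, for illustration only — this is not the client’s actual tooling:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    PROPOSED = auto()      # data processing activity drafted
    EVIDENCED = auto()     # evidence mapped to ethical requirements
    ASSURED = auto()       # checked by an independent third party
    TRUST_MARKED = auto()  # cleared for release to market


@dataclass
class Proposal:
    name: str
    evidence: list[str] = field(default_factory=list)
    stage: Stage = Stage.PROPOSED

    def attach_evidence(self, item: str) -> None:
        self.evidence.append(item)
        self.stage = Stage.EVIDENCED

    def independent_review(self, approved: bool) -> None:
        # In practice this step belongs to the assurer, not the client.
        if self.stage is not Stage.EVIDENCED:
            raise ValueError("Nothing to review: no evidence attached yet.")
        self.stage = Stage.ASSURED if approved else Stage.PROPOSED

    def trust_mark(self) -> None:
        if self.stage is not Stage.ASSURED:
            raise ValueError("Only assured proposals get trust-marked.")
        self.stage = Stage.TRUST_MARKED
```

The point of the sketch is the ordering: no trust-mark without independent assurance, and no assurance without evidence.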

Back to the workshop. A little after lunch, the project sponsor put a question to the group, “Why have we not done this before?”

The question seemed simple enough, yet an answer took a while to emerge. I eventually chimed in to break the silence, “Because it’s bloody hard…”

This article is about data ethics in tech. It’s about why data ethics in tech is so tough, and how you can overcome the challenges and inertia to get started.

You should read this post if you:

  1. Work at a company that processes a lot of data about people
  2. Are tackling an ‘ethically grey’ challenge or opportunity
  3. Believe technology should be deliberately designed to maximise good and minimise bad, and/or
  4. Believe companies have a responsibility to take charge, lead from the front and place strong ethics at the heart of their business

What is data ethics?

As Luciano Floridi and Mariarosaria Taddeo define it, data ethics is a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data ethics builds on the foundation provided by computer and information ethics but, at the same time, it refines the approach by shifting the level of abstraction of ethical enquiries, from being information-centric to being data-centric.

That’s a bloody mouthful.

Put more simply, we see data ethics as a significant competitive advantage. It’s about clarity. Data Ethics define what we will and won’t do. They help hold us accountable. They help maintain alignment to customer outcomes. They increase the likelihood our conduct is positive.

A Data Ethics Framework is the consistent process we execute to decide, document and verify how our data processing activities are socially preferable.

It’s our view that data ethics are the difference between trustworthy services and trust hacks.

Tech has tried the latter. It now needs the former.

Why are data ethics so bloody hard?

Take Google’s AI Principles, for example. What do you get from reading them? Probably very little. There’s no specificity. They have not defined what they will and won’t do.

I’m not having a go. I empathise with how hard it must be for an organisation with Google’s business model (which is really a post for another time) to be explicit about anything related to their data processing. They view this clarity as a constraint.

But this is zero-sum thinking. It assumes people’s fundamental right to privacy inhibits innovation.

Let me qualify. Here’s the fifth of Google’s AI Principles:

5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  1. How, Google? How will you incorporate privacy design principles?
  2. What privacy design principles will you incorporate?
  3. How will they be audited?
  4. Which independent parties will conduct the audits?
  5. How will people’s rights be preserved?
  6. How can people enact these rights?
  7. Are your actual customers engaged in this process?
  8. How are these principles embedded into product development workflows?
  9. Do the people building these technologies have appropriate knowledge of Privacy and Security by Design?
  10. Are they practising Privacy Engineering?
  11. Are they aware of the privacy enhancing technology movement?
  12. What is ‘appropriate notice and consent’?

The questioning could continue. I digress.

Do you see how much extra work I just created for them? A fairly simple, high-level, principles-driven piece of content has just started becoming a much larger piece of work. This is actually good. If the work is done, it can become a framework. If the framework is supported, communicated effectively and built into the core operating and business model, positive behavioural change may occur. But only if it’s designed.

It’s for this very reason I LOVE The Ethics Centre’s Ethical by Design. Please download a copy and get to reading.

Approaches like this force the people leading data ethics initiatives to be explicit. Specifically, they help us get clear on our foundation: purpose, values and principles.

Purpose is your reason for being. It helps to explain your choice of core values and principles.

Values identify what is good. They are the things you strive for, desire and seek to protect. Some values are explicit. You need to call these out. Others are implicit. You need to be considerate of their impact.

Principles identify what is right (and wrong). They help you to start outlining how you will achieve what is good.

By being explicit from the outset, we can begin building real detail into the operational and behaviour change framework.
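As a toy illustration of what “explicit from the outset” might look like — my construction, not The Ethics Centre’s — imagine the foundation as a record that refuses to exist with anything left blank:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Foundation:
    purpose: str                                          # your reason for being
    values: list[str] = field(default_factory=list)       # what is good
    principles: list[str] = field(default_factory=list)   # what is right (and wrong)

    def __post_init__(self) -> None:
        # Being explicit is the whole point: an empty foundation is rejected.
        if not self.purpose.strip():
            raise ValueError("State your purpose explicitly.")
        if not self.values or not self.principles:
            raise ValueError("Call out your values and your principles explicitly.")
```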

But again, this is work.

How many organisations today have a significant budget for data ethics? How many organisations are putting their money where their mouth is and investing heavily in this critical area? Which raises the question…

Why are data ethics so important?

A colleague of mine, Joe Toscano, recently published an incredible book, Automating Humanity. In it he does a great job of broadly articulating the challenges the technology industry faces. He also clearly showcases the demonstrable impact decisions made by industry have on individuals, communities and society at large.

In essence, the data we process now has real world impact. It can inhibit people’s access. It can shift or reinforce opinion. In some cases, it might be the difference between life and death.

Because of this, we have a responsibility. We need to maximise good and minimise bad. We need to make data work for the people it’s about. We need the tech industry to optimise its organisational structures, its business models and its products and services for real world outcomes. And we need to do it pretty bloody fast.

What can you do to get started?

This might seem overwhelming. In many ways it is. It’s bordering on existential. But there are some simple things you can do to get started.

The first is the Open Data Institute’s Data Ethics Canvas.

  1. Read about it
  2. Conduct a workshop
  3. Collaboratively complete the canvas, and
  4. Present the results and a proposed course of action to your stakeholders
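If you want a lightweight way to track what the group has and hasn’t covered in that workshop, a sketch like the one below can help. The section names paraphrase a few of the canvas themes; download the ODI’s published canvas for the real, full set:

```python
# A toy workshop aid, not the canvas itself. The section names below are
# paraphrased from the ODI Data Ethics Canvas; use the published version.
CANVAS_PROMPTS = {
    "Data sources": "What data are we using, and where does it come from?",
    "Positive effects on people": "Who benefits, and how do we know?",
    "Negative effects on people": "Who could be harmed, and how?",
    "Mitigating actions": "What will we do about the risks we've named?",
}


def unanswered_sections(answers: dict[str, str]) -> list[str]:
    """Return the canvas sections the group hasn't answered yet."""
    return [
        section for section in CANVAS_PROMPTS
        if not answers.get(section, "").strip()
    ]


# After the workshop, surface the gaps for the follow-up session.
gaps = unanswered_sections({"Data sources": "CRM records and web analytics."})
print("Still to discuss:", ", ".join(gaps))
```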

The second is simple too: engage your stakeholders. Deeply engage your customers. Put your hypotheses to the test and aim for socially preferable. Passing for acceptable isn’t good enough. People should be stoked about what you’re doing, how you’re doing it and how they’re treated throughout the process.

The third is harder. It has to come after steps 1 and 2. You need to build a body of evidence and attain buy-in. A transactional relationship with data ethics won’t cut it. This has to be cross-organisational. You have to live and breathe it. It needs to be embedded.

At >X we work on this stuff daily. If you wanna chat, get in touch. This is a conversation that needs to progress. We’ll do whatever we can to be part of that progress.
