Let’s talk consent

Trauma-Informed Design Reflections #17

kon syrokostas
the Trauma-Informed Design blog
5 min read · Jun 18, 2024


Black logo text on light pink background saying “TID Reflection 17, 10–16 June”

Lately, I’ve been seeing posts on my LinkedIn feed about companies changing their terms and conditions, adding clauses that allow them to use user-generated data to train AI systems. Many people weren’t happy about that.

Unfortunately, the current all-things-AI hype has suddenly created a huge demand for data. The popular way of building AI (large language models) requires enormous datasets that are hard to find. On top of that, an AI model’s performance is directly linked to the data used to train it, which means that a company’s access to user-generated data can give it a significant competitive advantage.

One could spot many problems with the contemporary approach to AI, but the one I’d like to talk about today is consent (or the lack of it). And while consent is an issue for the data used to train AI, it extends beyond that. Consent is not AI-specific, and I will not be discussing it as such.

The conversation on data is inextricably connected with the conversation on consent.

When I refer to consent in the context of technology, I mean organizations requesting permission to use personal information and user-generated data. Nowadays, this permission is often obtained through terms of service, but I believe that consent goes beyond checking a box on a registration form.

The Consentful Tech Project has done great work in this area. Its creators drew inspiration from Planned Parenthood and adapted its five elements of consent to tech. The acronym is FRIES. 🍟

Five shapes representing the five elements of consent: Freely given, Reversible, Informed, Enthusiastic, and Specific, as described in the Consentful Tech Project
The five elements of consent. Illustrations designed by the Consentful Tech Project.

Consent should be freely given. In the context of tech, this means that a design shouldn’t mislead us into doing something we wouldn’t normally do. A violation of this can be seen in designs where a checkbox is pre-filled, or where a checkbox uses negative language (e.g., “I do not want to receive marketing communication emails.”). These cases are misleading partly because they assume consent before it is given and partly because they deviate from the expected design.
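
To make this concrete, here’s a minimal sketch of a consentful opt-in in TypeScript (my own illustration, not something prescribed by the Consentful Tech Project): the checkbox starts unchecked and the label uses positive language, so consent has to be given actively rather than assumed.

    // Consentful opt-in sketch: unchecked by default, positive phrasing.
    const checkbox = document.createElement("input");
    checkbox.type = "checkbox";
    checkbox.checked = false; // never pre-filled: consent is given, not assumed

    const label = document.createElement("label");
    // Positive language, no double negatives to parse.
    label.append(checkbox, " I want to receive marketing emails.");
    document.body.append(label);

    // Only an explicit, affirmative action counts as consent.
    checkbox.addEventListener("change", () => {
      console.log(checkbox.checked ? "Consent given" : "Consent withdrawn");
    });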

Consent should be reversible. This means that even if we initially agree to something, we should be able to change our mind about it later. A great example here is the unsubscribe button in most email lists. This element is also emphasized a lot in trauma-informed design, often in the context of trauma-informed user research, where the questions that arise are: Can someone request the removal of their data from our research after an interview has happened? How easily can they do it? Is this communicated in advance?

Consent should be informed. This has to do with clearly communicating important information instead of hiding it under piles of legal text. Unnecessarily long terms of service not only make informed consent significantly harder, but also fuel inequity by obscuring information from people who haven’t had access to higher education and from non-native speakers. Open Terms Archive is doing important work to resolve this issue.

Consent should be enthusiastic. This is my favourite one when it comes to tech: it refers to us wanting to give consent instead of being forced to do so. If our workplace uses Slack or all of our friends use Facebook, it’s unlikely that we’ll be able to avoid agreeing to their terms of service. But it’s unfair to assume that we consent to our data being used for targeted ads or to train AI models just because we need to use a service. How different would the world be if the only organizations that could use our data were the ones doing work we were enthusiastic to contribute to?

Consent should be specific. Agreeing to our data being used in one way doesn’t mean that we agree to them being used in any way. This is another common problem with most terms of service: they don’t allow for specificity in what we are agreeing to. In contrast, many cookie notices ensure this by providing options around which cookies are accepted and which ones aren’t.
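
To illustrate what specificity could look like in code, here’s a minimal TypeScript sketch (the names are hypothetical, my own illustration): each purpose gets its own independent flag, so agreeing to one use of our data says nothing about the others.

    // Hypothetical consent model: one independently revocable flag per
    // purpose, instead of a single blanket "I agree to everything" checkbox.
    type ConsentPreferences = {
      essentialCookies: true;         // required for the service to function
      analyticsCookies: boolean;      // optional
      marketingEmails: boolean;       // optional
      aiTrainingOnMyContent: boolean; // optional
    };

    // Defaults also embody "freely given": nothing optional is pre-checked.
    const defaults: ConsentPreferences = {
      essentialCookies: true,
      analyticsCookies: false,
      marketingEmails: false,
      aiTrainingOnMyContent: false,
    };

    // Consent to one purpose implies nothing about any other purpose.
    function consentsTo(
      prefs: ConsentPreferences,
      purpose: keyof ConsentPreferences,
    ): boolean {
      return prefs[purpose] === true;
    }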

Unfortunately, when looking at these elements, I can’t help but notice their absence in tech much more than their presence. But I remain hopeful. After all, we have seen steps taken in the right direction before.

Most notably, in Europe, the GDPR established rules that push email subscriptions to operate in a consentful way. Thanks to it, consent is now required to subscribe to a newsletter (consent is freely given), it’s always possible to unsubscribe (consent is reversible), and mailing lists cannot be sold (consent is specific).
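
As a rough sketch of how those guarantees can translate into code (the names are hypothetical, and double opt-in is one common way to record consent rather than a universal GDPR mandate), a subscription might look like this in TypeScript:

    // Consent is recorded explicitly, and unsubscribing is always possible.
    // Specific: the address is used only for this list and never resold.
    interface Subscription {
      email: string;
      consentGivenAt: Date | null; // null until the subscriber confirms
      unsubscribedAt: Date | null;
    }

    // Freely given: active only after the subscriber explicitly confirms,
    // e.g. by clicking a link in a confirmation email (double opt-in).
    function confirmSubscription(sub: Subscription): Subscription {
      return { ...sub, consentGivenAt: new Date() };
    }

    // Reversible: one step, always available, effective immediately.
    function unsubscribe(sub: Subscription): Subscription {
      return { ...sub, unsubscribedAt: new Date() };
    }

    function isActive(sub: Subscription): boolean {
      return sub.consentGivenAt !== null && sub.unsubscribedAt === null;
    }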

Of course, email is not a perfect system; there is still plenty of spam, and dark patterns persist. But it’s a good case study that showcases how impactful regulation can be in enforcing consent.

Impactful, but potentially not enough. The Consentful Tech Project writes in its zine:

Some people have called for police departments to become more knowledgeable about current technology, and for lawmakers to create harsher punishments for people who are committing violence online. But the problems with this approach mirror those that are rampant in enforcement of sexual assault laws. Often it is the person who experienced the harm who is blamed — why did you send nude photos to your ex, or why didn’t you just ignore that troll? And, for Black and Indigenous people, racialized immigrants, LGBTQ people and more, police and prisons are key vectors of violence in daily life.

What if we built community-based responses to harm and violence into our technologies?

I believe that we need a combination of regulations, community-based approaches, and education to truly achieve consent in tech. And I believe it’s possible to get there.

The bottom line is that consent matters because of how personal our online data are nowadays. Misusing them can cause harm, break trust, and leave people feeling unsafe. Designing in a trauma-informed way is all about minimizing the chance of those things happening. So the only way to be trauma-informed is to do consent right.

PS: You can read about how to stop your data from being used to train AI here.

kon syrokostas
Software engineer & trauma recovery coach. Exploring trauma-informed design.