Data Trust, by Design: Principles, Patterns and Best Practices (Part 3 — Consent)

Nathan Kinch
May 7, 2018 · 27 min read

In Part 1 of this series we introduced you to the principles of Data Trust by Design (DTbD). In Part 2 we proposed a pattern for Upfront Terms and Conditions that brought to life the principles described in the previous post. This post, Part 3 of the series, is all about design patterns for consent-based data sharing.

Throughout the post James Harvey and I will dive deep into consent-based data sharing design considerations, showcase examples of how to bring these considerations to life and feature some additional commentary from some of the leading thinkers and practitioners in this space.

If you want to skip some upfront context and get straight to the pattern, design considerations and worked example, scroll until you start seeing screenshots and flows. Otherwise, keep reading. We trust you’ll gain something of value from doing so.

Why consent?

“Privacy is all about control — personal control over the use and disclosure of one’s personal data. This is predicated upon the positive consent of the data subject. Look to emerging AI technologies such as SmartData, which reflect the permissible uses of one’s data according to the wishes of the data subject, to streamline the consent process and make it effortless.” Dr. Ann Cavoukian, Ph.D. and Inventor, Privacy by Design

In the spirit of aligning to Ann, the reason we like consent-based data sharing in the short to mid-term is that it has the potential to put people in the driver’s seat. Effective consent design can help give people power; the power to control what data they share, why, when, with whom and for what. Giving people this control is one way to support their agency in the digital world. It may even assist in shifting the power imbalance in the personal information economy.

However, it’s not all dandy. There’s serious risk of consent fatigue if we don’t nail this. Consent, like many legal constructs, has its limitations.

In the longer term we’re hoping personal information management services and other capabilities (like personal AI) help us design and operationalise a trust-based data ecosystem. Through this we might realise the massive value potential of data sharing. It’s in this type of ecosystem, where people are supported by technologies working for them, that consent, or whatever consent becomes, might actually work at a scale it simply cannot today. Although this sounds awesome, it’s some way off being a reality. This article will push that potential future to the side and focus as close to the here and now as possible.

Full disclosure

Before getting into it, let’s set the stage. We are not lawyers. We are not providing legal advice. Nothing we propose will enable you to bypass deep and meaningful collaboration with your legal and data protection advisors. Data protection, privacy and security, auditability and accountability, ethics and trust are all serious things. They should be treated as such.

Moving back to our focus.

What is consent-based data sharing?

Consent is probably the most widely debated, and may become the most heavily relied upon, justification for a variety of data processing activities.

Getting specific, article 4(11) of the GDPR defines consent as: “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”

Breaking that down, consent of the data subject means any:

  • freely given,
  • specific,
  • informed and
  • unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.

For consent to be informed, it’s necessary to inform the data subject (the person you’re seeking consent from) of specific things they need to know to help them make a choice about whether or not to share their data. The WP29’s consent guidance described the minimum information required for obtaining valid consent as:

  1. the controller’s identity,
  2. the purpose of each of the processing operations for which consent is sought,
  3. what (type of) data will be collected and used,
  4. the existence of the right to withdraw consent,
  5. information about the use of the data for automated decision-making in accordance with Article 22(2)(c) where relevant, and
  6. the possible risks of data transfers due to the absence of an adequacy decision and of appropriate safeguards as described in Article 46.
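
If you’re building consent requests in software, it can help to treat this list as a checklist. Here’s a rough Python sketch; the structure and field names are entirely our own, not anything prescribed by the GDPR or WP29, so treat it as a thinking aid rather than a compliance tool.

```python
from dataclasses import dataclass

# Hypothetical structure: field names are ours, not prescribed by the GDPR
# or WP29. A thinking aid for the six items above, not a compliance tool.
@dataclass
class ConsentRequest:
    controller_identity: str        # 1. who is asking
    purposes: list                  # 2. one entry per processing operation
    data_types: list                # 3. what (type of) data will be used
    withdrawal_notice: str          # 4. how the right to withdraw works
    automated_decisions: str = ""   # 5. Article 22(2)(c) info, where relevant
    transfer_risk_notice: str = ""  # 6. Article 46 transfer risks, where relevant

def missing_information(req: ConsentRequest) -> list:
    """Return which of the always-required items are absent from a request."""
    gaps = []
    if not req.controller_identity:
        gaps.append("controller identity")
    if not req.purposes:
        gaps.append("purpose per processing operation")
    if not req.data_types:
        gaps.append("data types collected")
    if not req.withdrawal_notice:
        gaps.append("right to withdraw")
    return gaps
```

A request missing its withdrawal notice, for example, would come back with a one-item gap list rather than silently shipping.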

“Generally, consent can only be an appropriate lawful basis if a data subject is offered control and is offered a genuine choice with regard to accepting or declining the terms offered or declining them without detriment.”

For more on the nitty gritty of consent in relation to the General Data Protection Regulation (GDPR) check out Article 29 Data Protection Working Party’s most recent consent guidelines (opens as PDF). This document has all the depth you need and more to really ‘get’ consent. I’d suggest reading it as part of the process to help you understand what constitutes valid consent in the context of the GDPR.

But this isn’t just a technical, legal, operational or behavioural thing. Consent ethics matter too. Because of this it’s critical we evolve design patterns for actual people. Today’s patterns, even those we’ve recently seen in preparation for the GDPR, simply do not work. They don’t provide the clarity and freedom of choice they’re meant to.

We need to make consent design patterns clear, meaningful, unambiguous and valuable for the people making decisions based on them. It hopefully goes without saying, but dark patterns for consent are a very serious no-no.

By writing this article we are not proposing that consent be relied upon as the legal basis for data processing. Consent is one of six legal grounds for processing data under the GDPR. Ultimately this decision must be made by your organisation and expert legal counsel, and you should be collaborating with them. What we are proposing is that, if you’ve chosen to rely on consent as the legal basis (or one of them) for processing, you cannot rely on the design patterns of old. You need to design the new by informing, empowering and enabling the people you serve to make choices.

Breaking it down

To support you in achieving that outcome we’ve broken down consent-based data sharing into;

  1. Upfront consent: We view upfront consent as a consent notice that surfaces itself very early in the relationship you have with a new customer. It’s decoupled from upfront T&Cs and focuses on specific processing activities. We’ll explain more throughout the post and highlight this with a practical example.
  2. Just in time consent: We view just in time consent as a consent notice made relevant by a specific situational context. In essence, just in time consent is event driven. For example; you’ve walked into a building, you’ve clicked on something or you’ve asked a specific question. Again, we’ll dive deeper into this by showcasing a practical example.
  3. Consent revocation: This isn’t unique, but consent revocation in the context of this post is the ability for people to dynamically engage with what they have consented to, meaning they can revoke your right to further access/process the data they’ve previously shared.

This post will not cover consent management (from the person or ‘data subject’s’ perspective) design patterns. There are already many emerging capabilities focused on this. It’s likely we’ll come back to this pattern and give it the attention it deserves. For now, the consent revocation pattern should be enough to provide some inspiration for how you might support people in managing their consents with your brand (and perhaps ecosystem partners) more broadly.

This post will also not cover consent receipts in depth. As we discussed in Part 2, there is a vibrant community working towards a consent receipt standard. It is however very likely a design pattern for consent receipts will make its way into the Data Trust Design System we reference in Part 1.

We will tie all of this together by highlighting an example of an end to end flow that showcases how a consent experience may unfold. This example serves the purpose of showcasing how the DTbD principles can be operationalised in the context of consent. Rather than guidance, this post should serve as a frame of reference and point of inspiration.

Let’s get started.

Upfront consent

Upfront consent is something we expect to see a lot over the coming months. You’ve probably already noticed a plethora of GDPR related emails that seek your renewed, or valid consent for services you use. This is not what we mean by upfront consent (detailed explanation below), so let’s push inbox madness to the side and focus on the early interactions someone might have with your brand, your product or your service.

If consent is your justification for certain data processing, gaining consent early in the relationship might make sense. However, we’d caution overdoing it. We advocate actively practicing data minimisation and progressively gaining access to data as your customer relationships deepen. This is the best way to bring the DTbD principles to life. It’s also the best way to showcase your trustworthiness by giving before you get in return. Effectively managing this give-to-get ratio is key to earning and sustaining trust.

In the example you see here we’ve focused on bringing consent to life via a personal financial management service, or something that more closely represents a challenger or “Neo” bank than a traditional bank.

We did this primarily because Open Banking regimes are making their way across the globe and it seems consent is likely to be very important. We also figured it was a fairly relatable example.

To bring the DTbD Principles to life we focused on ensuring our consent requests were driven by explicit value proposals. myBank only asks for access to new data when it makes sense and when a new increment of value is likely to be delivered to their customer.

As I mentioned earlier, we aren’t lawyers, but for those who are interested, here’s a simple explanation of the rationale of this example.

Our worked example assumes myBank has defined a clear legal justification for their data processing (for the sake of this example let’s just call it contract). This justification covers data processing related to the core functionality of the mobile banking application the new customer, in this case Jen, has just signed up to. It’s clearly articulated in their upfront T&Cs.

But the bank has designed some ancillary, or rather, ‘optional’ features within their app. These features have the potential to be really useful as they more deeply personalise people’s experience and hopefully enable people to start developing a more effective relationship with their saving and spending behaviours.

As part of the bank’s action oriented, progressive onboarding experience, they introduce myLife (an optional feature). If the value proposition is compelling, new users will then be asked for their consent for the bank to process data it hasn’t previously asked for and doesn’t yet have the right to process. The idea is that this additional processing will deliver a new and optional increment of value.

This is just one example of how an upfront consent experience might unfold. We focused on an ancillary proposition as it’s a bit more compelling than consenting to receiving marketing content. We also wanted to separate it from upfront T&Cs to showcase how valid consent (as the legal basis from which data processing is justified) might differ from other processing justifications (say contract or legitimate interest).

If you want to chat about this example further, get in touch after checking out the fully worked example towards the end of the post. For now let’s get back to our ‘in focus’ design considerations. These are the things we’re consciously thinking about — the things we need to find a way to meaningfully surface — when designing a data sharing request.

Value Proposal

You’re asking for data for a reason. It might seem somewhat counterintuitive, but don’t start by asking for data. Lead with a value proposal, specify your purpose, connect the dots and give people the ability to drill into the consequence.

People will only share if there’s value in doing so.

Respecting, preserving and enabling people’s digital rights should be the status quo, not something we do to differentiate. This is our frame of reference. Once this approach is accepted more broadly we can focus on what matters; the value of the outcomes our data processing can help enable. It’s been incredible for us to observe how this shift in mental model changes the way data sharing experiences are designed.


Layering

People engage with information in different ways. People learn in different ways and at different paces. Layering is an approach to the design of data sharing requests that supports this. It gives us the opportunity to present different layers of detail, different types of information or different presentations of the same information, so that the people we serve are more likely to actually comprehend what we’re proposing.

Although there’s complexity and nuance to it, it’s arguable that consent isn’t valid if people don’t understand what they’re consenting to. It’s got to be our job to help people understand this as simply, quickly and effectively as possible. Consent is more than ticking the compliance box, it’s about doing the right thing for people by respecting and enabling their agency.

Consentua has a concept they call Linear Consent. This enables people to progressively consent to sharing information as the perceived value of the exchange increases. Other providers offer their own versions of this via consent access APIs. Many other examples exist.

A couple of years back I also worked with an IDEO designer on some early concepts to visualise consent and a variety of different data flows. Interestingly we both observed people’s comprehension was far higher (measured by their ability to accurately articulate what they were agreeing to) in most cases when a data flow was visualised. I’ve seen examples of this since, but the pattern is yet to make its way into the world at any meaningful scale. If you’ve got bandwidth and want to give this a crack, please try it out. Then put it to the test, refine and ship a better consent experience than you’re offering today.

If you want other examples of how you might visualise consent, check out this research example and the series of visualisations MyData have produced. Although neither example is directly relevant, they should serve as further inspiration.


Grouping

Our experience leads us to believe it’s best to group the data you’re asking for by purpose, or rather the outcome you’re promising to deliver. We do this because it’s easier for people to understand how the data you’ve asked for helps to enable the outcome they seek.

An example might be a value proposal of deep personalisation. To achieve that outcome — a deeply personalised experience — a variety of data attributes may be requested. This data may then be grouped within the same request, with the logic of the grouping driven by the outcome your data processing is promising to deliver.

You’ll note this approach to grouping doesn’t feature heavily in our worked example. If you want to dive deeper into our work on grouping, send us a note and we can talk further.
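
To make the grouping idea a little more concrete, here’s a small sketch. The purposes and attributes are invented for the myBank example; the point is simply that the request leads with the outcome and hangs the data off it.

```python
# Illustrative only: the purposes and attributes are invented for the
# myBank example. The request leads with the outcome, not the data.
consent_groups = {
    "Deeply personalised experience": [
        "transaction history",
        "merchant categories",
        "location",
    ],
    "Savings goal coaching": [
        "income",
        "recurring expenditure",
    ],
}

def render_request(groups: dict) -> str:
    """Present each group 'outcome first, data second'."""
    lines = []
    for outcome, attrs in groups.items():
        lines.append(f"To deliver: {outcome}")
        lines.extend(f"  - we'd use your {attr}" for attr in attrs)
    return "\n".join(lines)

print(render_request(consent_groups))
```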

Consequence Clarification

It’s hard to truly consent to something if you’ve got no view of potential consequences — positive or negative. This is by no means easy, and the reality is we’re experimenting and learning as we go. What we have already learned is that consequence matters. Mapping potential consequences not only helps you answer the deeper philosophical and ethical questions you should be asking, it helps people make more active, conscious decisions about what they’re agreeing to.

This is an area of consent design that deserves a lot more attention. For now we’re delaying that focus. Consequence clarification will have its very own post, so stay tuned!


Consent receipts

Consent receipts (opens as PDF) are a common format for turning consent notice requirements into something people can actually use. They focus on the legal requirements for consent notices, provide infrastructure that enables effective governance and intend to open the market for privacy-enabled innovation. You can learn more here and play around with the consent receipt generator here.

We absolutely agree a standards based approach is the way to go for such a capability. From this functional basis, however, the way in which a person experiences a consent receipt and the ways in which you enable them to dynamically engage with this receipt have to be consciously designed. Feel free to throw some brand flare at this. Remember, a request for someone to share their data is a value proposal. There’s no reason the receipt you give them shouldn’t be designed with the same principles in mind.
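
To illustrate, here’s a rough sketch of what issuing a receipt might look like. The field names loosely echo the consent receipt work linked above, but this is our own simplification, not the standard itself, and the withdrawal URL is hypothetical.

```python
import json
import time
import uuid

# A deliberately minimal sketch. Field names loosely echo the consent
# receipt work linked above, but this is our simplification, not the standard.
def issue_receipt(principal_id: str, controller: str, purposes: list) -> dict:
    return {
        "consentReceiptID": str(uuid.uuid4()),   # unique, so it can be referenced later
        "consentTimestamp": int(time.time()),
        "piiPrincipalId": principal_id,          # the person giving consent
        "piiControllers": [{"name": controller}],
        "purposes": purposes,                    # what was actually agreed to
        "withdrawalUrl": "https://mybank.example/consents",  # hypothetical
    }

receipt = issue_receipt("jen@example.com", "myBank", ["location-based offers"])
print(json.dumps(receipt, indent=2))
```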

Just in time consent

Although the functional requirements (i.e. your legal obligation and ability to justify ‘valid consent’) of just in time consent are very similar to upfront consent, the experience people have will likely need to differ. Why? Because the situational context is different. Someone might be mid-action, needing to get something done quickly. A couple of minutes might be 10x too long for the person to spend making a choice.

Designing a clear, meaningful and unambiguous just in time consent experience someone can take action on in less than ten seconds is tough, there’s no doubt. Yet this is exactly the type of design constraint we need to challenge ourselves to work with. Whether or not these types of experiences work at the scale they need to is yet to be proven. What matters most is that we’re investing time trying to make what we do tomorrow fundamentally better than what we do today.

Consent revocation

A fancy way of saying you changed your mind, consent revocation gives recognition to the fact that what we decide today may not be what we wished we had decided whilst looking back from tomorrow. In hindsight we are often wiser, and consent revocation gives us the ability to take action in the context of our data.

Before breaking down the design considerations below, it’s important to note that, “The controller needs to demonstrate that it is possible to refuse or withdraw consent without detriment (recital 42). For example, the controller needs to prove that withdrawing consent does not lead to any costs for the data subject and thus no clear disadvantage for those withdrawing consent.”

We’ve highlighted this from WP29’s consent guidance to keep the law top of mind. Although not the explicit focus of this post, we as designers, product makers, researchers and the people generally responsible for operationalising customer experiences, must be cognisant of such parameters. We’ve found the easiest way to do this is through ongoing cross-functional collaboration. Pair design activities with a designer and a lawyer might seem odd if you haven’t tried it, but it really works for this type of challenge.

In our example it’s important to note that, although revoking consent to location data as part of the myLife service alters Jen’s experience, myLife can continue to be used and Jen can continue to receive offers based on her core use of myBank’s service (income, expenditure etc.). Only the permissible incentive (location specific offers) has been lost. The reality is that bringing something like this (myBank) to life is a complex and nuanced endeavour, particularly in a highly regulated market such as financial services.

What this means is our proposition, the way we design data sharing experiences and the parameters of our data processing would be driven by deep collaboration with multiple stakeholders, both internal and external.

Please don’t dive too deep into the specifics when reviewing our worked example. That’s not what this is about. Use the example to challenge your thinking and inspire your practice.

Moving on…

So, other than making use of the collective skills, experiences and expertise across our broader teams, when designing for consent revocation we think about 5 specific considerations:


Accessibility of revocation

If revoking consent means diving 27 layers deep into a series of hidden or hard to find links, only to be met with a pitch supporting why we shouldn’t revoke, our design has failed. Time to Value is really something to consider here, and the value or outcome someone seeks is exercising their change of mind or heart. Making it easy, and decreasing the number of things someone has to do to make it so, is critical.

Ease of revocation

Directly aligned to our first consideration is the ease with which someone can actually take the action. Accessibility is about getting someone to the action, ease is about the process of taking the action itself. We’ll touch on this again below, but this means simplifying the navigation pathway, decreasing the steps to revocation and enabling people to revoke both by purpose and by data source or individual attribute.

Time to revocation

This is more of a metric we consider that results from operationalising the two considerations above. If it takes someone a minute to consent, but ten minutes to revoke consent then we’ve screwed up, badly. If we get accessibility and ease right, we should enable people to revoke consent more quickly and seamlessly than they expect.
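
One way to keep yourself honest here is to instrument both journeys and compare them. A toy sketch, with invented numbers:

```python
# Hypothetical event log: (user, action, seconds the action took to complete).
events = [
    ("jen", "consent", 55),
    ("jen", "revoke", 40),
    ("sam", "consent", 70),
    ("sam", "revoke", 150),
]

def mean_duration(events: list, action: str) -> float:
    durations = [t for _, a, t in events if a == action]
    return sum(durations) / len(durations)

# If revoking takes materially longer than consenting, the design has failed.
ratio = mean_duration(events, "revoke") / mean_duration(events, "consent")
print(f"revoke/consent time ratio: {ratio:.2f}")  # you want this at or below 1
```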

Confirmation of revocation

This is something seriously cool to play around with, especially if you’re an interaction design aficionado. It’s about giving people a clear indication that the action they’ve taken has been successful. In fact, it’s about giving them visibility of tangible progress between them executing the action and you confirming it was successful.

In our example we’re showcasing an interaction that visualises, and then confirms, that a user’s data has been deleted. This is simply one interpretation of how you can present confirmation of revocation in a clear and, potentially, engaging manner.

Positive sum

Let’s say someone reads an article or has a conversation about a fundamental flaw in the security of biometrics. They feel almost immediate discomfort knowing that’s something they recently consented to you using for the purpose you defined. By applying the 4 design considerations above, you’ll increase the likelihood someone is able to exercise their control rather seamlessly. But, they might still want to share a bunch of other data with you. These types of situations are where zero sum thinking doesn’t cut it. It’s not share it all or nothing. We need to give people choices, and consent revocation is no different.

Practically this means similar patterns, like grouping, layering and consequence clarification need to make their way into consent revocation design. Enable people to revoke by purpose, but also enable them to revoke by specific data attribute. This gives them real granularity, and although most people don’t currently engage with their data to this level of depth, this is the most effective way to give people the power of choice.
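
A minimal sketch of what revocation by purpose and by individual attribute might look like under the hood (the purposes and attributes are, again, invented):

```python
# Sketch: consents keyed by purpose, each holding the attributes agreed to.
# Purpose and attribute names are invented.
consents = {
    "location-based offers": {"location", "merchant categories"},
    "savings coaching": {"income", "expenditure"},
}

def revoke_purpose(consents: dict, purpose: str) -> None:
    """Revoke everything consented to under a single purpose."""
    consents.pop(purpose, None)

def revoke_attribute(consents: dict, attribute: str) -> None:
    """Revoke one attribute everywhere, leaving other consents intact."""
    for attrs in consents.values():
        attrs.discard(attribute)

# A change of heart about one attribute shouldn't be all-or-nothing:
revoke_attribute(consents, "location")
print(consents["location-based offers"])  # offers continue, minus location
```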

Additional considerations

We’re the first to admit we’re a long way off nailing this. We are, however, confident we’re making progress. We’re also confident that moving away from the old likely means moving in a better direction. With this in mind, here are some additional things worth considering when designing for meaningful consent;

The f word

In Part 2 of this series one of the questions we raised was the balance between valuable and valueless friction. When it comes to consent, this again is a question worth asking.

As an example, let’s say you wish to gain consent to process what the GDPR defines as “sensitive personal data” or “special category data” in order to deliver a deeply contextual outcome. In this case you might design specific friction into the consent experience to achieve a higher level of assurance that the person providing their consent is a) the person they say they are and b) actually understands what you are proposing. This may not be your ‘normal’ consent design pattern, but may be a valuable addition in specific circumstances.

“There’s a definite consideration regarding how we assure that we’re dealing with the correct individual — consent is meaningless without knowing who it is that has given (or revoked) it. We should provide risk based assessment of the assurance/authentication we apply. Much like banks do with payment authority — customers should reasonably expect to “step-up” as the value of the data increases. In fact, we’ve seen in user testing that adding some friction in is welcomed. This will likely cause a disparity of assurance between giving and revoking consent, as the risk of revoking will generally be lower.” Bryn Robinson-Morgan, Digital Identity and Digital Transformation Leader

For an external reference, Johnny Ryan from PageFair gives the following example of how to enact a design pattern for “explicit consent”. His whole series is worth reading. It’s good stuff that really gets you thinking.

Usage transparency

Wouldn’t it be great to know how data is used, whether it has actually been used and what the outcome of that usage was? I mean, if I’m sharing something as part of a value exchange I want to ensure I’m delivered the value I’m promised. Granted this is easier said than done, but this factors into our thinking regularly.

There’s a spectrum to this. There are also questions surrounding its value, but our thinking is that truly giving people control means a lot more than most people think it does. Whether people manage this themselves or rely on a personal (not personalised) technology to do this for them is probably less important than beginning to think about how we provide auditability and other verifications that support our use of people’s information and the impact that use may have had.


Comprehension

This is less of an additional consideration and more of a ‘this is everything’ type of consideration. The theme has factored into much of what you’ve read and viewed above, but it’s worth diving into more deeply.

Comprehension is the key metric of effectively designed consent experiences. The difficulty is that it’s only partly within your control. Our experience leads us to believe this is less about output and more about process. This means testing your design early and testing often.

There are different ways to do this, but perhaps the simplest is to run usability sessions supported by contextual inquiry. In these sessions you can test your design output for things like average time to consent, but more importantly you can focus on how effectively someone can describe what they’re consenting to. This will give you a more accurate determination of how effective your consent requests are.
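
If you’re logging these sessions, the summary maths is trivial; the hard part is scoring comprehension consistently. A toy sketch with invented session data:

```python
# Hypothetical usability-session results: seconds to consent, plus a score
# (0.0-1.0) for how accurately each participant described what they agreed to.
sessions = [
    {"time_to_consent": 48, "comprehension": 0.9},
    {"time_to_consent": 95, "comprehension": 0.4},
    {"time_to_consent": 60, "comprehension": 0.7},
]

def summarise(sessions: list) -> dict:
    n = len(sessions)
    return {
        "avg_time_to_consent": sum(s["time_to_consent"] for s in sessions) / n,
        "avg_comprehension": sum(s["comprehension"] for s in sessions) / n,
    }

print(summarise(sessions))
```

A fast average time to consent paired with a low comprehension score is a warning sign, not a win.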

In our Designing for Trust Playbook, we guide you through a process to do this. Check it out and get in touch if you want to learn more about these practices.

Form factor

The working parties and regulatory bodies pushing the GDPR forward are open to consent design interpretation (hence this post), meaning a consent notice could be a layered interaction within a mobile app, an audio experience, a video or perhaps, if you’re really ambitious, an augmented experience.

Our experience leads us to believe the most effective way to inform, empower and enable the people you serve to make choices is through a form factor that limits the cognitive load of switching and relies on a similar experience to their current context. So if you’re an audio streaming service, the most effective way to inform, empower and enable people to choose what they do and don’t share with you might be via an audio notice. You may then send a written and visual consent receipt after the fact.

Ultimately this will depend on the context of your proposition. It’ll require experimentation and like we referenced early on, means you’re likely to design new patterns rather than relying on the old.

If you do anything ambitious, put it to the test and find it works really effectively, please get in touch. We’d love to hear about your experience.


Iconography

Data, how it’s used and what that usage might mean can be a really complicated thing to understand. One of the biggest challenges we face today is simplifying this to the point at which people actually get it and can make informed choices.

There are a bunch of ways we might be able to do this. One of them is helping to create simple, visual associations with data through the use of icons.

As an example, let’s say you’d implemented the COEL Standard. You might create an icon for each of the 32 clusters. From there, there might be an adaptation of each of the 32 icons for the class level, and so on.

Using the Classification of Everyday Living is just one practical example of how iconography might help simplify complex data structures. We’re still at the asking questions stage, and as is the case with a lot of this work, rely on experimentation to determine how we might move forward in meaningful ways.


Consent efficacy

This came up in a post about the design implications of Open Banking. Consent efficacy is going to become an organisational asset. It’s not just a compliance thing. It’s not just an ethics thing. Consent has business value.

Specific metrics will almost certainly differ slightly by organisation. Broadly, however, it’s pretty safe to say that if you’re asking for someone’s consent, you’d really like them to give it to you.

Here’s an interesting example from a media group testing different consent notices and their efficacy. After reading the entirety of this post and familiarising yourself with the regulatory guidance (if you’re not already) you may note this brand has some work to do…

By taking the considerations from our post and engaging in an experiment-driven process, you will increase the likelihood consent becomes your organisational asset. How you measure that is up to you.


Distilling complexity

A heap of focused guidance from organisations like the ICO and the Article 29 Working Party now exists. These are excellent resources and certainly assist in our cause to inform, empower and enable the people we serve to make choices about how they interact with digital services. However, our experience leads us to believe that, although much of this guidance is rather explicit, organisational interpretations will drive implementation decisions. These interpretations will likely define the constraints we acknowledge as part of our design practice.

The thing is, there’s inherent complexity to this. To highlight an explicit example, WP29’s consent guidelines are 31 pages (30 pages of actual content). The question is how do you read, interpret and establish a clear point of view that informs how you design consent-based data sharing experiences? This is something we think about regularly and it’s pretty clear no silver bullet currently exists.

Our reality is one of focused effort, collaboration and ongoing experimentation. So just like the challenge of comprehension referenced above, distilling complexity into actionable practice is more about who we work with and how we work than the work we actually produce. I might sound like a broken record right now, but I can’t stress enough the value of cross-functional collaboration when tackling these design challenges.

In addition, you may want to look into cartooning :)


Culture

The notion that “culture eats strategy for breakfast” probably couldn’t be more true than it is here.

“No one cares about what we do with their data…”

“Look, people bypass the terms and conditions in less than a second…”

“We had a total of three views on our privacy policy last year and two of them were me!”

“I get this is important. I get it’s the right thing to do. But the thing is, this isn’t within our remit… Maybe next year”.

These are not acceptable arguments for sticking to current consent design practices. People aren’t happy about the way their data is used. They feel powerless. They feel as though they’ve lost control. Most of all, doing anything other than bypassing an agreement is too hard and takes too much time to be something people actually consider. After all, the cost of reading privacy policies is 76 working days of your time.

The last point is why we all tick, forget and move on to the thing we really care about: the outcome. Until now it’s been our only option.

If your current mental model still aligns to the comments above, I am not condemning you. But I am asking you to please do some research. There’s a heap of it covering this very subject. Once you’ve done that, push forward and consider the ethical point of view. Then consider the economics.

Regardless of how you look at it, doing the right thing for people — effectively respecting and enabling their digital agency — is good for the entire ecosystem (in the long run). But these, our collective attitudes and behaviours, are perhaps the biggest challenge we face. We have to evolve deeply ingrained corporate and consumer behaviours if we’re to make progress and design the inherently trustworthy ecosystem referenced at the beginning of this post.

“Consent is fundamentally about the respect that you have for your customers. For that reason, it can be an important test of your culture. I knew that we were in the right place when our Chief Engineer started to bang the table, insisting that we fixed a bug because it was ‘vital to maintaining the trust of our customers’.” Oliver Smith, Strategy Director at Telefónica Alpha Health

I’m highlighting this as an additional consideration because stakeholder pushback is almost guaranteed. Gaining appropriate investment to get closer to your customers, design, test and learn how you can best inform, empower and enable them to make choices, is not a given. Yet we all need to work towards this. We need to showcase the business value of effective, person-centric consent patterns. We need to align incentives in such a way that doing the right thing is the only thing to do.

A big chunk of our work focuses on this. It’s why we often start with culture; supporting people in learning about the behavioural, technological and regulatory drivers pushing this change. Only then do we progress onto workflows, practices and the products, services and business models organisations produce as outputs.

So although a huge chunk of this post has focused on pretty practical stuff, please don’t forget culture when embarking on this journey. It really matters.

The consent experience (hypothetical of course)

So with that all out of the way, here’s how an end to end consent experience with multiple touch points might come together, starting with a simple sign up process.

It then progresses into a layered terms and conditions experience. For more on this see Part 2.

Moves onto identity verification and account confirmation so that basic KYC and AML requirements are met.

And then, as part of a progressive onboarding experience, Jen (myBank’s new customer) is introduced to an ancillary feature, myLife. It’s at this point that consent is relied upon as the proposed legal grounds for processing data.

This is Jen’s upfront consent-based data sharing experience. It’s actually a key part of onboarding her to the new ancillary feature she would like to try out.

Some time later Jen’s on her way to work. She’s sent a notification about a cafe (remember her coffee preference above) that’s just opened up close to her work. They’ve got great coffee at a great price.

Jen wants in on the action and consents to myBank on-sharing some of her data with the cafe so they can produce a tailored caffeinated experience.

When consent is the grounds for processing, Jen is never locked in. She can revoke consent at any time, with minimal disruption to her service. There’s even a cool interaction visually showcasing her data being deleted :)

And lastly, whenever Jen does consent to something she receives a consent receipt. These receipts are timestamped when consent is given, they’re always available, and Jen can even revoke consent directly from any given receipt.
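To make the idea concrete, here’s a minimal sketch of what a consent receipt record like Jen’s might look like in code. It’s loosely inspired by the Kantara Initiative’s Consent Receipt work, but the field names and structure below are simplified assumptions for illustration, not the spec’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid

@dataclass
class ConsentReceipt:
    """An illustrative consent receipt: who consented, to whom, for what."""
    subject_id: str                  # who gave consent (e.g. Jen)
    controller: str                  # who processes the data (e.g. "myBank")
    purpose: str                     # why the data is processed
    data_categories: list            # what data is shared
    receipt_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    consent_timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    revoked_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        # Consent stands until it is explicitly revoked
        return self.revoked_at is None

    def revoke(self) -> None:
        # Revocation directly from the receipt, as in Jen's flow above
        if self.active:
            self.revoked_at = datetime.now(timezone.utc)

# Hypothetical example mirroring the cafe scenario
receipt = ConsentReceipt(
    subject_id="jen",
    controller="myBank",
    purpose="On-share location and coffee preference with a nearby cafe",
    data_categories=["location", "coffee_preference"],
)
receipt.revoke()
assert not receipt.active
```

The key design point is that the receipt is timestamped at the moment consent is given and carries everything needed to revoke that consent later, so revocation is always one step away from the record itself.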

Notice how consent-based data sharing makes its way into the overall experience? It’s our view that consent (and better person-centric data sharing practices more broadly), even though it may not be the absolute ideal mechanism to support people’s digital rights, can be a valuable part of the experience people have with the products and services they choose to bring into their lives. It’s our view that effectively designed consent-based data sharing experiences can help us realise new increments of customer, business and perhaps even societal value.


This article, the patterns and the considerations we’ve highlighted should serve the purpose of challenging current consent design practices. We trust they inspire you to think differently.

Having said that, much of how we make consent valuable, meaningful and perhaps even an engaging experience is yet to be figured out. Regardless of how we make progress, we must do it together. The more people dedicating time to re-shaping the digital economy (yes that’s bold, but these things can help us make progress) the greater likelihood we actually achieve it.

There are a variety of other resources out there to help you. There’s the great work we already mentioned from Johnny Ryan at PageFair, the ICO’s consent guidance, OpenGDPR, MyData, Alessandro Carelli’s Consent Journey Canvas as well as Projects by IF’s data sharing patterns and data licensing. At F8, Facebook recently announced TTC Labs. The patterns on their site, developed during their industry Design Jams, are worth checking out. There’s a variety of blogs covering the topic too. Piwik’s is worth reading, especially in the context of analytics. There’s even an emerging ecosystem of capabilities to help with the more systemic digital challenges we face called Personal Information Management Services. PIMS are supported by the broader move towards Self Sovereign Identity, which, although a vision for quite some time, has recently gained a lot of traction globally.

I’d suggest checking these links out. Learn from them if you can. Then do what you can to inform, empower and enable the people you serve to make choices about how they interact with the digital world. If you ask Jaron Lanier, this may even be a matter of life and death.

What to do now?

Other than being blown away by the realisations of Lanier’s TED Talk, here are three practical things to keep you moving forward:

  1. Conduct a collaborative, time-boxed consent design session with a group of colleagues. Make sure a lawyer is present. This will help more than you might currently appreciate. Produce some output and put it to the test
  2. Critique our work. Try to figure out how you might immediately improve what we’ve proposed. Then, get started on step 1
  3. Talk to us. We’d love to hear about your experiences grappling with these types of challenges. It’s our view the more people dedicating chunks of time to this type of work, the better the outcomes for the entire market. So let’s collaborate!

Greater Than Experience Design

Insights on the intersection of data ethics, privacy and…
