Security Theatre, dark patterns and the ethics of design

What ethical responsibilities do designers have to their users?

Let me explain. I’m currently working on a project that, at its simplest, aims to design a service that protects its users online. To do this, the service needs users to hand over some of their personal data — and the more data the users give it, the more effectively it can protect them. (This project is still in ‘stealth mode’, so please forgive the lack of detail.)

Through talking to our target users, we’ve learnt that people have strong concerns about the security ramifications of a service like this holding lots of their data. But we’ve also learnt that the signals of security they’re seeking are superficial. These are ordinary people with an ordinary level of technical knowledge, so they’re (usually) not looking for https or ssl, or for a product that salts and hashes sensitive data instead of storing it in the clear. They’re looking for padlock icons, for the colour blue and for reassuring microcopy like “bank-level security”.
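Since I’ve mentioned salting and hashing, here’s roughly what that kind of behind-the-scenes work looks like — a minimal, illustrative Python sketch (not code from our service, and deliberately simplified) of storing only a salted hash of a sensitive value rather than the value itself:

```python
import hashlib
import hmac
import os

def hash_sensitive_value(value: str) -> tuple[bytes, bytes]:
    """Return a (salt, digest) pair; only these are stored, never the raw value."""
    salt = os.urandom(16)  # a random per-record salt defeats precomputed lookup tables
    digest = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 600_000)
    return salt, digest

def verify_sensitive_value(value: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 600_000)
    return hmac.compare_digest(digest, expected)

# Usage: the database holds salt + digest; the plain-text value is never written down.
salt, digest = hash_sensitive_value("correct horse battery staple")
assert verify_sensitive_value("correct horse battery staple", salt, digest)
```

None of this is visible to the user, of course — which is exactly the point: the work that actually protects people produces no reassuring signal on the screen.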

They’re looking for ‘security theatre’. This term was coined by security expert Bruce Schneier to refer to highly visible ‘countermeasures’ to a particular threat that make people feel more secure without actually making them more secure. The best example is probably airport security, in which we are required to remove nail scissors or fluids from cabin baggage, even though these are unlikely to be used in an attack.

As we refine our prototypes of this service, we’ve added in more and more elements of security theatre, and we’re seeing much more trust and confidence from the people who test-drive these prototypes for us.

But we haven’t even started thinking about how we might actually securely store and transmit the users’ data. This is one of the dangers Bruce Schneier associates with security theatre: that the performance of security distracts from improvements in actual security. In this case, we will certainly address the real security of the service later on in this project, so it’s not this issue that’s concerning me.

Instead, I’m concerned that our use of security theatre is contributing to a false understanding of the markers of security.

In our case, we’re matching signifiers of security with a real commitment to security behind the scenes. But every time a product uses security theatre as a promise to users that it is really secure, it reinforces the false understanding that security theatre is real security, or is a reliable indicator of security. This, in turn, makes it possible for unscrupulous or lazy product-makers to trick people into believing a product is secure when its creators haven’t bothered to make it secure behind the scenes.

In essence, security theatre becomes a dark pattern: an interface that’s designed to trick its users.

The service we’re designing includes a strong educational element in which we help people understand the risks they’re exposed to online and how to effectively combat them. But, being realistic, we can’t expect that all our users will listen or care if we try to educate them on the difference between real security and padlock icons.

We’re left with a choice between a service that has a greater chance of success and greater usefulness to its users, but relies on and reinforces this misunderstanding; and a service that refuses to use the expected markers of security and consequently confuses or worries users, if it can get anyone to use it in the first place.

There’s no easy answer, in this case, and perhaps that’s a good thing. Instead of ticking off the ethical concerns, we have to continually engage with them throughout the design and build process, and over the lifetime of our users’ interaction with the system. Beyond this project, I’ll be continuing to read and explore ways in which we can incorporate consideration of ethical concerns into our design process.