We’re terrible at giving security advice

We can make it better though!

Łukasz
5 min read · Jun 9, 2022

Some of the security advice we give to average users is terrible. I won't be attributing any of the statements below, because this isn't about naming and shaming, but I promise you all of it is real and all of it was directed at average users.

When leaving the hotel room, sprinkle your keyboard with crushed crisps and take a picture. Then, when you come back, compare the crisps to the picture you took before.

In order to avoid being infected with Pegasus, stop using iMessage.

Avoid apps with titles or descriptions in broken English.

In order to avoid getting infected with Pegasus, buy an unpopular Android phone.

If you have sensitive data on your mobile device, make sure it’s encrypted. It will then remain secure, even if malware steals it.

These are all obviously terrible (and slightly funny), but there's a bigger (and slightly less funny) problem: the bad advice that looks good. Like these:

avoid open WiFi networks, in case someone eavesdrops on unencrypted communication

avoid contactless payments because of the possibility of a relay attack

do not click on suspicious links, because they may lead to malware

do not scan QR codes, because someone might've replaced them with malicious links

do not charge your phone through an airport USB port, as someone may hack it

All of these seem to make sense, and in some very specific scenarios they do, but they do not matter for an average user. By giving this advice we are creating information overload: the user has to remember all of it, and no one can remember all of it without making computer security their whole life.

Also, just as an aside: no one ever clicks on suspicious links. There is no person in the world who goes "oh, this link looks suspicious, let me click on it!" People click on links that look perfectly fine to them.

Why is this bad advice?

The average user cannot figure out which threat scenarios are most likely to happen to them. We have to do it for them; we have to prioritise our advice. What is the likelihood that an average user will be targeted by a malicious charger at the airport? What is the likelihood that someone will phish them using a malicious QR code? Is it more likely that an average user will be hit by an old, well-known vulnerability because they didn't update their device?

Coming up with a new term like "qishing" (phishing using links encoded in QR codes) gets more clicks and is more newsworthy because it's new and creative, even if it doesn't matter to an average user. Instead of following the important but boring advice (e.g. install security updates), users will follow the interesting advice, since it tickles their brain. They cannot prioritise this advice and they won't remember all of it, so they will just follow a random subset of the interesting advice we give them.

I once attended a workshop for "average users" where the presenter was explaining how to make an Android phone more secure. It started with an explanation of Pegasus's network infrastructure, because Pegasus is interesting, even if an average user will never be targeted by it. The average user's takeaway from that training will be how to defend against Pegasus, a threat that will never be in their threat model.

This also shows another problem: we love new, shiny, technically complex threats and we love to talk about them. To give another example, I once did a lot of research into NFC (contactless) payments and gave a talk about it at a conference. My non-technical friends watched the talk, and their takeaway was: contactless payments are insecure and you should never use them. The talk wasn't meant for them; it was a technical talk meant to highlight problems that should not matter to regular users.

I enjoyed doing the research and presenting it, and the audience seemed to like it, but if someone asked me "what should the takeaway be for an average user?" I wouldn't be able to answer. That was not the point of the talk. However, it became clear to me that, for an average user, I had presented too many problems: they couldn't choose which ones were important, and the attack scenarios I presented were taken as real-world threats rather than as part of my research work.

So we love to talk about small-scale, bespoke, technically complex threats, and we are always looking for new, creative advice, even if the boring advice gives better results in the long term. On the other hand, that new, creative advice, unlike the useful boring kind, makes users more interested in thinking about their own security. How do we resolve this paradox? How do we give users useful, meaningful security advice while keeping them engaged? The answer may be hidden somewhere in the UK postal system.

Postal codes and cognitive psychology

There is something weird about UK postcodes. I have lived here for a little over 6 years, used 3 different postcodes, and I remember them all. I even remember the postcode for my office. However, I only remember one postcode from Poland, where I lived for most of my life, and it's the one I used for 18 years. Why is it so easy to remember UK postcodes? What makes them different? It turns out it's because they were designed that way, with help from a cognitive psychologist at Cambridge University.

Imagine this: a purely technical task, sorting letters into areas to make delivery easier, was done with help from cognitive psychologists. They could have gone with the Polish or the US model: assign each district a few random digits. It works just as well and it's easier to process. Yet they decided to bring cognitive psychologists ("non-tech people") into the process to make the codes memorable, possibly sacrificing a little ease of processing addresses.
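To make the comparison concrete, here is a minimal sketch of what a memorable code looks like. The postcode "CB2 1TN", the Polish code "62-200" and the simplified breakdown are my own illustration, not anything taken from the actual design work, and real UK postcode formats have more variants than this.

```python
# Illustrative sketch only: a UK postcode splits into short, meaningful chunks,
# while a purely numeric code is just digits with nothing to hang memory on.

def split_uk_postcode(postcode: str) -> dict:
    """Split a UK postcode into its memorable parts (simplified)."""
    outward, inward = postcode.upper().split()                    # "CB2" / "1TN"
    return {
        "area": "".join(c for c in outward if c.isalpha()),       # "CB" hints at Cambridge
        "district": "".join(c for c in outward if c.isdigit()),   # "2"
        "sector": inward[0],                                       # "1"
        "unit": inward[1:],                                        # "TN"
    }

print(split_uk_postcode("CB2 1TN"))
# {'area': 'CB', 'district': '2', 'sector': '1', 'unit': 'TN'}
# Compare with a purely numeric code like "62-200": nothing in it hints at a place.
```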

What should we do?

Now the solution seems simple. If we want to reach a lot of users, we need help from people who know the users, because we do not. We need sociologists, cognitive psychologists and lots of user studies. I know, in the end we will probably have to give up alerting users to that new, exciting research into how the ultrasonic noise their monitor makes can reveal the websites they visit, and how they should always keep a soundproof barrier around it, but maybe that's the price we have to pay. If you ask me, it's a very small one.

This approach does have a downside, though. What if, in the end, we learn that the best advice is what we all secretly think it is: the boring, relentless repetition of "update your devices, use 2FA and use a password manager" (or something similar)? What will we do with "qishing" then? At least we can eat all those packets of crisps instead of crushing them and sprinkling them over our keyboards.
