A Tale of Three Doors — Security UX Design

Jeff Axup, Ph.D. · Published in NYC Design · Oct 9, 2018

We have collectively been focusing too much on trying to make things “completely secure,” and not thinking enough about designing convenient safety. It is common for security technologies and workflows to reduce user productivity, hinder business goals, infringe on personal rights, and provide unusable security. Security design needs to pay more attention to realistic user requirements, ease of use, and satisfaction levels. It needs to be driven less by fear of the theoretically possible, and more by streamlining typical usage and creating risk-reduction strategies that don’t do more harm than good.

This article uses the analogy of physical house doors to show the pros and cons of different kinds of security, and addresses some of the common assertions made by security professionals. Open doors are not inherently insecure, insecure doors can provide the right safety/usability balance, and high-security doors can be the wrong solution to the problem. We need to focus first on how legitimate users want to access resources (e.g. houses) and secondarily on how to ensure they are sufficiently safe while doing so (e.g. keys and security cameras).

Three doors: unlocked, simple lock, complex lock

The First Door

This door is currently open, inviting potential visitors into the house. Even when it is closed, there are glass windows next to it which could easily be broken to allow unlocking the door from the inside. The usability of this door is high, because you don't need to open it to enter the house. However, the security of this door is low. If you drove by a house with this door, you would immediately know that it is in a low-crime neighborhood, that this family would probably be friendly if you knocked, and that the owners might not even bother to carry a key most of the time. Contrary to expectations, the family that lives here probably does not get a lot of strangers unexpectedly inside their house. This is due to 'soft security', which relies on social norms and unobtrusive methods to regulate behavior. Nearly everyone knocks on the open front door and waits to be invited in, because that is how the majority of people act most of the time. A security pen-tester would walk through the door and pronounce the house insecure, while the rest of us would nod and ask why he was being so rude. Putting up a metal security gate on this door would not be appropriate for the level of risk in the neighborhood, and the owner would be embarrassed to be seen with it.

The Second Door

This door is made of wood, has a deadbolt and a doorbell, and has no windows. It requires a key to enter the house. This is the kind of house that a basic burglar might be intimidated by, moving on to an easier target. It's also the kind of house that you might lock yourself out of someday after misplacing the key. It's also the kind of house where you ring the doorbell and wait uncomfortably, not sure if anyone is home and unable to shout "Hey Mary, I brought over some fresh-baked cookies" through the front door. It's also the kind of house that stays locked when you want to remotely let an Airbnb guest in. It's also subject to the problem of extra keys being made, distributed, and forgotten, leaving long-term unknown security risks. Typically these houses have a spare key tucked under a flower pot near the door, precisely because of these usability deficiencies in their security design. It's also the kind of house that has been insecure for the last 100 years, because most house locks can be picked with a little practice and training. So people think their houses are secure, when really the lock only keeps out untrained and opportunistic criminals — and that is exactly the point.

The Third Door

This door may look more like a bank vault than the front door to a house, but I know people from third-world countries who actually had doors like this installed on their houses. Seeing one of these on a house means not only that it is in a bad neighborhood, but that society has broken down as a whole: police are ineffective, and serious crimes such as kidnapping and home invasion have happened nearby. Entering one of these doors takes a long time, and there is considerable risk of forgetting codes or misplacing keys. This door is purchased at considerable cost by someone who is afraid, and who is willing to go through daily difficulty in exchange for a higher level of safety. Since doors are not the only way to enter a house, you would find that this house also has few windows on the first story, that those windows might have security shutters, and that the house looks a bit like a prison. Neighborhoods with lots of steel bars on the windows and doors probably have them there for a good reason — and that tends to depress property values significantly.

Common Comments From Security Professionals

“We need to be as secure as possible,” or “Security should be our primary concern.”

A door with too much security.

There is such a thing as too much security. It means that you don't have freedom of movement, and that you can't react quickly. The primary goal of a business is not to be secure — it is to produce a product or a service for customers — and security may be an aspect of that. A product without security may still be a useful product. Security without a product is not a business, unless you're in the business of providing security. Startups have a joke about lawyers: they will try to protect you from everything that could go wrong, even things that aren't actually at risk and won't be relevant until you're a much bigger company. Investing too much in legal protection early in a startup's growth is a sure way to ensure that your startup never gets big enough to matter.

“This information is obscure, so it increases our security level.”

Security questions that can be easily answered with a little personal research.

Obscurity only increases security until it doesn't — and that will always happen eventually. Adding an additional factor to your security only helps if the new element is actually hard to discover. Back in the early 2000s I argued with engineers about using email addresses on login forms instead of usernames. I argued that users would know their own email address, and hence would be able to log in more reliably. They argued that a username was more secure, because the email address was public while the username had to be privately created by the user. The flaw in that argument is that people tend to use the same username across multiple services, it tends to be close to their real name, and it is often visible during form entry. A username is also easily forgotten if you have different logins for different systems, which leads to multiple valid username/password combinations to keep track of.

Consequently, usernames are not really obscure, and they are not worth the increased cognitive load for users — it is better to let users identify themselves with an ID they can easily remember, and do your security check using another method. The industry has now come to this conclusion, but it took 15 years to realize that happier customers and fewer support-desk calls were more important than "possibly increasing security." Asking users to remember what are essentially two passwords — and hoping hackers don't bother to research one of them — is not good security design. The California DMV asks users to recite their street address to access confidential data. This amounts to hoping hackers can't pick up a phone book to find the "obscure" data. It is a huge usability pain for legitimate users and adds negligible security.
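To make that concrete, here is a minimal sketch (hypothetical names, in-memory storage, simplified for illustration) of the approach described above: let users identify themselves with the email address they already remember, and get the actual security from salted password hashing and rate limiting rather than from an "obscure" username.

```python
import hashlib
import hmac
import os
import time

# Minimal sketch, not production code: users identify themselves with
# an email they already know; security comes from salted hashing and
# rate limiting, not from an "obscure" username. Names are hypothetical
# and storage is in-memory for illustration.

ACCOUNTS = {}          # email -> (salt, password_hash)
RECENT_FAILURES = {}   # email -> timestamps of recent failed attempts

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def register(email: str, password: str) -> None:
    salt = os.urandom(16)
    ACCOUNTS[email] = (salt, hash_password(password, salt))

def login(email: str, password: str) -> str:
    now = time.time()
    recent = [t for t in RECENT_FAILURES.get(email, []) if now - t < 60]
    if len(recent) >= 5:
        return "Too many attempts; please wait a minute and try again."
    record = ACCOUNTS.get(email)
    ok = record is not None and hmac.compare_digest(
        record[1], hash_password(password, record[0]))
    if not ok:
        RECENT_FAILURES[email] = recent + [now]
        # Same message for unknown email and wrong password, so the
        # form doesn't leak which accounts exist.
        return "That email/password combination didn't work."
    RECENT_FAILURES.pop(email, None)
    return "Welcome back!"
```

Note that the failure message is identical whether the email is unknown or the password is wrong, so the form leaks nothing to attackers, while legitimate users never have to memorize a second quasi-password.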

“The more complex, long, and randomized we make this security feature, the better the security of the system.”

A “simple” secret key the user should remember and use to withdraw crypto — which cannot be replaced if lost.

Security and usability are part of the same spectrum — typically the more you have of one, the less you have of the other. The cryptocurrency ecosystem is a classic example of engineering-led design that is irrationally fanatical about security. And we're not talking about realistic security — we're talking about theoretical security. Realistic security considers how codes have actually been broken in the last several years. Theoretical security considers how many years of brute-force attack with unattainable supercomputers could possibly crack a code. Complex security designs, such as cryptocurrencies that use long public and private keys for simple transactions like withdrawing and transferring money, take usability hits that are not always worth it. The worst possible security solution is the one that ends up permanently locking you out of your own property. Currently 1/5th of all Bitcoin (20 billion USD) is lost or inaccessible — a direct consequence of Bitcoin's focus on theoretical security instead of usable security. If that isn't an example of a failed security design, I don't know what is. The other half of the security coin is access.

“Anyone should be able to use this security design.”

August and Schlage both produce digital smart locks. August has advertised itself as a solution for Airbnb hosts, housekeepers, and visitors, but it requires users to download a phone app to unlock doors. It turns out that people's phones run out of battery, apps are hard to download for some types of users, network connectivity can be bad, and people really don't like having to unlock their phone and open an app every time they want to open a door. In my case with the August lock, one housecleaner refused to use it due to app-installation issues, and one roommate complained of intermittent door-unlocking behavior. Then the door auto-unlocked when I drove up, and re-locked before I entered the house. I decided to switch to a simpler product. The new Schlage lock gives the complexity to the owner of the door and the simplicity to the typical users trying to enter it. Users can either remember a code, or check a text message on their phone if they forget it. The owner can easily add users with distinct codes via an app and Bluetooth, and track a history of how the door has been used. No need to carry keys around, and no need for a key left under the flower pot.
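A rough sketch of that division of labor, using hypothetical names rather than Schlage's actual app or API: the owner manages users, codes, and history, while the person at the door only needs a short code.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of the model described above: the owner's side
# carries the complexity (adding users, distinct codes, history), while
# the person at the door only needs a short code. Not Schlage's real API.

@dataclass
class SmartLock:
    codes: dict = field(default_factory=dict)    # code -> user name
    history: list = field(default_factory=list)  # (time, user, event)

    def add_user(self, name: str, code: str) -> None:
        self.codes[code] = name                  # owner-side complexity

    def revoke_user(self, name: str) -> None:
        self.codes = {c: n for c, n in self.codes.items() if n != name}

    def try_unlock(self, code: str) -> bool:
        user = self.codes.get(code)
        event = "unlock" if user else "denied"
        self.history.append((datetime.now(), user or "unknown", event))
        return user is not None

lock = SmartLock()
lock.add_user("housecleaner", "4821")
assert lock.try_unlock("4821")        # guest-side simplicity
assert not lock.try_unlock("0000")    # failed attempts are still logged
```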

“This is a reasonable process to expect users to do.”

Users should not have to go through unnecessary or inconvenient processes in order to get their work done.

Security professionals seem to expect that pain is a necessary part of being secure. Yes, of course you should have to enter your username and password twice to get in — they are two different systems you are passing through. Of course you should have a randomized 20-digit password. Of course you should be required to change your password every month, even though it was perfectly secure to begin with, and still is. Of course you need to reboot your system when it stops recognizing you. Of course you need to disconnect your Wi-Fi when generating a paper wallet for your crypto, and remember to only print it on a printer not attached to your local network. The things that security groups routinely think are acceptable to require of users need re-evaluation, because they are often not justifiable given the negligible amount of security gained.

“Our users will be willing to use this new security process.”

People will find ways to bypass your unusable security.

Users are great at optimizing their strategies to avoid things they don’t like. They will also passively and even subconsciously undermine and block people or organizations they don’t like — that is human nature. If your users don’t think you’ve found a reasonable security solution, there will be repercussions that are likely to negatively impact your organization.

A friend told me a story about a company where a new security policy required employees to physically show their badge to the secretary, in addition to scanning it through an electronic door lock, in order to enter the office. The office was small and the secretary knew everyone's face, so showing IDs made it seem like both the company and the secretary didn't recognize or trust the employee. It also added an extra step of flashing badges when hands were full in the morning, and every time an employee went to the restroom. The result was that many of the employees started avoiding the lobby and using a side entrance with a PIN code instead. This decreased the social mixing in the lobby, the secretary ended up quitting, and everyone disliked the security group. The justification given was that electronic IDs could be hacked and that fired employees could try to re-enter the premises, but there are better solutions for these unlikely fringe cases. The 80–20 rule seems to have been lost on security policy designers — we're erroneously designing for the 20% (or the 0.001%) case all of the time.

“If the security system functions as designed, then it is usable.”

Great comic from: https://piedtype.com/2016/05/20/fear-of-flying/

Just because a system functions the way the designer intended does not mean that anyone can use it. Many products have shipped without bugs and then been recalled because of how customers chose to use them, or because of usage scenarios that could easily happen by accident. Security designs are often implemented out of concern about theft or intrusion, but fear can drive people to overreact. Most security systems have been designed primarily to 'prevent fraudulent entry', not to 'permit rapid, secure, humane entry a high percentage of the time' — and those are very different things.

In one example of "functioning as designed," a company had a PIN-code entry system on a lobby door with no secretary present. When the user pressed a number, the unit beeped twice in response, so punching in a 4-digit PIN produced 8 beeps, followed by more beeps to confirm code acceptance. Every button press felt like an error response. In addition, the door handle, which appeared to rotate down, actually had to be pulled inwards, and there was no audible "unlock" sound as the electronic door lock disengaged. This resulted in annoyances for regular employees, and for security staff who had to open the door for visiting employees who thought their codes didn't work. The visitors were used to doors that beep once per key press, click when they unlock, and have handles that actually rotate. When this was reported to security staff, they responded that "it is operating perfectly."

“The more difficult our security process is for users, the more secure we are.”

A Google image CAPTCHA with characters that could be interpreted multiple ways: h or b? rn or m?

The more difficult it is for potential customers to use your product, the less money you make and the weaker your brand gets. Frustrating users online is just like hassling customers who enter your physical store. I have personally stopped using crypto-exchanges because of bad usability, and one of them (Poloniex) has permanently locked me out due to a change in my 2FA keys. They offer no reasonable recovery process. Some recovery processes require a lengthy background check and the insecure transfer of confidential IDs across international borders, which is unacceptable. This means that I am no longer a customer and write negative blog articles about them — but I'm sure they are happy that they are "secure."

“You aren’t secure unless you do X,Y, and Z.”

A bike attempting to look secure.

No system is ever 100% secure. You will never achieve complete security. Higher levels of security will always come at a cost in ease of access, speed of access, usability, money, productivity, support load, satisfaction, or other factors — the question is whether an appropriate balance has been found between them. It is worth analyzing what security steps are being taken, how reliable they are, how humane they are, what the recovery process looks like, and how much security is actually needed. When the DMV decided to ask all callers to recite their current street address and birthdate to gain access to private records, they almost certainly had a security staff member saying this would make them secure. In fact they are just annoying customers and publicly demonstrating how inept they are. The DMV tarnished their brand long ago, and this certainly isn't helping them improve it.

“We can’t do X, because it will make it insecure.”

There is no such thing as "secure" and "insecure" in the absolute sense. An open door in the right neighborhood is secure enough to keep a household safe for their entire lives. Security depends on context, and it trades off against usability and other factors. Some things will add unacceptable levels of risk in a given context. But if a new security measure can be demonstrated to actually increase security, and that increase is worth the costs of employee/user dissatisfaction, hassle, and disuse, then proceed. Embrace the fact that you will always have some level of insecurity in your design. Also remember that some users consider carrying a door key (or losing it and getting locked out) to be too much hassle and would simply prefer an unlocked door — and they may be absolutely right in certain situations.

“This may increase security.”

A secure but unusable security solution.

Security measures usually come at a cost of time, money, usability, satisfaction, and other factors. Be very sure you are actually increasing security sufficiently before implementing the "solution". Be sure it is the best option — there are usually alternatives. Pay attention to the details of making it seamless and usable. Consider how users will respond to measures they don't agree with. Understand the tradeoffs you are making, and be sure you are willing to accept the consequences. At one company, a computer mouse went missing one day. When the manager notified security, he was chastised for "not sufficiently securing equipment." This completely missed the point that perimeter security was obviously poor, and that chaining every mouse to a table would cost more in hardware and lost employee efficiency than simply replacing the occasional mouse. It also resulted in a manager who would try to overlook security lapses in the future.

“We need to appear to be secure,” or “We need to look like we are doing something.”

Fake security signs to “deter” burglars.

After the 9/11 attacks, the FAA had a strong desire to appear to have security. The government's choice to expand the TSA and give it increased powers to scan, ban, confiscate, and grope probably did more to make people think they had passed through security than to actually make them more secure. In reality, the TSA fails to find 80% of threats, while annoying 99% of passengers. When a group of terrorists was overheard discussing the possibility of using liquid-based bombs, the reaction was to ban all liquids for all plane passengers — instead of, say, gauging the actual feasibility of the threat, or banning only specific types of liquids. While the cost of having a plane blow up is high, the measures taken to prevent it need to be humane, and they need to be demonstrated to actually decrease the threat significantly. There should also be some recognition that any airport security measure will impact millions of travelers every day, which is a very large amount of collective dissatisfaction to cause just to decrease the probability of one theoretical attack vector. Being a prisoner in your own safe-house is its own punishment, and thinking you are secure is not the same thing as being secure. No one wants to live in a police state where people are routinely searched, and a well-designed air transit system would not require users to go through the typical TSA experience, which is inconvenient, distressing, and unusable.

The parallel TSA example for houses is the fake "protected by X" sign people put on their lawns when they don't actually have a security service — it is the appearance of security, and reliance on obscurity as the only measure of defense.

“Our secure access feature is down, so you can’t log in.”

A VPN client blocking access to a user and showing unusable technical feedback.

VPNs that won't connect, login screens that lock out users after three attempts, auto-authentication that fails and requires repeated daily logins, forgotten passwords, and inaccessible data are daily experiences for many users — but it doesn't have to be that way.

A login error message providing no detail. Username wrong? Password wrong? SSO? Login server down?

Security systems need to be designed to fail gracefully. Fail-closed is rarely a justifiable default. A good security system has a plan B and a plan C, so that alternate security methods can be used when plan A fails. The cost of simply locking out users is too high for lock-out to be the de facto response. Authentication systems are less reliable than many vendors and security professionals like to admit, and when they inevitably fail, it can have a dramatic impact on employee productivity and corporate costs. Interestingly, security products tend to have terrible feedback in error scenarios. It is very common for systems to show no error message at all, highly technical errors, vague messages such as "your username or password may have been incorrect", or purposefully obscure messages such as "red LED blinks twice." It seems likely that security system designers do this on purpose, under the "increased obscurity is increased security" justification. But helping valid users recover from login problems does not decrease security, and giving attackers a better idea of how your system functions shouldn't decrease your security either — because the system shouldn't have been designed to rely on obscurity to begin with. Any good security system should have a public architecture.
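As a sketch of what "plan B" design could look like in code (invented provider names, not any real vendor's API): authentication falls over to an alternate method when the primary one is down, and the feedback distinguishes a system failure from a credential failure without revealing anything useful to an attacker.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Sketch of "plan B" authentication with invented names, not any real
# vendor's API: fail over to an alternate method when the primary one
# is down, and distinguish "system problem" from "bad credentials" in
# the feedback shown to the user.

@dataclass
class AuthProvider:
    name: str
    is_available: Callable[[], bool]
    verify: Callable[[str, str], bool]

def authenticate(user: str, secret: str,
                 providers: List[AuthProvider]) -> Tuple[bool, str]:
    for p in providers:
        if not p.is_available():
            continue  # plan B: fall through to the next method
        if p.verify(user, secret):
            return True, f"Signed in via {p.name}."
        # Credential failure: actionable, but leaks nothing new.
        return False, (f"{p.name} rejected these credentials. Check your "
                       "password or use the account recovery link.")
    # Every method is down: say so, instead of blaming the user.
    return False, ("No sign-in service is reachable right now. This is a "
                   "system problem, not a wrong password.")

providers = [
    AuthProvider("SSO", lambda: False, lambda u, s: False),  # down
    AuthProvider("Local password", lambda: True, lambda u, s: s == "hunter2"),
]
print(authenticate("mary", "hunter2", providers))  # falls back, succeeds
```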

“Anyone failing this test is an attacker.”

Coinbase has reasonable options for getting back into their app that don’t make you feel like a criminal.

It is common for users who fail security tests to immediately be treated as threats, and either greatly inconvenienced or simply locked out of their own property and information. For example, it is very common for security measures to lock an account after three failed password attempts, often without informing the user. This is done despite the fact that it is very easy to mistype (or misremember) a password three times, despite the lack of any warning that a lock-out will occur, and despite the huge support case load this practice produces. Many administrators still think this is a reasonable setting to have in place for "security." In contrast, some companies — such as Coinbase — have designed automated password recovery processes and clearly advertise them to users, without making those users feel like criminals. To make matters worse, "highly regarded" 2-factor authentication tools such as Google Authenticator never seem to have considered the scenario of having your phone stolen and needing to log in to your bank from a different device — that is not usable security.
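One commonly suggested alternative to the hard three-strikes lockout is progressive throttling, sketched below with hypothetical names: each consecutive failure doubles a short delay, which makes brute-force guessing impractical but never permanently locks out an owner who mistyped their password a few times.

```python
# Sketch of progressive throttling as an alternative to a hard
# three-strikes lockout (hypothetical names, in-memory state): each
# consecutive failure doubles a short delay, which makes brute-force
# guessing impractical but never permanently locks out the real owner.

failure_counts = {}  # account -> consecutive failed attempts

def delay_for(account: str) -> float:
    """Seconds the caller must wait before the next attempt:
    0, 1, 2, 4, 8, ... capped at 60."""
    n = failure_counts.get(account, 0)
    return float(min(2 ** (n - 1), 60)) if n else 0.0

def record_attempt(account: str, success: bool) -> None:
    if success:
        failure_counts.pop(account, None)   # reset on success
    else:
        failure_counts[account] = failure_counts.get(account, 0) + 1

# Usage: sleep for delay_for(acct) before verifying the password, and
# tell the user how long the wait is rather than silently locking them out.
record_attempt("mary", success=False)
record_attempt("mary", success=False)
print(delay_for("mary"))  # 2.0 seconds after two failures
```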

“This needs to be secure.”

Unnecessary security (or obscurity) for a meeting reduces the number of participants.

Not everything needs to be secure. Open-source software was about taking something that was previously restricted and private and making it public and open — and there were lots of advantages to doing that. Similarly, many companies have discovered that opening product discussion forums to the general public — instead of restricting them to customers or employees — produces better-quality support information, free customer testimonials, and great design ideas. Security usually comes at a cost, and predicting those costs is not always easy. It could be reduced agility as a company. It could be decreased creativity in workers. It could be a higher turnover rate. It could be higher stress levels. It could be decreased communication effectiveness. In one example, a company held an important company meeting in an online meeting forum, but neglected to distribute the login password before the meeting, and no single-sign-on (SSO) solution was in place, so many people were unable to attend. Many meetings don't actually discuss confidential content, and the tangible business risk of delayed meetings, canceled meetings, or annoyed customers may not justify default security measures that don't add much value. Another frequent example is larger companies that are so scared to put any confidential data in the cloud that they ban employees from using common productivity-enhancing tools. This has indirect costs to the company which are rarely discussed or admitted.

Conclusion

The fact that most of us have front doors that don't look like bank vaults demonstrates that we often don't need extremely high levels of security, and that we highly value convenience and ease of use. Current security design focuses too much on potential attackers, and not enough on the far more frequent case of legitimate users trying to remain productive and happy. Good security does not have to come with all of the downsides it normally causes. Enterprise software companies have finally concluded that usability is an important business advantage, but security groups are dragging their feet on reaching the same conclusion.

Want to see ‘Part 2 — A path towards more usable security’?
If so, please add a comment showing your support and topic suggestions.

Potential topics:
- What guidelines should I follow for better UX in my security system?
- How can we evaluate how usable our security system is?
- What processes can I use to design a more usable security system?
- How to focus on use cases and not technologies
- Are security and usability always an inverse tradeoff?
- What does a security design with good UX look like?

Author Bio: Jeff Axup, Ph.D. has 20+ years of UX design and research experience, primarily in the security industry. His comments are his own and do not relate to current or past employers.
