Fixing the cybersecurity problem needs a new approach — here’s one we haven’t yet tried that will work
May 1, 2015 update: Since this article appeared, DARPA has validated that the trustable federated identity approach described in this article is a viable strategy to combat the cybersecurity problem and was willing to fund the deployment. We are looking for a large relying party who wants to deploy this technology. If you are interested in helping to make this happen and share in the funding, please contact me. [As of 3/2/2016, no volunteers.]
Abstract: Computer break-ins are happening more frequently than ever before. To fix this problem, we should do two things: 1) make it extremely difficult to get through the front door and 2) make the targets of the attack worthless to the attacker. The technology required to do this has been in commercial use for years, but it is not widely deployed. If the government collaborated with industry to launch a “trustable” federated identity (TFI) service, we could create a more secure (and easier to use) authentication solution. A TFI system is a new federated identity architecture that eliminates all points of mass breach and preserves the privacy of browsing and personal information no matter how badly the TFI system might be breached. The security of the identity assertions and the privacy of the data are also externally verifiable, so there is no need to trust the TFI at all. It eliminates the need for multiple usernames and passwords, hardware tokens, knowledge-based authentication (KBA) questions, etc. There would no longer be a need to change your authentication credentials on any site. The TFI service could be hosted by governments and private companies. People would be able to freely choose their TFI provider and change it at any time. Adoption by sites and users would be optional, and it can be phased in gradually by making it an optional way to log in. Second, TFI systems enable us to keep the targets of attack (authentication credentials, personal information, payment authorizations, confidential documents, and bank account balances) in a form that is useless to an attacker. TFI systems also improve security for client-server communication, improve privacy, and provide a far more reliable mechanism for identification than social security numbers or other personal information.
This proposal is superior to the NSTIC effort the government is currently pursuing because it results in a much more comprehensive solution (it is more than just authentication) with superior benefits that could be commercially available in less than 2 years. The costs are minimal and we’d know quickly whether it worked. It can be done in parallel with existing efforts, so it is risk-free. This is our best chance for a game-changing upside, which is precisely what we need.
This is a long article. It covers in detail all the reasons current approaches have failed as well as all the objections that people might have to the approach advocated in this article. Feel free to skip over the parts you already know. The subheads in bold tell the entire story.
We aren’t making any progress in securing cyberspace. If anything, break-ins are happening more frequently than ever before. For example, last year, in just the US government alone, there were nearly 61,000 cyber attacks and breaches (and those are just the ones we know about).
Anthem Healthcare, SONY, eBay, Twitter, JP Morgan Chase, US Postal Service, Home Depot, K-Mart, Michaels, Goodwill, American Airlines, Target, Neiman Marcus, Zappos, United Airlines, Facebook, Yahoo, P.F. Chang’s, Dairy Queen, and more. Over 700 retailers, countless corporate and government servers, and the list goes on. And now, only 10 days after the Anthem breach, we learn that 100 banks in 30 nations were robbed of $1 billion in the largest bank theft ever.
I see this impact personally as well. Last month, my debit card was used to ring up over $1,000 in fraudulent charges. Last week, I was required to change both my username and my password every 3 days on my personal bank account at Wells Fargo. Each change takes 15 minutes talking to a person because I have to prove it is me before they unlock my account. This is a huge time waster for everyone.
Won’t we ever learn? Time after time we see that computers get compromised because authentication credentials get compromised.
Why does this keep happening? Because after decades of computer break-ins, we now treat them as a fait accompli: something we must live with and simply cannot fix. So we give people quick fixes like advice on managing their passwords instead of demanding that this broken system be replaced. We do that because we believe it can’t be fixed. It’s a widely held belief, but it isn’t true.
Where we are today
Here’s a supposedly “state of the art” secure login:
This is an absolutely HORRIBLE user experience, and it’s insecure as well (everything is a shared secret). The password I use on all my other sites is less than 10 characters, so now I have to create and remember a new 12-character password. And I have to remember to use an email address from a company I left 10 years ago. And then I have to get my cell phone to enter an invite code and copy it over exactly into the box.
Using the method described below, we could eliminate all of this so you just click one button; no typing is required to create your account securely. If you want higher security, you could require a 4-digit PIN (which can be used across all sites since it is never shared with any site). Want higher security? Hit a confirmation button on your cell phone. Want even higher security? Use that very same PIN code (or a fingerprint) on your mobile phone.
Which would you prefer as a user?
We need to face the facts: the current approaches haven’t worked
“How many more breaches will we endure before we admit that the private sector cannot solve this problem itself?” said Jackie Speier, D-San Mateo. “Our existing protections for all sensitive data — personal, commercial and governmental — are clearly insufficient.”
Rep. Speier is absolutely right in her assessment that the private sector can’t solve it themselves. Haven’t we proved that countless times over the past three decades? How more obvious can it be? We can’t keep doing the same things over and over again hoping it will suddenly start working. We need to try something completely different… something that could work this time.
Sadly, the current priority of the House Intelligence Committee is to “encourage private companies to share information about attacks on their systems.” Their strategy does nothing whatsoever to stop the breaches from happening in the first place.
But there is a solution that we have not tried yet that is virtually guaranteed to prevent most breaches if it is widely adopted. But to understand this solution and why it works, we first have to understand why we keep having security breaches in the first place.
The pervasive use of 50-year-old shared secret technology for authentication is our biggest security problem
Research has shown that two out of three breaches exploit weak or stolen passwords. Fix that one thing and you’ve made a huge impact. It is the low-hanging fruit and the first thing we should do. If we can’t fix that, we’re toast. Job #1: We must put a stronger lock on the front door.
Without a doubt, the biggest, most fundamental problem in today’s computer security is that we are still using 50-year-old “shared secret” technology to protect access to virtually all of today’s computers. This is a huge problem because whenever you have shared secrets in any client-server system (such as users logging into a web server), it means that there is a single point of mass compromise (at the server).
Technical note: A shared secret is where you know a secret and the relying party has a copy of the same secret and uses it to authenticate you. The relying party is the entity “relying” on the identity assertion; it is the website or system you are logging into. For example, usernames and passwords are shared secrets. Credit card numbers are shared secrets. Your social security number and date of birth are shared secrets. And even those 6-digit codes on an authenticator device or app (like RSA SecurID or Google Authenticator) are shared secrets; the technology uses a secret shared between your authenticator device or app and the server in order to validate the code. Some people might argue that hashed passwords are not shared secrets since the server knows something derived from the secret and not the original secret. The key point is that the server holds data that is either the secret itself or something from which the secret can easily be derived. Also, if my local computer hashes the password prior to sending it up to the server, then there is no need to crack any passwords: I just send up the matching hash code and I’m in. If the hashing is done server-side, a simple brute-force attack can recover the original passwords.
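The hashed-password point in the note above can be made concrete with a toy sketch (all names here are illustrative, not from any real system). If the client hashes the password and the server stores that hash, the stored value is itself the credential; this is the classic “pass the hash” problem:

```python
import hashlib

# Client-side hashing: the server stores H(password) and the client sends
# H(password).  The stored hash IS the credential: an attacker who reads
# the server's database can log in without ever recovering the password.
def client_login_token(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

server_db = {"alice": client_login_token("correct horse battery")}

def server_verify(user: str, token: str) -> bool:
    return server_db.get(user) == token

# Legitimate login works...
assert server_verify("alice", client_login_token("correct horse battery"))

# ...but an attacker who stole the database replays the stored hash
# directly -- no password cracking required.
stolen_hash = server_db["alice"]
assert server_verify("alice", stolen_hash)
```

Server-side hashing with a slow, salted hash is better, but as the note says, a brute-force attack on a stolen hash file can still recover weak passwords.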
The bottom line is this: we are sitting ducks; we are disasters waiting to happen. Fix this one thing, and you’ve plugged a massive security hole in our nation’s computing infrastructure. For example, it was an administrator’s login credentials that were compromised in the recent Anthem breach.
Eliminating passwords is not the solution; the problem is our use of passwords as shared secrets (which is what we do most of the time)
While passwords are certainly troublesome, the fault is not the password, but how we have used them.
Virtually all use of passwords and PINs over the Internet today is as shared secrets.
Some people say we must “eliminate passwords” or that “passwords are dead” or that we must use “stronger passwords.” That simply is not true. That will not solve our cybersecurity problem.
It is the use of passwords and PINs as shared secrets that should be eliminated, not passwords and PINs themselves. The “as shared secrets” part is key. That’s the evil part, not the password.
When passwords are combined with a local private key to derive a local private signing key, that’s the right way to use passwords. Unfortunately, very few systems process passwords this way.
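The “right way” just described can be sketched as follows. This is a hypothetical illustration (the function and variable names are mine, not from any real TFI product), using Python’s standard-library scrypt as the stretching function; in a real system the derived bytes would seed an asymmetric key pair:

```python
import hashlib
import secrets

# The password is never stored or transmitted anywhere.  It is stretched
# together with a random key that lives only on the device, yielding a
# local signing key.
device_key = secrets.token_bytes(32)   # generated once, never leaves the device

def derive_signing_key(password: str, device_key: bytes) -> bytes:
    # scrypt makes offline guessing expensive; mixing in device_key means
    # the password alone (or the device key alone) is useless to a thief.
    return hashlib.scrypt(password.encode(), salt=device_key,
                          n=2**14, r=8, p=1, dklen=32)

# Deterministic: nothing needs to be stored.  Re-entering the password on
# the same device re-derives the same key.
k1 = derive_signing_key("my password", device_key)
k2 = derive_signing_key("my password", device_key)
assert k1 == k2

# The same password on a different device (different device_key) yields a
# completely different key, so the password is never a shared secret.
assert derive_signing_key("my password", secrets.token_bytes(32)) != k1
```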
It’s important that we take a very thoughtful and well reasoned approach here, rather than a reactive approach. There are only 3 factors available to us for authentication: what you have (e.g., a private signing key in your device), what you know (e.g., passwords), and what you are (e.g., biometrics). If you arbitrarily eliminate one of those factors from consideration, you’ve now permanently reduced the options available to secure your system.
When you hear someone say, “we should ditch passwords,” you should ask the clarifying question, “Isn’t what you mean that we should ditch passwords as a shared secret?” We should absolutely ditch “passwords as a shared secret;” but we should absolutely not ditch “passwords.”
The sheer number of passwords can also be a problem for users. But if you’ve designed your system as described below, you can use just a single password everywhere without compromising security.
Password composition standards aren’t strictly needed either; in fact, it has been shown that such standards are often counterproductive.
2FA is not the answer: Too little, too late, it doesn’t eliminate any shared secrets (it creates more of them), and it adds another layer of complexity on top of a system that is already too burdensome and insecure
Fundamentally, there is nothing inherently wrong with multi-factor authentication (MFA) including two-factor authentication (2FA). The more factors that can be used, the more secure a system can potentially be. So I’m a big supporter of 2FA and MFA.
But where I have a problem is how we do 2FA today: it’s (almost) always based on shared secrets. And it’s (almost) always an “add on” to using a base authentication using shared secrets. So you make an already bad system even worse. There is a much better way which will be introduced below, but first it’s important to explain why traditional 2FA (the 2FA that everyone talks about as “the” solution) must be ditched.
There is a great article written in 2005 by Bruce Schneier about the slow adoption curve of new technology. He said that two-factor authentication (2FA) was at that time about 10 years obsolete because it was too easy to crack. But he said that the ineffectiveness of 2FA wouldn’t stop people from deploying it. How prescient he was!
It’s now 10 years after that article was published and, just as Schneier predicted, 2FA is now widely accepted as the way to secure systems.
So let’s be clear. Our current state-of-the-art approach to authentication is to use methods that basically became obsolete from a security perspective 20 years ago.
Want proof? The FFIEC required 2FA for banks years ago and this report shows that losses continued to increase.
So you have security experts saying it won’t work, people doing it anyway despite that, and then seeing for themselves it doesn’t work.
From a security standpoint, the problem is that both the primary username/password and the 2FA code (typically a 6-digit code the first time and then a cookied value for subsequent logins) are based on shared secrets too.
The other problem is that usernames/passwords suck, and if you add another security layer of 2FA on top, then that sucks even more. Usability goes from “bad” to “worse.” That’s why 2FA adoption at sites is typically less than 1% of users (unless users are forced to use it, e.g., enterprise use) and sites make it optional (they don’t want to cause their users to go away). Hardware tokens are the worst. A banker told me the story of a major customer who pulled out an RSA SecurID and threatened the banker that if he were required to use this, he’d pull his account from the bank.
Many websites today offer 2FA where they send you a text message if you use a new device. That’s not secure (your phone can be hacked or diverted), but that’s not the problem. After that one-time authorization, they drop a cookie on your browser and keep a copy of that same cookie on their servers. That’s the shared secret.
Subsequent logins only require a password and presentation of the browser’s shared cookie secret. That means that once you breach the server’s databases with just read access, it’s all over. The hacker can log in as anyone. And often these read-only break-ins aren’t discovered for months or even years. For example, Mandiant’s 2014 Threat Report revealed that the average attacker was on a target’s network for 229 days before discovery! That’s plenty of time to grab the shared secret files and exploit them.
Even if a site forces all users to present the “Google Authenticator” code for every login (which pisses off users to no end and also reduces site traffic), that is also vulnerable. The authentication done at the site is done with shared secrets. Any attacker who steals the shared secrets at the site can log in as anyone: Google Authenticator is no help at all. It may make people feel more secure, and it does stop some rudimentary attacks, but because it doesn’t eliminate shared secrets, it does nothing whatsoever to prevent the mass-breach centralized attack that is exposing authentication information and causing the most damage (since most people use the same password everywhere).
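To see why authenticator codes are shared secrets, here is a minimal TOTP implementation per RFC 6238 (the algorithm Google Authenticator uses). The phone and the server run identical code on an identical secret, so whoever steals the server’s copy can mint valid codes forever:

```python
import hashlib
import hmac
import struct
import time

def totp(shared_secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second interval."""
    counter = struct.pack(">Q", int(t // step))
    mac = hmac.new(shared_secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

secret = b"shared-between-phone-and-server"       # the shared secret
now = time.time()
phone_code = totp(secret, now)    # computed on the phone
server_code = totp(secret, now)   # computed on the server -- or by a thief
assert phone_code == server_code
```

Nothing here is asymmetric: the server must hold the exact same secret the phone holds, which is precisely the mass-breach vulnerability described above.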
Also, many implementations of 2FA will only prompt for a second factor once the primary factor has been validated. This is poor system design because it allows an attacker to attack your account by solving smaller, easier problems. This is yet another reason the 2FA status quo is so bad; there are so many bad implementations of 2FA out there.
The dirty little secret that nobody admits is that when, after a mass password breach, a company announces that they now have two-factor auth, they never tell people that “by the way, our requiring you to use 2-factor auth does absolutely nothing to prevent our password (and 2FA secret) files from being compromised in the future. So the exact same attack can happen again because we did nothing to prevent it and we are just as vulnerable as we were before.” Ouch!
Shared secret authentication is also vulnerable to man-in-the-middle/browser, phishing, and simple keystroke-logging attacks. You type your password and the 6-digit code into the phony website and your identity is toast. That is why Schneier was absolutely right about 2FA.
Biometrics aren’t the solution either
Biometrics look attractive because most systems don’t use them today. So they hold the promise of the magical cure. “If we just implement biometrics, that’s all we need to be safe,” is the rallying cry.
But biometrics are an authentication factor just like passwords. They have advantages and disadvantages. They are definitely not the panacea that some people make them out to be.
Facebook security expert Greg Stefancik hates biometrics for all the right reasons.
Passwords are in many ways superior to biometrics, especially for remote use. This is because biometrics are like a password that you can never change. They are a shared non-secret. While they are very useful for in-person authentication using tamper-resistant hardware controlled by the relying party, they are problematic for remote authentication unless, at a minimum, the hardware device digitally signs the biometric assertion and there is no way to compromise the device signature or fake out the biometric sensor without destroying the device.
Also, unlike a password, biometrics are not deterministic; they are different every time you present them. That means that the biometrics themselves cannot be used to decrypt a high-entropy recovery key or high-entropy private signing key, or be combined with a private key in order to produce a signing key. Instead, these high-entropy private keys are typically stored in the biometric device and then released to the application upon presentation of a pre-authorized biometric. This is how Apple TouchID works.
While this is in some ways superior to a PIN code, it is also problematic for two reasons: 1) biometrics are a shared un-secret, so if I learn your biometrics from another app or from a hacker database, I may be able to use that info to breach your device, and 2) software now has access to the private signing key associated with that biometric (e.g., when you present your fingerprint, TouchID releases the private key to your software, which can then be used subsequently without the need for a biometric). Also, fingerprint and face recognition systems are often very easy to spoof because, in order to reduce costs, vendors supply hardware that lacks appropriate “liveness” detection.
Every day, biometrics get worse as a means of security. For example, now USAA is putting biometrics in their apps. Since the biometric data has to be stored, it’s now going to be out there with your passwords and credit cards for the thieves to steal. And you can’t change your biometrics.
The only safe way to use biometrics is when the presentation of a pre-authorized biometric enables the biometric reader hardware to do a single digital signature using a private signature key kept inside of the biometric reader. But that is insufficient because you can spoof what is being signed.
So to use biometrics securely, you need a biometric device that does three things:
- liveness detection,
- shows what assertion is to be digitally signed
- signs it right inside the device so that the biometric reader’s private key is never exposed.
When was the last time you saw one of those devices that did all 3 things? For most of us (including me), the answer is never.
All that being said, biometrics with liveness detection are useful when done locally such as building access where the reader can be made physically secure. They are typically combined with another factor such as a PIN code or an NFC card especially when matching in large sample sets, e.g., more than several thousand users. Iris recognition is one of the few biometric technologies where a single biometric is sufficient for large population use.
Encryption isn’t the answer
We can’t just encrypt the data that is being targeted. For example, the data in the Anthem breach was accessed through an administrator’s account. Even if it were encrypted, it would still be accessible. Encryption is part of the solution for sure (as described below), but encryption alone is not sufficient to solve the problem.
Following Mandiant’s advice isn’t the answer
Mandiant, which is often called in to analyze break-ins, advises businesses to:
- Use 2FA for remote access
- Keep PCI systems segregated
- Use application whitelisting
- Reduce the number of privileged accounts. Use unique passwords and password vaults.
These are all band-aid solutions that won’t move the security needle by very much. We already said 2FA isn’t the solution. Keeping PCI separate isn’t much of a barrier. Using app whitelisting is difficult to carry out in practice because it means experts must analyze every line of every application every time it is modified. Reducing the number of privileged accounts reduces the attack surface. Using unique passwords is incrementally more secure. Password vaults are a two-edged sword: they allow you to manage unique passwords, but the vault also creates a single point of mass failure.
In summary, incrementalism is not the answer. It will not make much of a dent in the problem.
Traditional federated logins aren’t the solution either: they create a massive central point of failure
Traditional federated login (including SAML) isn’t the answer either. A traditional federated login has you validating your identity to the IdP, and then the IdP tells the relying party it is you. So it’s a “man-in-the-middle” system where, if you compromise the man-in-the-middle (the IdP), it’s all over for everyone. That’s why most security experts do not recommend traditional federated login. It’s simply too easy to have a mass identity breach.
For example, 600,000 Facebook accounts are compromised every day. That’s bad enough on its own, but with federated login, the attacker can then log in as you anywhere Facebook login is accepted.
Also, a user’s privacy often goes away with the use of identity providers who are in the business of making money on your behavior.
Wouldn’t it be great if we can close all of those holes and eliminate mass breaches of authentication credentials (and certain personal information) forever while making login easier and more secure at the same time? We absolutely can!
Introducing the Trustable Federated Identity Service
The good news is that we have the technology to eliminate shared secrets and thus permanently eliminate mass breaches of login credentials. It’s been commercially available for years!
To fix the problems with shared secrets, we must eliminate them and replace them with identity assertions that are based upon end-to-end secure digital signatures using asymmetric cryptography such as ECDSA.
A trustable federated identity (TFI) system does exactly that. It is an identity system consisting of a client (the user), a server (the “relying party”), and an identity provider (IdP), but where the identity assertion is made directly between the user’s device and the relying party. The IdP is called by the user’s device to assist in the transaction. Unlike traditional federated identity systems, a TFI system is no longer a “man-in-the-middle.” So with a TFI system, we have, for the first time, true end-to-end security.
A user creates a single ID that can be used at any participating site. 13 things are assured:
- A user’s identity can never be asserted by the identity provider (IdP) no matter how bad the compromise is at the IdP or relying party. In short, mass breaches of identity credentials (or assertions) can no longer happen. This is a big deal.
- The identity assertion is end-to-end secure between the user and the relying party. This means that any compromise of the relying party or any third party (such as the IdP) will never disclose the authentication credentials. Among other things, it means that if there is a break-in at a website or enterprise or even your own IdP, you won’t have to change your private keys or passwords or PIN code.
- Shared secrets are eliminated. Passwords and PIN codes should never be asserted as shared secrets; they must only be used on a local device and combined with a local private key to derive a new local private signing key. It’s when passwords and PINs are used as shared secrets that we create mass breach vulnerabilities.
- The user’s private personally identifiable information (PII) is kept private because it is encrypted using secret keys known only by the user’s devices; the IdP cannot decrypt it no matter how hard they try. No more mass breaches of the personal information that is stored in this manner. The user is in control of which sites the user wishes to share his PII with.
- The user’s browsing information (what sites are logged into) is not known to the IdP.
- The user’s credentials cannot be correlated between sites (i.e., the public keys on file at Site A are different from the public keys on file at site B for the same user).
- The user’s identity credentials at the relying party (typically a set of public keys) are stable, even if a user changes their devices or any private key(s) of the IdP are exposed. A stolen device should not be able to successfully authenticate anywhere.
- Account recovery requires that the person doing the recovery possess a high-entropy secret.
- There are no single points of compromise. Completely compromising any device or computer should not lead to unrestricted access to authentication or authorization.
- There should be a provision for a virtually unbreakable step-up authentication that can tap into a variety of methods to virtually assure a relying party it is really you, e.g., signing with a private key that is unlocked in hardware using biometrics. Today, there are no reliable on-line identity assertions. For example, after the Anthem breach, I went to Experian to put an extended 7-year fraud alert on my account. They require you to fill out a long form, mail it in, and include copies of all sorts of documents, including an identity theft report from a law enforcement agency. I talked to a deputy sheriff to get a case number and I’m still waiting for Anthem to figure out whether my records were compromised. What an enormous waste of time and money for 80 million people. The billions of dollars in costs for just this one incident alone more than justify the government spending a few million dollars a year to mitigate this problem.
- It does not use SMS for authentication or new device pairing. SMS is problematic for a long list of reasons.
- It will never lock you out of your account. It should allow you to disable individual devices when they are compromised either by you disabling them manually or the system auto-disabling a particular device when consecutive authentication attempts have failed, e.g., 4 consecutive bad PIN codes.
- It handles account lifecycle management (e.g., stolen/lost devices that contain your keys) without system operator intervention. So you shouldn’t have to talk to a person to recover your identity or unlock your device. In fact, it should be impossible for anyone to assist you to recover your identity, i.e., no person should have magical “super user” credentials to help you reset or repair your account. This just creates a major security hole. For example, when my Wells Fargo bank account was disabled, I had to answer four KBA questions to reset it. The phone agent can copy all that information down, pass it on to a hacking ring, and that hacking ring can use all that information to pose as me because they ask the same 4 questions every time.
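The per-site non-correlation and stability guarantees in the list above can be sketched with a simple, hypothetical domain-separated key derivation (the names are mine; a real system would feed each seed into asymmetric key-pair generation for that site):

```python
import hashlib
import hmac
import secrets

# Each relying party gets key material derived from the device's master
# key and the site's name, so the public keys on file at Site A and Site B
# cannot be correlated, yet remain stable for each site.
master_key = secrets.token_bytes(32)   # lives only on the user's device

def site_seed(master_key: bytes, site: str) -> bytes:
    # Domain-separated derivation: the "site-key:" prefix and the site
    # name keep the outputs independent per relying party.
    return hmac.new(master_key, b"site-key:" + site.encode(),
                    hashlib.sha256).digest()

seed_a = site_seed(master_key, "bank.example")
seed_b = site_seed(master_key, "shop.example")
assert seed_a != seed_b                                  # uncorrelatable
assert seed_a == site_seed(master_key, "bank.example")   # stable per site
```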
TFI systems use asymmetric crypto secrets (private keys) that are generated on user devices and never leave them. All authorization happens using digital signatures on the user’s devices, and all shared secrets are eliminated. The authentication is end-to-end secure: the user signs it on his device using his private key(s) and the site verifies it against the public key(s) on file for that user’s account.
The ideal TFI system should not require any new hardware and it should work with all devices. Extra hardware can be used to increase the security of the system.
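Here is a minimal sketch of that end-to-end challenge-response flow. It assumes the pyca/cryptography package and uses Ed25519 in place of ECDSA; the variable names are illustrative, not a real TFI protocol:

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The private key is generated on the user's device and never leaves it;
# the relying party stores only the public key, which is worthless to
# anyone who breaches the server.
device_key = Ed25519PrivateKey.generate()       # stays on the device
public_key_on_file = device_key.public_key()    # all the server ever stores

challenge = os.urandom(32)                      # fresh nonce from the server
signature = device_key.sign(challenge)          # signed on the device

# The server verifies against the public key on file; verify() raises
# InvalidSignature if the signature is forged or replayed.
public_key_on_file.verify(signature, challenge)

# A signature captured for one challenge is useless for any other.
try:
    public_key_on_file.verify(signature, os.urandom(32))
    replay_worked = True
except InvalidSignature:
    replay_worked = False
assert not replay_worked
```

Note there is nothing for an attacker to steal from the server: a dump of the account database yields only public keys and old, single-use challenges.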
How a TFI system can make the most popular targets of a breach useless to an attacker
A TFI system allows us to store the three most common targets (authentication credentials, credit card numbers, and personal data) in a form that is completely useless to an attacker, without impacting legitimate use.
Authentication credentials: what is on file at the relying parties are all public keys, so they are completely worthless to an attacker. We can eliminate a lot of the $21B/yr of Stolen Identity Refund Fraud (SIRF) by signing tax returns with our identity rather than using social security numbers.
Credit card numbers: Instead of storing people’s credit cards at websites, they can simply digitally sign invoices (using private keys associated with the public keys on file with their payment provider) to authorize payments. For “card on file” functionality, where the person isn’t there to sign a transaction, the consumer places the public key of that vendor on file with their payment provider. There is a start-up company doing this today.
Personal information: This can be held in encrypted form by the IdP and/or the relying party (RP). The secret key would be held only on the user’s device so that information can be decrypted on demand by the user and supplied to the RP using end-to-end encryption; the RP can then use it and discard it. This means that the personal information is only available in unencrypted form for a very brief period. It also means that it is impossible for an attacker who breaks into the RP to decrypt the data. This approach is vastly superior to the “standard” approach of keeping the decryption secrets at the RP because it eliminates any potential for a mass breach.
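A minimal sketch of that storage model, assuming the pyca/cryptography package’s Fernet recipe as the symmetric cipher (the PII field values are placeholders):

```python
from cryptography.fernet import Fernet

# The key lives only on the user's device; the IdP/RP store only
# ciphertext, so a breach of their databases yields opaque blobs.
user_device_key = Fernet.generate_key()      # never leaves the device
f = Fernet(user_device_key)

pii = b"name=Alice;dob=1970-01-01"           # placeholder PII
stored_at_server = f.encrypt(pii)            # what the IdP/RP keep on file
assert stored_at_server != pii               # attacker sees only ciphertext

# When the user consents, the device decrypts and sends the plaintext to
# the RP over an end-to-end encrypted channel; the RP uses and discards it.
assert f.decrypt(stored_at_server) == pii
```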
Secret documents: Nation states like to steal information, such as plans for military weapons. There are several options to secure these documents. You can encrypt the document with a secret key and then encrypt the secret key with the public encryption key of each authorized viewer. The secret key can be changed whenever there is a change in the access control list, and the new secret encrypted with each viewer’s public key. Or there can be a secure document server which requires TFI to get access to the document. Of course, if someone can breach the identity of someone who has access to the document in question, the attacker can gain access to the asset. But a TFI system makes that breach very hard, especially if step-up authentication (e.g., involving a digital signature from a second device) is required by the document server.
Bank accounts: One of the most important lessons from bitcoin is that we can keep bank accounts in plain sight of attackers and even publish the source code, and the account balances can’t be hacked. That means no more robberies like the $1B heist of 100 banks that just happened. But to make the system work, you need a good way to protect your private keys because that is the weak part: that is where thefts occur. A TFI system provides such protection. Put a TFI system in place and you can build banks that can’t be robbed.
Other benefits of a TFI system
There are many other benefits of a TFI system:
- As long as the person with the identity doesn’t do anything blatantly stupid, his identity is unbreakable, even if he tells the world his username and password and gives the attacker superuser access to his identity provider. Now that’s real cybersecurity!
- it’s really easy to use; it eliminates the need to remember hundreds of usernames and passwords. Passwords, since they are no longer shared secrets, do not have to conform to “standards” because brute-force attacks cannot be done anymore.
- secure authentication is the bedrock. It is a prerequisite for all other secure services like client-server protocols and payments.
- it is both more efficient and more secure to have one system that is thoroughly tested and documented and reviewed. These systems need to be implemented very carefully by security experts. For example, if you sign with insufficient entropy, your secrets are toast.
- it will (eventually) make identity theft nearly impossible. Identity theft has been the #1 FTC complaint for over a decade.
- it will enhance privacy. For example, today when you register on a website you are asked to enter your email address because most people can’t remember dozens of usernames (you can’t use the same username everywhere because it will be taken). So for the sake of usability, we give up privacy. A TFI system allows people to register by supplying a public key unique to the site. This preserves privacy, prevents cross-site tracking, and doesn’t impact usability at all.
- we can build lots of anomaly detection and suspicious-activity reporting right into each identity, alerting administrators of suspicious activity or requiring step-up auth. All of this work can be leveraged across every application where the identity is used. This is much more efficient than having every application write this code, and centralizing it all around one identity makes suspicious activity easier to detect.
- Authentication and authorization security is greatly improved because the attack surface is significantly reduced and it allows for easy step-up authentication.
- mass identity breaches are permanently eliminated
- PIN/passwords cannot be locally cracked or circumvented, because PIN verification is done at the server, which will refuse to respond after a certain number of consecutive PIN failures and can shut down the device.
- compromises of third party systems (such as other websites or the identity provider) no longer reveal any authentication information that can be used on other sites
- client/server protocols are more secure (no more shared secrets)
- identification is more secure (no more social security numbers, DOB, etc)
- it provides a basis for eliminating payment card breaches (via digital signatures)
- it provides a much more secure method for account recovery (because it can use the system itself)
- it means I’ll never be forced to change my username/password again, or have to unlock my account. It is very very frustrating when I have to spend 15 minutes each time this happens (it’s happened 4 times in the last week for me).
- it allows you to easily share access to an account with others; you just approve their public keys to be able to have restricted access to your account, e.g., I could have my assistant pay bills on my bank account.
- it eliminates all the time wasted filling out forms to create an account at a new website. That can now all be done with a single click, no typing. And the site would never have access to spam your friends or post to your wall (too many sites require you to give this up in order to use a Facebook login).
- it eliminates the need for application developers to write their own security systems, which are always going to be less secure than using the TFI system. For example, my company, Cointrust, is developing a faster payments solution. At great time and expense, we had to write our own identity system because there isn’t a TFI system on the market with the features we need. That’s an enormous waste of time and results in a system that is not as secure as a well-tested TFI system.
- it can be used as a basis for doing secure transactions. I am a recent victim of debit card fraud… the fraud went on for two months before we noticed the bogus charges on our statements. We had to cancel the card and then re-register the new card everywhere we had the card on file. This is a major hassle. It happens because our payment cards’ authentication is based on shared secrets (except for in-person EMV/chip-and-PIN transactions).
- it allows the government to preserve the right to do certain operations for certain people if authorized by a court order. This can be more easily managed in a federated identity system than one that is more distributed.
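One plausible way the per-site public key registration mentioned above could work on the client is sketched below. This is an assumption about client design, not a published TFI spec: a single master seed held only on the user’s device, with an independent-looking key derived per relying party so that two sites cannot correlate the same user.

```python
import hashlib, hmac

# ASSUMED design: MASTER_SEED lives only on the user's device. HMAC-SHA256
# acts as a pseudorandom function, so per-site keys look unrelated and can
# be re-derived on demand -- nothing per-site needs to be stored anywhere.
MASTER_SEED = b"random 256-bit seed held only on the user's device"

def site_key(domain: str) -> bytes:
    """Derive a deterministic, site-specific secret for one relying party."""
    return hmac.new(MASTER_SEED, domain.encode(), hashlib.sha256).digest()

k_bank = site_key("bank.example")
k_shop = site_key("shop.example")
assert k_bank != k_shop                    # sites cannot link the user
assert k_bank == site_key("bank.example")  # re-derivable; nothing stored
```

In a real system this derived secret would seed an elliptic-curve keypair, and only the public half would ever be given to the site.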
With a TFI system, logins now transparently add a second factor. So a typical login where a user types in a password is now transformed into a much harder to compromise 2-factor login where neither factor is a shared secret. So this is both a boost in usability and security at the same time. It is literally “much better than 2FA security with a user experience that is easier than single factor security.”
Compromises at the website or the identity provider cannot impact the security of the login. Even if a user’s password or PIN is revealed (by compromising another site the user uses), the user’s identity can’t be compromised because the login requires the private key(s) and those never ever leave the device so cannot be compromised when any external site is broken into. This is a big deal because half the users use the same password everywhere.
The architecture of a TFI system greatly reduces or eliminates all sorts of attacks. Mass breaches of identity and of the PII kept by the IdP are now impossible. It creates a virtually unbreakable identity, as long as higher-assurance authentication requires the participation of more than one device. That’s because any single device can be compromised; it is much more difficult to compromise two devices from the same owner at the same time. I’m not aware of any such compromises.
Computer to computer security is improved. Look at almost any client-server protocol. They virtually all use an “API secret” that you supply in the call that is a shared secret between client and server, just like a password. Anytime you have shared secrets in a client/server architecture, that is a mass breach of authentication credentials that is just waiting to happen. Breach the server’s copy of the shared secrets and you can do an API call that looks like it came from any client of your choice. So you can move money, change records, control equipment, etc., etc. And sometimes you don’t even have to breach the server at all. For example, if there is a bug in common SSL implementations, as we had recently, this shared secret data is also exposed to attackers.
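The shared-secret API problem described above has a well-known fix: sign each request instead of attaching a secret. The sketch below uses textbook RSA with deliberately small parameters (a real deployment would use 2048-bit RSA or an elliptic-curve signature scheme); the structural point is that the server stores only a public key, so breaching the server yields nothing an attacker can use to forge calls.

```python
import hashlib, json, secrets

# Toy RSA parameters for illustration ONLY -- real systems use >=2048-bit
# keys or elliptic curves. These primes are just large enough to make
# accidental digest collisions negligible in the checks below.
P, Q, E = 104723, 104729, 65537
N, PHI = P * Q, (P - 1) * (Q - 1)
D = pow(E, -1, PHI)  # private exponent (Python 3.8+ modular inverse)

def sign(message: bytes) -> int:
    """Client signs a digest of the request with its PRIVATE key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, D, N)

def verify(message: bytes, signature: int) -> bool:
    """Server checks the signature using only the client's PUBLIC key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == digest

# Instead of embedding a shared "API secret" in the call, the client signs
# the request body; a random nonce defeats replay of an old signed request.
request = json.dumps({"action": "move_money", "amount": 100,
                      "nonce": secrets.token_hex(8)}).encode()
sig = sign(request)
assert verify(request, sig)             # server accepts the genuine request
assert not verify(request + b"x", sig)  # any tampering is rejected
```

Note what a server breach now yields: `(N, E)` and a log of signed requests, none of which lets the attacker mint a new signed call.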
Another application of this technology is for holding a credential that can be used after you’ve been identity proofed. Today, we use shared secrets such as social security numbers, date of birth, etc. to prove who we are. With the Anthem breach, all of that information is now in the hands of attackers. That makes it much less useful. Isn’t it time we switched from shared secrets to digital signatures to prove who we are? This same technology can be used for digital claims so that your identity and the attributes of your identity (such as the items that appear on your driver’s licenses) can be asserted in a trustable way.
A TFI system gives users a way to digitally sign things. That means you can sign transactions instead of sharing a credit card. So payment card breaches can be completely eliminated. It enables us to create the on-line equivalent of EMV.
A widely deployed TFI system also means that the TFI system itself can be used for secure account recovery of your TFI identity. No more insecure/problematic knowledge-based authentication (KBA) questions (which typically are either researchable by an attacker or too ambiguous to remember… for example when asked “where were you born?” sometimes you might answer Brooklyn, other times New York).
Why TFI systems are too hard to bootstrap without help from the government
You may be wondering, “If TFI systems are so great, then how come nobody is using them in the US in any large numbers?”
It isn’t for lack of trying. It’s a bit like deploying fax machines…nobody wants to buy a fax machine if there is nobody to send faxes to. Similarly, nobody wants to implement a TFI interface if very few people have a TFI identity.
Here are some of the reasons why:
- Many people responsible for securing computing systems do not understand the value of eliminating shared secrets. For example, the top security guy from one of the largest banks in the world told me that shared secrets are fine and that there was no need to upgrade login security for customers or internal use. The CISO for one of the most forward-thinking banks in the country said the same thing: “there is no need” for technology that eliminates shared secret authentication methods. In fact, I don’t know of a single CEO or CISO at any US company who has ever said we need to ditch the use of shared secrets. Nor do I know of any company that has actually done it. And that includes every company that has already been breached!
- There were many banks who were educated and wanted to use the technology. But they all used core banking systems from third-party providers. Those companies don’t make any changes unless 80% of customers demand it, and even then, they may not make the change. Getting to that demand threshold is nearly impossible for a new technology, especially one that is not trivially explainable (like this one) and not universally validated by the entire industry.
- People mistakenly believe that this new technology is too hard for people to use. It’s actually simple; it is no more difficult to do than logging into a third party site using your Facebook identity.
- People mistakenly believe that hardware, like a hardware token (e.g., RSA SecurID), is always more secure. But that’s wrong: hardware tokens use shared secrets and they are trivial to man-in-the-middle.
- It takes a long time for people to change their beliefs and trust something new, despite clear and compelling evidence. This slow adoption curve is not limited to high tech. We see a similar slow adoption curve of new technology in other fields as well. For example, in medicine, it takes an average of 16 years from when a new discovery is made to when half the doctors put it into practice. My favorite example is America’s dietary guidelines. Scientists knew 35 years ago that limiting fat intake would make us fat. But here we are, 35 years later, and the government guidelines still haven’t changed to reflect the research.
- There is a chicken-egg problem with roll out (e.g., nobody wants to support a new secure identity system if nobody has it already),
- There can be availability issues (do you trust that the company that is running this will still be in business in a year),
- No single company can just implement it themselves; they must rely on a third party trustable identity provider. This is because otherwise the private key management becomes unwieldy for users who would have to manage their devices (and PIN codes) with every relying party,
- It is somewhat harder to implement than username/password; the technique of challenge and digitally signed response is unfamiliar to most programmers because it typically isn’t taught in school
- It is a little more complex to manage both a legacy username/password system and a federated login system (e.g., account linking and unlinking),
- Very few security consultants are familiar with trustable federated identity (and the difference between a “trustable” federated identity and today’s federated identity). Some flatly refuse to be briefed.
- Companies that want to deploy it for enterprise use are thwarted because the vendors they use don’t support it. If the companies ask the vendors to add support, they are told by the vendor that there isn’t enough customer demand to add this feature. Another chicken/egg problem.
- Government agencies that want to deploy it are forbidden to use it by the President (HSPD-12) because it must first be certified by FICAM, but FICAM can’t certify it because it has to be popular enough to be able to qualify for certification. Another Catch-22.
- When they finally understand and trust the new technology and all the other issues have been addressed, it is too late.
That is why sites that haven’t been broken into haven’t adopted a secure solution. But what about sites that have been breached? Why aren’t they using the new technology? There are 3 reasons:
- The outside security consultants that are brought in are most likely to recommend “proven solutions” that they are familiar with
- The staff of the outside security firms that are brought in are very unlikely to be aware of this particular solution (I don’t know of any who are), and
- Even if they knew about this approach, they couldn’t recommend it because it isn’t viable yet. It requires a public/private partnership to get traction and so far the government hasn’t been willing to take the necessary steps to fix the problem in any near-term time-frame (e.g., the National Strategy for Trusted Identities in Cyberspace (NSTIC) is not near-term).
This is why even one of the world’s largest security companies couldn’t find a way to move people to a TFI system. They told me it was simply “too hard” for them to do without help.
The solution to the adoption problem is simple: a public/private partnership to bootstrap a standard TFI system
All of these deployment reticence issues disappear when a big trusted entity with “staying power” helps to bootstrap the new identity system.
For example, Facebook proved that you can do this in practice. Millions of sites use their federated login — it’s just that they still use the same old insecure shared secrets.
Similarly, the US government could easily solve the problem. They are a big trusted entity and they can easily remove all the barriers to adoption just like they did with the ACH system (ACH was dead on arrival until the US government got involved) and the creation of the Internet (DARPA provided funding and some management).
The key to success is picking the right strategy. I’ve detailed my recommendations in the next section, but basically you pick a technology, invest in it, open it up for all players, get the big browser makers to incorporate support for it in the browser and in mobile OSes (this closes a potential trust loophole), and then the White House publicly appeals to major sites to offer it as an optional way to authenticate in the interest of national security. Once a few big players comply, there is enough of an installed base that everyone will want to support it just like they support Facebook login.
What’s new here is that the government helps in the rollout. Neither the government nor any single company acting alone could ever achieve this. Nor is it likely that a group of companies in the private sector could achieve this. What is needed is a true public/private partnership that is synergistic, where 1+1=5.
Once bootstrapped on the supply side, users will want to use it because it is much easier and more secure than username/password login (no more usernames and passwords to remember and no more password standards!).
In the very unlikely event that that strategy doesn’t work, we could make a law that any website with more than a certain number of users must offer a means of authentication that doesn’t use shared secrets.
The bottom line is that the US government absolutely can guarantee that there will be mass adoption.
The Canadian government has proven that this strategy actually works in practice; they partnered with a private company to run the authentication service and rolled it out to millions of their citizens without any problem using access to government services as the only adoption incentive.
So the current unwillingness for the US government to be involved is fixable. They just have to shift their strategy.
I suspect that the unwillingness to get involved is primarily two-fold:
- Very few people in government understand how they can help fix the problem (simply convening stakeholders is not a good strategy) and why government involvement is so critical (failure to understand the realities enumerated in the previous section), and
- They hoped that the private sector would and could do it themselves without any involvement from the government. After that strategy didn’t work, they created NSTIC. But we need to face reality: that’s not happening. Some really smart people I know dropped out of NSTIC because it was moving at a snail’s pace (in fact, some people remarked that snails move faster). For technical reasons, it’s not clear to me that NSTIC will ever work. The NSTIC design strategy was to encourage industry to work together to solve the cybersecurity identity problem. The fact that there actually were technologies on the market at that time that met all the NSTIC requirements was irrelevant. The goal of the project was to create a process for industry to work together, and not to actually solve the problem itself. If the NSTIC directive had been to “get this problem solved ASAP” that could have been accomplished years ago. Moral: be careful what you ask for.
Once we educate Congress or the White House about the problem and the solutions, and make it clear exactly why existing approaches have failed, and how a public/private partnership as described in this paper will yield a superior solution, we have a chance to take action to fix it. We just need a person empowered to make decisions and get it done.
Just like we have with the Internet itself, governments and private companies could all run the same TFI service and provide identities to anyone in the world.
A user’s identity provider could be in their own country (or in another country), and everyone could authenticate to any computer (public or within an enterprise) using the same protocol. Wouldn’t that be awesome? There would no longer be a need for shared-secret authentication for any computer anywhere in the world.
Nobody would be forced to use it. To ease the transition, TFI can be provisioned side-by-side with existing shared-secret logins, so there is no downside to adding the secure option.
The implementation difficulty is low…about as difficult as adding a social login to a site, a feat that millions of web sites have been able to accomplish without external help.
Today’s TFI systems also preserve user privacy (that’s why it is called “trustable”). Personally identifiable information (PII) can only be decrypted on the user’s devices, and a user’s browsing history is not known by the system.
TFI systems are vastly superior to existing non-trustable federated identity systems like Google and Facebook where 1) they know all your PII, 2) know every site you log into, and 3) are also a single point of mass compromise. An evil programmer at a non-trustable federated identity provider can log in as anyone. With a TFI system, that’s impossible. That’s a huge difference.
Trust is baked into the architecture and the protocols and is externally verifiable. A well designed trustable federated identity architecture guarantees that the identity provider has no access to my identity private keys, PII, or browsing history and that a mass breach of the identity provider doesn’t compromise the security of the system.
So there is no excuse not to do this that I’m aware of. The technology has been available for years. Once the government decides to do this, it could be up and running in less than a year.
Today’s TFI systems do not require specialized hardware: no hardware or download is required to use it. So there is minimal cost to the government or any private company to host such a system.
Change is hard for people to accept, even when it is for the better.
Over 1,500 people have read this article and no one has been able to propose an attack that would work to compromise the system.
Comments on this article that people posted include the following:
Q: “I also do not think that a 3rd party in equation is more secure, same as to have dedicated device. What if they get hacked? And who says we have to trust that 3rd party will never abuse the information?”
A: There is no data to hack. It’s all either encrypted using keys held in the users’ devices or it is public keys. You haven’t described an attack that will work. This system was designed by experts and independently analyzed by experts and nobody has found a hole. You don’t have to trust the IdP at all because the user’s device itself never reveals any secrets to the IdP and that code is in plain sight and can be verified by experts. So no matter how badly the IdP is coded, it cannot be hacked to breach an identity or personal information.
Q: “What you are suggesting are solutions that are tied to devices the person uses. But if that person wants to log in from say a brothers computer, or the hotel lobby computer, or someones smartphone while at a conference they are stuck. The solution needs to be free of the end point device.”
A: You can pair your identity to a new device using any device with your identity. If you lose all your devices (or don’t have any with you), you can use a high entropy passphrase. If you forget the passphrase, you can have the hint for it emailed to a registered email account. This avoids having to store any secrets on the IdP that could be decrypted. At best you get the passphrase hint.
Q: “The challenge is not if the 2FA tech works, but is how to deal with multiple logins. Right now a person needs to have multiple FOBs for multiple logins.”
A: This solution eliminates the need for multiple FOBs. There are no FOBs at all. There is one account ID and a PIN code. That’s it.
Q: “There are still many places that cell coverage does not work. Sure, probably 80% of the population in the United States is in good cell coverage almost all the time. But there are huge sections of the country not covered.”
A: There is nothing in this proposal that depends on cell phone coverage.
Q: “The other thing is that a lot of the fault is in either poorly implemented systems, or aging systems that are still in use that have huge issues. I am still shocked that we have systems based on old tech that is limited to 8 or 12 character password limitations. Length in a password is everything, everything. An 8 character password is laughable.”
A: Longer passwords with high entropy are unacceptable from a user standpoint. How many people have 12 or more random-character passwords? And passwords of any length are still vulnerable to keylogging, phishing, man-in-the-middle, and other attacks. We have invented much better techniques in the last 50 years. Sure, longer passwords are better than shorter passwords, but asymmetric crypto is way better than long passwords. And the proposed architecture makes it so you can have terrible implementations at the IdP or relying party with no security breach, since these entities lack access to secrets. So why not switch to the better solution?
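The entropy arithmetic behind that answer is easy to check. The numbers below are illustrative assumptions (a 94-character printable-ASCII alphabet and a 256-bit curve key):

```python
import math

# Bits of entropy in a truly random password vs. an asymmetric key.
charset = 94                        # printable ASCII characters
pw8  = 8  * math.log2(charset)      # roughly 52 bits
pw12 = 12 * math.log2(charset)      # roughly 79 bits
key_bits = 256                      # e.g., a Curve25519 private key

print(f"8-char random password:  {pw8:.1f} bits")
print(f"12-char random password: {pw12:.1f} bits")
print(f"asymmetric key:          {key_bits} bits (never leaves the device)")
```

And unlike a key, a password of any length is typed into, and therefore exposed to, every site and keylogger along the way.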
A practical path to deployment
Here are the main steps:
- Determine who the decision maker is. Strong leadership is critical. You can’t have this led by someone who needs to see consensus before making a decision. You will never, ever, in a million years get any kind of consensus on how to fix this issue, no matter how good the solution. You must have a person in charge who can listen to the input, fully understand the issues, and who is not afraid to make a decision that might piss people off. They must have the courage to make the hard decisions, stick to those decisions (unless they are clearly wrong), and guide people down a path to a successful outcome.
- The leader should seek input on both the process and technology from at least two groups of people. The first group is the people who have actually designed, built, and commercially deployed virtually unbreakable crypto-based federated identity systems. (By virtually unbreakable I mean that even if you knew the user’s username, could answer any knowledge-based questions—including their password and PIN—and had super-user access to all the computers at the identity provider, you still couldn’t crack that user’s account.) After all, if you are trying to build a secure system, you should get advice from people who have done it. The second group to get input from is the world’s best white-hat crackers, people like Karsten Nohl who are so smart that they can crack systems that other experts think are uncrackable. Getting input from others, such as people who have written thoughtful proposals for how to solve the cybersecurity authentication problem, may also be helpful. The input should not be limited to the “usual sources” relied upon by the government, because those people often think conventionally. Seek input from new sources who have great, innovative ideas that are well reasoned. The best way to do that is to announce that you are soliciting public comments on the best way to secure authentication other than the approaches already known, and list those approaches. That way, you filter out a lot of noise and determine whether there might be something even better than what is proposed here.
- Pick a small group (e.g., 5 people) of crypto and identity experts who really understand end-to-end security and trustable federated identity to act as advisers to the leader. Keep it small. 5 or 6 experts is plenty. There are three simple, yet clever questions you can ask them to determine immediately whether they are qualified to be on the committee (ask me privately).
- Have the team come up with the requirements for a secure identity system (e.g., 20 key attributes). They can use the NSTIC requirements, the criteria that was used by the USPS in selecting their national identity system, and/or the requirements for a TFI listed here as input for that list. Don’t create a list of 100 things. Pick the 10 to 20 most important criteria.
- Invite technology vendors who have a product on the market today that either closely matches the criteria or that can provide secure, virtually unbreakable authentication technology to submit a short (e.g., 5 page maximum) description of their solution describing either how it meets the requirements or some other set of key features it has that might be better than the requirements list. That way if there is an innovative solution better than TFI with a different set of advantages, you don’t preclude it.
- Have the team evaluate the responses against the criteria and recommend a ranked set of finalists.
- Invite the top 5 finalists to meet with the committee and the decision maker to present, demo, and discuss their solution and clearly explain why it can’t be breached.
- The decision maker listens to the advice from each review team member, and the review team as a whole and picks a winner.
- Picking a single winner is incredibly powerful. The reason nuclear power got off the ground in the US is that one guy, Admiral Rickover, picked a winner. He didn’t choose the safest or even the best design. He chose the nuclear design that worked on submarines. Despite that, the focus on a single workable path made everything much more efficient. Doing this right is really complicated and hard. Picking more than one winner is a tremendous dilution of focus and will cause the effort to fail. Focusing on a single path is critically important.
- If the pricing of the effort is unreasonable, repeat for the next best vendor.
- Invest in the winner. Encourage the winner to collaborate with key stakeholders (such as Apple, Google, Microsoft, and Mozilla) and crypto/security experts (David Kravitz, Dan Boneh, etc) to modify the design to meet the requirements and ensure that the implementation is solid and can be hosted by multiple IdPs. Engage experts to do penetration testing. Offer a large bounty to any white hat who can crack it.
- Ask the big vendors of web browsers to embed the client side code into the browser (this improves trustability of the system) and provide support inside each popular mobile phone operating system (so it can be used by mobile apps). This is very important. This creates one of the key benefits of government and private industry working together to create a synergistic outcome that is superior to anything either party acting alone could ever achieve. It was the lack of support for a TFI by the infrastructure that forced us to have to write our own identity system inside our app; it was something we didn’t want to do and shouldn’t have needed to do, but had no choice.
- The software technology should be made available for a nominal charge (e.g., $100K/yr) for any company to host. This has several additional advantages: more choice—users get to pick their favorite IdP (e.g., the US government, Apple, Google, etc.) and greater acceptance among the major players (for example, if just Google could host the identities, Apple might not support it and vice versa).
- Make the system commercially available on an optional basis, e.g., make it available on consumer-facing government websites and other websites.
- Publish the source code so that privacy advocates and security experts can examine it.
- Increase the bounty for a breach.
- Have the President ask companies to offer it as an optional way to log in to websites and enterprise systems.
The cost to the government to do this? Less than $25 million a year (or about $1M for each government agency). That’s a fraction of the hundreds of millions of dollars each year we are spending in just a few US government agencies on cybersecurity projects. For example, just to secure the .mil network, we employ over 6,000 people. We could spend that money a lot more efficiently by solving the problem once, instead of solving it hundreds of times.
The cost to the government to not fix it? Hard to say but if you just look at federal tax refund fraud alone, it’s well over $5.2 billion a year. Of course, not all of that is preventable, but by incentivizing all taxpayers to digitally sign their tax returns (e.g., offering a tax credit), we can lock down the system for a huge amount of taxpayers and cut that fraud loss in half or even more. So that $25M/yr investment in fixing the problem is rounding error compared to the benefits.
So why shouldn’t we make a TFI system available for Americans to use? Why not give people a choice? TFI technology has been available commercially off the shelf (COTS) for years. Hundreds of thousands of people already have these identities. It is so easy to use that many users have no clue that they have one.
While TFI systems don’t solve all problems, they are an important step in the right direction: securing authentication and authorization with easy-to-use, yet very secure, technology. That doesn’t guarantee success, but it is a necessary component of it.
How this is different from NSTIC
Some people might think that this public/private partnership proposal to create a secure identity system is already being done: the NSTIC effort.
The process suggested here is superior because it achieves a much better result in a much shorter period of time. If we had a decade, the NSTIC process might produce something that might work OK. But we simply don’t have a decade.
Here are the differences:
- NSTIC/IDESG convenes a huge number of stakeholders to try to agree on a design for a trustable identity system. Such a process can take a very long time. Furthermore, trying to make everyone happy often results in a system design that is bloated, very complex, and hard to implement.
- The process suggested here has a very small group of experts pick the most promising technology that is already on the market and provides a relatively small focused team with the resources needed to design and deploy a world-class solution.
Let’s look at four game-changing technologies; they are the kind of result we need here.
The iPhone. It was created by a small team at one company. Does anyone think that if we had brought together 500 stakeholders in a room and asked them to design a better phone that the result would be superior to the iPhone?
The Internet. It was designed by a very small team of very smart people who shared a common vision and were left alone to solve the problem. The government definitely did not gather representatives from the Fortune 1000 in a room and ask them to design a networking system that they could all agree on.
The World Wide Web. Small team that was left alone.
The web browser. Small team that was left alone. Additional innovations came from small teams who submitted them to standards committees.
Consider the national on-line identity in Canada. They did not use the NSTIC process; they used a process similar to that outlined here, picked a small vendor, and got a result to market very quickly.
NOTE: I’m not suggesting we choose the same vendor Canada did because our requirements are different (they wanted a Canada-based vendor) and there is newer technology on the market now that should be evaluated.
Here’s the test: Can you name one big problem that was elegantly solved in record time by following the NSTIC/IDESG process? I can’t.
Note: NSTIC was the vision. The process was all added in IDESG; it was very heavyweight, and most of the people who created it subsequently left.
Dealing with the objections
Anytime you try to change the status quo, people will object. And on something as big as this, you are going to get objections to anything you propose, no matter how perfect your plan might be.
The only real objection I’m aware of is this: “If everyone used the same identity system and there were a bug, it would be devastating, because all computer systems would be immediately vulnerable until the bug was patched.” This is why banks are reluctant to use the same authentication service as other banks.
This would, at first glance, seem like a real show stopper, but it is very manageable. Almost all bugs in the IdP or the relying party will cause the few transactions that trigger the bug to “fail closed,” i.e., the transaction won’t go through. There are only two exceptions. First, if the digital signature verification algorithm incorrectly returned true in certain instances, that would be problematic. But this code is relatively straightforward to write, it is short, and it has been around for ages without any known bugs, so it is easy to verify. Second, a bug in a web browser could expose the private key. If a malicious server or browser extension could exploit this, some accounts could be breached at the low-security level. The damage would be mitigated because it is hard to get a lot of people onto a malicious site or to download a malicious extension.
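The fail-closed property described above can be illustrated with a minimal sketch in Python using the `cryptography` package (the key, message, and function names are hypothetical, not from any particular TFI implementation): verification either succeeds cleanly or raises an exception, so any error rejects the transaction rather than letting it through.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def is_valid(public_key, message: bytes, signature: bytes) -> bool:
    """Fail-closed signature check: any verification error means 'reject'."""
    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


# Hypothetical demo: an IdP signs an identity assertion, a relying party checks it.
private_key = ec.generate_private_key(ec.SECP256R1())
message = b"identity assertion for user 1234"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

assert is_valid(private_key.public_key(), message, signature)
assert not is_valid(private_key.public_key(), b"tampered assertion", signature)
```

Note that the verification path is only a few lines, which is the point the paragraph makes: code this short can be exhaustively reviewed and tested.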
If an exploit is found by a black hat, there are several ways to mitigate the damage: install early warning systems (e.g., decoy accounts), allow bug fixes to be distributed quickly, and so on.
Because the system is widely used, we can afford to invest in creating these defensive mechanisms. It is much easier to build defenses for a single system than to defend millions of identity systems. Effective release management also matters: new releases should be deployed in a staggered fashion even within an IdP (e.g., 1% of users get the upgrade on day 1, and so on).
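The staggered rollout just described can be sketched as deterministic bucketing (a common technique; the function names and user IDs here are hypothetical): hash each user ID into 100 buckets and enable the new release only for buckets below the current rollout percentage, so widening the cohort from 1% to 10% never reshuffles who already got the upgrade.

```python
import hashlib


def rollout_bucket(user_id: str) -> int:
    """Deterministically map a user ID to a bucket in the range 0-99."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % 100


def gets_new_release(user_id: str, rollout_percent: int) -> bool:
    """True if this user is inside the current rollout cohort."""
    return rollout_bucket(user_id) < rollout_percent


# Day 1: roughly 1% of users get the upgrade; later days widen the cohort.
day1 = sum(gets_new_release(f"user-{i}", 1) for i in range(10_000))
print(f"{day1} of 10000 users upgraded at 1%")  # roughly 100
```

Because the bucket depends only on the user ID, a user enabled at 1% stays enabled as the percentage grows, which keeps the rollout monotonic and easy to reason about if a bug is found mid-deployment.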
Also, since the system employs step-up authentication, the only bug that could really cause harm is a bug in the signature verification algorithm, and that is very unlikely.
A lot of people say, “Having the government involved will slow things down and mess things up.” They argue that we should leave it to the private sector. I might have said the same thing five years ago. Now I know from real-world experience what that is like, and the response is: “Been there, done that, didn’t work.” Conversely, involving the government is no guarantee of success either. If the government chooses a different process than the one recommended here, it could be a disaster. It’s a double-edged sword, no doubt about it.
Political viability is important. I’ve spent many hours talking with members of Congress about this and came away convinced that the climate has changed enough for this proposal to be politically viable.
I think the key argument is this: The status quo is unacceptable and we need action now. For example, Lloyd’s of London just announced that cyber attacks are now too big for private insurance companies to handle.
Do the people objecting clearly understand what is being proposed? What is their experience base? What are their qualifications? Have they done this themselves? Do they have a better proposal?
Which do we really think is safer for Americans and more likely to secure our computers and preserve people’s privacy:
- The status quo, where we have millions of authentication systems running in thousands of datacenters and a single slip-up on any server can result in a mass breach?
- Or a single identity system designed by the world’s best security experts, thoroughly analyzed, vetted, published, and pen tested, where all the IdPs’ private keys are stored in HSMs, and where even the most successful attack on any identity system operator can never result in a breach of private data or authentication credentials?
A tough call, isn’t it?
But the nice thing is that we can get started right now on the process described above. This will give us plenty of time to debate the approach. When the debates are all done, the system will be battle tested and ready for roll out. That way, no time is wasted.
Some people have suggested we should have multiple identity systems. I don’t agree with that. That’s a bit like saying we should have multiple choices for which Internet protocols we use: some websites on TCP/IP, other websites on a competing protocol. That would make things very confusing, and all the equipment we buy would be more complex and buggier.
But we don’t have to argue that point now at all. Let’s get one up and running first. That should be the priority. Get one working.
Another point is that this approach doesn’t have to be our only bet. We can still do what we were doing before. So this is just an additional shot on goal that may actually solve the problem this time.
I think the advice given by the Google engineers who looked at solving climate change is equally applicable to cybersecurity:
Consider Google’s approach to innovation, which is summed up in the 70–20–10 rule espoused by executive chairman Eric Schmidt. The approach suggests that 70 percent of employee time be spent working on core business tasks, 20 percent on side projects related to core business, and the final 10 percent on strange new ideas that have the potential to be truly disruptive. Wouldn’t it be great if governments and energy companies adopted a similar approach in their technology R&D investments? The result could be energy innovation at Google speed.
Ross Koningstein and David Fork are engineers at Google, who worked together on the bold renewable energy initiative known as RE<C. They dedicate this article to the memory of Tim Allen, who led the project. Allen inspired them to question their assumptions about what it would take to reverse climate change. “He wasn’t married to one approach,” Koningstein says. “He was intent on solving the problem.”
What are we waiting for?
How many more break-ins will it take before we take the most important first step? Clearly what we are doing now hasn’t worked. Isn’t it time to try something new that might work? Or should we keep doing the same thing over and over and expect a different result?
We’ve had plenty of time for analysis. It’s time to make some hard choices and implement them.
I think it’s time for a change. It’s time to try something new. We have great private sector solutions with virtually unbreakable authentication that have been out for years and are clearly worth trying, but we need the government to assist in the rollout. Are they perfect? No. Will they stop all breaches? No. Are they a big step in the right direction? Absolutely!
David Kravitz, the inventor of the US government’s Digital Signature Algorithm (the basis for ECDSA, which has never been cracked and protects very valuable things like bitcoin), thinks that using a TFI service makes a lot of sense. He points out that the recent massive breach at Anthem clearly makes this an issue of national security: hackers gained access to 80 million names, birthdays, medical IDs, Social Security numbers, street addresses, e-mail addresses, and employment information, including income data. What is the point of having laws protecting user privacy if companies can’t procure the technology they need to keep their systems secure?
Other computer security experts I’ve consulted in writing this article also think it has an excellent chance of making a huge difference.
As the CEO of a faster-payments company, I can say that secure identity is absolutely critical. We spend 75% of our design and coding effort on secure identity today. That’s hugely inefficient. We would love to leverage a TFI system so we can focus on our unique strengths.
I think we are very foolish if we don’t try this. Is there a better new idea on the table?
Six years ago, in collaboration with members of Congress, the Center for Strategic and International Studies (CSIS) issued a report Securing Cyberspace for the 44th Presidency. Here are two excerpts from that report (emphasis mine):
- America’s failure to protect cyberspace is one of the most urgent national security problems facing the new administration that will take office in January 2009
- In pursuing the laudable goal of avoiding overregulation, the strategy essentially abandoned cyber defense to ad hoc market forces. We believe it is time to change this. In no other area of national security do we depend on private, voluntary efforts. Companies have little incentive to spend on national defense as they bear all of the cost but do not reap all of the return. National defense is a public good. We should not expect companies, which must earn a profit to survive, to supply this public good in adequate amounts.
As Steven M. Bellovin, security researcher and Columbia University computer science professor, pointed out in his blog in 2008, they were being much too polite in merely suggesting that the government get involved.
Attackers on our computing infrastructure change their methods daily. Research has shown that two out of three breaches exploit weak or stolen passwords. How can we protect our computer systems when we last changed our authentication methods 50 years ago? It’s time to upgrade them. Now. To something truly industrial-strength and state-of-the-art that we can use as a reliable foundation for building other secure services.
Today, it’s about as hard to break into our computers as it is to break into a piggy bank. We want it to be as hard to break into our computers as it is to break into a bank vault. That is the very first problem we should be intent on solving. No solution is perfect, including this one. We should make a list of the most viable new solutions that can make a big difference, pick the best one, and encourage its adoption.