Why we should no longer use bearer tokens to protect sensitive Single Page Applications

Fabien BERTEAU
ManoMano Tech team
7 min read · Oct 13, 2021


Nowadays, most web applications are developed as Single Page Applications (SPAs), thanks to the many JavaScript frameworks available. These frameworks take care of the presentation and control layers while calling web services that handle the data model.

A SPA needs to authenticate its users and restrict access to its features. The level of security required depends on the maximum level of risk assumed by the user or by the owner of the application. You don't secure a simple portal application the way you secure a banking application: in the first case a simple login/password may suffice, while strong Multi-Factor Authentication (MFA) is required in the second.

Many technologies have been proposed in the past to deal with this problem, but few of them fit these modern architectures. One seems to have won all the votes since its finalization in 2014: OpenID Connect. Built on the older OAuth 2.0, it relies entirely on the exchange of a bearer-type security token between stakeholders.

This specification describes how to use bearer tokens in HTTP requests to access OAuth 2.0 protected resources. Any party in possession of a bearer token (a “bearer”) can use it to get access to the associated resources (without demonstrating possession of a cryptographic key). To prevent misuse, bearer tokens need to be protected from disclosure in storage and in transport.

The OAuth 2.0 Authorization Framework: Bearer Token Usage, RFC 6750

Let me underline one sentence:

To prevent misuse, bearer tokens need to be protected from disclosure in storage and in transport.

Of course, Transport Layer Security (TLS) is essential and nowadays ubiquitous to protect the transport, but what about the storage?
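
Concretely, RFC 6750 just puts the token in a request header; a minimal sketch in TypeScript (the API URL and token value are placeholders):

```typescript
// Builds the Authorization header prescribed by RFC 6750.
// Whoever holds `token` can produce this header: possession alone grants access.
function bearerHeader(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}

// Typical SPA usage (hypothetical URL; accessToken comes from wherever it was stored):
// fetch("https://api.example.com/orders", { headers: bearerHeader(accessToken) });
```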

Yet everything was fine as long as we isolated the controller code and the data layers within our infrastructure, in what OAuth calls confidential clients: legacy PHP, Java (etc.) applications. The tokens were never transmitted to the browser because they were stored in the user's server-side session; they were retrieved each time the browser called a URL by presenting the user's session cookie.

But there is a slight problem in the case of our dear SPAs: whatever the care taken to obtain the token, with Proof Key for Code Exchange (PKCE) or any other mechanism, it ends up stored in the browser and therefore becomes vulnerable to Cross-Site Scripting (XSS) attacks that can lead to massive token leaks. Remember that PKCE was designed to protect OAuth public clients from Cross-Site Request Forgery (CSRF) and authorization code injection attacks, not from XSS. Explaining why every browser storage mode except the HTTP-only cookie is exposed to XSS is a question for another article.
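
To illustrate the point, here is a deliberately simplified model (not real browser APIs) of what an injected script can read: everything in script-accessible storage and every non-HTTP-only cookie, while HTTP-only cookie values stay out of reach:

```typescript
// Simplified model of browser storage visibility to injected scripts.
// In a real browser, localStorage/sessionStorage and non-HttpOnly cookies
// are readable by any script running on the page; HttpOnly cookies are not.
interface Cookie {
  name: string;
  value: string;
  httpOnly: boolean;
}

// What a successful XSS payload could exfiltrate from this model.
function visibleToScript(scriptStorage: Map<string, string>, cookies: Cookie[]): string[] {
  const loot = [...scriptStorage.values()];   // script-readable storage
  for (const c of cookies) {
    if (!c.httpOnly) loot.push(c.value);      // document.cookie exposes these
  }
  return loot;                                // HttpOnly values never appear here
}
```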

Remember that anyone who holds a bearer token can use it in place of its rightful owner and do whatever the token allows.

But everyone knows how difficult it is to protect against XSS attacks. In practice, for a site of any importance that uses many client-side technologies, complete protection is practically impossible. Therefore, as soon as the risk incurred by the user or by the service is more than minimal, it becomes obvious that we can no longer use any technology that stores a bearer token on the browser side, in a perimeter where it can be discovered and captured by an XSS attack.

On the performance side, I could also add that even a small token has a size measured in kilobytes, while most web servers limit the request line and all header fields to around 8 KB. It is easy to see that this leaves little room for other legitimate traffic such as cookies or headers. And that is without counting the need to validate and decode the token every time it is used.
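
A rough back-of-the-envelope check, with a deliberately tiny, hypothetical JWT: even this minimal token weighs hundreds of bytes against a typical ~8 KB header budget, and real access tokens carrying more claims quickly grow far larger:

```typescript
// Base64url-encodes a string, as JWT segments are encoded.
const b64url = (s: string) =>
  Buffer.from(s)
    .toString("base64")
    .replace(/=+$/, "")
    .replace(/\+/g, "-")
    .replace(/\//g, "_");

// A deliberately minimal, hypothetical token; real ones carry many more claims.
const header = b64url(JSON.stringify({ alg: "RS256", typ: "JWT" }));
const payload = b64url(JSON.stringify({ sub: "user-42", scope: "orders:read", exp: 1700000000 }));
const signature = "x".repeat(342); // length of a 2048-bit RSA signature in base64url
const jwt = `${header}.${payload}.${signature}`;

const headerBudget = 8 * 1024; // common web server default for the whole header block
console.log(`token: ${jwt.length} bytes of a ~${headerBudget}-byte budget`);
```

Every additional claim, longer key, or second token (ID token, refresh token) eats further into that shared budget, alongside all the other cookies and headers of the request.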

So what remains as a solution if we want to keep using OpenID Connect to protect our sensitive APIs without having to store bearer tokens on the client side?

I won’t even consider suggesting that you store these tokens in an HTTP-only cookie, given the previous consideration about header size limits. Some try to use a reverse proxy instead. All these solutions add an overlay on the authorization code flow, aiming to keep the tokens within the infrastructure by translating them into a session cookie and making the authorization server believe it is dealing with a confidential client. Let’s take a look at what it could look like with an adaptation of the NextAuth.js framework as a reverse proxy.

I see that we need 9 participants and 43 interactions to be fully authenticated on the application.

At each web service call, the API gateway has to translate the session cookie into a standard Authorization header containing the corresponding access token.

Web service call with a reverse proxy
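
That translation step could be sketched like this, with an in-memory map standing in for the gateway's real session store (all names are illustrative):

```typescript
// In-memory stand-in for the gateway's session store: session id -> tokens.
const sessions = new Map<string, { accessToken: string }>();

// Extracts one cookie value from a raw Cookie request header.
function readCookie(cookieHeader: string, name: string): string | undefined {
  for (const part of cookieHeader.split(";")) {
    const [k, v] = part.trim().split("=");
    if (k === name) return v;
  }
  return undefined;
}

// The gateway's job on every call: opaque session cookie in,
// standard Authorization header out. The token never reaches the browser.
function toAuthHeader(cookieHeader: string): Record<string, string> | null {
  const sid = readCookie(cookieHeader, "session_id");
  const session = sid ? sessions.get(sid) : undefined;
  return session ? { Authorization: `Bearer ${session.accessToken}` } : null;
}
```

Note that even in this sketch the gateway must also handle expiry and refresh of the stored tokens, which is exactly where the complexity discussed above creeps in.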

And yet this is the simplest case. I spare you what happens when the access token has expired, and even more so when the refresh token has expired too. In my estimation, this type of implementation creates a maze of components with too many interactions, leading to huge complexity and potentially opening up other, as yet unknown, flaws.

For my part, the right solution is to use a technology that natively relies on HTTP-only, secured session cookies: SAML v2, for example. I can already hear the crowd booing me: how dare I propose such an old, XML-based thing? Need I remind you that SAML v2 was born in 2005, while OAuth appeared in 2006? SAML has continued to evolve since then and, I hope, will continue to do so for a long time to come. I am not here to make a detailed and exhaustive comparison of the two protocols, but to draw your attention to two aspects in particular. First, SAML natively uses HTTP-only, secured session cookies to index the user's security context on the server side: no need to add extra layers and components to keep the tokens out of the browser. And last but not least, it consumes less bandwidth and fewer resources than a bearer token. Let’s take a look at what it could look like with the Shibboleth SAML technology.

You can see that we only need 6 participants and 23 interactions to achieve the same result.

SPA call a SAML protected web service

Interactions are simpler too: the SP plays the role of a gatekeeper in front of the web service.
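
That gatekeeper decision can be modeled in a few lines: an opaque session identifier is looked up server-side, and no token is validated or decoded per request (the IdP URL and the session store are hypothetical):

```typescript
// Minimal model of the SP-as-gatekeeper decision in front of the web service.
type Decision =
  | { action: "forward"; user: string }           // session valid: let the call through
  | { action: "redirect"; to: string };           // no session: send the user to the IdP

// Server-side session store; user attributes came from the SAML assertion at login.
const spSessions = new Map<string, { user: string; expires: number }>();

function gatekeeper(sessionId: string | undefined, now: number): Decision {
  const s = sessionId ? spSessions.get(sessionId) : undefined;
  if (s && s.expires > now) {
    return { action: "forward", user: s.user };   // a plain map lookup, nothing to decode
  }
  return { action: "redirect", to: "https://idp.example.com/sso" }; // hypothetical IdP URL
}
```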

So even if the first exchanges are made up of SAML assertions that are larger and slower to process than a simple JSON Web Token (JWT), once the session is established all that travels is an opaque session cookie, for the thousands of calls that follow until the session expires. We should therefore not stop at trivial and obsolete ideas about XML technology, and we should remember that looking up a server-side session will always be much faster, cheaper and above all more secure than systematically validating and decoding a client-side bearer token. At the same time, let me draw your attention to the fact that all this also applies to other bearer-type mechanisms such as Google’s macaroons or other biscuits. Generally speaking, we should be careful not to eat too many sweets.

If all you have is a hammer, everything looks like a nail.

OpenID Connect is a good technology that can be of great service in many situations, but we need to know its limits, and we must equip ourselves with a wider range of tools to respond effectively to every situation. SAML is the technology that wins when the risk involved in identity theft becomes serious, and it should not be judged solely on its age. It also has a functional depth that OpenID Connect is still far from reaching, and which I propose to share with you in other articles.

Learning and sharing

Feel free to post your feedback below or reach out to me on LinkedIn. Whether you had a similar or a totally different experience, I’d love to hear about it.
