How To Implement OAuth 2.0 — Part 4. Frontend’s Crazy Flows and Redirects

A step-by-step guide to developing secure, zero-trust frontend and backend applications with Azure AD B2C

Han Wang
8 min read · Nov 5, 2021

How to implement OAuth 2.0 series

Frontend client application types

Compared to the backend integration with authentication middleware in part 3, the frontend is more complex and tricky.

A frontend client application’s source code is exposed to the end user, so it cannot hold a secret the way a backend application can. How to safely build the trust relationship is therefore always at the core of OAuth.

Different types of client applications pose different risks and challenges. The most common client application types are:

  • JS-based web apps (e.g. single-page apps)
  • Desktop and mobile apps running on devices that have a keyboard
  • Other native apps running on devices that don’t have a keyboard (e.g. TV, watch)

JS-based web apps and native/mobile apps are very similar in that they both distribute their source code to the end user. I will focus on single-page apps first and then discuss the similarities and differences between the two. A keyboard-less device usually delegates authentication to a mobile or web app, so it is essentially no different.

There are also backend client applications:

  • server-side web apps
  • daemon / background apps

These two types of client applications run on remote servers and do not expose their source code, so it is safe for them to store a client secret. We will discuss them briefly.

Communication channels

First we need to introduce two concepts: the front channel and the back channel.

The back channel is easy to understand with an analogy: you put the secret message (data) inside an envelope, and the outside of the envelope shows only the recipient’s address (URL). You personally drop the envelope off at the post office.

The HTTP client in your application makes an HTTP request directly to the authorization server over the wire and receives the response directly from it. The data in the request and response lives in the body, and with HTTPS it is encrypted.

But the issue is that the domain of your client application is different from the domain of the authorization server, so the request is a cross-origin resource sharing (CORS) request. Not very long ago, such requests were not allowed in browsers. That’s why we have the front channel.

Back to our analogy: this time you write the secret message directly on the outside of the envelope, right after the recipient’s address, and you drop the envelope on a mail truck, hoping it will carry your mail to the post office.

This is how the front channel works. The data that should have been in the body is instead placed in the URL in plain text, and the client application delegates the communication to its agent, the browser.
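To make the contrast concrete, here is a minimal TypeScript sketch of the two channels from a browser app’s point of view. The endpoint URLs and parameter values are placeholders, not real Azure AD B2C values.

// Back channel: a direct, cross-origin HTTP request.
// The sensitive data travels in the (HTTPS-encrypted) request body.
async function backChannelExample(): Promise<void> {
  const response = await fetch("https://login.example.com/oauth2/v2.0/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ grant_type: "authorization_code", code: "..." }),
  });
  console.log(await response.json());
}

// Front channel: the app hands the communication over to the browser
// by redirecting it. The data is encoded into the URL itself.
function frontChannelExample(): void {
  const params = new URLSearchParams({
    client_id: "a3142eab-4e31-47f1-8624-196e5a94b548",
    scope: "openid profile offline_access",
    redirect_uri: "http://localhost:4200",
  });
  window.location.href = `https://login.example.com/oauth2/v2.0/authorize?${params}`;
}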

Implicit grant flow

The first auth flow in OAuth 2.0 to introduce is the notorious implicit grant flow. It relies entirely on front-channel communication.

The client application initiates a redirect so the browser takes the user to a different URL, the authorization server’s authorize endpoint. The data that would otherwise go into the body of a POST request is encoded into the URL itself.

https://.../oauth2/v2.0/authorize?
&client_id=a3142eab-4e31-47f1-8624-196e5a94b548
&scope=openid profile offline_access
&redirect_uri=http://localhost:4200
...

The authorization server processes this request and, if the parameters match the records set by our application registration, it returns a response with another redirect so the browser takes the user to a new URL. If it uses the user sign-in flow like the one we set up in part 2, the user will land on the login page. After the username and password are validated, a third redirect URL is sent to the browser, and this one is finally the landing page of the client application. If single sign-on is implemented, as in most organizations, the login step is skipped.

http://localhost:4200/#
access_token=yJ0eXAiOiJKV1QiL...
&token_type=Bearer
&expires_in=4321
&scope=openid profile offline_access
...

The client application now possesses the data encoded in that URL. This is how a client application can communicate with the authorization server without making CORS HTTP requests.
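For illustration, here is a minimal sketch of how a single-page app could read the tokens out of the URL fragment after the final redirect (in practice an OAuth SDK such as MSAL does this for you; the storage choice here is purely illustrative):

// Parse the fragment of the redirect URL, e.g.
// http://localhost:4200/#access_token=...&token_type=Bearer&expires_in=4321
const fragment = new URLSearchParams(window.location.hash.slice(1));
const accessToken = fragment.get("access_token");
const expiresIn = Number(fragment.get("expires_in"));
if (accessToken) {
  // The token is now available to the app (and, unfortunately, also to the
  // browser history and anything that can read it).
  sessionStorage.setItem("access_token", accessToken);
}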

But guess who else has possession of the tokens? The browser! Because the access token is part of the URL fragment, the easiest place to find it is the browser history, which a lot of things have access to. For example, a malicious browser extension can grab it and impersonate the user. This poses a huge security risk.

Most authorization servers mitigate this risk by making tokens very short-lived, normally 30 or 60 minutes. At the same time, to keep the user experience from being horrible (getting kicked out every 30 minutes), most client OAuth SDKs automatically use hacks (e.g. a hidden iframe) to obtain new tokens before expiration and keep the session alive. So if the malware can also silently monitor the token renewals, it can impersonate the user and steal information for an extended period of time.

Be careful of browser extensions!

Auth code flow

Fortunately, modern browsers soon made CORS requests possible, so the back channel can finally be used. This flow is called the authorization code flow.

The front channel is not entirely out of the game; it is still being used. But instead of passing the access token, it passes something less sensitive: an auth code. An auth code is not an access token; it is something that can be redeemed for an access token.

The first redirect remains the same, but the response from the authorization server now looks like this:

http://localhost:4200/#
code=eyJraWQiOiJjcGltY29...
...

Now the client application constructs an HTTP POST request to the authorization server’s token endpoint:

https://.../oauth2/v2.0/token

The auth code along with other information goes into the request body:

{     
client_id: "a3142eab-4e31-47f1-8624-196e5a94b548"
redirect_uri: "http://localhost:4200"
scope: "openid profile offline_access"
code: "eyJraWQiOiJjcGltY29..."
grant_type: "authorization_code"
...
}

The authorization server returns the access token, along with the other tokens, in the response body:

{     
id_token: "eyJraWQiOiJjcGltY29..."
not_before: 1635875589
refresh_token: "eyJraWQiOiJjcGltY29..."
refresh_token_expires_in: 86400
scope: "openid profile offline_access"
token_type: "Bearer"
...
}
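As a sketch, the back-channel exchange could look like this in a browser app. I’m sending the parameters as a form-encoded body, which is what OAuth 2.0 token endpoints accept, and the endpoint URL and values are placeholders:

async function redeemAuthCode(code: string): Promise<any> {
  // Back-channel POST: the auth code never travels in a URL again.
  const response = await fetch("https://login.example.com/oauth2/v2.0/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      client_id: "a3142eab-4e31-47f1-8624-196e5a94b548",
      redirect_uri: "http://localhost:4200",
      scope: "openid profile offline_access",
      code,
      grant_type: "authorization_code",
    }),
  });
  return response.json(); // contains the tokens shown above
}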

The call sequence looks like this: the browser is redirected to the authorize endpoint, redirected back to the app with the auth code, and the app then makes the back-channel POST to redeem the code for tokens.

Up until this point you might be wondering: “Is this really more secure? Can’t someone take the auth code and make a request to get the access token?”

Yes, you are absolutely correct. If implemented in a single-page app, the only added security is the extra POST request, because the app still cannot hold a secret to authenticate that request. So the use case for the plain auth code flow is actually server-side web apps, which can present a client secret when redeeming the code.

The ultimate flow — Auth code with PKCE

Finally we are at the point of introducing the ultimate flow for single-page and native/mobile apps.

PKCE (pronounced “pixy”) stands for Proof Key for Code Exchange. To prevent a stolen auth code from being used to request an access token, PKCE makes sure the token request is made by the same party that made the initial call for the auth code.

A hash function is a one-way function that converts an input into a fixed-length string: you can never reverse-engineer the output back to the input, and even a tiny change in the input results in a completely different output, as illustrated below. There are a bunch of algorithms, and one of them is SHA256.

Hash function illustration (source: Wikipedia)

To start the flow, the client app generates a random string, the code verifier, then uses it as the input to a hash function to generate another string, the code challenge. In the first redirect to the authorize endpoint, the client app attaches the code challenge in the code_challenge parameter, along with the algorithm name S256 (short for SHA256).

https://.../oauth2/v2.0/authorize?
&client_id=a3142eab-4e31-47f1-8624-196e5a94b548
&scope=openid profile offline_access
&redirect_uri=http://localhost:4200
&code_challenge=7SdM5F3nmBpB3Y91Xpxe2kxdeB9kHEyXnrlp0M1gpiE
&code_challenge_method=S256
...
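Here is a minimal TypeScript sketch of how a browser app could generate the code verifier and code challenge with the Web Crypto API (OAuth SDKs such as MSAL do this for you internally):

// Encode raw bytes as a base64url string (no padding), as PKCE requires.
function base64UrlEncode(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...Array.from(bytes)))
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// Generate a high-entropy code verifier (32 random bytes -> 43 characters,
// within the 43-128 character range required by RFC 7636).
function generateCodeVerifier(): string {
  const bytes = new Uint8Array(32);
  crypto.getRandomValues(bytes);
  return base64UrlEncode(bytes);
}

// Hash the verifier with SHA-256 to produce the code challenge.
async function generateCodeChallenge(verifier: string): Promise<string> {
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(verifier)
  );
  return base64UrlEncode(new Uint8Array(digest));
}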

The authorization server notes down the code challenge and returns the auth code as usual.

Now when the client makes the back-channel POST request, it must attach the code verifier in the request body:

{     
client_id: "a3142eab-4e31-47f1-8624-196e5a94b548"
redirect_uri: "http://localhost:4200"
scope: "openid profile offline_access"
code: "eyJraWQiOiJjcGltY29yZV8w..."
grant_type: "authorization_code"
code_verifier: "ndSMF3z1qdd5R0RbbU4Fp0CJ7LefERpXP1_P_Mot6yY"
...
}

The authorization server receives the request, runs the same hash function with the code verifier as the input, and compares the output against the code challenge it noted down earlier. If they match, bingo; otherwise, it means the auth code was intercepted by someone else.

Client credential flow

If you have a client app running on a server that does its own thing, without needing to act on behalf of a user, then congratulations: you have unlocked the easiest flow in OAuth 2.0, the client credential flow.

Since the app’s code is not exposed to end users, it can hold a client secret (password). Instead of making redirects, it goes directly to the token endpoint:

https://.../oauth2/v2.0/token?
grant_type=client_credentials
&client_id=a3142eab-4e31-47f1-8624-196e5a94b548
&client_secret=...
&scope=.../access_as_application

There’s a caveat: you must explicitly expose an application scope on your API/resource server app registration before any client apps can access it with the client credential flow.

In contrast with the access_on_behalf_of_user scope we created in part 2, you should give this scope a meaningful name, e.g. access_as_application.
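As a sketch, a backend daemon could obtain its token like this (TypeScript on Node 18+; the endpoint URL is a placeholder, and the secret should come from configuration or a vault, never from source code):

async function getAppToken(): Promise<string> {
  const response = await fetch("https://login.example.com/oauth2/v2.0/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "a3142eab-4e31-47f1-8624-196e5a94b548",
      client_secret: process.env.CLIENT_SECRET ?? "", // never hard-code this
      scope: ".../access_as_application", // full scope URI from your API app registration
    }),
  });
  const body = await response.json();
  return body.access_token; // represents the application itself, not a user
}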

The easter egg flow

There is also a flow available in OAuth 2.0 called the resource owner password credentials grant flow. As the name indicates, you can bypass the redirects and hit the token endpoint directly with the actual username and password, like the good old days!

https://.../oauth2/v2.0/token?
grant_type=password
&client_id=a3142eab-4e31-47f1-8624-196e5a94b548
&client_secret=...
&scope=.../access_on_behalf_of_user
&username=...
&password=...

The account for the username and password must be native to the authorization server, like the one we created in part 2. If the account is federated from another source, this flow won’t work. If you are not familiar with federation, please read part 1 here.

By all means this flow should be avoided, but (there’s always a but) if the situation calls for it, you gotta do what you gotta do. This might be the case when you need to integrate with an old legacy system that only supports service account authentication.

If this is the only viable flow, you might want to consider setting up an authorization server inside your intranet instead of going to the cloud. At least the network is secured in that case (hopefully).
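For completeness, here is what such a call could look like (placeholder endpoint and credentials; again, only reach for this when nothing else works):

// Resource owner password credentials grant: only for legacy
// service-account scenarios with local (non-federated) accounts.
async function getTokenWithPassword(username: string, password: string) {
  const response = await fetch("https://login.example.com/oauth2/v2.0/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "password",
      client_id: "a3142eab-4e31-47f1-8624-196e5a94b548",
      client_secret: process.env.CLIENT_SECRET ?? "", // as in the URL above
      scope: ".../access_on_behalf_of_user", // full scope URI from your app registration
      username,
      password,
    }),
  });
  return response.json();
}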

Hopefully you now understand the different OAuth 2.0 flows on the frontend side. Next we will walk through how to use Microsoft’s MSAL library for Angular to sign in your users and implement flows to access your backend API.
