Building SSO based on Spring Authorization Server (Part 2 of 3)

D Snezhinskiy
10 min read · Oct 27, 2023


This is a continuation of the first article, in which we implemented Code Grant Authorization and built a custom Grant Password Authorization method (also known as Resource Owner Password Credentials) that passes the username and password from a third-party application.

Table of Contents

Part 1 — link

  • Introduction
  • Chapter 1: Code Grant Authorization
  • Chapter 2: Grant Password Authorization

Part 2

  • Chapter 3: Transition to Opaque Tokens
  • Chapter 4: PostgreSQL + Role Model

Part 3 — link

  • Chapter 5: Authorization using Social Login (Google as an example)
  • Chapter 6: Resource Server Configuration

Chapter 3: Transition to Opaque Tokens

In the first part, we used JWT, and there’s a reason for that. JWT is familiar and understandable to everyone. It’s convenient, especially during debugging, as it contains all the data and can be easily decoded and verified. However, I also wanted to demonstrate how to work with Opaque tokens.

Let’s start with some theory:

Opaque access tokens are tokens in a proprietary format that you cannot inspect; they typically contain an identifier that points to information in the server’s persistent storage. To validate an opaque token, the recipient of the token needs to call the server that issued the token.

In other words, an Opaque token is simply a string containing random characters, and it looks something like this:

Jio1i9leWrjYoDndyWpwA7QcPhgvTKJedGphCfbwSlyxtvFZMQkNCVV6dK_VdDXJF1uc3qB_IQm9DELlKMgJdVT6uSpIyylLPu5-lrxR8UfAypbFWonsImts98X7_33-
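Unlike a JWT, this string encodes nothing by itself. As a rough sketch (not Spring’s actual implementation, which has its own secure generator), such a reference string can be produced from random bytes:

```java
import java.security.SecureRandom;
import java.util.Base64;

// Illustration only: an opaque ("reference") token is just random bytes,
// Base64URL-encoded. It carries no claims -- the server must look it up
// in its storage to learn anything about the session.
public class OpaqueTokenSketch {

    private static final SecureRandom RANDOM = new SecureRandom();

    static String newOpaqueToken(int numBytes) {
        byte[] bytes = new byte[numBytes];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        // 96 random bytes -> a 128-character URL-safe string,
        // similar in shape to the token shown above.
        System.out.println(newOpaqueToken(96));
    }
}
```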

There are several reasons why, in my opinion, it’s worth choosing an Opaque token over JWT. I’ve outlined them below as separate points, as I see it:

  1. Enhanced security. While JWTs do have signatures, they are not encrypted documents. As a result, anyone can decode and read their internal structure, which, in one way or another, exposes certain information about the internal workings that we would rather keep private.
  2. Centralized validation. Validation and verification of Opaque tokens are carried out on the SSO server. This means the server has full control over the token’s lifecycle, including the ability to revoke tokens.
  3. Reduced network overhead. This one is more of a minor bonus. With JWT, we would still need to confirm the token by sending a request to the SSO server (distributing keys to all services in the cluster for independent signature verification is not considered, as it increases risk). In our example, the JWT is only 676 bytes, but in practice, once authorities are added, it can easily exceed several kilobytes, while an Opaque token always has a consistent size of just under 100 bytes. The extra traffic from JWT won’t noticeably increase the bill, but it will slightly increase network latency.
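The first point is easy to demonstrate: the payload of any JWT can be read with nothing more than a Base64 decoder, no key required. A small self-contained sketch (the token here is fabricated for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// A JWT is signed but NOT encrypted: its payload is plain Base64URL-encoded
// JSON, readable by anyone who holds the token.
public class JwtPeek {

    // Decodes the payload (second dot-separated segment) of a JWT into JSON text.
    public static String decodeJwtPayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) {
            throw new IllegalArgumentException("Not a JWT: " + jwt);
        }
        byte[] json = Base64.getUrlDecoder().decode(parts[1]);
        return new String(json, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Build a fake token: header and payload are just encoded JSON.
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header  = enc.encodeToString("{\"alg\":\"RS256\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString(
                "{\"sub\":\"admin\",\"authorities\":[\"ARTICLE_WRITE\"]}"
                        .getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + payload + ".fake-signature";

        System.out.println(decodeJwtPayload(jwt));
        // prints: {"sub":"admin","authorities":["ARTICLE_WRITE"]}
    }
}
```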

Let’s use the Authorization Grant Password flow as an example to understand how the interaction with the SSO server will change after switching to Opaque tokens. I’ve added additional steps to the diagram to explain what will happen after the authorization process is completed.

Steps 1 through 4 are no different from the standard scheme. The user enters their username and password into the form generated by the SPA and sends the request.

5. If authorization is successful, the server returns a response with an Opaque access token.

6. At this stage, the SPA already has the access token on its side and can handle user requests. Let’s suppose the user now requests some data from the SPA.

7. When receiving a request from the user, the SPA, in turn, sends a request with the access token to the Resource Server.

8. Resource Server receives the request, but before processing it and returning the data, it needs to ensure that the user has the necessary authorities. To do this, the Resource Server sends a request to the SSO’s special introspection endpoint.

9. If the access token is valid, the SSO responds with a list of claims. We’ve encountered these before in JWT. For example:

{
  "active": true,
  "sub": "demo-client",
  "aud": [
    "demo-client"
  ],
  "nbf": 1694723187,
  "iss": "http://localhost:8081",
  "exp": 1694741187,
  "iat": 1694723187,
  "jti": "c5f6e5ad-60e2-4537-b932-09763781b0cd",
  "authorities": [
    "ARTICLE_WRITE",
    "ARTICLE_READ"
  ],
  "username": "admin",
  "client_id": "demo-client",
  "token_type": "Bearer"
}

10. If the introspection response contains the necessary authorities, the Resource Server processes the request and returns the result.

I think everything should be clear here.

So, let’s move on to configuring our server to work with Opaque tokens. As usual, let’s refer to the documentation. Here’s what it says there:

The format of the generated OAuth2AccessToken varies, depending on the TokenSettings.getAccessTokenFormat() configured for the RegisteredClient. If the format is OAuth2TokenFormat.SELF_CONTAINED (the default), then a Jwt is generated. If the format is OAuth2TokenFormat.REFERENCE, then an “opaque” token is generated.

In other words, to transition to Opaque tokens, we need to update the bean configuration of the RegisteredClientRepository according to the recommendations:

AuthorizationServerConfiguration.java

//...

@Bean
public RegisteredClientRepository registeredClientRepository() {
    RegisteredClient demoClient = RegisteredClient.withId(UUID.randomUUID().toString())
            .clientName("Demo client")
            .clientId("demo-client")
            .clientSecret("{noop}demo-secret")
            .redirectUri("http://localhost:8080/auth")
            .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC)
            .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
            .authorizationGrantType(AuthorizationGrantType.REFRESH_TOKEN)
            .authorizationGrantType(AuthorizationGrantTypePassword.GRANT_PASSWORD)
            .tokenSettings(
                    TokenSettings.builder()
                            .accessTokenFormat(OAuth2TokenFormat.REFERENCE)
                            .accessTokenTimeToLive(Duration.ofMinutes(300))
                            .refreshTokenTimeToLive(Duration.ofMinutes(600))
                            .authorizationCodeTimeToLive(Duration.ofMinutes(20))
                            .reuseRefreshTokens(false)
                            .build()
            )
            .build();

    return new InMemoryRegisteredClientRepository(demoClient);
}

//...

The TokenSettings.builder() methods are quite self-explanatory and, I believe, do not require further explanation.

The short setup procedure is now complete, and we can test the SSO server’s functionality. We send an authentication request from Postman in the same way as we did previously in the Grant Password method.

In response, we receive the following:

Now our goal is to obtain a list of claims. To achieve this, let’s simulate a request to the introspection endpoint (/oauth2/introspect) using the received Opaque access token. Don’t forget to turn on Basic Auth and fill the username and password fields with the clientId and clientSecret data.
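The same introspection call can also be scripted with the JDK’s built-in HTTP client. The endpoint URL and the demo-client/demo-secret credentials below match the demo setup from the first part and are assumptions about your local environment:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of the Resource Server's side of step 8: POST the opaque token to the
// SSO introspection endpoint, authenticating with the client's credentials.
public class IntrospectionClient {

    // Builds the HTTP Basic Authorization header from clientId/clientSecret.
    static String basicAuth(String clientId, String clientSecret) {
        String raw = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder()
                .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("Usage: IntrospectionClient <opaque-access-token>");
            return;
        }

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/oauth2/introspect"))
                .header("Authorization", basicAuth("demo-client", "demo-secret"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("token=" + args[0]))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // For a valid token the body contains the claims shown above;
        // for an invalid one it is simply {"active":false}.
        System.out.println(response.body());
    }
}
```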

Everything is working, we’ve obtained the list of claims! However, our Resource Server is still unhappy and will block our request because we lost the authorities along the way. If you remember, we manually added authorities and username to the claims in the jwtTokenCustomizer for JWT tokens. But now everything has changed, and JWT tokens are no longer generated. Instead, right after authorization, an OAuth2AccessToken is generated:

Then, the token is stored in the InMemoryOAuth2AuthorizationService, from where it will be retrieved during subsequent introspection requests.

The documentation tells us that to customize the OAuth2AccessToken, we need to register a new access token customizer:

TokenConfiguration.java

//...

@Bean
public OAuth2TokenGenerator<? extends OAuth2Token> tokenGenerator(
        JWKSource<SecurityContext> jwkSource,
        OAuth2TokenCustomizer<OAuth2TokenClaimsContext> accessTokenCustomizer
) {
    NimbusJwtEncoder jwtEncoder = new NimbusJwtEncoder(jwkSource);
    JwtGenerator jwtGenerator = new JwtGenerator(jwtEncoder);
    OAuth2AccessTokenGenerator accessTokenGenerator = new OAuth2AccessTokenGenerator();
    accessTokenGenerator.setAccessTokenCustomizer(accessTokenCustomizer);
    OAuth2RefreshTokenGenerator refreshTokenGenerator = new OAuth2RefreshTokenGenerator();

    return new DelegatingOAuth2TokenGenerator(
            jwtGenerator, accessTokenGenerator, refreshTokenGenerator
    );
}

@Bean
public OAuth2TokenCustomizer<OAuth2TokenClaimsContext> accessTokenCustomizer() {
    return context -> {
        UserDetails userDetails = null;

        if (context.getPrincipal() instanceof OAuth2ClientAuthenticationToken) {
            userDetails = (UserDetails) context.getPrincipal().getDetails();
        } else if (context.getPrincipal() instanceof AbstractAuthenticationToken) {
            userDetails = (UserDetails) context.getPrincipal().getPrincipal();
        } else {
            throw new IllegalStateException("Unexpected token type");
        }

        if (!StringUtils.hasText(userDetails.getUsername())) {
            throw new IllegalStateException("Bad UserDetails, username is empty");
        }

        context.getClaims()
                .claim(
                        "authorities",
                        userDetails.getAuthorities().stream()
                                .map(GrantedAuthority::getAuthority)
                                .collect(Collectors.toSet())
                )
                .claim("username", userDetails.getUsername());
    };
}

//...

If you now try to send a request to the introspection endpoint again, you will receive the desired result with the authorities and username fields:

{
  "active": true,
  "sub": "demo-client",
  "aud": [
    "demo-client"
  ],
  "nbf": 1694853187,
  "iss": "http://localhost:8081",
  "exp": 1694871187,
  "iat": 1694853187,
  "jti": "d0bae266-22ce-4146-a5fa-542a2bce2ce0",
  "authorities": [
    "ARTICLE_WRITE",
    "ARTICLE_READ"
  ],
  "username": "admin",
  "client_id": "demo-client",
  "token_type": "Bearer"
}

And now, we can remove jwtTokenCustomizer, jwkSource, and generateRsaKey(). If we continue with Opaque tokens, we no longer need them. The entire journey with JWT was mostly for understanding all the processes happening under the hood of the authorization server. Plus, if you choose to work with JWT, you already have a functional example to reference.

Chapter 4: PostgreSQL + Role Model

It’s time to move away from InMemory storage and transition to storing users in a database. For this purpose, we’ve chosen PostgreSQL, as it’s one of the most common choices in projects of various sizes. Additionally, since we’re migrating to a real database, it makes sense to implement a role-based model.

I won’t spend much time describing various implementations of access control role models, as it’s not important in the context of this article. Below is a schema that I consider optimal for small and medium-sized projects.

Such a model allows for flexible access control to the API based on authorities.

Let’s start by adding the necessary dependencies to the Gradle project.

implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
implementation 'org.flywaydb:flyway-core'
compileOnly 'org.projectlombok:lombok'
runtimeOnly 'org.postgresql:postgresql'
annotationProcessor 'org.projectlombok:lombok'

Next, we will deploy PostgreSQL in a Docker container.

docker-compose.yml

version: '3.9'
services:

  demosso:
    image: 'postgres:14'
    container_name: demosso
    ports:
      - "5442:5432"
    environment:
      - POSTGRES_DB=demosso
      - POSTGRES_USER=demosso
      - POSTGRES_PASSWORD=password

To manage database migrations, we will use Flyway. Let’s add the necessary PostgreSQL and Flyway configurations to the application.yml file.

application.yml

# ...

spring:
  datasource:
    url: jdbc:postgresql://localhost:5442/demosso
    username: demosso
    password: password
    driver-class-name: org.postgresql.Driver
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      connection-test-query: "SELECT 1"
      connectionTimeout: 30000
      validation-timeout: 30000
      maximum-pool-size: 10
      initialization-fail-timeout: 1
      leak-detection-threshold: 0

  jpa:
    hibernate:
      ddl-auto: none

  flyway:
    url: jdbc:postgresql://localhost:5442/demosso
    user: demosso
    password: password

# ...

Flyway by default will look for migration files in the directory: classpath:/db/migration/

Let’s create two separate migration files, one for the database schema and one for test data:

V1__2023.09.16.sql

CREATE TABLE IF NOT EXISTS app_user (
    id uuid NOT NULL PRIMARY KEY,
    username varchar(64) NOT NULL,
    password varchar(64) DEFAULT NULL,
    first_name varchar(32) DEFAULT NULL,
    middle_name varchar(32) DEFAULT NULL,
    last_name varchar(32) DEFAULT NULL,
    locale varchar(2) DEFAULT NULL,
    avatar_url varchar(2048) DEFAULT NULL,
    active boolean DEFAULT false NOT NULL,
    created_at timestamp without time zone NOT NULL
);

CREATE TABLE IF NOT EXISTS role (
    id serial NOT NULL PRIMARY KEY,
    name varchar(64) NOT NULL
);

CREATE TABLE IF NOT EXISTS user_role (
    user_id uuid NOT NULL,
    role_id integer NOT NULL
);

CREATE TABLE IF NOT EXISTS authority (
    id serial NOT NULL PRIMARY KEY,
    name varchar(32) NOT NULL
);

CREATE TABLE IF NOT EXISTS role_authority (
    role_id integer NOT NULL,
    authority_id integer NOT NULL
);

Let’s create two test users with the roles USER and ADMIN:

  • USER: username=user, password=secret
  • ADMIN: username=admin, password=secret

Then we will assign the appropriate authorities to these roles.

V2__2023.09.16.sql

INSERT INTO app_user (id, username, password, active, created_at)
VALUES ('7f000001-8a56-11d1-818a-56e25ae30000', 'admin', '{noop}secret', true, NOW());
INSERT INTO app_user (id, username, password, active, created_at)
VALUES ('7f000001-8a56-1695-818a-56687e770000', 'user', '{noop}secret', true, NOW());

INSERT INTO role (id, name) VALUES (1, 'ADMIN');
INSERT INTO role (id, name) VALUES (2, 'USER');

INSERT INTO user_role (user_id, role_id) VALUES ('7f000001-8a56-11d1-818a-56e25ae30000', 1);
INSERT INTO user_role (user_id, role_id) VALUES ('7f000001-8a56-1695-818a-56687e770000', 2);

INSERT INTO authority (id, name) VALUES (1, 'ARTICLE_READ');
INSERT INTO authority (id, name) VALUES (2, 'ARTICLE_WRITE');

-- ADMIN can read and write
INSERT INTO role_authority (role_id, authority_id) VALUES (1, 1);
INSERT INTO role_authority (role_id, authority_id) VALUES (1, 2);

-- USER only can read
INSERT INTO role_authority (role_id, authority_id) VALUES (2, 1);

And now we can proceed to create the entity classes. We will need User, Role, and Authority.
There’s nothing special here, except I’d like to mention that we will choose UUID as the type for User.id. This will allow us to safely expose a form for creating and editing users in the future. To preserve the ability to sort by id, we will populate it with time-based UUID values.

User.java

@Setter
@Getter
@Entity(name = "User")
@Table(name = "app_user")
public class User implements Serializable {

    @Id
    @UuidGenerator(style = UuidGenerator.Style.TIME)
    @Column(name = "id", updatable = false, nullable = false)
    private UUID id;

    @JsonIgnore
    @ManyToMany(cascade = CascadeType.MERGE, fetch = FetchType.EAGER)
    @JoinTable(name = "user_role",
            joinColumns = @JoinColumn(name = "user_id"),
            inverseJoinColumns = @JoinColumn(name = "role_id"))
    private Set<Role> roles = new HashSet<>();

    @Column(nullable = false, unique = true)
    private String username;

    private String password;

    private String firstName;

    private String middleName;

    private String lastName;

    private String locale;

    private String avatarUrl;

    private boolean active;

    @CreationTimestamp
    protected LocalDateTime createdAt;
}

Role.java

@Getter
@Setter
@Table(name = "role")
@Entity
public class Role implements Serializable {

    @Id
    @Column(name = "id")
    @GeneratedValue(generator = "role_id_seq_gen")
    @GenericGenerator(
            name = "role_id_seq_gen",
            strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
            parameters = {
                    @org.hibernate.annotations.Parameter(name = "sequence_name", value = "role_id_seq"),
                    @org.hibernate.annotations.Parameter(name = "initial_value", value = "1"),
                    @org.hibernate.annotations.Parameter(name = "increment_size", value = "1")
            }
    )
    protected Integer id;

    @ManyToMany(cascade = CascadeType.MERGE, fetch = FetchType.EAGER)
    @JoinTable(name = "role_authority",
            joinColumns = @JoinColumn(name = "role_id"),
            inverseJoinColumns = @JoinColumn(name = "authority_id"))
    private Set<Authority> authorities = new HashSet<>();

    private String name;
}

Authority.java

@Getter
@Setter
@Entity
@Table(name = "authority")
public class Authority implements Serializable {

    @Id
    @Column(name = "id")
    @GeneratedValue(generator = "authority_id_seq_gen")
    @GenericGenerator(
            name = "authority_id_seq_gen",
            strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
            parameters = {
                    @org.hibernate.annotations.Parameter(name = "sequence_name", value = "authority_id_seq"),
                    @org.hibernate.annotations.Parameter(name = "initial_value", value = "1"),
                    @org.hibernate.annotations.Parameter(name = "increment_size", value = "1")
            }
    )
    protected Integer id;

    private String name;
}

And, of course, we’ll need a UserRepository and UserService for data access.

UserRepository.java

@Repository
public interface UserRepository extends CrudRepository<User, UUID> {
    Optional<User> findByUsername(String username);
}

UserServiceImpl.java

@Service
@RequiredArgsConstructor
public class UserServiceImpl implements UserService {

    private final UserRepository repository;

    @Override
    public User getByUsername(String username) {
        if (!StringUtils.hasText(username)) {
            return null;
        }

        return repository.findByUsername(username).orElse(null);
    }

    @Override
    public User save(User entity) {
        return repository.save(entity);
    }
}

Furthermore, it’s time to move away from the previously declared InMemory userDetailsService bean, which was used for testing purposes in SecurityConfiguration.java, and declare a service that will interact with the database.

CustomUserDetailsService.java

@Service
@RequiredArgsConstructor
public class CustomUserDetailsService implements UserDetailsService {

    private final UserService userService;

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        User user = userService.getByUsername(username);

        if (user == null) {
            throw new UsernameNotFoundException("Unable to find user: " + username);
        }

        return new CustomUserDetails(user);
    }
}

In the CustomUserDetails class, let’s declare a new constructor with a single parameter of type User.

//...

public CustomUserDetails(User user) {
    this.username = user.getUsername();
    this.password = user.getPassword();
    this.authorities = user.getRoles().stream()
            .flatMap(role -> role.getAuthorities().stream()
                    .map(authority -> new SimpleGrantedAuthority(authority.getName()))
            )
            .collect(Collectors.toList());
}

//...
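To make the flattening concrete, here is the same role-to-authority logic expressed with plain collections (no Spring types), using the ADMIN role seeded in the V2 migration:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Mirrors the CustomUserDetails constructor: collect the authorities of every
// role the user holds into a single flat set of authority names.
public class AuthorityFlattening {

    static Set<String> flatten(Map<String, Set<String>> rolesToAuthorities) {
        return rolesToAuthorities.values().stream()
                .flatMap(Set::stream)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        // 'admin' has the single role ADMIN, which carries both authorities
        // from the seed data.
        Map<String, Set<String>> adminRoles =
                Map.of("ADMIN", Set.of("ARTICLE_READ", "ARTICLE_WRITE"));
        System.out.println(flatten(adminRoles));
        // a set containing ARTICLE_READ and ARTICLE_WRITE (order unspecified)
    }
}
```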

So now, we have completed the work related to connecting to PostgreSQL. And our system has two users, ‘user’ and ‘admin,’ each with their respective roles and authorities.

The next part will be released in a week, and I’m sure you’ve subscribed and won’t miss it, right? 😉
