Authentication at Namely: Moving to the Edge

Abi Srivastava
Namely Labs · Apr 9, 2019

In the coming year, Namely Engineering is on track to add over 100 new microservices to keep up with our technical and product growth. And as Namely grows, we need to continue adhering to best practices in security and ensuring that security is provided by default. That means providing authentication information to services through mechanisms that are secure, scalable, and maintainable.

Authentication is the process of verifying that you are who you say you are, and it is a fundamental concept in SaaS software. When building microservices, nearly every service is concerned with authentication. A service needs to know which user is making a request so that it can associate ownership with resources and check whether the user is allowed to perform the action they’re requesting. Without a centralized strategy, this becomes a significant overhead in both the development and the footprint of each service.

Namely Before a Common Authentication Framework

Before introducing a standard approach, authentication was implemented in our monolith, and services had to do the heavy lifting to authenticate requests themselves. To ensure that we continued to adhere to a high security standard, we decided to make security a ubiquitous concept. This makes it even easier for our engineers to meet these standards, as services no longer have to care about a primary function like authentication.

Designing a Brand New Gatekeeper

After multiple brainstorming sessions, a few key features stood out as must-haves for the new authentication system.

  1. Scalable and highly performant
  2. Provides a standard and uniform method to authenticate across all services
  3. Treats authentication as a default action, a no-op
  4. Minimizes the effort needed to use the authentication system for a new service

Keeping these must-haves in mind, we decided to handle the process of identifying and authenticating a request at the outermost layer, which is the proxy server. Since all requests pass through the proxy server, we would guarantee security by default.

We call this system Authentication at the Edge.

How does it work?

Any request flowing through our ELBs passes through our OpenResty proxy servers to the Kubernetes cluster. Our OpenResty proxy servers authenticate the request by looking up session information. They then generate and sign a JSON Web Token (JWT) for the particular request. The upstream service can use this JWT to verify a user’s identity and to extract other user information. This JWT will only be visible from inside the cluster and is never exposed to the end-user, thus providing a protected layer of security.

OpenResty

OpenResty is a flavor of NGINX with added Lua support, which gives us the ability to run core logic inside of the proxy server.

When a request flows into NGINX, the auth_request module is used to invoke a custom OpenResty Lua library which checks for a valid session in the request. If the session is invalid, the request is redirected to the user authentication page. If the session is valid, a JWT is generated using the session variables, and the request is forwarded to the Kubernetes ingress controller for service resolution with the JWT added as a header.
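
The post does not show the JWT-generation step itself, but with a signing library such as lua-resty-jwt it could look roughly like the sketch below. The library choice, the key handling, and the claim names (sub, company, exp) are illustrative assumptions rather than Namely’s actual implementation.

local jwt = require "resty.jwt"

-- Hypothetical key handling: requires an `env JWT_SIGNING_KEY;` directive in nginx.conf
local jwt_secret = os.getenv("JWT_SIGNING_KEY")

-- Illustrative session data; the real session store returns Namely-specific fields
local session_info = { user_id = "1234567890", company_id = "acme" }

local token = jwt:sign(jwt_secret, {
    header  = { typ = "JWT", alg = "HS256" },
    payload = {
        sub     = session_info.user_id,
        company = session_info.company_id,
        exp     = ngx.time() + 3600,  -- short-lived; only needs to outlive the request
    },
})

-- Attach the signed token for the upstream service to consume
ngx.req.set_header("Authorization", "Bearer " .. token)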

JWT

JSON Web Tokens are an open industry standard outlined in RFC 7519. They are used as a method for securely representing claims between parties.

A JWT consists of three base64url-encoded segments representing a header, a payload, and a signature. The header contains information about how to read the JWT (the signature algorithm and the type of token). The payload contains a series of claims (key-value pairs). The signature is a digital signature computed over the header and payload segments.

For example, the following header and payload:

{
  "alg": "HS256",
  "typ": "JWT"
}

{
  "sub": "1234567890",
  "name": "John Doe",
  "admin": true
}

would become the encoded JWT:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ

where the final segment is the signature computed over the first two segments (here with HMAC-SHA256, as declared by the HS256 algorithm in the header). The claims in the payload are application dependent, although some standard fields are defined in the RFC.

At Namely, we use JWTs as the source of authentication information. With this standardized method of propagating authentication information, we do not need to maintain multiple headers or make multiple calls to the session store to get the user information.
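
To make this concrete, here is a minimal sketch of how a service inside the cluster (or the proxy itself) could verify the token and read its claims, again using lua-resty-jwt. The library, the header parsing, and the shared-secret handling are assumptions for illustration; individual Namely services may verify the token differently.

local jwt = require "resty.jwt"

-- Hypothetical shared secret; in practice the key would be distributed securely
local jwt_secret = os.getenv("JWT_SIGNING_KEY")

-- Pull the token out of the cluster-internal Authorization header
local auth_header = ngx.req.get_headers()["Authorization"] or ""
local token = auth_header:match("^Bearer%s+(.+)$")
if not token then
    ngx.exit(ngx.HTTP_UNAUTHORIZED)
end

-- Verify the signature, then read user information from the claims
local verified = jwt:verify(jwt_secret, token)
if not verified.verified then
    ngx.exit(ngx.HTTP_UNAUTHORIZED)
end
local user_id = verified.payload.sub  -- claim names are illustrative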

Authenticated vs Unauthenticated

We use a custom session library built on top of the OpenResty session management extension, which allows us to store session information in a configurable store such as a cookie (client side) or Redis (server side). Upon intercepting a request, the Lua code checks for the presence of two properties to verify the authenticity of the request: a valid custom session cookie or a session header. If either of these is present, OpenResty calls our custom session store to validate the session against a set of criteria including, but not limited to, expiration and user validity.
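
For context, session extensions in the OpenResty ecosystem are typically configured through nginx variables. The sketch below assumes the underlying extension is lua-resty-session (the post does not name it) and uses placeholder values; Namely’s custom wrapper may configure its store differently.

# Assumed lua-resty-session style configuration with placeholder values
set $session_storage  redis;                        # server-side store; "cookie" keeps the session client side
set $session_secret   REPLACE_WITH_A_REAL_SECRET;   # key used to protect session data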

Authentication Flow

Implementation Details

We used the auth_request nginx module to authenticate requests. The catch-all nginx location block looks like this:

location / {
    auth_request /auth-service;
    proxy_pass $ingress_upstream;
}

The auth_request module requires the /auth-service endpoint to evaluate the request and return either HTTP status code 401 for unauthenticated requests or status code 200 for authenticated requests. The $ingress_upstream variable points to the Kubernetes ingress controller, which resolves the upstream service using the ingress rules.

The /auth-service location encapsulates a content_by_lua_block, which acts as a “content handler” and executes the Lua code specified. Upon interception of the request, the Lua code checks for the presence of a valid custom session cookie or a session header to verify the authenticity of the request. If either of these properties is present, OpenResty calls our custom session store to validate the session against a set of criteria including, but not limited to, expiration, user validity, and other domain information.

location /auth-service {
    internal;
    content_by_lua_block {
        -- session, jwtutil, and add_bearer_header come from our custom Lua libraries

        -- Look for the custom session cookie first
        local session_id = session.get_cookie()

        -- Fall back to the shared session header if there is no cookie
        if not session_id then
            session_id = ngx.var.http_x_shared_session_id
        end

        -- Neither cookie nor header: reject the request
        if not session_id then
            ngx.status = 401
            ngx.exit(ngx.OK)
        end

        -- Obtain the JWT for this session and check its validity
        local res, err = session.get_JWT(session_id)
        if err or not res or not jwtutil.validate_jwt(res.jwt.token) then
            ngx.status = 401
            ngx.exit(ngx.OK)
        end

        -- Attach the signed JWT as a bearer header and let the request through
        add_bearer_header(res.jwt.token)
        ngx.status = 200
        ngx.exit(ngx.OK)
    }
}

Secure By Default

Secure by default means that the default configuration of our proxy server provides implicit security. By handling authentication in the catch-all location /, all routes are secured and require a valid session. However, we still need to allow unauthenticated requests through to public resources, and to retain the ability to alter a public resource’s behavior when the user is authenticated.

To facilitate this use case, we maintain a list of public_routes which can be accessed without an authenticated request. For these unauthenticated requests, the Lua code checks whether the route being accessed is present in the public_routes list and, if so, forwards the request to the upstream service.
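
As an illustration, the public-route check in the Lua handler could look like the sketch below. The route list, the prefix-matching rule, and the placement inside the /auth-service handler are assumptions; the post does not show Namely’s actual implementation.

-- Hypothetical list of routes that may be accessed without a session
local public_routes = {
    "/health",
    "/login",
    "/public/",
}

-- Return true if the request path starts with any public prefix
local function is_public(path)
    for _, prefix in ipairs(public_routes) do
        if path:find(prefix, 1, true) == 1 then  -- plain find, no Lua patterns
            return true
        end
    end
    return false
end

-- Inside the /auth-service handler: let public routes through without a session
if is_public(ngx.var.uri) then
    ngx.status = 200
    ngx.exit(ngx.OK)
end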

Constructing this list of public_routes was an uphill battle. Namely did not have explicit lists of public resources. This was due to the fact that all end services were previously responsible for their own authentication. Thus, that logic was buried within those services. It would have helped if, from the start, Namely had maintained a list of public routes to allow for easier extraction into a microservice architecture.

Initial Results

We have already started seeing the benefits of authenticating at the edge. Individual service developers no longer have to worry about authentication, which means fewer programming errors and a faster turnaround time for productionizing a new service. We have also reduced tight coupling between services by removing large blocks of authentication code from existing applications.

Future Work

One of the challenges we will tackle later this year is providing the same authentication mechanism for the Namely API. The API is used publicly by multiple clients, including our mobile app, and uses the OAuth flow for authentication. We plan to build this as an OpenResty module so that API requests follow the same internal authentication standard.

In closing, we are thrilled with the flexibility that Authentication at the Edge has provided us. OpenResty has been highly performant, and placing the onus of authentication at the outermost layer has made our ecosystem boundaries clean and secure. Namely engineers are now able to develop services more rapidly and grow while continuing to adhere to best security practices.

Interested in working on other projects like this? Check out our Careers Page!

This would not have been possible without the help of multiple teams at Namely that enabled us to develop this efficiently and collaboratively. Additionally, I’d like to thank Nicholas Narh and Martin Kess as the other two primary contributors to this project. Finally, I would like to thank Sid Gopinath and Mike Hamrah for their immense help in writing this article!
