Autonomous Identification System Part 2

Cloud services offer great economic value to organisations. Typically, as these services proliferate and become interconnected within an organisation’s business systems, a security risk develops when credentials must be stored in the service for a long duration without rotation.

We describe our Autonomous Identification System, which can be used to mitigate this security risk by issuing these services with short-lived temporary credentials.

This allows organisations to get the desired economic benefit, while controlling risk.

  • Part 1 describes the problem, and the solution this system provides
  • Part 2 discusses a number of common use cases, and introduces our open source reference implementation

Common Use Cases

1. CI/CD Pipeline

A continuous deployment system must have valid credentials to deploy to a cloud environment. It is common practice to store these credentials in the CI/CD system. For example, when using CircleCI to deploy to AWS, the access key ID and secret access key must be stored as CircleCI environment variables.

With autonomous identification in place we can avoid storing these credentials in CircleCI altogether, allowing us to benefit from CircleCI’s product without needing to trust it with cloud infrastructure keys. To do this we configure the deployment scripts to retrieve the credentials using this system. The scripts must perform three steps:

  1. Retrieve the autonomous identification entropy file and server’s public key
  2. Request the required credentials from the server
  3. Configure the local system, using aws configure, for example
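The third step can be sketched in Python. This is a minimal illustration, not part of the reference implementation: it writes the retrieved credentials into the AWS shared credentials file, which is equivalent to running aws configure. The profile name, file path, and credential values are placeholders.

```python
import configparser
import os

def write_aws_credentials(access_key_id, secret_access_key,
                          session_token=None, profile="default", path=None):
    """Write (or update) a profile in the AWS shared credentials file."""
    path = path or os.path.expanduser("~/.aws/credentials")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    config = configparser.ConfigParser()
    config.read(path)  # preserve any existing profiles
    config[profile] = {
        "aws_access_key_id": access_key_id,
        "aws_secret_access_key": secret_access_key,
    }
    if session_token:  # present when the credentials are temporary (STS)
        config[profile]["aws_session_token"] = session_token
    with open(path, "w") as f:
        config.write(f)
    return path
```

Because the credentials never touch CircleCI’s environment variables, rotating them on the server requires no change to the pipeline configuration.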

2. Engineer Access

Engineers require access to deployed environments as part of the development process or for troubleshooting. Commonly, this means developers and support engineers will have credentials stored in their local environment.

These privileged users are increasingly targeted for their credentials. By using the autonomous identification system in this use case, there are two immediate benefits:

  1. No credentials are distributed to engineer environments
  2. Credentials can be rotated without the need to update local environments

3. Auto Login

Auto-login can be useful in many circumstances.

One example of this is using Amazon CloudWatch dashboards to monitor infrastructure. To view a dashboard, a user must authenticate to the AWS console and navigate to the CloudWatch dashboard. The session expires within an hour (in the case of an assumed role) or within a maximum of twelve hours. When the session expires the dashboard is closed or, once the maximum session duration is reached, the AWS console logs the user out.

We can automate this process with auto-id in combination with AWS assume role and a custom federation broker. The custom federation broker creates a URL that can be used to access the AWS Management Console.

The process, using auto-id, is as follows:

  1. Obtain valid AWS access credentials through auto-id
  2. Assume a suitable role to access the management console
  3. Generate a login URL; the login URL takes a destination URL as an argument
  4. Visit the login URL which in turn: authenticates to AWS; opens the management console; and navigates to the destination URL

In the case of CloudWatch Dashboards the destination URL would be the target dashboard.
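Step 3 can be sketched as follows, assuming a sign-in token has already been obtained from the AWS federation endpoint (an Action=getSigninToken request made with the assumed-role credentials). The issuer name is a placeholder; only the URL construction is shown here.

```python
from urllib.parse import urlencode

FEDERATION_ENDPOINT = "https://signin.aws.amazon.com/federation"

def build_login_url(signin_token, destination, issuer="auto-id"):
    """Build the federation login URL for the AWS Management Console.

    `signin_token` comes from a prior Action=getSigninToken call to the
    federation endpoint; `destination` is the console page to open, for
    example a CloudWatch dashboard URL.
    """
    query = urlencode({
        "Action": "login",
        "Issuer": issuer,
        "Destination": destination,
        "SigninToken": signin_token,
    })
    return f"{FEDERATION_ENDPOINT}?{query}"
```

Opening the returned URL in a browser authenticates to AWS and lands directly on the destination page.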

Using this process we can fully automate the presentation of dashboards and ensure the session is refreshed before it expires.

4. Third Party Access

Providing third parties access to your cloud account generally requires distribution of credentials. Best practice for AWS dictates that this should be an account that can assume a role with least-privilege policies attached.

The benefit of the autonomous identification system presented in this use case is that the principal account credentials need not be distributed, and can therefore be rotated frequently without the need to update third-party environments.

Figure 1 shows how a third party might use the autonomous identification server to access credentials stored in AWS Secrets Manager. Here the entropy file and public key used by the client are stored in a separate repository while the server stores these files in S3.

Figure 1 Third party access to credentials through the autonomous identification server.

Reference Implementation

We have open-sourced a reference implementation with the following components:

  1. Server — accepts credential requests, verifies client identity, and delivers credentials. The server has several sub-components: an entropy file (a shared secret used to verify client identity); an RSA key pair (the server key pair, used for encryption and server identity); an entropy file repository (a private repository accessible to the server); and a credential store (AWS Secrets Manager).
  2. Client — requests credentials from the server. The client includes an entropy file repository which is a private repository accessible to the client.
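One plausible way a shared entropy file can act as a client-identity check is as an HMAC key over the request body. This is a hedged sketch only; the actual protocol is described in Part 1 and may differ.

```python
import hashlib
import hmac

def sign_request(entropy: bytes, request_body: bytes) -> str:
    """Client side: derive a request signature from the shared entropy file."""
    return hmac.new(entropy, request_body, hashlib.sha256).hexdigest()

def verify_request(entropy: bytes, request_body: bytes, signature: str) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    expected = sign_request(entropy, request_body)
    return hmac.compare_digest(expected, signature)
```

Only a party holding the same entropy file can produce a valid signature, so rotating the entropy file immediately invalidates stale clients.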


The server reference implementation comprises several components:

  1. An AWS Lambda function
  2. An S3 bucket to store the entropy file and the server RSA keys
  3. Credentials stored in AWS Secrets Manager

The project provides a Docker container to deploy the server and scripts to store secrets in Secrets Manager. See the README for detailed instructions.

Figure 2 Autonomous identification server components


The client implements two use cases — engineer access and auto login.

Engineer Access

This client is a command-line environment that requires access to an AWS account to execute CLI commands.

Implemented as a Docker container with a Python client script, this is intended to simulate an engineer’s local environment where AWS CLI (or SDK) access is required to test or troubleshoot a cloud deployment.

The client performs three actions:

  1. It retrieves the entropy file and server’s public key.
  2. It requests credentials from the autonomous identification server, which in turn accesses Secrets Manager.
  3. It configures the local AWS CLI using the aws configure command with the retrieved credentials.

Once the CLI is configured in this way, the client shell can be used to execute AWS commands.

The figures below show how the client accesses the auto-id service. Figure 3 shows the client sharing the S3 bucket used by the server to store the entropy file. Figure 4 shows the client retrieving the entropy file from a private NPM repository.

A separate entropy file repository for the client is preferred over the shared repository. See the project README for details on how to configure the NPM repository.

Figure 3 Client — retrieving entropy file from S3

Figure 4 Client — retrieving entropy file from private NPM repository

Auto Login

The auto-login client demonstrates the ability to fully automate an AWS Management Console login with navigation to a specific service dashboard.

Implemented as a bash script running in a Docker container, this solution performs a number of actions:

  1. It retrieves the auto-id entropy file and server public key
  2. It requests credentials from the autonomous identification server, which in turn accesses Secrets Manager
  3. It uses the AWS credentials to assume a given role
  4. Using the assumed role, it requests a login URL from the AWS federation endpoint

This could be used to automate dashboard displays, for example. To do so, a script is deployed along with the auto-id client to the dashboard display. When the dashboard system is booted the script is executed, retrieving the login URL and opening the browser to the dashboard page within the AWS Management Console. The script can be run on a schedule to refresh the login session periodically, avoiding expired-session logouts and keeping the dashboard display alive.
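As a sketch, the periodic refresh could be driven by a cron entry such as the following. The script path and log location are placeholders.

```
# Hypothetical crontab entry: re-run the auto-login script every 11 hours,
# just inside the 12-hour maximum console session duration.
0 */11 * * * /opt/auto-id/auto-login.sh >> /var/log/auto-login.log 2>&1
```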

Windows Implementation

The reference implementation includes a Windows Batch file implementing the auto-login flow. The batch file executes several steps:

  1. Installs the auto-id keys from a private NPM repository
  2. Uses the auto-login assume role script to: call AWS STS to assume role; call AWS Federation service to get a sign in token; and call AWS Federation service to construct a signed URL. For details see Creating a URL that Enables Federated Users to Access the AWS Management Console (Custom Federation Broker)
  3. Opens the default Windows browser at the signed URL

This script can be added to the Windows startup group to run after login, or a scheduled task can be configured to refresh the session every twelve hours.

Extending and Customising the Implementation

Server Deployment

While the scenarios presented here are based on AWS Lambda deployments, the server could be deployed into any environment that supports public requests and provides private storage for the entropy file and RSA keys.

Credential Store

The reference implementation uses AWS Secrets Manager as the credential store. This could be replaced, or the server could be enhanced to support any number of credential stores.

One way to accomplish this is to specify the credential identifier and a credential store identifier in the request. The server can then connect to the specified credential store. Figure 5 shows what this might look like.
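The dispatch described above can be sketched as a registry keyed by the store identifier. The store names and request shape here are illustrative, not part of the reference implementation; the real stores would wrap calls to AWS Secrets Manager, STS, and so on.

```python
from typing import Callable, Dict

# Each store maps a credential identifier to its secret value.
CredentialStore = Callable[[str], str]

def make_registry() -> Dict[str, CredentialStore]:
    """Build the store registry. Both entries are stubs for illustration."""
    return {
        # Would wrap an AWS Secrets Manager GetSecretValue call.
        "secretsmanager": lambda cred_id: f"<secret for {cred_id}>",
        # Would call AWS STS so only temporary credentials are issued.
        "sts": lambda cred_id: f"<temporary credentials for {cred_id}>",
    }

def handle_request(registry: Dict[str, CredentialStore],
                   store_id: str, credential_id: str) -> str:
    """Route a credential request to the store named in the request."""
    try:
        store = registry[store_id]
    except KeyError:
        raise ValueError(f"unknown credential store: {store_id}")
    return store(credential_id)
```

Adding a new credential store then only requires registering another entry; the request handling path is unchanged.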

In AWS the Security Token Service could act as a credential store meaning only temporary credentials would be supplied by the auto-id server.

Figure 5 Server with multiple credential stores.

Entropy File Repository

The entropy file must be stored in a private repository. In the examples presented here we use S3 and NPM but any private repository could be used including git repositories, file systems, etc.

Bear one requirement in mind when considering alternatives: the system must support frequent key rotation. It should be possible to rotate the entropy file and RSA keys at any time with the server and clients remaining synchronised.

In the reference implementation the key rotation script generates the new keys and then simultaneously updates the S3 bucket for the server and the NPM repository for the client. The server must obtain fresh keys for each request or be made aware when new keys are available.

Wrap Up

We believe autonomous identification has many benefits including:

  1. Avoiding widespread distribution of secret credentials
  2. Enabling frequent rotation of keys and secrets
  3. Facilitating automation without the complication of managing secrets
  4. Providing secure communication over insecure channels

Overall, this is a tool that enhances security and efficiency in build and deployment processes.