Actor-Based Authentication

An Alternative Way To Think About Secrets

Brett Neese
5 min read · Dec 10, 2018

Secrets are hard because humans are hard. They get leaked to GitHub, they get echo’d out to the console by lazy developers, they get skimmed by attackers. They’re a fundamental weakness in our security infrastructure.

And yet, secrets are everywhere. API keys, database credentials, even certificates: all rely on this fundamentally broken authentication system. It’s no longer considered best practice to use single-factor authentication to access critical services online, and yet underlying all of these services are single-factor authentication methods (if you’re lucky, maybe you’ve also got an IP whitelist).

Maybe the problem with secrets… is secrets.

It’s unfortunately probably not possible to get rid of secrets entirely. Legacy systems still rely on them, and there aren’t many great alternatives. Even if a database is configured to use client certificates to authenticate, a string has to be passed around somewhere.

But it is possible to begin to think about them differently, particularly in cloud environments. I’ve been working on systems that do exactly that, abstracting secrets into policies. This is already a fairly common way of understanding secrets, but when I asked a few friends what it’s called, they understood the pattern yet didn’t seem to have a name for it. I call this way of thinking about secrets “actor-based authentication.”

The idea is that instead of thinking about secrets as text strings, even if that is what they ultimately are, your infrastructure lets you define policies that transparently pull secrets into your application from an external source, ideally generating them dynamically per connection, based entirely on what the entity, or actor (container, service, instance, application, user, and so on), that needs the secret is. This way, the only level of abstraction developers or DevSecOps need to worry about is “X has access to Y with Z permissions.”
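
To make that concrete, here is a rough, purely illustrative sketch of what such a policy might look like if written down in code; the names are hypothetical and don’t belong to any particular tool:

// Purely illustrative -- a hypothetical policy object, not any real tool's API.
// The unit of thought is "actor X has access to Y with Z permissions,"
// not "here is a connection string to copy around."
var policy = {
  actor: 'orders-service',          // the container/function/instance that needs access
  resource: 'orders-database',      // what it needs to reach
  permissions: ['read', 'write']    // what it is allowed to do with it
};

// The infrastructure, not the developer, is responsible for turning this into
// actual credentials and handing them to the actor at runtime.
console.log(policy);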

To accomplish actor-based authentication effectively, tooling needs to be written that grabs the credentials and injects them into the actor, which can be done in a variety of ways, most commonly through environment variables or a file in a temporary directory. But once that tooling is built, you stop having to worry about .env files or accidentally leaking secrets, though without dynamically generated secrets, secret skimming is still a concern.
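
As a sketch of that kind of tooling (everything here is hypothetical; fetchSecretsForActor stands in for whatever backend resolves the policy), an injection wrapper might fetch the actor’s secrets and pass them to the real application as environment variables:

// Hypothetical injection wrapper -- a sketch, not a real tool.
// fetchSecretsForActor() stands in for whatever backend resolves the actor's policy.
var spawn = require('child_process').spawn;

function fetchSecretsForActor(actor) {
  // In a real system this would call the secret store using the actor's own
  // identity; here it just returns placeholder values.
  return Promise.resolve({ PRINT_PWD: 'resolved-at-runtime-for-' + actor });
}

fetchSecretsForActor(process.env.ACTOR_NAME || 'example-service').then(function (secrets) {
  // Inject the resolved values as environment variables, then hand off to the app.
  var child = spawn('node', ['app.js'], {
    stdio: 'inherit',
    env: Object.assign({}, process.env, secrets)
  });
  child.on('exit', function (code) {
    process.exit(code || 0);
  });
});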

A great example of actor-based authentication that already exists in the wild at massive scale is IAM Roles for Amazon EC2. IAM roles for Amazon EC2 allow EC2 instances to gain access to other AWS resources by defining a policy that explains what the services running on the EC2 instance should have access to. The policy language defines what the applications running on the instance can and cannot do with other AWS services, that control is enforced by the infrastructure itself, and the policy language is constrained enough to be amenable to formal analysis.

Underneath these instance roles, AWS is actually generating and rotating short-lived access key/secret key pairs, but any application on EC2 that uses the AWS SDK doesn’t need to worry about this: it happens automatically and transparently. This is a perfect example of the power of actor-based authentication; developers can focus on “who has access to what and when” instead of “how do I secure this random text string and vary it appropriately across environments.” Other AWS services, such as Lambda, use similar constructs, and Amazon’s own database service, Aurora, can also use IAM roles for user authentication, which is another great example of actor-based authentication.
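
From the application’s point of view it looks something like the snippet below (the bucket name is made up); note that no access key or secret key appears anywhere, because the SDK’s default credential chain picks up the temporary keys the instance role provides:

// No keys anywhere in the code or its configuration: on an EC2 instance with an
// IAM role attached, the AWS SDK's default credential provider chain fetches
// temporary credentials from the instance metadata service and refreshes them
// automatically as they rotate.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.listObjectsV2({ Bucket: 'some-bucket-this-role-can-read' }) // bucket name is illustrative
  .promise()
  .then(function (res) {
    console.log(res.Contents.map(function (obj) { return obj.Key; }));
  })
  .catch(function (err) {
    console.error(err);
  });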

But we don’t have to wait for AWS to write all the tooling. Another example of tooling that enables actor-based authentication is my PR for node-convict alongside my AWS Parameter Store provider. For a project I worked on while developing these patterns, we stored all our secrets in the Parameter Store, under different paths for staging, production, and so on. Because we ran on Lambda, we gave each of our functions an execution role with a policy that grants it access to only the Parameter Store secrets that service needs, by cleverly using wildcards and paths.

For example, a config.js file looks like:

var config = {
  PRINT_USER: {
    doc: 'Print server username',
    format: 'String',
    env: 'PRINT_USER',
    default: 'printing',
    providerPath: '/api/' + process.env.STAGE + '/arctokens/PRINT_USER'
  },
  PRINT_PWD: {
    doc: 'Print server password',
    format: 'String',
    env: 'PRINT_PWD',
    default: 'null',
    providerPath: '/api/' + process.env.STAGE + '/arctokens/PRINT_PWD'
  }
};

var convict = require('@hbkapps/convict');
convict.configureProvider(require('@hbkapps/convict-provider-awsssm'));
module.exports = convict(config)
  .validate()
  .getProperties();
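
Under the hood, a provider like this mostly boils down to asking the Parameter Store for each providerPath, authorized by whatever IAM role the code happens to be running under. Roughly (this is an illustration of the idea, not the actual @hbkapps/convict-provider-awsssm implementation):

// Illustrative only -- a rough idea of what a Parameter Store provider does,
// not the actual @hbkapps/convict-provider-awsssm code.
var AWS = require('aws-sdk');
var ssm = new AWS.SSM();

function resolveParameter(providerPath) {
  // The call is authorized by whatever IAM role the Lambda function (or instance)
  // is running under; the application never handles AWS credentials itself.
  return ssm
    .getParameter({ Name: providerPath, WithDecryption: true })
    .promise()
    .then(function (result) {
      return result.Parameter.Value;
    });
}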

And the PolicyDocument that allows permission to the Parameter Store looks something like:

PolicyDocument:
  Version: '2012-10-17'
  ....
  - Effect: "Allow"
    Action:
      - ssm:GetParametersByPath
      - ssm:GetParameters
      - ssm:GetParameter
    Resource: arn:aws:ssm:*:*:parameter/chartroom/api/${self:custom.stage}/core*
  - Effect: "Allow"
    Action:
      - ssm:GetParametersByPath
      - ssm:GetParameters
      - ssm:GetParameter
    Resource: arn:aws:ssm:*:*:parameter/chartroom/api/${self:custom.stage}/arctokens*

Local developers have different roles on AWS and thus get different secrets when loading the app locally. Actually, they don’t have any access to the Parameter Store secrets at all, so node-convict falls back to default values. In deployed environments, the only variable we set on each Lambda function is STAGE, and it is interpolated into every request to the Parameter Store. And because we deploy our stages separately from each other, we’re able to interpolate the proper STAGE value into the policy at deployment time. Even if an attacker were able to change the STAGE from “staging” to “production,” for instance, the Lambda function would not have permission to read the “production” values anyway, because that access is set via policy.

While there is an order of precedence and a mechanism to override parameters using environment variables, injecting the actual values happens almost magically during application startup and does not rely on managing environment variables or a .env file. This remains compliant with the twelve-factor app because configuration values are stored separately from the code (just in the Parameter Store instead of the environment) and are defined declaratively in any given service's config.js. At the application level, secrets (and all config values) are simply exposed as config.super_secret_db_connection_string. For instance, in pseudocode:

var config = require("./config");

var requestObject = {
  username: config.PRINT_USER,
  password: config.PRINT_PWD
};

request.send("https://abcd.efg", requestObject);

It’s secrets management based on what you are and what you need, not “what random strings are in the environment.” In theory, those values could even be generated uniquely per connection through an isolated proxy with root credentials to the target service, meaning that even if an attacker skims a secret, it’s worthless anyway. (I’m working on this.)

Other tooling could be built to handle different deployment scenarios, such as containers. And tooling could be built to allow for dynamic secrets, using something like Vault (sketched just below). But ultimately, under actor-based authentication, development tooling and infrastructure should declaratively handle the question of “who has access to what,” not “how do I access this text string.”
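
With Vault’s database secrets engine, for example, a sketch along these lines (using the node-vault client; the mount path and role name are assumptions) would hand each caller its own short-lived database credential:

// Sketch of dynamic secrets via Vault's database secrets engine, using node-vault.
// The mount path ('database') and role name ('my-app') are assumptions for illustration.
var vault = require('node-vault')({
  endpoint: process.env.VAULT_ADDR,
  token: process.env.VAULT_TOKEN // ideally obtained via an auth method tied to the actor's identity
});

function getDbCredentials() {
  // Each read mints a fresh, short-lived username/password pair that Vault
  // revokes automatically when the lease expires, so a skimmed secret has
  // little value.
  return vault.read('database/creds/my-app').then(function (result) {
    return {
      username: result.data.username,
      password: result.data.password
    };
  });
}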

Admittedly, more tooling needs to be built out to handle this pattern effectively, and that will take time and effort to do correctly. What exists so far is very nascent but works very well: it is largely transparent to the developer and it Just Works(TM), so this is actually a very exciting approach to an old, boring, and yet surprisingly hard problem.

Ultimately, changing the mindset around secrets and credential management, by adding abstractions that manage access to secrets declaratively as policy, will lead to more secure and easier-to-manage services and microservices.

Brett Neese wants to help developers and artists build and ship the right things, the right way. He’s currently looking to work with anyone else who shares a passion for making the right thing the easy thing. Learn more at https://brett.neese.rocks.
