
Setting Up Programmatic AWS Access for MFA Protected Federated Identities

I am currently working on improving the security of cloud operations for one of my clients and wanted to share an interesting solution I developed to help provide programmatic access to AWS from local developer environments using Federated Identities only.

The Challenge

Accessing AWS resources from outside a VPC usually requires some form of AWS credentials. These come in two forms: long term and short term keys. Long term credentials are created through IAM users: each member of the team gets their own IAM user, which they then use to generate these long term keys.

However, my goal was to completely migrate our team to using purely Federated Identities for all forms of AWS access. This approach comes with several great benefits described in detail in this chapter of IAM service documentation. One additional challenge we were facing was that our corporate login portal was protected by an MFA layer.

Note: If you don’t have a corporate portal for federated user authentication, please take a look at this AWS blog post to see how you can set it up.

The Solution

The solution that I came up with consists of multiple parts.

1. Automating Corporate Portal Sign-In

The implementation of this step heavily depends on how your corporate sign-in portal is implemented. You will need to use the Selenium web driver to programmatically interact with the sign-in page. On top of that, you will also likely need to implement some sort of command-line prompt to collect ADFS credentials from the user and keep them informed of the sign-in status: whether they should grab and enter an OTP code or take action on whatever other MFA mechanism you might be using.

2. Setting Up a Proxy, Extracting a SAML Assertion

We will be using browsermob-proxy for this step since it supports HTTPS (full MITM). It will run as a separate process in the operating system. Setting up this proxy was a little frustrating to me in the beginning and you might feel the same way, so let me explain how it works:

The following example will create a browsermob daemon on localhost port 9900 and a MITM proxy on localhost port 27960. It will also instruct your new proxy to send all incoming requests through the proxy.corp.company.com upstream proxy, effectively creating a proxy chain.

# start a browsermob daemon on port 9900
# to install it, follow the instructions on its GitHub homepage
# https://github.com/lightbody/browsermob-proxy/
/usr/bin/browsermob-proxy -port 9900
# create a MITM proxy on port 27960; quote the URL so the shell
# doesn't treat '&' as a background operator
curl -X POST "localhost:9900/proxy?port=27960&httpProxy=proxy.corp.company.com"

After the proxy is instantiated, you will need to instruct the browser instance to use it. If you're using Chrome, pass the following switch when launching it:

--proxy-server=localhost:27960

With Selenium, that can be done using the ChromeOptions class. You can read more about configuring proxy settings in Chrome from the command line here.
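With the Node.js selenium-webdriver package, for instance, that configuration could look like this (a sketch only; it assumes selenium-webdriver and a matching chromedriver are installed):

```javascript
const { Builder } = require("selenium-webdriver");
const chrome = require("selenium-webdriver/chrome");

// route all browser traffic through the local MITM proxy
const options = new chrome.Options().addArguments("--proxy-server=localhost:27960");

// launch Chrome with the proxy switch applied
const driver = new Builder()
  .forBrowser("chrome")
  .setChromeOptions(options)
  .build();
```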

Now to extract a SAML assertion we will need to do a few things. First, we will need to start recording network activity before the console sign-in event. Second, after the sign-in is successfully completed, we will need to fetch the recorded activity back from the proxy server:

# initiate recording by sending a PUT request to the browsermob daemon;
# the captureContent param enables recording of POST data;
# the number between /proxy and /har is the port the MITM proxy runs on
curl -X PUT "localhost:9900/proxy/27960/har?captureContent=true"
# user signs in to the AWS console through the corporate sign-in portal...
# ...success!
# run the same exact command to fetch the data recorded so far
curl -X PUT "localhost:9900/proxy/27960/har?captureContent=true"

This command will return a pretty hefty JSON response in HAR format which you will need to programmatically iterate through, looking for a POST request to https://signin.aws.amazon.com/saml. The request will contain a field named SAMLResponse and data within that field is what we need.
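That iteration can be sketched as follows (extractSamlAssertion is a hypothetical helper name; the object layout follows the standard HAR structure of log.entries[].request):

```javascript
// Walk a HAR dump and pull out the SAMLResponse form field from the
// POST request made to the AWS SAML sign-in endpoint.
function extractSamlAssertion(har) {
  for (const entry of har.log.entries) {
    const req = entry.request;
    if (
      req.method === "POST" &&
      req.url.startsWith("https://signin.aws.amazon.com/saml") &&
      req.postData
    ) {
      // with captureContent=true the url-encoded form body is in postData.text
      const form = new URLSearchParams(req.postData.text);
      const assertion = form.get("SAMLResponse");
      if (assertion) return assertion;
    }
  }
  return null; // no SAML sign-in request found in the recording
}
```

URLSearchParams takes care of decoding the url-encoded body, so the returned value is the base64 SAML assertion, ready to hand to STS in the next step.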

At this point, you may also choose to shut down the proxy to prevent potential memory leaks. That can be done by sending a DELETE request to the daemon:

curl -X DELETE localhost:9900/proxy/27960

3. Fetching and Using Temporary Access Credentials

Now that we have the SAML assertion, it's time to turn it into a pair of temporary access credentials. We're going to use the STS class from the AWS SDK (read more on assumeRoleWithSAML here).

const AWS = require("aws-sdk");
const sts = new AWS.STS();
const params = {
  SAMLAssertion: "AssertionYouGotFromHarDump",
  PrincipalArn: "arn:of:saml:provider:for:federated/identity",
  RoleArn: "arn:of:federated:identity:that:fetched/samlAssertion"
};
sts.assumeRoleWithSAML(params, (err, data) => {
  if (err) throw err;
  // data.Credentials holds the temporary keys
});

The call responds with STS credentials that we can now use in our local environment applications.

// successful response
{
  ...
  Credentials: {
    AccessKeyId: "AAAAAABBBBBBBCC...",
    Expiration: <Date Representation>,
    SecretAccessKey: "sEcReTaCcEsSkEy...",
    SessionToken: "sEsSiOnToKeN..."
  }
  ...
}

To use them, first make sure that you don't specify any AWS access keys in your OS, shell, or application environment variables, including environment variables injected into your application from .env files.

Once verified, you can plug your newly obtained credentials into the ~/.aws/credentials file, either by modifying it programmatically or by using the AWS CLI:

# set credentials on the default AWS CLI profile
aws configure set aws_access_key_id AAAAAAAAAA... --profile default
aws configure set aws_secret_access_key bQbGAb... --profile default
aws configure set aws_session_token aaABAaabAA... --profile default

This will provide access to AWS for all applications or shell scripts that use the AWS CLI or AWS SDK, because both tools read configuration stored in the ~/.aws directory. In fact, there is a whole credential lookup order that these tools follow, which is why I asked you to make sure you don't specify any environment variables first!

Note: You might prefer to go an extra step and assume a different role with more restrictive permissions depending on the application that you’re trying to provide programmatic access for. In that case, you will need to run another STS call using credentials that you have already obtained:

// the credentials you already obtained go to the STS client
// itself, not into the request parameters
const sts = new AWS.STS({
  accessKeyId: "...keyYouObtained",
  secretAccessKey: "...keyYouObtained",
  sessionToken: "...keyYouObtained"
});
const params = {
  RoleArn: "arn:of:role:that:you:want/toUse",
  RoleSessionName: "user.email@company.com"
};
sts.assumeRole(params, (err, data) => { /* ... */ });

Keep in mind, though, that with this approach you're going to face an additional challenge: figuring out how to insert credentials into the specific applications that will be using them, as we can no longer simply edit the ~/.aws configuration with a single pair of credentials.

Configuring Git To Use Temporary Credentials

So far we were able to grab a set of temporary credentials and use them in applications running on our local environments. However, if you're using CodeCommit, you might want to also update your Git configuration to work over temporary STS credentials. This will allow you to avoid having to use SSH or HTTPS credentials, which are essentially just another form of long term credentials that we're trying to get rid of.

To switch to temporary credentials, disable any Git credential managers currently in use, then run the following Git commands:

git config --global credential.helper \
'!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

Note: If you’re running these Git commands on Windows, omit the ‘\’ line continuation and wrap the [!aws … $@] part in double quotes instead of single quotes.

After that, if you have been using the SSH format, make sure to update origin URLs to the HTTPS format in every local Git repository pulled from CodeCommit. Also make sure to do the same in package.json files across all projects that use CodeCommit packages as NPM dependencies.
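As a quick illustration of the remote switch (the region and repository name below are placeholders):

```shell
# demo in a throwaway repository: swap an SSH CodeCommit remote
# for its HTTPS equivalent
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git remote add origin ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
git remote set-url origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
git remote get-url origin
```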

If these package.json updates, however, break your build or deployment pipelines in the cloud, check out the article I wrote about using temporary credentials in CI/CD pipelines in AWS cloud!

Conclusion

Depending on your use case you might end up automating this process even further to better improve the developer experience. I decided to not dive into that much detail since this article is getting pretty long at this point :), but I can absolutely imagine packaging up this Selenium application along with browsermob daemon and all of its dependencies in a Docker image and even integrating it as a micro-service into existing containerized applications.

Switching to short term credentials improves the overall security of your team's cloud operations. An additional benefit of using purely Federated Identities for AWS access is that access is revoked as soon as the ADFS credentials bound to the identity cease to exist. So when someone leaves the company, there is no IAM user list you might forget to clean up, and no chance of accidentally leaving a hole in your infrastructure security!

Thank you for reading this article! If you would like to see more content like this in the future please leave a like and share this article. Till next time!

Writing full-stack React.js applications and building cloud solutions on AWS. Find me on GitHub and LinkedIn @iamarkadyt or at www.arkadyt.dev