Configuring Role Based Access for the Amazon S3 connector

Saagnik Adhikary
Another Integration Blog
10 min read · Sep 18, 2023

With growing concerns about security violations, breaches, and the unintended access that comes with them, and with a paradigm shift towards Zero-Trust policies and least-privilege permissions, every organization is tightening its belt and moving towards a leaner, more granular approach, especially at permission boundaries.

Context

One such use case I came across while working with the Amazon S3 connector provided by MuleSoft is that it requires an access key and an access secret (secret key) for its configuration. This requires the creation of Access keys on AWS, which is not a security best practice, as it hands long-lived credentials to your AWS account to whoever holds that key, with no control over their usage whatsoever. Even AWS warns about this risk while you are creating those Access keys, as seen below.

Warning from AWS, recommending leveraging IAM instead

This led me to configure the Amazon S3 connector to use the “Role based access” flavor, which does not require the creation of Access keys at all! :)

Let’s look at how to go about configuring that. We’ll start from the MuleSoft side of things and then come back to the AWS side, for the sake of completeness.

Explorations and Trekking on CloudHub 2.0

As a prerequisite, this entire blog requires the ability to create Private Spaces and then deploy your Mule applications inside that private space. In other words, it requires you to use the latest cloud-based deployment offering, “CloudHub 2.0”. Let’s see what’s new in there, then!

Creating a Private Space

After you have created the Private Space (test-ps) and also its Network (i.e., you have obtained both Inbound and Outbound Static IPs), switch over to the Advanced tab, where part of the magic lies!

To do that, follow the steps given below:

  1. From Anypoint Platform, select Runtime Manager → Private Spaces
  2. Click the name of the private space to manage (In our case, it’s named test-ps)
  3. Now select the Advanced tab, as shown below

On that page, scroll past all the details, keeping everything at their defaults, and stop right at the end of the page where it mentions AWS Service Role. You need to enable the Enable AWS Service Role check-box in order to obtain a Role Name (role ARN) for later incorporation on the AWS side.

  4. To do that, click Enable AWS Service Role
  5. Finally, click on Save Changes

Copy the role name obtained via the above steps and store it for later use.

The Role Name for our Private Space

There’s that magic you just configured. Congratulations, you mesmerizing magician!!

Rendezvous with AWS

Under this umbrella we need to know a bit about IAM Roles, the AWS STS service and Trust Policies. Brace yourself, for it’s going to be a wonderful ride of learning with many WOW moments!!

  • IAM Roles — An IAM role is an IAM identity that you can create in your account that has specific permissions. It is an AWS identity with permission policies that determine what the identity can and cannot do in your AWS account. A role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session. You can use roles to delegate access to users, applications (like those, in our MuleSoft environment!!), or services that don’t normally have access to your AWS resources.
  • AWS Security Token Service (STS) — It is a web service that enables you to request temporary, limited-privilege credentials for users. That’s exactly what we need for zero-trust!! We will be using the AssumeRole action of this STS service, that returns a set of temporary security credentials that you can use to access AWS resources. These temporary credentials consist of an access key ID, a secret access key, and a security token. When you create a role, you create two policies: “a role trust policy” that specifies “who” can assume the role, and “a permissions policy” that specifies “what” can be done with the role. You specify the trusted principal that is allowed to assume the role in the role trust policy. We will configure all of that in this blog, so hold on!
  • Trust Policy — It is a JSON policy document in which you define the principals that you trust to assume the role. A role trust policy is a required resource-based policy that is attached to a role in IAM. The principals that you can specify in the trust policy include users, roles, accounts, and services. A common use case is when you need to provide access to a role in account “A” to assume a role in Account “B”. To facilitate this, you add an entry in the role in account B’s trust policy that allows authenticated principals from account A to assume the role through the sts:AssumeRole API call.

With the above concepts in mind, we will proceed to create a new role (and name it s3-MuleSoft-role) in our own AWS account.

Role creation is a three-step process, as outlined below:

First, switch over to IAM → Roles after logging in to your AWS console. From there, choose Create role, and in the “Select trusted entity” step, under “Trusted entity type”, choose Custom trust policy and configure it as shown below.

Edit the “Custom trust policy” section

The Principal AWS ARN that you see configured above is the role ARN you copied earlier from your Private Space’s Advanced section after enabling Enable AWS Service Role.
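Though the screenshot tells the story, for reference the custom trust policy is, in essence, a JSON document along the lines of the sketch below; the account ID and role name in the Principal are placeholders, so paste in the exact service role ARN copied from your Private Space.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxxxxxxx:role/<private-space-service-role>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}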

In the next step, “Add permissions”, we need to add the AmazonS3ReadOnlyAccess permission policy to this role being created, as shown below

Adding the “AmazonS3ReadOnlyAccess” permission policy

This is the level of access granularity that IAM brings. Every other access to any other AWS resource in your AWS account will be prohibited with flair and poetic panache!
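For a sense of what that managed policy actually allows, AmazonS3ReadOnlyAccess boils down to read-and-list style actions on all S3 resources, roughly as in the simplified sketch below; this is an illustrative snapshot, not the authoritative policy text, and the managed policy in your console may also include S3 Object Lambda read actions.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}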

Finally, review and name your newly created role, as shown.

Name your newly created Role

Congratulations, on successfully creating your Role!

Note: The “Trusted entities” above will show the Account as the AWS Account ID from the role ARN that was obtained from the C2.0 Private Space’s Enable AWS Service Role feature.

Let’s review the two important tabs that this role has, Permissions and Trust relationships, which make all the magic possible.

Has only the “AmazonS3ReadOnlyAccess” permission attached
This is the “Trust Policy” with C2.0 Private Space’s AWS Service Role ARN, as the Principal

(Extra content!!) Now a question might bother you: for how long will my temporary credentials be valid, then?

Well, you can configure that duration, called Duration (DurationSeconds), to anywhere from 900 seconds (15 minutes) up to the “Maximum session duration” set for the role. Simply click Edit from your role’s Summary to modify that “Maximum session duration”, as shown below.
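To make the relationship concrete, here is a quick sketch of requesting a specific session length directly from STS with the AWS CLI; the role ARN and session name are placeholders, and STS rejects the call if --duration-seconds exceeds the role’s Maximum session duration.

aws sts assume-role \
  --role-arn arn:aws:iam::xxxxxxxxxxxx:role/s3-MuleSoft-role \
  --role-session-name duration-check \
  --duration-seconds 3600

The MuleSoft connector does the equivalent under the hood, as we will see from the STS request it issues towards the end of this blog.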

Back to familiar territory, our beloved Anypoint Studio!

Let’s have a quick glimpse at the simple flow I have for listing all the S3 buckets in my AWS account.

As you all must be quite aware by now, all we are interested in here is understanding how to configure the Amazon S3 connector using Role based access. First, let’s see the setup I have for the List Buckets operation.

Now comes the heart of this blog! Let’s check how that “Amazon_S3_Configuration” is configured.

Pay very close attention to how:

  • I have chosen the Role and provided its corresponding ARN (value shown later).
  • I have set the Default AWSCredentials Provider Chain to True. It is of utmost importance to set this enigmatic property to True for the entire process to work as desired.

Please note that the above Duration (DurationSeconds) parameter is entirely separate from the duration of a console session (SessionDuration) that you might request using the returned credentials.

All the other properties are left at their defaults.

Now, the trickiest part was figuring out that we still need to provide the “Access Key” and “Secret Key”; the interesting fact is that they no longer hold any relevance, and we just need to provide dummy values for them going forward!

Note: If “Default AWSCredentials Provider Chain” is not enabled, the Amazon S3 connector gets its credentials from “Access Key” and “Secret Key”. Once “Default AWSCredentials Provider Chain” is enabled, the connector obtains credentials in the order defined by the AWS SDK’s default credentials provider chain.

  • As we are using the “Role ARN” with “Default AWSCredentials Provider Chain” set to True, the Amazon S3 connector will first use the credentials from the default credentials provider chain together with the “Role ARN” to assume the role, and then use the temporary credentials obtained from that role (s3-MuleSoft-role) to access the S3 resources in your AWS account.
  • It still checks whether values for “Access Key” and “Secret Key” are present (as these two fields are marked as Required by the connector), but we can simply put dummy values in those boxes now, since the role will be assumed using the “Role ARN”, as per the order hierarchy.

The values for the ARN, Access Key and Secret Key placeholders are fetched from the configuration properties file and are stored as follows.

Please note that AWS_S3_ROLE holds the value of the ARN that was obtained after successfully creating the role (s3-MuleSoft-role) in the AWS IAM Roles section. Simply visit that role once again to get this value (arn:aws:iam::xxxxxxxxxxxx:role/s3-MuleSoft-role), in case you missed copying it earlier from the AWS console.
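For illustration, the entries in that properties file look something like the sketch below; only AWS_S3_ROLE is referenced by name in this blog, so the access-key and secret-key property names (and their dummy values) are hypothetical placeholders of my own making.

# AWS_S3_ROLE holds the ARN of the role created earlier
AWS_S3_ROLE=arn:aws:iam::xxxxxxxxxxxx:role/s3-MuleSoft-role
# hypothetical names; the dummy values only satisfy the connector's Required fields
AWS_S3_ACCESS_KEY=dummy-access-key
AWS_S3_SECRET_KEY=dummy-secret-key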

Be advised and prepared that “Test Connection...” will no longer work for this connector from your local Studio (as there are no AWS credentials configured there!), and running the application locally will give you an HTTP 500 error as follows, once invoked.

Error when invoked locally

Observing the behavior in CloudHub 2.0 Private Space

We will now deploy the application JAR to the C2.0 Private Space, test-ps, that we have already configured for this very purpose.

Let’s hit the same endpoint /s3List but from this CloudHub host and see what we get.
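For instance, from any terminal (the host below is just a placeholder for your application's public endpoint in the Private Space):

curl -i https://<your-app-host>/s3List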

Voila! We obtain exactly what we were looking for, with an HTTP 200 success :)

The CloudHub 2.0 logs also show this via the Logger component, which prints the payload.

Endnotes, Considerations and Add-Ons

This section is devoted to making you aware of the expectations, limitations and ways forward from the process discussed.

  • The above procedure is supported only with Amazon S3 Connector versions 5.8.4 and below, or 6.2.0 and above, which support the Default AWSCredentials Provider Chain property.
  • The Role type is supported only for standalone or Runtime Fabric deployments and CloudHub 2.0 applications in Private Space deployments. The Role type is not supported for CloudHub 1.0 deployments.
  • The “Maximum session duration” setting determines the maximum session length that you can request when you get the role credentials. For example, when you use the AssumeRole* API operations to assume a role, you can specify a session length using the Duration (DurationSeconds) parameter.
  • If you invoke the /s3List endpoint after the session duration has expired, you’ll see an HTTP 400 Bad Request with DEBUG logs such as <Error><Code>ExpiredToken</Code><Message>The provided token has expired.</Message> … </Error>. However, the Amazon_S3_Configuration REQUESTER immediately and elegantly makes another POST request with the following parameters

Action=AssumeRole&
Version=2011-06-15&
RoleArn=arn%3Aaws%3Aiam%3A%3Axxxxxxxxxxx%3Arole%2Fs3-MuleSoft-role&
RoleSessionName=mule-s3-connector-role-cf4f79db-cb59-41e7-9b4e-9caa8e64f070&
DurationSeconds=3600&
Tags=

  • With the above request body, the sts.us-east-1.amazonaws.com service returns another set of temporary credentials in the response with their associated Expiration time, as shown below

<Credentials>
<AccessKeyId>AM7IAPWO4VESJCGYJACLR5Z</AccessKeyId>
<SecretAccessKey>D8BrUj5cOwEoNfc57GqJS/+tn6zroO</SecretAccessKey>
<SessionToken>IQoJb3JpZ2luX2 … WUI8HE3WWPffFw==</SessionToken>
<Expiration>2023-09-04T17:40:37Z</Expiration>
</Credentials>

This enables us to seamlessly access the S3 service with the “principle of least privilege” applied at its core and, over and above that, to get the same feeling as though we were using permanent long-term credentials while doing it!

That’s the end of our magic show. Thank you for stopping by and I earnestly hope that you learnt a useful trick or two! Go use them now and get mesmerized.

Feel free to reach out to me at my LinkedIn profile for more insights, and please post your queries.
