Points to Note When Using AWS S3 Pre-Signed URLs #aws #s3

Manabu Uchida
10 min read · Aug 25, 2019

Summary

Everybody loves AWS S3. One of its great features is the Pre-Signed URL.

It is a very popular feature; the official Amazon CloudFront documentation also mentions Signed URLs.

In this post, I will share some points to be aware of when using this S3 feature with AWS Lambda for Python.

For the original, please refer to the following Japanese blog.

Checking the Boto3 documentation

First, I checked the documentation of the AWS SDK for Python (Boto3) used this time.

The document describes the method as follows:

generate_presigned_url(ClientMethod, Params=None, ExpiresIn=3600, HttpMethod=None)

Specify the operation to permit with the Pre-Signed URL, such as get_object or put_object, in the ClientMethod argument, and specify the bucket name and key name in the Params argument. The method is self-contained: it returns the Pre-Signed URL as its return value.
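As a minimal sketch of how these arguments fit together (the bucket and key names below are hypothetical), an upload URL could be requested like this:

import boto3

s3 = boto3.client('s3')

# Sketch: presign an upload to a hypothetical bucket/key.
upload_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={'Bucket': 'example-bucket', 'Key': 'uploads/report.csv'},
    ExpiresIn=3600  # one hour
)

print(upload_url)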

Usually you can use it without any particular concern. However, be careful when the Pre-Signed URL needs to stay valid for a few days rather than a short time, and be careful when creating the Pre-Signed URL with AWS Lambda for Python.

On top of that, I also had to consider the AWS signature version used with Amazon S3, and it took a fair amount of trial and error to reach a solution, so I will summarize the results here.

Point.1: About execution permissions

When using AWS Lambda, you typically run with an IAM role to which an IAM policy granting the minimum required privileges has been attached.

I also use my favorite Chalice (Python Serverless Microframework for AWS). Please check its documentation for details.

Of course I used Chalice's automatic IAM policy generation, one of its benefits, so I assumed it would be fine to simply call the generate_presigned_url(ClientMethod, Params=None, ExpiresIn=3600, HttpMethod=None) function.

However, the auto-generated IAM policy did not take into account the method specified in ClientMethod, so the corresponding operation was not actually permitted.

I expected that calling generate_presigned_url in this state would fail due to insufficient privileges, but in reality the call completed without any problem and a signed URL was returned.

Only when you access that signed URL do you get an error message saying you are not authorized, and the operation fails.

In that case, it would be better if an error were raised at the stage of issuing the signed URL. The current behavior is a little disappointing.

Therefore, if you obtain a signed URL with generate_presigned_url and cannot access it because of an error such as Access Denied, first make sure the IAM policy of the principal issuing the signed URL actually has the necessary permissions. It is a basic point, but easy to overlook.

Because I relied on automatic IAM policy generation this time, it took me much longer to notice this and I wasted time. It is such a simple point that it is easy to miss; please be careful not to make the same mistake.
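As a hedged illustration (not the exact policy from this project), the kind of statement the Lambda role's policy needs to include looks roughly like the following, written here as a Python dict; the bucket name is hypothetical. With Chalice, one way to supply it is to turn off automatic policy generation and provide a hand-written policy file.

import json

# Sketch of the extra IAM statement the issuing role needs; without it the
# presigned URL is still generated but returns Access Denied when used.
# 'example-bucket' is a hypothetical bucket name.
extra_statement = {
    "Effect": "Allow",
    "Action": ["s3:GetObject"],  # match the ClientMethod you sign for
    "Resource": "arn:aws:s3:::example-bucket/*",
}

print(json.dumps(extra_statement, indent=2))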

Point.2: About expiration date

With Point.1 addressed by explicitly adding the IAM policy, the signed URLs could now be accessed. However, we then faced a new problem: the signed URLs expired sooner than the specified expiration date.

At this stage I had specified ExpiresIn=864000 (10 days). The Boto3 documentation mentioned no specific restriction, so I was not sure why.

Searching the Web, it turned out to be a common question, and there is the following official documentation.

AWS Identity and Access Management (IAM) instance profile: Valid up to 6 hours
AWS Security Token Service (STS): Valid up to 36 hours when signed with permanent credentials, such as the credentials of the AWS account root user or an IAM user
IAM user: Valid up to 7 days when using AWS Signature Version 4
https://aws.amazon.com/premiumsupport/knowledge-center/presigned-url-s3-bucket-expiration/?nc1=h_ls

From the above document, we learned that the maximum expiration is 7 days. So I tried ExpiresIn=604800, but unfortunately the URL still expired before the specified time.

Reading the document carefully, you can see that the maximum expiration is determined by the credentials you are using: the signed URL stops working once those credentials expire, no matter what you pass to ExpiresIn.

Since AWS Lambda runs with an IAM role, temporary credentials are obtained from AWS STS via the AssumeRole action, and the request is signed with that temporary token.

Therefore, as described in the document above, when STS credentials are used the URL expires after 36 hours at most.
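A quick way to see this for yourself, as a sketch you could drop into the Lambda handler, is to check whether the credentials Boto3 resolves include a session token:

import boto3

# Sketch: confirm that the code is signing with temporary STS credentials.
creds = boto3.Session().get_credentials().get_frozen_credentials()

# Inside Lambda (running under an IAM role), creds.token is a non-empty
# session token; a presigned URL cannot outlive the credentials that signed it.
print("temporary credentials in use:", bool(creds.token))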

Next, check the AssumeRole documentation:

By default, the temporary security credentials created by AssumeRole last for one hour. However, you can use the optional DurationSeconds parameter to specify the duration of your session. You can provide a value from 900 seconds (15 minutes) up to the maximum session duration setting for the role. This setting can have a value from 1 hour to 12 hours. To learn how to view the maximum value for your role, see View the Maximum Session Duration Setting for a Role in the IAM User Guide. The maximum session duration limit applies when you use the AssumeRole API operations or the assume-role CLI commands. However, the limit does not apply when you use those operations to create a console URL. For more information, see Using IAM Roles in the IAM User Guide.
https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html

Given the description above, the role session lasts at most 12 hours, so in this case the signed URL also stopped working when that session expired.
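As a simplified model of this behavior (my reading of the documents above, not an official formula), the URL effectively stops working at whichever comes first, ExpiresIn or the expiry of the credentials that signed it:

# Simplification: effective lifetime = min(ExpiresIn, credential lifetime).
expires_in = 604800            # 7 days requested
role_session_max = 12 * 3600   # at most 12 hours for an assumed role

effective_lifetime = min(expires_in, role_session_max)
print(effective_lifetime / 3600, "hours")  # -> 12.0 hours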

This time I wanted the longest possible expiration, so I checked the document again…

To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials (the access key and secret access key) to the SDK that you're using. Then, generate a presigned URL using AWS Signature Version 4.
https://aws.amazon.com/premiumsupport/knowledge-center/presigned-url-s3-bucket-expiration/?nc1=h_ls

As described in the document, you need to use IAM user credentials, which means generating the signed URL in AWS Lambda with an explicitly specified access key and secret access key.

Since hard-coding an access key and secret access key in the AWS Lambda function source code is bad for security, we stored them in the AWS Systems Manager Parameter Store this time.
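As a sketch of that approach, assuming the key pair is stored as two SecureString parameters (the parameter names below are hypothetical), the S3 client could be built like this:

import boto3
from botocore.client import Config

# Sketch: fetch IAM user credentials from the Parameter Store and build an
# S3 client that signs with them using SigV4.
# The parameter names '/myapp/s3_access_key_id' and
# '/myapp/s3_secret_access_key' are hypothetical.
ssm = boto3.client('ssm')

access_key_id = ssm.get_parameter(
    Name='/myapp/s3_access_key_id', WithDecryption=True
)['Parameter']['Value']
secret_access_key = ssm.get_parameter(
    Name='/myapp/s3_secret_access_key', WithDecryption=True
)['Parameter']['Value']

s3 = boto3.client(
    's3',
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
    config=Config(signature_version='s3v4'),
)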

From a user's point of view, it would be nice to be able to extend the STS expiration to 7 days, or to create a URL without embedding this credential information in the URL string. Currently neither is possible, so I look forward to future improvements.

For now, if you use the snippet from the page quoted above as it is, you can generate a signed URL valid for 7 days without any problem.

import boto3
from botocore.client import Config

# Get the service client with sigv4 configured
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

# Generate the URL to get 'key-name' from 'bucket-name'
# URL expires in 604800 seconds (seven days)
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'bucket-name',
        'Key': 'key-name'
    },
    ExpiresIn=604800
)

print(url)

Therefore, if a signed URL expires before its specified expiration date, it is worth confirming that the credentials used to sign it are not a temporary token.

In addition, check the signature version, because SigV4 (AWS Signature Version 4) is required. The signature version is covered in detail in the next section.

This time, there were two problems: using a temporary token acquired with an IAM role and signing with SigV2 (AWS Signature Version 2).

At first I thought the only problem was not using SigV4, so it took extra time to realize there was also a credentials problem.

Please be careful not to fall into the same trap.

In the end, we contacted AWS Support to solve the problem. AWS Support is excellent, and I felt the issue would have been resolved faster had we contacted them earlier.

Thank you, AWS Support.

Point.3: About signature version

Going forward, SigV4 will be the standard; the AWS documentation states that SigV2 is being deprecated.

Signature Version 2 is being turned off (deprecated) in Amazon S3. Amazon S3 will then only accept API requests that are signed using Signature Version 4. This section provides answers to common questions regarding the end of support for Signature Version 2.
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#UsingAWSSDK-sig2-deprecation

In the above document, we checked the status of Boto3 (Python), the SDK used this time.

It says that in addition to upgrading the Boto3 version, you also need to change the client-side code. That change is exactly what the code snippet introduced in Point.2 shows.

Reprinted below.

import boto3
from botocore.client import Config

# Get the service client with sigv4 configured
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

This is because you need to explicitly declare the use of SigV4 when creating the S3 client.

I believe the same applies to any Boto3 code that generates presigned URLs.

We confirmed which signature version was actually being used by checking the S3 server access logs; a recent service update added the signature version to those logs.

Note that S3 server access logs are delivered on a best-effort basis and often arrive after about an hour, so keep that in mind when checking.

This time we used Datadog, a third-party SaaS, to check the logs. With log integration set up, searching was easy, and it was very convenient to review the logs alongside other metrics of the AWS environment in a single service.

In short, we found that the signature version must be specified explicitly in the Boto3 (Python) program in order to use SigV4; otherwise SigV2 was used.

Since this cannot be handled simply by updating the SDK, the program that accesses S3 needs to be modified.

It would be easier if the SDK's default could be changed to SigV4 in the future, but that seems unlikely to happen because the impact would be large.

The official AWS blog states that existing behavior will be preserved for existing buckets.

Revised Plan – Any new buckets created after June 24, 2020 will not support SigV2 signed requests, although existing buckets will continue to support SigV2 while we work with customers to move off this older request signing method.
https://aws.amazon.com/jp/blogs/aws/amazon-s3-update-sigv2-deprecation-period-extended-modified/

Finally

  1. Amazon Simple Storage Service (Amazon S3) is not simple at all.
  2. AWS Support is great, so use it early.
  3. I did not know that SigV2 was the default. Check the specifications properly.
  4. I would like a workaround to extend the STS expiration limit.
  5. Being able to issue URLs without the required permissions feels like a flawed specification; I would prefer an error at issue time.
  6. It would be nice if SigV4 guidance were added to the Boto3 documentation.
  7. Datadog's log integration made searching easy, and being able to check other AWS metrics in the same place was convenient.
  8. Not reading the documentation carefully costs extra time. Read the documentation properly.

That’s all.


The remarks in this post are not affiliated with any company or organization; they are my personal opinions.