Securing EFS File Systems with Terraform: IAM
As the official AWS documentation states, managing access to resources in AWS is done by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines its permissions.
Something I find lacking in many AWS services is the ability to define access policies - it is not a universal attribute, and that's a shame. Services like S3 and SNS do support this feature and allow for a higher level of security. Luckily, AWS seems to have noticed - and has introduced support for access policies in more services.
So What? AWS Is Secure!
AWS operates with the Shared Responsibility Model in mind, where Security and Compliance is a shared responsibility between AWS and the customer.
In cases where access policies are not available, the calling resource can define arbitrary permissions and interact with the policy-less resource without proper restriction. This may expose your resources to unintended access, and it means extra steps need to be taken when creating such resources.
Prior to AWS adding support for access policies to EFS file systems, any machine in the AWS account the file system was created in could mount it and access it insecurely by default - no IAM role needed.
Now, we can define access policies that can restrict access to certain IAM identities. This allows creating specific policies based on technical scenarios like a read only policy, or forcing resources to secure transport with encryption in transit. Callers will now have to have a corresponding policy attached to their IAM Identity that explicitly allows them to perform actions such as mounting and writing to the file system, otherwise these actions will be rejected by the file system.
Terraform In Action!
The following is a detailed example in Terraform; the extensive level of detail is meant to give a full picture of how to properly configure your environment for secure access to an EFS file system.
A VPC with the enable_dns_hostnames setting enabled is necessary for this setup to work; the example VPC includes a single subnet and security group.
The security group defines ingress/egress rules for port 2049 (standard NFS port) to allow access to an EFS file system from any resource that is also assigned this security group.
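A minimal sketch of that network layer follows; the resource names, CIDR ranges, and availability zone are illustrative placeholders, so adjust them to your environment:

```hcl
resource "aws_vpc" "efs_demo" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true # required for resolving the EFS DNS name
}

resource "aws_subnet" "efs_demo" {
  vpc_id            = aws_vpc.efs_demo.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-west-1a"
}

resource "aws_security_group" "efs_demo" {
  name   = "efs-demo-sg"
  vpc_id = aws_vpc.efs_demo.id

  # Allow NFS traffic (port 2049) between resources
  # that are assigned this same security group
  ingress {
    from_port = 2049
    to_port   = 2049
    protocol  = "tcp"
    self      = true
  }

  egress {
    from_port = 2049
    to_port   = 2049
    protocol  = "tcp"
    self      = true
  }
}
```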
Note that I’ve omitted any network resources that allow access to the internet - you would probably require those in your setup, as the EC2 bootstrap script needs to download some packages from a Linux repository.
Next are the EFS resources: a file system with a basic policy that allows mounting and writing, but requires the use of TLS for encryption in transit.
I created a single mount target in the same subnet where I'll also provision the EC2 that mounts the file system.
In a real use case, you would want to create several of these mount targets in different Availability Zones for high availability.
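A sketch of those EFS resources is below. The subnet and security group references (aws_subnet.efs_demo, aws_security_group.efs_demo) are assumed names for the network resources described earlier, and the policy shown is one common way to express "allow mount and write, but only over TLS" - verify it against your own requirements before use:

```hcl
resource "aws_efs_file_system" "demo" {
  encrypted = true # encryption at rest; transit encryption is enforced below
}

# File system policy: allow mounting and writing, but only
# when the connection uses encryption in transit (TLS)
resource "aws_efs_file_system_policy" "demo" {
  file_system_id = aws_efs_file_system.demo.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowMountAndWriteOverTLS"
        Effect    = "Allow"
        Principal = { AWS = "*" }
        Action = [
          "elasticfilesystem:ClientMount",
          "elasticfilesystem:ClientWrite"
        ]
        Resource = aws_efs_file_system.demo.arn
        Condition = {
          Bool = { "aws:SecureTransport" = "true" }
        }
      }
    ]
  })
}

# A single mount target; in production, create one per AZ
resource "aws_efs_mount_target" "demo" {
  file_system_id  = aws_efs_file_system.demo.id
  subnet_id       = aws_subnet.efs_demo.id
  security_groups = [aws_security_group.efs_demo.id]
}
```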
Finally, provision an EC2 with an appropriate IAM role that allows mounting the file system (Securely!):
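The sketch below assumes the file system and network resources are named aws_efs_file_system.demo, aws_efs_mount_target.demo, aws_subnet.efs_demo, and aws_security_group.efs_demo; the AMI ID is a placeholder and the script path is hypothetical:

```hcl
# Role the instance assumes via its instance profile
resource "aws_iam_role" "efs_client" {
  name = "efs-demo-client"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Identity policy explicitly allowing mount and write
# on this specific file system only
resource "aws_iam_role_policy" "efs_client" {
  name = "efs-demo-client-policy"
  role = aws_iam_role.efs_client.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ]
      Resource = aws_efs_file_system.demo.arn
    }]
  })
}

resource "aws_iam_instance_profile" "efs_client" {
  name = "efs-demo-client"
  role = aws_iam_role.efs_client.name
}

resource "aws_instance" "demo" {
  ami                    = "ami-0123456789abcdef0" # placeholder - use a real AMI
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.efs_demo.id
  vpc_security_group_ids = [aws_security_group.efs_demo.id]
  iam_instance_profile   = aws_iam_instance_profile.efs_client.name

  # Render the bootstrap script, passing the FS ID taken
  # from the mount target resource (see note below)
  user_data = templatefile("${path.module}/mount_efs.sh.tpl", {
    file_system_id = aws_efs_mount_target.demo.file_system_id
  })
}
```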
When passing the File System ID to the bootstrapping script, make sure to use either the mount target DNS name or the FS ID attribute from the mount target resource, as the file system's DNS name takes time to propagate. More info on mounting EFS can be found here.
When bootstrapping the instance to mount the file system, use the following script. It will install the efs-utils package and mount the file system.
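A sketch of that script, assuming an Amazon Linux instance and a Terraform-rendered template where ${file_system_id} is substituted in:

```sh
#!/bin/bash
# Install the EFS mount helper
yum install -y amazon-efs-utils

mkdir -p /mnt/efs

# "tls" tunnels NFS traffic through stunnel, satisfying the
# policy's encryption-in-transit requirement; "iam" authenticates
# with the instance role's credentials
mount -t efs -o tls,iam "${file_system_id}":/ /mnt/efs
```

Without the tls option the file system policy will reject the mount, and without the iam option the client connects anonymously rather than as the instance role.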
Is That It?
Another article on EFS Access Points is due as a continuation to this one — this will give another layer of security. Meanwhile, feel free to reach out if you have any questions.