AWS : Solutions Architect Associate Exam — Part 3

A Quick Review to Design Secure Applications and Architectures

Pisit J.
Sum up As A Service
10 min read · Jul 5, 2021

--

Part 3 : Design Secure Applications and Architectures (24% of exam)

  • Design secure access to AWS resources.
  • Design secure Application tiers.
  • Design appropriate Data Security.

1. EC2 instances with IPv4 addresses are launched in a private subnet.

Which AWS service can provide a highly available solution that safely fetches software patches from the Internet while preventing outside networks from initiating a connection?

NAT Gateway — an AWS-managed NAT service with high availability and bandwidth; it supports IPv4.
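
A minimal boto3 sketch of this setup, assuming an existing public subnet, an allocated Elastic IP, and the private subnet's route table (all IDs below are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create the NAT gateway in a public subnet, using an allocated Elastic IP
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public",         # placeholder public subnet ID
    AllocationId="eipalloc-0example",  # placeholder Elastic IP allocation ID
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route the private subnet's Internet-bound IPv4 traffic through the NAT gateway;
# the gateway only allows return traffic for connections initiated from inside
ec2.create_route(
    RouteTableId="rtb-0private",       # placeholder private route table ID
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)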

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html

2. A company launched an EC2 instance in a private subnet that uses IPv6. Because the server contains financial data, the system must be secured against unauthorized access and meet regulatory compliance requirements.

In this scenario, which VPC feature allows the EC2 instance to communicate with the Internet but prevents inbound traffic?

Egress-only Internet Gateway — a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the internet, and prevents the internet from initiating an IPv6 connection with your instances.
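
A rough boto3 sketch, assuming the VPC and the private subnet's route table already exist (IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create the egress-only Internet gateway for the VPC
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0example")
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Send the subnet's outbound IPv6 traffic through it; the gateway never
# accepts connections initiated from the Internet
ec2.create_route(
    RouteTableId="rtb-0private",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)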

https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html

3. A Solutions Architect created a new IAM user with default settings using the AWS CLI. The user is intended to send API requests to Amazon S3, DynamoDB, Lambda, and other AWS resources in the company’s cloud infrastructure.

Which of the following must be done to allow the user to make API calls to the AWS resources?

Create an access key for the IAM user, along with the necessary permissions.

When you use the AWS Management Console to create a user, you must choose to include at least a console password or access keys. But a new IAM user created using the AWS CLI or AWS API has no credentials at all. You must create the type of credentials for an IAM user based on your user’s needs.

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). When you create an access key, IAM returns the access key ID and secret access key. You should save these in a secure location and give them to the user.
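
A short boto3 sketch of creating an access key for a hypothetical user named api-user:

import boto3

iam = boto3.client("iam")

# Create programmatic credentials for the user
resp = iam.create_access_key(UserName="api-user")
access_key_id = resp["AccessKey"]["AccessKeyId"]
secret_access_key = resp["AccessKey"]["SecretAccessKey"]

# The secret access key is returned only once; store it securely
# and hand both values to the user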

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html#id_users_creds

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html

5. A company has UAT and production EC2 instances running on AWS. They want to ensure that employees responsible for the UAT instances do not have access to the production instances, to minimize security risks.

What would be a possible method to solve this security concern?

Define tags on the UAT and production servers and add a condition to the IAM policy that allows access only to resources with specific tags.

Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type — you can quickly identify a specific resource based on the tags you’ve assigned to it.

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html

By default, IAM users don’t have permission to create or modify Amazon EC2 resources, or perform tasks using the Amazon EC2 API. (This means that they also can’t do so using the Amazon EC2 console or CLI.) To allow IAM users to create or modify resources and perform tasks, you must create IAM policies that grant IAM users permission to use the specific resources and API actions they’ll need, and then attach those policies to the IAM users or groups that require those permissions.
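
A sketch of such a tag-based policy, assuming the UAT instances carry the tag Environment=UAT (the policy name, actions, and tag values are illustrative):

import json

import boto3

iam = boto3.client("iam")

# Allow start/stop/reboot only on instances tagged Environment=UAT
uat_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:StartInstances",
            "ec2:StopInstances",
            "ec2:RebootInstances",
        ],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "UAT"}},
    }],
}

# Attach the resulting managed policy to the UAT group or users
iam.create_policy(
    PolicyName="uat-instances-only",
    PolicyDocument=json.dumps(uat_only_policy),
)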

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-policies-for-amazon-ec2.html

6. A company has several unencrypted EBS snapshots in their VPC. The Solutions Architect must ensure that all of the new EBS volumes restored from the unencrypted snapshots are automatically encrypted.

What should be done to accomplish this requirement?

Enable the EBS Encryption By Default.

Note that enabling encryption by default has no effect on existing EBS volumes or snapshots. However, any new volumes you create afterwards, including volumes restored from unencrypted snapshots, are automatically encrypted.
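
A minimal boto3 sketch of turning this on for the current Region:

import boto3

ec2 = boto3.client("ec2")

# Opt the account in to EBS encryption by default (a per-Region setting)
ec2.enable_ebs_encryption_by_default()

# New volumes, including ones restored from unencrypted snapshots,
# are now encrypted with the default KMS key for EBS
print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])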

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default

7. An application is hosted on an EC2 instance with multiple EBS volumes attached. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes.

What is true about encrypted EBS volumes?

When you create an encrypted EBS volume and attach it to an EC2 instance, the following types of data are encrypted:

  • Data-at-rest inside the volume
  • Data-in-transit between the volume and the instance
  • Snapshots created from the volume
  • Volumes created from those snapshots

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#how-ebs-encryption-works

8. A company uses AWS CloudHSM, a hardware security module service in AWS, for secure key storage for their web applications. A support staff member mistakenly attempted to log in as the administrator three times using an invalid password. This caused the CloudHSM to be zeroized, which means that the encryption keys on it have been wiped. Unfortunately, you did not have a copy of the keys stored anywhere else.

How can you obtain a copy of the keys that you have stored on CloudHSM?

You cannot. With CloudHSM, Amazon does not have access to your keys or credentials and therefore has no way to recover your keys if you lose your credentials.

Amazon strongly recommends that you use two or more HSMs in separate Availability Zones in any production CloudHSM Cluster to avoid loss of cryptographic keys.

https://aws.amazon.com/cloudhsm/faqs/

9. A company needs secure access to its Amazon RDS for MySQL database that is used by multiple applications. Each IAM user must use a short-lived authentication token to connect to the database.

What is the most suitable solution in this scenario?

Use IAM DB Authentication.

With IAM DB Authentication, you don’t need to use a password when you connect to a DB instance. Instead, you use an authentication token — a unique string of characters that Amazon RDS generates on request. Each token has a lifetime of 15 minutes.

IAM DB Authentication provides the following benefits:

  • Network traffic to and from the database is encrypted using SSL/TLS.
  • You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
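
A small boto3 sketch of requesting an authentication token (the endpoint, port, and database user are placeholders):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Generate a short-lived (15-minute) token instead of a database password
token = rds.generate_db_auth_token(
    DBHostname="mydb.example.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_user",   # DB user created for IAM authentication
    Region="us-east-1",
)

# Pass the token as the password in your MySQL client, with SSL/TLS enabled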

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html

10. The Solutions Architect is building a web filtering solution that allows or blocks web requests based on the country that the requests originate from. However, the solution should still allow specific IP addresses from that country.

Which AWS services should the Architect implement to satisfy this requirement?

AWS WAF — Web Application Firewall. A geographic match rule can block requests from the specified country, while an IP set match can still allow the specific IP addresses from that country.
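
A sketch of what such a WAFv2 rule could look like: block requests whose country code matches, unless the source IP is in an allow-listed IP set (the country code and IP set ARN are placeholders):

# Block traffic from one country, except for allow-listed IP addresses
rule = {
    "Name": "block-country-except-allowlist",
    "Priority": 0,
    "Action": {"Block": {}},
    "Statement": {
        "AndStatement": {
            "Statements": [
                {"GeoMatchStatement": {"CountryCodes": ["CN"]}},
                {"NotStatement": {"Statement": {
                    "IPSetReferenceStatement": {
                        "ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/ipset/allowed-ips/placeholder"
                    }
                }}},
            ]
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "blockCountryExceptAllowlist",
    },
}
# Pass this dict in the Rules list of the wafv2 create_web_acl / update_web_acl calls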

https://docs.aws.amazon.com/waf/latest/developerguide/how-aws-waf-works.html

11. A company policy requires IAM user passwords to have a minimum length of 12 characters. After a random inspection, you found out that there are still employees who do not follow the policy.

Which AWS service can help you automatically evaluate whether the account’s current password policy complies with the company policy?

AWS Config.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.
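
One way to sketch this with boto3, using the AWS-managed IAM_PASSWORD_POLICY rule with a minimum length of 12 (the rule name is illustrative, and AWS Config must already be recording in the account):

import json

import boto3

config = boto3.client("config")

# Managed rule that checks the account password policy against the parameters
config.put_config_rule(ConfigRule={
    "ConfigRuleName": "iam-password-policy-min-12",
    "Source": {"Owner": "AWS", "SourceIdentifier": "IAM_PASSWORD_POLICY"},
    "InputParameters": json.dumps({"MinimumPasswordLength": "12"}),
})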

https://aws.amazon.com/config/

12. A financial services company wants to identify any sensitive data stored in its Amazon S3 buckets. The company also wants to monitor and protect all data stored in S3 against any malicious activity.

As a solutions architect, what AWS services would you recommend?

Use Amazon Macie to identify any sensitive data — Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data. Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names, addresses, and credit card numbers.

https://aws.amazon.com/macie/

And use Amazon GuardDuty to monitor any malicious activity — GuardDuty analyzes continuous streams of metadata generated from your account and network activity found in AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.
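
A minimal boto3 sketch of enabling both services in the current Region:

import boto3

# Turn on Macie for sensitive-data discovery in S3 (classification jobs for
# specific buckets can be created afterwards)
boto3.client("macie2").enable_macie()

# Turn on GuardDuty threat detection for the account
boto3.client("guardduty").create_detector(Enable=True)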

https://aws.amazon.com/guardduty/

13. A company wants to track and log every access request to their S3 buckets, including the requester, bucket name, request time, request action, referrer, turn-around time, and error code information. The solution should also provide more visibility into the object-level operations of the bucket.

Which is the best solution among the following options that can satisfy the requirement?

Enable server access logging for all required Amazon S3 buckets.

Note that AWS CloudTrail logs provide a record of actions taken by a user, role, or an AWS service in Amazon S3, while Amazon S3 server access logs provide more detailed records for the requests that are made to an S3 bucket, including the referrer and turn-around time fields, which are not available in AWS CloudTrail.
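
A short boto3 sketch of enabling server access logging (bucket names are placeholders; the target bucket must grant log-delivery permissions):

import boto3

s3 = boto3.client("s3")

# Deliver access logs for "source-bucket" to "log-bucket" under the "logs/" prefix
s3.put_bucket_logging(
    Bucket="source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {"TargetBucket": "log-bucket", "TargetPrefix": "logs/"}
    },
)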

https://docs.aws.amazon.com/AmazonS3/latest/userguide/LogFormat.html#log-record-fields

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-examples.html

14. The Solutions Architect is assigned to choose authentication/authorization mechanisms for API Gateway. The company would prefer a solution that offers built-in user management.

What AWS services would you suggest?

Amazon Cognito User Pools.

  • Sign-Up and Sign-In services.
  • Built-in, customizable web UI to sign in users.
  • Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple, as well as sign-in with SAML identity providers.
  • User directory management and user profiles.
  • Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.
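
A rough boto3 sketch of wiring a user pool to an API Gateway REST API as a Cognito authorizer (the pool name, client name, and API ID are placeholders):

import boto3

cognito = boto3.client("cognito-idp")
apigw = boto3.client("apigateway")

# Create the user pool and an app client for the web or mobile app
pool = cognito.create_user_pool(PoolName="app-users")["UserPool"]
cognito.create_user_pool_client(UserPoolId=pool["Id"], ClientName="web-client")

# Attach the pool to a REST API as a COGNITO_USER_POOLS authorizer
apigw.create_authorizer(
    restApiId="abc123",
    name="cognito-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[pool["Arn"]],
    identitySource="method.request.header.Authorization",
)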

https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/identity-and-access-management.html

https://aws.amazon.com/cognito/

15. A company needs to integrate the Lightweight Directory Access Protocol (LDAP) directory service from its on-premises data center with its AWS VPC using IAM. The identity store currently in use is not compatible with SAML.

What is the most suitable approach to implement the integration?

You can build a custom identity broker application to perform this function. The broker authenticates users against the LDAP directory, requests temporary security credentials for them from AWS, and then provides those credentials to the users so they can access AWS resources.
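
A sketch of the broker's core step, assuming the LDAP check has already succeeded and a dedicated IAM role exists for federated users (the role ARN and session name are placeholders):

import boto3

sts = boto3.client("sts")

# Exchange the authenticated identity for temporary AWS credentials
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ldap-federated-user",
    RoleSessionName="jdoe",   # the authenticated LDAP user name
    DurationSeconds=3600,
)
creds = resp["Credentials"]   # AccessKeyId, SecretAccessKey, SessionToken
# Hand these short-lived credentials back to the user or calling application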

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html

16. A company has clients all across the globe that access product files stored in several S3 buckets, each of which sits behind its own CloudFront web distribution. They currently want to deliver their content to a specific client, and they need to make sure that only that client can access the data.

Currently, all of their clients can access their S3 buckets directly using an S3 URL or through their CloudFront distribution. The Solutions Architect must serve the private content via CloudFront only, to secure the distribution of files.

Which combination of actions should the Architect implement to meet the above requirements?

Require that your users access your private content by using special CloudFront signed URLs or signed cookies.

Require that your users access your Amazon S3 content by using CloudFront URLs, not Amazon S3 URLs, by setting up an origin access identity (OAI) for your Amazon S3 bucket.
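
A sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner, assuming you have a CloudFront key pair (the key ID, key file, and URL are placeholders):

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign the CloudFront policy with the key pair's private key
    with open("private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/product.zip",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(url)   # share this time-limited URL with the specific client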

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html

17. A travel photo sharing website is using Amazon S3 to serve high-quality photos to its visitors. After a few days, you found out that other travel websites are linking to and using your photos. This has resulted in financial losses for your business.

What is the most effective method to mitigate this issue?

Configure your S3 bucket to remove public read access and use S3 pre-signed URLs.

In Amazon S3, all objects are private by default. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects.
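
A one-call boto3 sketch of generating such a pre-signed URL (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

# Time-limited download link for a single object
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "travel-photos", "Key": "beach.jpg"},
    ExpiresIn=3600,   # the link expires after one hour
)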

https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html

18. A company has 3 DevOps engineers handling its software development and infrastructure management processes. One of the engineers accidentally deleted a file hosted in Amazon S3, which caused a disruption of service.

What can the DevOps engineers do to prevent this from happening again?

Enable S3 versioning and Multi-Factor Authentication on Delete.

Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.

If the MFA (Multi-Factor Authentication) on Delete is enabled, it requires additional authentication for either of the following operations:

  • Change the versioning state of your bucket
  • Permanently delete an object version
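
A minimal boto3 sketch of enabling both settings; note that MFA Delete can only be enabled with the root account's credentials and MFA device (all values are placeholders):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="deployment-artifacts",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    # MFA = "<MFA device serial number or ARN> <current MFA code>"
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)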

http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

19. You need to improve the data security of Amazon ElastiCache for Redis by requiring users to enter a password before they are granted permission to execute Redis commands.

What should you do to meet the above requirement?

Authenticate the users using Redis AUTH by creating a new Redis cluster with both the --transit-encryption-enabled and --auth-token parameters set.
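
A boto3 sketch of the equivalent API call (IDs, node type, and token are placeholders):

import boto3

elasticache = boto3.client("elasticache")

# New Redis replication group with in-transit encryption and an AUTH token
elasticache.create_replication_group(
    ReplicationGroupId="secure-redis",
    ReplicationGroupDescription="Redis with AUTH",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    TransitEncryptionEnabled=True,               # --transit-encryption-enabled
    AuthToken="a-strong-token-16-to-128-chars",  # --auth-token
)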

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/encryption.html

20. An IT company wants to review its security best practices after an incident was reported in which a new developer on the team was assigned full access to DynamoDB.

Which is the most effective way to prevent such incidents from recurring?

Use an IAM permissions boundary to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage.

The effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined.
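
A short boto3 sketch of creating a user with a permissions boundary attached (the boundary policy ARN and user name are placeholders):

import boto3

iam = boto3.client("iam")

# Even if this user is later granted broad permissions, the effective
# permissions can never exceed the boundary policy
iam.create_user(
    UserName="new-developer",
    PermissionsBoundary="arn:aws:iam::123456789012:policy/developer-boundary",
)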

https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/

21. What is true about EC2 user data configuration?

By default, scripts entered as user data are executed with root user privileges, so you do not need the sudo command in the script. Any files you create will be owned by root.

By default, user data runs only during the boot cycle when you first launch an instance. However, user data can be explicitly configured to run every time an EC2 instance is restarted.
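
A boto3 sketch of passing a user data script at launch (the AMI ID and script contents are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Shell script passed as user data; it runs as root on the first boot only
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,   # boto3 base64-encodes this for you
)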

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
