(1) My notes on AWS Certified Solutions Architect - Associate 2021 (SAA-C02)
network adaptors
Dec 4, 2021
- ENI (elastic network interface) is the basic type.
- ENA (elastic network adapter) is used for high-bandwidth, low-latency requirements.
- EFA (elastic fabric adapter) is mostly used for high performance computing. An EFA is an ENA with added capabilities: it provides all of the functionality of an ENA, with additional OS-bypass functionality.
EC2
By default, AWS has a limit of 20 instances per region.
EC2 has three RI (reserved instance) types: Standard, Convertible, and Scheduled.
* Standard is cheaper; Standard types are locked to a certain instance family, whereas Convertible allows for change and will benefit from price reductions.
* Scheduled is reserved for specific periods of time; charges accrue hourly and are billed in monthly increments over the term (1 year).
Choosing a purchase option:
- if you can withstand interruption and the task is compute-intensive, use spot instances.
- if the workload is business critical with continuous demand, use reserved instances; production usually requires reserved instances.
- if your R&D runs a few hours and can't be interrupted, use on-demand.
- if your task runs a few hours daily/weekly, use scheduled reserved instances.
- if you need a per-socket license, use a dedicated host.
- if you need per-instance billing for a security-sensitive app, use dedicated instances; a dedicated instance is placed on hardware dedicated to your AWS account.
[note] you can only reserve instances in a single region or AZ.
Nitro-based EC2 offers very high IOPS, e.g. 80,000. To achieve 64,000 IOPS on a Provisioned IOPS SSD, you must provision a Nitro-based EC2 instance; other instances guarantee up to 32,000 IOPS only.
General Purpose SSD (gp2) provides from 100 to 16,000 IOPS.
Placement group
If you get an insufficient capacity error while adding a new instance to an existing placement group, try stopping and restarting the instances in the group. It is best practice to place all instances in a single launch request and use the same instance type.
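The gp2 IOPS range above follows from gp2's baseline rule: 3 IOPS per provisioned GiB, floored at 100 and capped at 16,000. A minimal sketch:

```python
def gp2_baseline_iops(volume_size_gib: int) -> int:
    """Baseline IOPS for a gp2 volume: 3 IOPS per GiB,
    with a floor of 100 and a cap of 16,000."""
    return max(100, min(16_000, 3 * volume_size_gib))

print(gp2_baseline_iops(10))    # 100 (small volumes get the floor)
print(gp2_baseline_iops(1000))  # 3000
print(gp2_baseline_iops(6000))  # 16000 (capped)
```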
S3
An S3 lifecycle policy defines when S3 objects are moved to a different storage class or deleted.
S3 Standard is good for short-term storage.
S3 together with Route 53 provides a cost-effective static fail-over site.
S3 Select operates on a single object, identified by bucket name and object key.
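To make the lifecycle idea concrete, here is a sketch of a lifecycle rule document built as a plain Python dict; the prefix, day thresholds, and rule ID are made-up examples, not values from the notes.

```python
# Hypothetical lifecycle rule: transition to Standard-IA after 30 days,
# to Glacier after 90, and expire after 365. All names/thresholds are examples.
lifecycle_policy = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# Sanity check: transitions must be in increasing day order.
days = [t["Days"] for t in lifecycle_policy["Rules"][0]["Transitions"]]
assert days == sorted(days)
print(days)  # [30, 90]
```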
AWS storage gateway
AWS Storage Gateway is primarily used for connecting on-premises storage to cloud storage.
- Volume Gateway is used to replace block storage.
- File Gateway is used to replace NFS storage.
- Tape Gateway (VTL, virtual tape library) can replace on-prem tape storage.
Storage Gateway is a solution for on-prem-to-cloud backup. It supports local caching without any development overhead, making it suitable for low-latency applications.
signed URL and OAI (origin access identity)
- access to S3 is locked to the OAI via CloudFront
- access can be limited to certain IP addresses
- this is only applicable to S3
- this is not applicable when S3 is configured as a website endpoint
- users don't log in to S3 with the OAI; CloudFront does
RDS
encryption
- if the original DB is not encrypted, you have to take a DB snapshot, encrypt the snapshot, redeploy the DB from the encrypted snapshot, and use the new endpoint in applications.
SCT (Schema Conversion Tool) is used to convert one DB engine to another.
Max retention period for automated snapshot backups is 35 days.
EKS
fully compatible with standard k8s
endpoint
An endpoint is a network component that connects EC2 instances in a VPC to certain AWS services without requiring public IP addresses. There are two types of VPC endpoints:
- Interface endpoints: an interface endpoint is an elastic network interface with a private IP address in a subnet that connects VPC resources to a number of AWS services, such as CloudFormation, Elastic Load Balancers (ELBs), SNS, and more.
- Gateway endpoints: in contrast, a gateway endpoint is a target for a route in a route table, connecting VPC resources to S3 or DynamoDB.
** VPC endpoints are region-specific and do not support inter-region communication.
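As an illustration of how a gateway endpoint appears as a route target, here is a toy route table lookup; all IDs (the prefix list and `vpce-` ID) are made up for the example.

```python
# Illustrative route table for a private subnet using a gateway endpoint.
# The prefix-list and endpoint IDs below are fabricated examples.
route_table = [
    {"destination": "10.0.0.0/16", "target": "local"},         # intra-VPC traffic
    {"destination": "pl-68a54001", "target": "vpce-0abc123"},  # S3 prefix list -> gateway endpoint
]

def target_for(destination: str) -> str:
    """Return the route target for a destination (exact match for brevity)."""
    for route in route_table:
        if route["destination"] == destination:
            return route["target"]
    raise KeyError(destination)

print(target_for("pl-68a54001"))  # vpce-0abc123
```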
SQS/SNS
- SQS works by polling, i.e. store and then forward.
- SQS is useful to decouple the incoming and processing modules by creating a queue in between.
- SQS is more durable, persistent, and distributed / loosely coupled.
- SNS works by pushing, i.e. it notifies (multiple) subscribers immediately.
- SNS is more for fan-out purposes: one incoming event may trigger multiple subscribers to respond.
- SQS lets you configure the message retention period to a value from 1 minute to 14 days. The default is 4 days.
- SQS doesn't automatically delete messages.
Types of queue:
- Standard queues provide at-least-once delivery, which means that each message is delivered at least once.
- FIFO queues provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it. Duplicates are not introduced into the queue.
- use SNS/SQS for fan-out.
By default, an Amazon SNS topic subscriber receives every message published to the topic. You can use Amazon SNS message filtering to assign a filter policy to the topic subscription, and the subscriber will only receive a message that they are interested in. Using Amazon SNS and Amazon SQS together, messages can be delivered to applications that require immediate notification of an event. This method is known as fanout to Amazon SQS queues.
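The fan-out-with-filtering pattern above can be sketched in memory; this is not the real SNS API, just a toy model where a filter policy maps an attribute name to the list of allowed values (the simplest SNS filter form), and the topic delivers only to matching "queues".

```python
# In-memory sketch of SNS fan-out with message filtering (not the real SNS API).

def matches(filter_policy, attributes):
    """True if every policy key has a matching attribute value."""
    return all(attributes.get(key) in allowed for key, allowed in filter_policy.items())

class Topic:
    def __init__(self):
        self.subscriptions = []  # (queue, filter_policy) pairs

    def subscribe(self, queue, filter_policy=None):
        self.subscriptions.append((queue, filter_policy or {}))

    def publish(self, message, attributes=None):
        attributes = attributes or {}
        for queue, policy in self.subscriptions:
            if matches(policy, attributes):
                queue.append(message)  # fan out to each matching queue

orders = Topic()
all_orders, eu_orders = [], []
orders.subscribe(all_orders)  # no filter policy: receives every message
orders.subscribe(eu_orders, {"region": ["eu-west-1", "eu-central-1"]})

orders.publish("order-1", {"region": "us-east-1"})
orders.publish("order-2", {"region": "eu-west-1"})
print(all_orders, eu_orders)  # ['order-1', 'order-2'] ['order-2']
```

The unfiltered subscriber sees everything (the SNS default), while the filtered one only receives messages whose attributes satisfy its policy.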
SWF (simple workflow service) and step function
SWF can also help to guarantee non-duplication (a task is assigned only once).
Step Functions provides serverless orchestration for modern applications.
AWS Glue
AWS Glue Studio makes it easy to visually create, run, and monitor AWS Glue ETL jobs.
Kinesis
Kinesis is useful to ingest real-time stream data.
Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. Firehose destinations include:
- Amazon S3
- Amazon Redshift
- Amazon Elasticsearch Service
- Splunk
(Note: Lambda is not a destination of Firehose.)
Amazon Kinesis Data Streams (KDS) is used to collect and process large streams of data records in real time. KDS is a massively scalable and durable real-time data streaming service; it can continuously capture gigabytes of data per second from hundreds of thousands of sources. You can use an AWS Lambda function to process records in KDS.
Producers continually push data to Kinesis Data Streams, and consumers process the data in real time. Consumers (such as a custom application running on Amazon EC2 or a Kinesis Data Firehose delivery stream) can store their results using an AWS service such as Amazon DynamoDB, Amazon Redshift, or Amazon S3.
Amazon Kinesis Data Firehose is near real-time.
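One detail behind KDS producers: each record carries a partition key, and the MD5 hash of that key determines the shard, so records with the same key keep their order. A simplified sketch (real KDS maps the hash into contiguous hash-key ranges; modulo is used here only for illustration):

```python
import hashlib

# Simplified model of Kinesis Data Streams routing: hash the partition key
# and pick a shard. Real KDS uses 128-bit hash-key ranges, not modulo.
def shard_for(partition_key: str, num_shards: int) -> int:
    digest = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return digest % num_shards

# Same partition key -> same shard, preserving per-key ordering.
assert shard_for("sensor-42", 4) == shard_for("sensor-42", 4)
print({key: shard_for(key, 4) for key in ["sensor-1", "sensor-2", "sensor-42"]})
```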
IAM
It is a good practice to use IAM policies to grant access instead of sharing a username/password.
Attribute-based access control (ABAC): when you create an IAM policy that grants IAM users permission to use EC2 resources, you can include tag information in the Condition element of the policy to control access based on tags.
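To show the ABAC idea, here is a toy evaluator for a tag-based Condition; it is not the real IAM policy engine, and the action and tag values are invented for the example.

```python
# Toy ABAC check (not the real IAM engine): allow ec2:StartInstances
# only on instances tagged project=blue. All values are illustrative.
policy = {
    "Effect": "Allow",
    "Action": "ec2:StartInstances",
    "Condition": {"StringEquals": {"ec2:ResourceTag/project": "blue"}},
}

def is_allowed(action, resource_tags, policy):
    """Return True if the action matches and every tag condition holds."""
    if action != policy["Action"]:
        return False
    for key, expected in policy["Condition"]["StringEquals"].items():
        tag_name = key.split("/", 1)[1]  # "ec2:ResourceTag/project" -> "project"
        if resource_tags.get(tag_name) != expected:
            return False
    return True

print(is_allowed("ec2:StartInstances", {"project": "blue"}, policy))   # True
print(is_allowed("ec2:StartInstances", {"project": "green"}, policy))  # False
```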
password policy
# Most password policy settings take effect only the next time you create or change a password; however, password expiry policies are enforced immediately.
# An AWS Config rule can be used to audit current passwords; however, it can't be used to enforce the policy at the time of password creation.
AWS OU (Organization Units) and SCP
An SCP (service control policy) can be used to allow creation of specific types of AWS resources.