AWS Certified Solutions Architect Professional (SAP-C01) — Cheatsheet

AWS Certified Solutions Architect Professional (SAP-C01) is one of the most sought-after certifications in the IT industry. It is regarded as one of the most difficult IT certifications to acquire, as the syllabus of the exam is incredibly vast, encompasses a wide range of concepts and topics related to cloud computing, and touches almost every service AWS has to offer.

Earning AWS Certified Solutions Architect — Professional validates the ability to design, deploy, and evaluate applications on AWS within diverse, complex requirements.

AWS Certifications Continuum

Introduction

I’ve recently earned the AWS Certified Solutions Architect — Professional certification, as well as the AWS Certified Solutions Architect — Associate certification. You can read my blog post on the Associate certification here.

To support your pursuit of this prestigious professional-level AWS certification, I have prepared this cheat sheet to help you elevate your AWS game and propel your cloud career to the next level.

https://www.credly.com/badges/f5e39fa7-6d7c-4083-a77f-5f8f8c53ab55/public_url
https://www.credly.com/badges/dc31c1fc-a3e1-4f3e-a94a-5d211bbc7c07

About Exam

Why is the AWS Solutions Architect Professional exam so hard?

The failure rate of the exam is well above 72%. That means, only about 28% of the candidates who take the AWS Solutions Architect Professional exam manage to clear it. Now, this is a daunting number. This statistic clearly demonstrates how high the difficulty level of the AWS Solutions Architect Professional exam is.

The AWS Solutions Architect Professional exam is so difficult as to be nearly impossible to pass.

But, for those of us wanting to earn this certification, we have the mindset of nothing is impossible #nimsdai. The exam tests the determination, grit, intelligence, memory, brainpower, and planning capabilities of the candidate.

Exam Cost

The exam costs 300 USD, and the total time allotted to complete it is 180 minutes. There are 75 questions, either multiple choice or multiple response. The exam is scored between 100 and 1000, with a minimum passing score of 750 (75%).

If English is not your native language, you can request an additional 30 minutes, giving you 210 minutes in total to complete the exam.

Who should take this exam?

AWS recommends that you have a Certified Solutions Architect Associate Certification or at least two or more years of hands-on experience designing and deploying cloud architectures on AWS.

Exam Domains Breakdown

The aim of the certification is to validate your knowledge across a number of different key areas, which have been defined by AWS as being able to:

  • Design and deploy dynamically scalable, highly available, fault-tolerant, and reliable applications on AWS.
  • Select appropriate AWS services to design and deploy an application based on given requirements.
  • Migrate complex, multi-tier applications on AWS.
  • Design and deploy enterprise-wide scalable operations on AWS.
  • Implement cost-control strategies.

As per the AWS official exam guide, the exam will test you across 5 different domains, with each domain contributing to a total percentage of your overall score.

Exam Domain Breakdown and Their Weighting

How to approach exam questions?

As I’ve said, AWS Certified Solutions Architect Professional is one of the most difficult and challenging exams in all of the IT industry. So, make sure to get thoroughly prepared, and build up your confidence level.

For each question, keep these 3 things in mind as you approach them:

  1. Determine the requirement
  2. Strike out the obviously false answers
  3. Choose the best of the rest

As I have mentioned before, the syllabus of the exam is incredibly vast and filled with complex, obscure, and esoteric topics about the AWS platform. So, the candidate should have a good grasp of the theoretical concepts of AWS as well as the practical skills.

It would be impossible for me to cover the practical aspects of the exam through this blog post. Thus, this cheat sheet only covers the most important theoretical concepts needed for the exam.

But to acquire the practical knowledge, you’ll have to create a free-tier account on the AWS platform and practice provisioning and operating its many individual services.

Let’s get started and happy learning!

AWS Well-Architected Framework

It is of utmost importance for you to fully understand the AWS Well-Architected Framework and the Six Pillars for the certification.

Six Pillars of the AWS Well-Architected Framework

No trade-off pillars:

Always aim for high operational excellence and security.

  • Operational Excellence, Security.

Have trade-off pillars:

Focusing on any two of these pillars means the third will always suffer.

  • Reliability, Performance Efficiency, Cost Optimization.

Resources:

AWS IAM — Identity and Access Management

Multi-Factor Authentication

Multi-factor authentication (MFA) in AWS is a simple best practice that adds an extra layer of protection on top of your user name and password.

IAM Policy to Enforce MFA
How do MFA temporary credentials work?
IAM Policy to Enforce IP Whitelisting for Access
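
The figures above show the idea; below is a minimal boto3 sketch of an identity-based policy that denies most actions unless MFA is present (the IP-whitelisting variant uses an aws:SourceIp condition instead). The policy name and the exact NotAction list are assumptions, not the exact policy from the screenshots.

    import json
    import boto3

    # Sketch: deny everything except MFA self-management when no MFA is present.
    # aws:MultiFactorAuthPresent only exists for MFA-authenticated sessions,
    # hence the BoolIfExists condition operator.
    enforce_mfa = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAllExceptMfaSetupIfNoMfa",
            "Effect": "Deny",
            "NotAction": [
                "iam:CreateVirtualMFADevice",
                "iam:EnableMFADevice",
                "iam:ListMFADevices",
                "iam:ListVirtualMFADevices",
                "iam:ResyncMFADevice",
                "sts:GetSessionToken",
            ],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="EnforceMFA",  # assumed policy name
        PolicyDocument=json.dumps(enforce_mfa),
    )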

Resources:

IAM Roles

  • Grant AWS resources access to users, applications, or services via short-term temporary credentials, rather than permanent AWS credentials.
EC2 Instance Assuming Role to Access S3 Bucket
Retrieving STS credentials from inside EC2 Instance
Viewing the temporary security credentials for the instance role session
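
As an illustration of the flow in the figures above, the temporary credentials for an instance role can be read from the instance metadata service from inside the instance. A minimal sketch using the IMDSv1 paths (IMDSv2 additionally requires a session token):

    import json
    import urllib.request

    # Instance metadata service — only reachable from inside an EC2 instance.
    BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

    # The first call returns the name of the role attached to the instance profile.
    role_name = urllib.request.urlopen(BASE, timeout=2).read().decode().strip()

    # The second call returns the temporary AccessKeyId, SecretAccessKey,
    # Token, and Expiration issued by STS for that role.
    creds = json.loads(urllib.request.urlopen(BASE + role_name, timeout=2).read())
    print(creds["AccessKeyId"], creds["Expiration"])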

Resources:

IAM Role Trust Policies

  • Trust policies define who is allowed to assume the IAM role.
Security risk — Any role or IAM user in the account can assume a role
Better security — Restricting sts:AssumeRole to only the EC2 service
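
For reference, a trust policy restricted to the EC2 service (the “better security” case above) looks roughly like this sketch; the role name is an assumption:

    import json
    import boto3

    # Trust policy: only the EC2 service may assume this role.
    ec2_trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    boto3.client("iam").create_role(
        RoleName="MyEc2Role",  # assumed role name
        AssumeRolePolicyDocument=json.dumps(ec2_trust_policy),
    )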

Resources:

Federated IAM Role

  • Allows an external identity provider outside of AWS to assume the IAM role.
How Federated IAM Role Works
  • Requires trust policy for the role to be assumed.
Trust Policy Role for IAM Federation
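
A minimal sketch of such a trust policy for a SAML identity provider (the provider ARN is a placeholder):

    # Trust policy: allow a SAML identity provider to assume the role via federation.
    saml_trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::111122223333:saml-provider/ExampleIdP"
            },
            "Action": "sts:AssumeRoleWithSAML",
            "Condition": {
                "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
            },
        }],
    }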

Policies and permissions in IAM

AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies.

Policy Evaluation

  • Starts with default deny.
  • An explicit allow is needed.
  • Conflicting policies — Deny always takes precedence.
  • Evaluation order: Explicit deny > Organizations SCPs > Resource-based policies > IAM permission boundaries > Session policies > Identity-based policies.

Identity-based policies

  • Attach managed and inline policies to IAM identities (users, groups to which users belong, or roles).
  • Identity-based policies grant permissions to an identity.

Resource-based Policies

  • Attach inline policies to resources.
  • Grant permissions to the principal that is specified in the policy.
  • Principals can be in the same account as the resource or in other accounts.
  • The bucket policy is a resource-based policy.
Example of Resource-based Policy
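
A minimal sketch of such a resource-based policy — an S3 bucket policy granting a role in another account read access (account ID, role, and bucket names are placeholders):

    import json
    import boto3

    # Bucket policy: grant a principal in another account read access to objects.
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/ReaderRole"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }],
    }

    boto3.client("s3").put_bucket_policy(
        Bucket="example-bucket",
        Policy=json.dumps(bucket_policy),
    )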

Permission boundaries

  • Defines the maximum permissions allowed for a user or role.
  • If a permission boundary is set, only actions allowed by both the boundary and the identity-based policy are permitted.
  • A permission boundary never grants permissions by itself.

Organizations SCPs (Service Control Policies)

  • Similar to permission boundaries, but applied at the account level.
  • Requires AWS Organizations with “all features enabled”.
  • If an SCP is attached, only actions explicitly allowed by the SCP are permitted.
  • SCPs never grant permissions, similar to permission boundaries.
SCPs allow the root (master) account in an AWS organization to define what child accounts are allowed to do.
Example #1 — SCP takes precedence
Example #2 — Is this user allowed to create an EC2 instance? — Answer: No
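
An SCP uses the same JSON grammar as IAM policies. This is a minimal sketch of creating an SCP that denies all EC2 actions (as in Example #2) and attaching it to an OU; the policy name and OU ID are placeholders:

    import json
    import boto3

    # SCP: deny all EC2 actions for every account under the target OU.
    deny_ec2_scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyEc2",
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
        }],
    }

    org = boto3.client("organizations")
    policy = org.create_policy(
        Name="DenyEC2",  # assumed policy name
        Description="Deny all EC2 actions",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(deny_ec2_scp),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-ab12-example",  # placeholder OU ID
    )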

Structure of IAM policies

Structure of IAM policy

Top-level elements

  • Version — Specify the version of the policy language that you want to use. As a best practice, use the latest 2012-10-17 version.
  • Statement — Use this main policy element as a container for the following elements. You can include more than one statement in a policy.

Statement elements

  • Sid (Optional) — Include an optional statement ID to differentiate between your statements.
  • Effect — Use Allow or Deny to indicate whether the policy allows or denies access.
  • Principal (Required in only some circumstances) — If you create a resource-based policy, you must indicate the account, user, role, or federated user to which you would like to allow or deny access. If you are creating an IAM permissions policy to attach to a user or role, you cannot include this element. The principal is implied as that user or role.
  • Action — Include a list of actions that the policy allows or denies.
  • Resource (Required in only some circumstances) — If you create an IAM permissions policy, you must specify a list of resources to which the actions apply. If you create a resource-based policy, this element is optional. If you do not include this element, then the resource to which the action applies is the resource to which the policy is attached.
  • Condition (Optional) — Specify the circumstances under which the policy grants permission.

Resources:

Data Stores

General concepts

  • Lazy writing is typical of eventual consistency rather than immediate (strong) consistency.
  • Availability is considered a higher priority for the BASE model; however, this is not true for the ACID model, which values consistency over availability.
  • Eventual consistency could result in stale data.
  • Row locking attempts to ensure consistency by keeping updates atomic.
  • The ACID consistency model is Atomic, Consistent, Isolated, and Durable.

EC2 Instance Store

  • Instance Store is locally attached and ephemeral, and provides faster disk I/O than EBS (including Provisioned IOPS volumes) and EFS.

AWS S3

Features:

  • Object Storage, Accessible via HTTP, Highly available, and durable.
  • Multiple storage classes.
  • Object size: 0B-5TB
  • Max PUT size: 5GB, use multi-part upload for large file uploads.
  • Lifecycle management.
  • Versioning.
  • Access control and tight integration with the AWS ecosystem.

Pricing:

  • Storage, Requests, Data Transfer

S3 Storage Cost (Expensive to Cheap)

  • S3 Standard, S3 Standard — IA, S3 One Zone — IA, S3 Glacier, S3 Glacier Deep Archive

S3 PUT requests cost (Expensive to Cheap)

  • S3 Glacier Deep Archive, S3 Glacier, S3 One-Zone — IA, S3 Standard — IA, S3 Standard.

S3 GET requests cost (Expensive to Cheap)

  • S3 Glacier Deep Archive, S3 Glacier, S3 One-Zone — IA, S3 Standard — IA, S3 Standard.

S3 Transfer Cost

  • FREE: All data transfer IN, Out from S3 to CloudFront
  • COST: Out from S3 to Internet (expensive), Out from CloudFront to Internet (expensive), Out from S3 to AWS regions (cheap)

S3 replication

  • Async. replication to another bucket (same account, different account, or different region)
  • S3 replication uses an IAM role that needs read access to the source bucket and write access to the target bucket (and KMS keys if used)
  • NOTE: Versioning must be enabled on the source and destination bucket.

S3 Security

  • S3 Resource — Bucket ACL, Object ACL, Bucket Policy (recommended by AWS)
  • Users/Roles — IAM Policy (recommended by AWS)
Deny access if the request is not SSL encrypted.
Ways to enforce access to resources — Allow with Principal or Deny with Conditions
Enforce users to always use encrypted uploads.
Object ownership can be enforced with bucket policies

NOTE: We can apply default encryption to always encrypt at rest with default AWS-managed keys.

  • Glacier Vault Lock is an immutable way to set policies on a Glacier vault such as retention or enforcing MFA before deletion.
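
The bucket-policy patterns shown above (deny non-SSL requests, deny unencrypted uploads) look roughly like this sketch; the bucket name is a placeholder:

    # Sketch of the two deny patterns: reject non-TLS requests and reject PUTs
    # that do not specify server-side encryption.
    secure_bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::example-bucket",
                    "arn:aws:s3:::example-bucket/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            },
            {
                "Sid": "DenyUnencryptedUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::example-bucket/*",
                "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
            },
        ],
    }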

Amazon DynamoDB

  • NoSQL database
  • Fully serverless, and API access only, pay as you go.
  • Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However, many applications might benefit from having one or more secondary (or alternate) keys available, to allow efficient access to data with attributes other than the primary key.
  • To address this, you can create one or more secondary indexes on a table and issue Query or Scan requests against these indexes.
  • Improving Data Access with Secondary Indexes — Global and Local Secondary Index.

Pricing:

  • Based on Read/Write Capacity Units (RCU, WCU)

Capacity modes:

  • Each RCU supports 1 strongly consistent read per second, or 2 eventually consistent reads per second, for an item of up to 4 KB in size.
  • Each WCU supports 1 write per second for an item of up to 1 KB in size (see the worked example after this list).
  • Capacity modes — Provisioned RCU/WCU (with auto-scaling), On-Demand RCU/WCU.
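
A quick worked example of the capacity math above (item size and request rates are assumptions):

    import math

    item_size_kb = 6          # assumed average item size
    reads_per_second = 100    # strongly consistent reads required
    writes_per_second = 20

    # Each RCU = 1 strongly consistent read/s of up to 4 KB (round the item size up).
    rcu = reads_per_second * math.ceil(item_size_kb / 4)   # 100 * 2 = 200 RCU
    # Each WCU = 1 write/s of up to 1 KB.
    wcu = writes_per_second * math.ceil(item_size_kb / 1)  # 20 * 6 = 120 WCU
    print(rcu, wcu)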

Backups

  • Variants: PITR (Point-in-Time-Recovery) — 35 days, Manual snapshots
  • Cross-region restore: Both variants

DynamoDB Accelerator (DAX)

  • In-memory caching of DynamoDB.
  • To increase the speed of read operations, use Secondary Indexes and DynamoDB Accelerator (DAX) — which works as an in-memory cache in front of DynamoDB.

DynamoDB Streams

  • Capture and process all changes in DynamoDB tables.
  • Integration with Lambda to trigger functions based on table events (see the handler sketch after this list)
  • Use cases: Replication (off-site backup), Notifications, Analytics
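
A minimal sketch of a Lambda handler attached to a DynamoDB stream (the attribute names are assumptions):

    # Lambda handler invoked by a DynamoDB Streams event source mapping.
    def handler(event, context):
        for record in event["Records"]:
            # eventName is INSERT, MODIFY, or REMOVE.
            if record["eventName"] in ("INSERT", "MODIFY"):
                new_image = record["dynamodb"].get("NewImage", {})
                # Attribute values arrive in DynamoDB JSON, e.g. {"S": "order-123"}.
                order_id = new_image.get("OrderId", {}).get("S")  # assumed attribute
                print("Change captured for order", order_id)
            else:
                print("Item deleted:", record["dynamodb"].get("Keys"))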

DynamoDB Global Tables

  • Based on DynamoDB Streams
  • Multi-master, Multi-region

Amazon DocumentDB

  • NoSQL database — Compatible with MongoDB (3.6 API)
  • DocumentDB cluster provides Writer endpoint (primary instance) and Reader endpoint (load balanced read-replicas).
  • Automatic storage scaling (up to 64 TB)
  • Read replicas (up to 15) — same as Aurora
  • Shared data volume

AWS Storage Gateway

AWS Storage Gateway is a set of hybrid cloud storage services that provide on-premises access to virtually unlimited cloud storage.

  • File gateway mode — Local NFS or SMB mount point backed by S3.
  • Volume gateway stored mode — iSCSI: stores all data locally, then asynchronously replicates to S3.
  • Volume gateway cached mode — iSCSI: primary data is stored in S3 with frequently accessed data cached locally on-prem, so the local disk storage requirement is smaller.
  • Storage gateway also supports bandwidth throttling for on-prem.
  • Storage Gateway’s Volume gateway in stored mode is a way to maintain a full local copy of the data and have it replicated asynchronously to S3.
  • Use cases: Disaster recovery, Cloud Migration.

Amazon WorkDocs

Amazon WorkDocs is a fully managed, secure content creation, storage, and collaboration service.

  • Amazon version of Dropbox or Google Docs.
  • Fully managed file collaboration service.
  • HIPAA, PCI DSS, and ISO compliance.

Amazon Elastic File System (EFS)

  • Implementation of NFS file share.
  • Elastic storage capacity, and pay for only what you use (in contrast to EBS — which is space-based pricing).
  • Distributed across multiple AZs, and supports mount targets in one or many AZs using a common mount target FQDN.
  • EFS is more expensive than EBS, and much more expensive than S3. Alternative — AWS DataSync is a secure, online service that automates and accelerates moving data between on-premises and AWS storage services.
Fig: Use case of EFS

Amazon ElastiCache

Amazon ElastiCache offers a fully managed Memcached and Redis service. Although the name only suggests caching functionality, the Redis service in particular can offer a number of operations such as Pub/Sub, Sorted Sets and an In-Memory Data Store. However, Elasticache is only a key-value store and cannot therefore store relational data.

  • In-memory data store supports Memcache and Redis.
  • Memcached — Simple data types, Multithreaded (more performant), No encryption (rest or transit). Use cases: availability of cache is not important.
  • Redis — Complex data types, Encryption, HA cluster-mode, Backup and restore. Use cases: availability of cache is important.
  • Only ElastiCache Redis v3.2.6 and 4.0.10 and later supports encryption at rest and in transit. EMR and Dynamo are not in-memory caches and ElastiCache Memcached doesn’t support encryption.

Amazon Redshift

  • Fully managed data warehousing platform.
  • Columnar storage.
  • RedShift cluster can only be in one AZ.
  • Based on Postgres 8.0.2 (AWS proprietary). Thus, compatible with JDBC and ODBC drivers; compatible with most BI tools out of the box.
  • Features parallel processing and columnar data stores which are optimized for complex queries.
  • RedShift Spectrum — allows querying data directly from S3 — similar to Athena.

Suited for:

  • OLAP — not OLTP (online transaction processing)
  • Business Intelligence (BI), Analytics, Reporting, Big Data
Data Lake

Amazon Neptune (very unlikely to appear in the exam)

  • Managed Graph Database (other databases include RDBMS, NoSQL, Columnar)
  • Graph databases are optimized to deal with relationships between objects, e.g: Social networks, and product recommendation engines.

Amazon Aurora — MySQL and PostgreSQL-compatible

Aurora uses a shared storage volume, whilst RDS instances each have their own volume and use replication that writes to each instance’s own volume.

In Aurora, every node in a cluster connects to the shared volume which allows for completely new database solutions tailored toward cloud-native applications.

  • Aurora is compatible with MySQL and PostgreSQL.
  • Every write to Aurora is automatically replicated to six physical storage locations. Thus, it offers higher read performance and higher data reliability than RDS.
  • Up to 15 read replicas (load balanced using single reader endpoint)
  • Read replica auto-scaling on the fly
  • Aurora offers up to 5x the performance of MySQL and 3x the performance of PostgreSQL.
  • Aurora also has a serverless option with pay-per-use pricing — but it has performance and reliability trade-offs.
  • Aurora Global Database — supports replication to another region via physical storage replication, reducing replication lags.
  • Aurora multi-master — Two nodes that can receive writes at the same time, but users should take care to prevent conflicting inserts leading to complex application logic. But allows higher availability.
  • Aurora supports fault injection, backtrack, parallel query (best use for analytics)
  • Aurora can act as Read replica to RDS, allowing frictionless migration from RDS to Aurora.
  • Allows triggering Lambda from the stored procedure (MySQL)
  • Storage autoscaling is always on and, unlike RDS, can’t be disabled.

Networking

AWS Networking Technologies

  • VPC, VPC Peering, Transit Gateway
  • Direct Connect, Site-to-Site VPN, Client VPN
  • PrivateLink, CloudFront, Global Accelerator

Implicit and explicit networking technologies:

  • Implicit has no ENI attached and is not part of VPC, such as Internet Gateway, Virtual Private Gateway, VPC Peering, and Gateway Endpoints.
  • Explicit have ENI attached and are routed via route tables and are part of VPC, such as NAT Gateway, VPC Endpoint Services, Client VPN, Transit Gateway, and Global Accelerator.
Implicit Networking Technologies

VPC Peering

  • VPC requester and VPC accepter
  • CIDR range cannot overlap.
  • Routes must be manually added to both VPC main route tables or specific subnets for more isolated peering connections.
VPC peering does not allow Transitive Peering
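
A minimal boto3 sketch of requesting, accepting, and routing a peering connection (VPC IDs, route table IDs, and CIDRs are placeholders; for cross-account or cross-region peering, the accept call is made from the accepter side):

    import boto3

    ec2 = boto3.client("ec2")

    # Requester side: ask to peer two VPCs (their CIDRs must not overlap).
    peering = ec2.create_vpc_peering_connection(
        VpcId="vpc-11111111",      # requester VPC
        PeerVpcId="vpc-22222222",  # accepter VPC
    )
    pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # Accepter side: accept the peering request.
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Both sides: add a route for the peer CIDR pointing at the peering connection.
    ec2.create_route(
        RouteTableId="rtb-33333333",
        DestinationCidrBlock="10.1.0.0/16",  # peer VPC CIDR
        VpcPeeringConnectionId=pcx_id,
    )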

VPC Mesh

  • Ideal for a small number of VPCs (~10 VPC), but can quickly get messy for a large number of VPCs.
VPC Mesh

Site-to-site VPN using Virtual Private Gateway (Private Connectivity to On-Prem)

  • All traffic goes through the Internet — slower bandwidth and congestion.
  • Virtual Private Gateway is deployed on AWS, and Virtual Customer Gateway is deployed on-prem — both connect via VPN and route via the public Internet.
  • BGP supports dynamic routing and automatic local network route propagation between AWS and on-prem networks.
  • If BGP is not used, static routes must be added to both AWS and on-prem networks.
  • Site-to-Site VPN is not redundant by itself, thus redundancy needs to be built on-prem with 2 customer gateways connecting to the VPN.
Site-to-Site VPN

Direct Connect

  • Direct Connect connections consist of a single connection between your network and AWS with no inherent redundancy. Additionally, traffic coming from on-prem via a Direct Connect connection is restricted from internet access.
  • Direct connect uses Virtual Private Gateway. But traffic is routed via a dedicated connection between AWS data center and on-prem.
  • Direct Connect may be a more complex and costlier option to set up, but it could save big on bandwidth costs.
  • Guaranteed bandwidth (up to 100 Gbps) and a dedicated connection.
  • Direct Connect also allows access to AWS services (S3, DynamoDB) without traversing through the Internet.
Direct Connect
  • A Private VIF can connect to a Direct Connect Gateway, which can then connect to up to 10 Virtual Private Gateways and 3 Transit Gateways.
  • Direct Connect is not redundant by itself, thus redundancy needs a second connection from on-prem (or a Site-to-Site VPN as backup).

Transit Gateway

  • Mostly better than VPC peering, VPC mesh, Site-to-Site VPN, or Direct Connect.
Multi-VPC Networking via Transit Gateway
VPC connects to the transit gateway via the Transit Gateway Attachment
  • You deploy Transit Gateway attachments (ENIs) in the subnets you want, which connect the VPC to the Transit Gateway.
  • The route table needs to be updated to send specific network traffic via the Transit gateway.
  • Site-to-Site VPN and Direct Connect can also be associated with Transit Gateway.
  • Multiple Transit Gateways in different regions can be peered.

PrivateLink

  • Allows publishing application to other VPCs.
  • By default, AWS services communicate via the Internet.
  • VPC endpoints can be deployed for AWS services to allow private communication within the AWS network.
  • VPC endpoints type: Gateway endpoint (free), Interface Endpoints (cost)
  • Customers can publish their own services via VPC endpoint services, similar to AWS-provided VPC endpoint services (Secrets Manager, and so on)
  • PrivateLink is the only technology that can connect two VPCs having their CIDR overlap.
  • PrivateLink is uni-directional, not bi-directional.
Comparison between VPC Peering, Transit Gateway, and PrivateLink

Client VPN

  • Client VPN is more for mobile users, e.g: administrator, end-users with changing public IPs.
  • Similar to OpenVPN.
Client VPN — Centralized Multi-Account VPC Access

CloudFront

  • Global CDN backed by AWS Edge Locations.
  • CloudFront Behaviors allow defining different origins based on URL path. This is useful when we want to serve up static content from S3 and dynamic content from an EC2 fleet for example for the same website.
  • CloudFront Signed Cookies / Signed URLs, CloudFront Origin Access Identity allows users to have secure access to private files located in S3.
  • CloudFront in conjunction with AWS WAF can be an effective way to create DDoS resilience at Layer 7. Network Load Balancers are Layer 4 solutions and would have no visibility of Layer 7 DDoS. CloudTrail and GuardDuty are focused on the security of the AWS account, and would not be suitable in isolation for securing at Layer 7.
  • An Origin Access Identity is a virtual user identity that is used to give the CloudFront distribution permission to fetch a private object from an S3 bucket.
CloudFront — Path-based Origin Routing
Lambda@Edge — Similar to Cloudflare Workers

Global Accelerator

  • Use AWS Global Accelerator to get static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions.
  • Uses Anycast IP addresses, which are announced from all AWS PoP edge locations.
  • Global Accelerators supported targets: Network Load Balancers, Application Load Balancers, EC2 Instances, Elastic IP addresses.
End-user to AWS network connectivity over public internet vs on Global Accelerator

AWS NLB

  • Network Load Balancer automatically provides a static IP per Availability Zone.

NAT Gateway & NAT Instance

  • NAT Instances can cost less for very small deployments where high availability is not required.
  • NAT Instances can use security groups as they are just EC2 instances.
  • NAT Instances allow you to detach and attach EIPs while NAT Gateways do not allow you to detach.
  • NAT Instances and NAT Gateways explicitly do not support IPv6 traffic; use an Egress-Only Internet Gateway for outbound IPv6 instead.

Egress-Only Internet Gateway

  • Prevents IPv6 based Internet resources from initiating a connection into a VPC
  • Allows VPC-based IPv6 traffic to communicate to the Internet.

Amazon VPC

  • You can use DHCP Options Sets to configure which DNS is issued via DHCP to instances. This can be any DNS address. So long as it’s reachable from the VPC, instances can use it to resolve. Reference: DHCP options sets.
  • Only two components allow Internet communication using IPv6 addresses — “Internet Gateways” (inbound) and “Egress-Only Internet Gateways” (outbound).
  • The IP address of the DNS server in a VPC is always the base of the VPC CIDR range plus two (e.g. 10.0.0.2 for 10.0.0.0/16).
  • Multicast and Broadcast aren’t supported in VPCs.
  • Internet Gateway is horizontally scaled, redundant, and with no bandwidth constraints.

Security

Security and Compliance is a shared responsibility between AWS and the customer.

  • AWS's responsibility “Security of the Cloud”
  • Customer’s responsibility “Security in the Cloud”
Shared Responsibility Model Diagram

Security baseline

  • Lock away the root user (avoid using it) and enable MFA on it.
  • Enable MFA for IAM users, and use the least-privilege principle.
  • Set up billing alerts.
  • Enable CloudTrail for logging API calls. The best practice is to log to an S3 bucket in a dedicated audit account.
  • Cost vs Impact Reduction
The higher the cost, the higher the impact reduction

AWS Secrets Manager

  • Securely encrypt, store, and retrieve credentials for your databases and other services.
  • Instead of hardcoding credentials in your apps, you can make calls to Secrets Manager to retrieve your credentials whenever needed.
  • Protect access to your data by enabling you to rotate and manage access to your secrets.
AWS Secrets Manager — Key rotation mechanism
AWS Secrets Manager — IAM key rotation using AWS Lambda
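
Retrieving a secret at runtime instead of hardcoding it is a single call; a minimal sketch (the secret name and its JSON fields are assumptions):

    import json
    import boto3

    secrets = boto3.client("secretsmanager")

    # Fetch the current version of the secret instead of hardcoding credentials.
    response = secrets.get_secret_value(SecretId="prod/app/db-credentials")
    credentials = json.loads(response["SecretString"])
    print(credentials["username"])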

IAM Access Analyzer

Amazon Inspector

  • Amazon Inspector is an automated vulnerability management service that continually scans Amazon Elastic Compute Cloud (EC2) and container workloads for software vulnerabilities and unintended network exposure.
  • Requires AWS Systems Manager Agents (SSM Agents) for vulnerability scanning of Amazon EC2 instances.
  • No agents are required for network reachability of Amazon EC2 instances and vulnerability scanning of container images.
  • Pricing is based on the number of EC2 instances and container images scanned per month.

AWS Security Hub

  • Centralizes and prioritizes security findings from across AWS accounts, services, and supported third-party partners to help you analyze your security trends and identify the highest priority security issues.
  • AWS Security Hub collects findings from the security services enabled across your AWS accounts, such as intrusion detection findings from Amazon GuardDuty, vulnerability scans from Amazon Inspector, and sensitive data identification findings from Amazon Macie.

Amazon Detective

  • Amazon Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data.
  • Enables faster and more efficient security investigations.
  • Amazon Detective pricing is based on the volume of data ingested (GB) from AWS CloudTrail logs, Amazon VPC Flow Logs, and Amazon GuardDuty findings.
Amazon Detective

AWS Key Management Service (AWS KMS)

AWS Key Management Service (AWS KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications.

  • Encrypt data up to 4KB
  • AWS KMS API requests have rate limits which can be increased via Support Case.
  • Customer managed CMK — automatic or manual rotation; automatic rotation occurs once a year.
  • AWS managed CMK — key rotation every 3 years.
  • Min. 7 days waiting period for full deletion.
  • Max. 30 days waiting period for full deletion.
Cloud HSM vs AWS KMS
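
Because KMS itself only encrypts up to 4 KB, larger payloads use envelope encryption: generate a data key under a CMK, encrypt locally, and store the encrypted data key alongside the ciphertext. A minimal sketch (the key alias is an assumption; local encryption uses the third-party cryptography package's Fernet for brevity):

    import base64
    import boto3
    from cryptography.fernet import Fernet  # pip install cryptography

    kms = boto3.client("kms")

    # 1. Ask KMS for a data key under the CMK.
    data_key = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
    fernet_key = base64.urlsafe_b64encode(data_key["Plaintext"])  # 32 raw bytes -> Fernet key
    encrypted_key = data_key["CiphertextBlob"]                    # store this next to the data

    # 2. Encrypt the payload locally with the plaintext data key.
    ciphertext = Fernet(fernet_key).encrypt(b"payload much larger than 4 KB ...")

    # 3. To decrypt later, call kms.decrypt(CiphertextBlob=encrypted_key) to get the
    #    plaintext data key back, then decrypt the ciphertext locally.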

Security Best Practices

Security Best Practices By AWS Services

What is Intrusion Detection System (IDS) vs Intrusion Prevention System (IPS)?

  • IDS watches the network and systems for suspicious activity that might indicate someone trying to compromise a system.
  • IPS tries to prevent exploits by sitting behind the firewall, scanning and analyzing suspicious content for threats.
Example of IDS/IPS Appliance in AWS

AWS CloudWatch and CloudTrail

Difference between AWS CloudWatch and CloudTrail

Multi-Accounts Management and Strategies

AWS Organizations

AWS Organizations is an account management service that lets you consolidate multiple AWS accounts into an organization that you create and centrally manage.

Root account and child accounts under AWS Organizations

Additional Features of AWS Organizations:

  • Control Tower — easily deploy and manage an AWS organization and its accounts. Provides blueprints and best-practices guidance for multi-account setups.
  • Security Hub — centralized dashboard for viewing and remediating security findings raised across accounts.
  • Resource Access Manager (RAM) — sharing resources across AWS accounts. E.g: subnets, transit gateway.
  • Systems Manager — manage EC2 instances across your accounts.
  • AWS Config — provides a detailed view of the resources associated with your AWS account, including how they are configured, how they are related to one another, and how the configurations and their relationships have changed over time.
  • Service Catalog — allow child accounts to manually deploy custom services (bundled pre-configured resources) that are shared from master accounts.
  • CloudFormation StackSets — Enforce certain resources to be present on all AWS accounts.
  • Tag Policies — Enforce tag policies on child accounts from the master account.
  • Backup Policies — Enforce backup policies on child accounts from the master account.

Organization Units:

  • You can use organizational units (OUs) to group AWS accounts together to administer as a single unit.
  • For example, you can attach a policy-based control to an OU, and all accounts within the OU automatically inherit the policy.
  • You can create multiple OUs within a single organization, and you can create OUs within other OUs. Each OU can contain multiple accounts, and you can move accounts from one OU to another. However, OU names must be unique within a parent OU or root.
  • OUs are limited to 1,000 per organization.
Organizational Units

Organization Account operations:

  • Create, Invite, Remove account

Organization Modes:

  • There are two modes for AWS organization — these modes define what the master (root) account can do in or for its child accounts.
  • Mode 1 — Consolidated billing only
  • Mode 2 — All features enabled — Service Control Policies (SCP), Tag Policies, Backup Policies, etc.

NOTE: When changing organization modes from Mode 1 to Mode 2, all child accounts need to accept that change explicitly.

AWS Organization Solves Billing Nightmare

  • Detailed billing per account managed by the AWS organization is accessible from the Master account, as shown below.
Consolidated Billing

AWS Organization Solves Multi-Account Security and Access Management

Bad solution — Managing multiple IAM users across multiple accounts
Good solution — A central account with IAM users, which assume roles to access other AWS accounts. Permission boundaries can be set to limit permissions regardless of the permissions attached via policies.
Best solution — Centrally create and manage identities
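
The "assume a role in another account" pattern above is a short sketch with boto3 (the role ARN is a placeholder):

    import boto3

    sts = boto3.client("sts")

    # Assume a role in the target account using the identity account's credentials.
    assumed = sts.assume_role(
        RoleArn="arn:aws:iam::444455556666:role/AdminAccessRole",  # placeholder
        RoleSessionName="cross-account-session",
    )
    creds = assumed["Credentials"]

    # Build a session scoped to the target account from the temporary credentials.
    target_session = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(target_session.client("sts").get_caller_identity()["Account"])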

AWS SSO has four key components

  • User — either managed by AWS SSO itself, Active Directory, or an external identity provider such as Okta
  • Group — Logical group of SSO users
  • Permission Sets — IAM managed policies or in-line policies (max. 10000 characters)
  • Accounts — AWS accounts in your AWS Organization.
AWS SSO User Portals

Multi-Account and Structure

One account isn’t enough to set up a well-architected environment. By using multiple accounts, you can best support your security goals and business processes.

Benefits of using a multi-account approach:

  • Security controls
  • Isolation
  • Many teams
  • Data Isolation
  • Business process
  • Billing
  • Quota allocation

Multi-Account Structures

Identity Structure
Logging Structure
InfoSec Structure
Central IT Structure

AWS Service Catalog

  • Framework allowing admins to create pre-defined products and landscapes for their users.
  • Granular control over which users have access to which offerings
  • Make use of adopted IAM roles so users don’t need underlying service access.
  • Allows end-users to be self-sufficient while upholding enterprise standards for deployments
  • Based on CloudFormation templates
  • Admins can version and remove products. Existing running product versions will not be shut down.
Service Catalog Constraints
Multi-Account Structure — Publishing Structure
Multi-Account AWS Service Catalog

Migrations

  • From an on-prem datacenter to AWS.
  • Large dataset migrations: storage, databases, and VMs.

Migration Strategies

The 6 most common application migration strategies (the “6 R’s” — rehost, replatform, repurchase, refactor/re-architect, retire, and retain) are:

Migration Strategies

Migration services offered by AWS

  • Server Migration Service (SMS)
  • Snowball
  • Database Migration Server (DMS) and Schema Conversion Tool (SCT)
  • DataSync
  • Storage Gateway

Server Migration Service

  • Automates migration of on-prem VMware vSphere or Microsoft Hyper-V VM to AWS
  • Replicates VMs to AWS, syncing volumes and creating periodic AMIs.
  • Minimizes cutover downtime by syncing VMs incrementally.
  • Supports Windows and Linux VMs only.
  • The server migration connector is downloaded as a virtual appliance into your on-prem vSphere or Hyper-V setup.
AWS SMS Architecture

Amazon Snow Family

  • Snowball allows transferring large amounts of data to the AWS cloud via a physical device.
  • AWS Snowball — Ruggedized NAS in a box that AWS ships to you. You copy up to 80 TB of data onto it and ship it back to AWS. They copy the data over to S3.
  • AWS Snowball Edge — Same as a snowball, but with onboard Lambda and clustering.
  • AWS Snowmobile — A literal shipping container full of storage (up to 100PB) and a truck to transport it.
Snowball Edge Device

Use cases:

  • Massive amount of data transfer.
  • Edge Compute: Remote (offline) locations, Harsh environments
Copying data to a snowball device

Database Migration Service

  • DMS along with SCT helps customers migrate databases to AWS RDS or EC2-based databases.
  • SCT can copy database schemas for homogenous migration (same database) and convert schemas for heterogeneous migrations (different databases)
  • DMS is used for smaller, simpler conversions and also supports MongoDB and DynamoDB.
  • SCT is used for larger, more complex databases like data warehouses.
  • DMS has a replication function for on-prem to AWS, or to Snowball or S3.
AWS DMS architecture
AWS DMS — Migrating database from source to target
  • The replication instance is a managed EC2 instance in a VPC and can connect to source endpoints in other VPCs via VPC peering, VPN, or Direct Connect.
AWS DMS — Replication Instance for multiple source to target migration

Types of replication:

  • Full replication — once
  • Full replication + change data capture (CDC) — ongoing replication
  • CDC only

Schema Conversion Tool (SCT)

  • SCT is a desktop tool to convert schema for use in AWS
  • Once SCT converts the schema, AWS DMS is used to migrate from source to target database.
SCT GUI
Large database migration architecture using SCT and AWS DMS

AWS DataSync

  • Deployed in the on-premise environment as an agent in a VM.
  • Continuous synchronization of data to AWS storage — S3, EFS, or FSx for Windows File Server.

Use cases:

  • Live data, In-cloud processing, data archiving, data protection

AWS Application Discovery Service

  • Gathers information about on-prem data centers to help in cloud migration planning.
  • Often customers don’t know the full inventory or the status of all their data center assets, so this tool helps with that inventory.
  • Collects config, usage, and behavior data from your servers to help in estimating the TCO (Total Cost of Ownership) of running on AWS.
  • Can run as agent-less (VMware environment) or agent-based (non-VMware environment)

AWS Migration Hub

  • AWS Migration Hub (Migration Hub) provides a single place to discover your existing servers, plan migrations, and track the status of each application migration.
AWS Migration Hub — Console

Network Migration Planning

  • Ensure your IP addresses will not overlap between VPC and on-prem.
  • VPCs support IPv4 netmasks ranging from /16 (255.255.0.0 = 65,536 addresses) to /28 (255.255.255.240 = 16 addresses). NOTE: 5 IPs are reserved in every VPC subnet by AWS.
  • Most organizations start with a VPN connection to AWS. As usage grows, they might choose Direct Connect but keep the VPN as a backup.
  • The transition from VPN to Direct Connect can be relatively seamless using BGP.
  • Once Direct Connect is set up, configure both the VPN and Direct Connect within the same BGP prefix. From the AWS side, the Direct Connect path is always preferred. But you need to be sure the Direct Connect path is the preferred route from your network to AWS (and not the VPN) through BGP weighting or static routes.

Streaming Data

Kinesis Data Stream

  • Streaming data buffer
  • Continuous data intake (from data producers)
  • Aggregation from many sources
  • Fan-out (to data consumers)
  • Decouples producers from consumers
Kinesis Data Streams — How it works
  • Manual scaling (shards)
  • Write capacity per shard: 1 MB/s, 1000 records/s
  • Read capacity per shard: 2 MB/s, 5 transactions/s, 10,000 records per transaction
  • Enhanced fan-out: 2 MB/s per consumer
  • Data retention: 24 hours (default) — up to 7 days.
  • Max record size: 1 MB
AWS Kinesis Data Stream intakes the data and streams for further processing
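
Producers write records to the stream with a partition key, which determines the shard; a minimal sketch (the stream name and payload are assumptions):

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    # The partition key determines which shard receives the record,
    # so pick one that spreads load evenly (here: a device ID).
    kinesis.put_record(
        StreamName="clickstream",  # assumed stream name
        Data=json.dumps({"device": "d-42", "event": "click"}).encode(),
        PartitionKey="d-42",
    )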

Kinesis Data Analytics

  • Aggregate and analyze data as it streams
  • Using time windows (e.g: 1 minute)
  • SQL or Apache Flink (Java)
  • Output to other sources: Kinesis Data Stream, Kinesis Data Firehose, Lambda.
  • Provide running insights into a generalized aspect of a stream.
  • Autoscaling based on: Kinesis Processing Unit — 1 vCPU, 4 GB memory per KPU

Kinesis Data Firehose

  • Streams data, like Kinesis Data Streams
  • Limited targets: S3, ElasticSearch, Redshift, Splunk.
  • Buffering: Min. 64 MB / 60 seconds, Max. 128 MB / 900 seconds
  • Transformation with Lambda
  • Record format conversion for S3.
  • Autoscaling
Kinesis Data Firehose — Storing all ingested data in S3
Kinesis Data Firehose — Convert to parquet format
Kinesis Data Firehose — Filter and transform the data. ElasticSearch — View the transformed data in Kibana.

AWS Glue

  • Three components: Glue Data Catalog, Glue Crawlers and Classifiers, and Glue ETL Jobs.
AWS Glue — How it works
AWS Glue components in action

AWS Athena

  • Query large dataset from S3 without transformation using SQL.
  • Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3, using standard SQL commands. It will work with a number of data formats including JSON, Apache Parquet, Apache ORC amongst others, but XML is not a format that is supported.
Querying files with Athena
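
Queries can also be started programmatically; Athena writes the results to an S3 output location (database, table, and bucket names are placeholders):

    import boto3

    athena = boto3.client("athena")

    execution = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
        QueryExecutionContext={"Database": "weblogs"},  # assumed Glue/Athena database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    print(execution["QueryExecutionId"])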

Amazon EMR

  • EC2 based Hadoop cluster
  • Scales to thousands of nodes
  • Supports reserved / spot instances
  • Supports many frameworks: HBase, Hive, Presto, Tensorflow, Spark, ZooKeeper, etc.
  • Complex to configure/manage and maintain.
  • After an EMR cluster is terminated, the data stored on HDFS is lost, due to it being ephemeral.
  • If persistence is required, S3 might be an option using the EMRFS file system.

Amazon QuickSight

  • AWS Managed BI and dashboarding tool.

Application decoupling

Microservices Pros and Cons

Microservices Pros and Cons
Synchronous processing in a monolith architecture
Asynchronous processing in a decoupled architecture
YouTube Upload as an example of decoupled architecture

Decoupling Technologies in AWS

  • SNS
  • SQS
  • CloudWatch Events
  • EventBridge
  • Kinesis
  • Kafka / MSK
  • ElastiCache for Redis
Message Push vs Pull

Amazon Simple Notification Service (SNS)

Topics types:

  • Standard and FIFO topics

Pricing:

  • API requests: First 1 million Amazon SNS requests per month are free, $0.50 per 1 million requests thereafter
  • Notification deliveries to mobile, SMS, HTTP, email.
  • No charge for deliveries to SQS Queues, Lambda.
  • Note: With the exception of SMS messages, each 64KB chunk of delivered data is billed as 1 delivery. For example, a single notification with a 256KB payload is billed as four deliveries.

Payload Limit:

  • Amazon SNS Extended Client Library for Java enables you to publish messages that are greater than the current SNS limit of 256 KB, up to a maximum of 2 GB. It saves the actual payload in S3 and publishes the reference of the stored S3 object to the topic.

Main use cases:

  • Push, Realtime, Fanout, System and User Notification
AWS SNS targets
AWS SNS — Delivery failures sent to an SQS Dead Letter Queue (DLQ)

Amazon Simple Queue Service (SQS)

Standard queues

  • Deliver messages at least once.
  • Deliver messages in loose (best-effort) FIFO order

FIFO queues

  • Deliver messages exactly once.
  • Deliver messages in guaranteed FIFO order.

Features:

  • Visibility timeout: 0 seconds — 12 hours.
  • Delivery delay: 0 seconds — 15 minutes.
  • Message retention: 1 minute — 14 days.
  • Maximum message size: 1KB — 256KB
  • Receive message wait time: 0 seconds — 20 seconds
  • Amazon SQS provides in-transit encryption by default. To add at-rest encryption to your queue, enable server-side encryption.

Pricing:

  • 1 million Amazon SQS requests for free each month.
  • $0.40 per million requests for the standard queue, $0.50 per million requests for the FIFO queue.
  • Each 64 KB chunk of a payload is billed as 1 request.
  • A single request can have from 1 to 10 messages, up to a maximum total payload of 256 KB (for example, an API action with a 256 KB payload is billed as 4 requests).

Main use cases:

  • Pull, Delayed, Processed by workers
Amazon SQS — Queue for decoupled order processing system
Amazon SQS — Queue for decoupled video processing system
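
Workers pull from the queue with long polling, process each message, and delete it before the visibility timeout expires. A minimal sketch (the queue URL and message body are assumptions):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"  # placeholder

    # Producer side.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

    # Worker side: long polling (up to 20 s) reduces empty receives.
    messages = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    ).get("Messages", [])

    for message in messages:
        print("processing", message["Body"])
        # Delete only after successful processing, before the visibility timeout expires.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])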

Lambda as consumer for SQS

  • Event source mappings make Lambda a special consumer to SQS queues. Thus, Lambda can be used as auto-scaling workers to queues for irregular workloads.
  • Poll mechanism — Immediate, Buffered

AWS EventBridge

  • NOTE: CloudWatch Event is now EventBridge.
  • Extension of CloudWatch Events
  • Message bus for your applications
  • Three sources: AWS events, SAAS events, Custom events
  • 90 AWS event sources
  • 15 AWS targets
Message bus architecture
CloudWatch Events

EventBridge vs SNS overlap

  • Latency is higher than SNS: around 0.5 s for EventBridge vs ~30 ms for SNS.
  • Fan-out: EventBridge 5 targets per rule vs SNS supports millions of subscribers. But EventBridge does support SNS, so you can further fan out.
  • Less throughput than SNS: EventBridge handles 400 write events/s and 750 consume events/s, whilst SNS is virtually unlimited.
  • Filtering: SNS allows attribute-based vs EventBridge allows content-based filtering.
  • Targets: EventBridge supports more targets than SNS.

AWS X-Ray

  • Distributed tracing for decoupled systems.
  • Traces requests through Trace-ID — Microservices, AWS services.
  • X-Ray displays service maps (graphs)
  • Pinpoints issues in distributed services.
AWS X-Ray — Service Maps

Data Processing

  • Data lake — raw data, unfiltered data. S3 is usually used as a data lake.
  • Data is the new gold.

ETL Technologies on AWS

  • Kinesis Firehose — Real-time processing.
  • EMR — Large and complex data processing.
  • Glue ETL — Simple and less complex data processing.
  • AWS Batch — Run any workload in the EC2 instance.

EMR — Managed Hadoop cluster

EMR mapping (left) and reducing (right)
Data is not real-time. EMR loads, transforms, and saves to the Redshift data warehouse
Real-time data analytics within minutes
Static data analysis
Batch data processing using AWS Batch
EMR loads data and transforms data into Redshift. Lambda then queries the data from RedShift and stores the results in CloudWatch metrics. If the metric threshold exceeds we use SNS to send alerts.

Workflow/Task co-ordination

  • AWS Data Pipeline
  • AWS Step Functions — Triggers — Lambda, API Gateway, State Machine, Time-based (scheduled), Event-based.

Logging and monitoring

CloudTrail can be enabled on the Organizational level to monitor all child accounts.

CloudWatch Logs and Metrics

Centralized logging using CloudWatch, Lambda, and Kinesis Data Streams from child accounts to master accounts.

Deployment

Infrastructure as Code

AWS CloudFormation

Deployment Pipelines

  • CI (Continuous Integration) — Unit tests, static checks, etc.
  • CD (Continuous Delivery) — manual approval before deployment to production.
  • CD (Continuous Deployment) — automatic deployment to production.

AWS Developer Tools

AWS Developer Tools
  • CodeCommit — Hosted Git repositories, HTTPS access (via IAM permission), Monitoring via CloudWatch Events (Pull Requests, etc), CloudTrail.

CodePipeline

CodePipeline actions:

  • Source
  • Build
  • Test
  • Deploy
  • Approval
  • Invoke
Full CI/CD Pipeline

Security

Security baseline

  • Lock away the root user (avoid using it) and enable MFA on it.
  • Enable MFA for IAM users, and use the least-privilege principle.
  • Set up billing alerts.
  • Enable CloudTrail for logging API calls. The best practice is to log to an S3 bucket in a dedicated audit account.

Secrets Manager

  • Store sensitive data used in CloudFormation templates or applications
Key rotation mechanism
IAM key rotation using AWS Lambda

Costs and benefits

  • Legal/compliance requirements, Business risk, Cost risk.
Impact vs Cost

Architecting to Scale and High-Availability

Auto-Scaling Group

  • If your scaling is not picking up the load fast enough to maintain a good service level, reducing the cooldown can make scaling more dramatic and responsive. A shorter cooldown means there will be a shorter interval until the Auto Scaling service determines that another x number of servers are required to service a spike in demand.

Auto-scaling:

  • EC2 — Add Instances
  • ELB — Deploy larger nodes
  • DynamoDB — Add capacity units
  • Aurora — Add read replicas
  • ECS & EKS — Add containers, Add nodes

Serverless

  • Fully managed, Pay-per-use
  • Always multi-AZ — Always highly available
  • Always (auto) scaling
  • Perfect fit for HA use-cases.

Backups

  • RDS — Automated & manual snapshots
  • Aurora — Backtrack
  • EFS — AWS Backup to S3
  • FSx for WFS — Automated & manual snapshots, Shadow copies
  • DynamoDB — Point-in-time recovery, Manual snapshots

Disaster Recovery

Disaster Recovery Scenarios

Recovery Point Objective (RPO)

  • Amount of data lost
  • Shorter RPO = more expensive

Recovery Time Objective (RTO)

  • The time it takes to restore an environment
  • Shorter RTO = more expensive

Disaster prevention and recovery

  • Multi-account setups
  • Multi-region setups
  • Both

Failover and routing solution

Route53:

  • Manual switch via a Route53 DNS CNAME change. Set lower TTLs for faster RTO.
  • Automatic switch with Route53 doing health checks. Much faster RTO.
  • Weighted routing — Percentage-based routing for zero-downtime.
  • Latency-based routing — Route53 determines the lowest latency path to route traffic to a given region.
  • Geo-based routing — Route53 determines where the user is connecting from and routes to the closest configured geographic region.

NOTE: For geolocation routing, you need to be sure you have a default route in the case that the location cannot be determined.
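
As an illustration, a weighted routing policy is just two record sets with the same name but different weights; this sketch shifts 10% of traffic to a secondary endpoint (zone ID, record name, and IPs are placeholders):

    import boto3

    route53 = boto3.client("route53")

    def weighted_record(ip_address, identifier, weight):
        # Record sets sharing a name but with different SetIdentifier/Weight
        # split traffic proportionally to their weights.
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,  # low TTL for faster changes (better RTO)
                "ResourceRecords": [{"Value": ip_address}],
            },
        }

    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
        ChangeBatch={"Changes": [
            weighted_record("198.51.100.10", "primary", 90),
            weighted_record("203.0.113.20", "secondary", 10),
        ]},
    )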

Global Accelerator:

  • Anycast IPs, faster access, health checks, and weighted routing — all supported by Global Accelerator.

Thank you!

  • If you found this material interesting and useful, hit that clap icon 👏 and share it! 🙏
  • Follow me on Twitter (my DMs are open) and connect on exam topics or chat about AWS and cloud topics, I have interesting blogs coming up next!
  • Wish you the very best for your exam!
