How to Master the AWS DevOps Professional Exam
A few days ago I passed the AWS DevOps Professional exam and in this write-up I will share what I found useful in getting ready for the exam.
Why Should You Do the Exam?
The exam focuses on automating the software development lifecycle, policies, standards, high availability and disaster recovery (DR). All of those areas are very much needed for enterprise cloud solutions. The knowledge you gain during the preparation will help you leverage automation capabilities in the right way and keep your deployments maintainable, secure, scalable and resilient.
The exam focuses on the following six domains:
- SDLC Automation
- Configuration Management and Infrastructure as Code
- Monitoring and Logging
- Policies and Standards Automation
- Incident and Event Response
- High Availability, Fault Tolerance and Disaster Recovery
How to Prepare?
I will go through all the steps I took and explain which ones worked for me:
- Free exam readiness course:
I started my preparation with the free seven-hour AWS Certified DevOps Engineer — Professional course. The online course gives you a great overview of all the domains. It also has some guidance on how to read the exam questions and how to eliminate answers.
- Hands-on:
As a next step I started creating my own labs to get a good understanding of all the integration steps that are required to achieve the exam objectives. This helped me the most during the entire preparation. Some examples of labs I built are:
- Blue-green deployments in Elastic Beanstalk
- Build lifecycle hooks for auto-scaling groups and trigger notifications (a minimal sketch of such a hook follows at the end of this list)
- Build entire pipelines for Lambda, ECS and EC2 using CodeCommit, CodeBuild, CodePipeline, CodeDeploy and CloudFormation
- AWS user guide:
I used the AWS user guide a lot while I was doing my labs and below are some example links that helped me in the exam:
- Invoke an AWS Lambda Function in a Pipeline in CodePipeline
- Using AWS Logs driver for ECS
- CodeDeploy — AppSpec ‘hooks’ Section
- Pipeline for ECS deployments
- AWS DevOps blog:
The AWS DevOps blog has some really useful articles and good diagrams that help you visualise the automation process.
- Free AWS sample questions:
I did the free sample questions and they give you a good idea of what to expect in the exam.
- AWS practice exam:
For other exams I found the official AWS practice exams useful, but not so much for the DevOps Professional exam, mainly because I thought some questions were ambiguous.
- Practice exam:
I found Jon Bonso’s AWS Certified DevOps Engineer Professional Practice Exams helpful. All the answers are accurate and come with really good explanations of why each answer is correct or incorrect.
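To give an idea of what such a lab looks like, here is a minimal boto3 sketch of the auto-scaling lifecycle-hook example from the list above. All names and ARNs are placeholders I made up for illustration.

```python
# Sketch of the lifecycle-hook lab mentioned above: register a hook on an
# auto-scaling group so terminating instances pause and a notification is sent
# to SNS before they are removed. Group, hook, topic and role names are made up.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="web-asg",                                   # assumed ASG
    LifecycleHookName="drain-before-terminate",                       # assumed name
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    NotificationTargetARN="arn:aws:sns:eu-west-1:123456789012:asg-events",
    RoleARN="arn:aws:iam::123456789012:role/ASGNotificationRole",
    HeartbeatTimeout=300,        # seconds to keep the instance in Terminating:Wait
    DefaultResult="CONTINUE",    # proceed with termination if no response arrives
)
```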
What Did the Exam Ask?
All questions combine a couple of different AWS services and test your knowledge of how services can be integrated — e.g. in a given scenario you needed to know whether you can use out-of-the-box remediation actions in AWS Config or whether you need to intercept the events with CloudWatch Events rules and trigger a Step Function. If there is one service that comes up more than any other, it is CloudWatch — i.e. rules, alarms, custom metrics and targets.
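To make that pattern more concrete, here is a hedged boto3 sketch of "intercept an API call recorded by CloudTrail and trigger a Step Function". The rule name, ARNs and the matched event are hypothetical placeholders, not values from the exam.

```python
# Sketch: intercept an API call recorded by CloudTrail with a CloudWatch Events
# rule and hand the event to a Step Functions state machine for remediation.
import json
import boto3

events = boto3.client("events")

rule_name = "intercept-s3-acl-changes"           # hypothetical rule name
state_machine_arn = "arn:aws:states:eu-west-1:123456789012:stateMachine:Remediate"
events_role_arn = "arn:aws:iam::123456789012:role/AllowEventsToStartStepFunction"

# Match write API calls delivered via CloudTrail (read-only calls such as
# List*, Get* and Describe* are not delivered through this integration).
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventSource": ["s3.amazonaws.com"], "eventName": ["PutBucketAcl"]},
}

events.put_rule(Name=rule_name, EventPattern=json.dumps(pattern), State="ENABLED")
events.put_targets(
    Rule=rule_name,
    Targets=[{"Id": "1", "Arn": state_machine_arn, "RoleArn": events_role_arn}],
)
```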
Most questions also touch on several of the exam domains. However, I will use the domain structure to make it easier for you to navigate through my exam observations. Let’s start with domain number one:
Domain 1 — SDLC Automation
You will need hands-on experience to pass this domain. I got questions around cross-region deployments and you have to know which resources need to be in the same region. For example the SNS topic for CodeCommit notifications needs to be in the same region as your repository.
- CodeCommit — you need to know how Git branching works; this could, for example, be used for different feature branches or for different environments. Also remember that permissions can be restricted at the IAM group or user level, not at the Git project level. Notifications provide more event options than triggers, but only support SNS, not Lambda.
- CodeBuild — know how CodeBuild works and how to push a Docker image to either ECR or Docker Hub. You always need to push the image to your repository during the build before you deploy it later on.
- CodeDeploy — the exam asked about the structure of the appspec.yml file and how to use it for Lambda, EC2 (on-premises and in AWS), ECS (classic and Fargate), auto scaling, and for all deployment types, e.g. in-place and blue-green. Don’t forget that CodeDeploy always deploys from S3 no matter what the source is (see the sketch after this list).
- CodePipeline — understand the best practices around Git integration, what you can actually trigger out of CodePipeline, and how to handle the case where you need two different versions of the code in two different environments.
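As a reminder that CodeDeploy pulls its revision bundle from S3 even when the pipeline source is a Git repository, here is a hedged boto3 sketch that starts a deployment from an S3 object. Application, deployment group, bucket and key names are made up.

```python
# Sketch: start a CodeDeploy deployment from a revision bundle stored in S3.
import boto3

codedeploy = boto3.client("codedeploy")

response = codedeploy.create_deployment(
    applicationName="my-web-app",                  # assumed application name
    deploymentGroupName="production",              # assumed deployment group
    revision={
        "revisionType": "S3",                      # CodeDeploy pulls the bundle from S3
        "s3Location": {
            "bucket": "my-artifact-bucket",
            "key": "releases/my-web-app-1.2.3.zip",
            "bundleType": "zip",
        },
    },
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    description="Deploy build 1.2.3",
)
print(response["deploymentId"])
```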
Domain 2 — Configuration Management & Infrastructure as Code
This domain also includes lifecycles, which I cover in domain 6.
- CloudFormation — there were not many questions, and you needed to understand the use cases for referencing SSM Parameter Store and for mappings.
- Lambda & StepFunctions — be aware of the 15 minute Lambda timeout, hence StepFunctions are the right fit when you need to wait for some tests as part of your pipelines.
- API Gateway — there were questions about how to achieve canary and blue-green deployments for Lambda (see the sketch at the end of this section).
- OpsWorks — came up only rarely in the exam; you needed to know how to separate your deployment into layers and how to use standard recipes.
- Elastic Beanstalk — the exam asked for deployment models and the diagram below visualises the different deployment methods in Elastic Beanstalk:
Source: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html
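For the API Gateway and Lambda point above, here is a minimal sketch of a canary via a weighted Lambda alias, assuming the alias is what API Gateway points at. The function name, alias and weights are assumptions; in a pipeline, CodeDeploy can automate the same traffic shift.

```python
# Sketch: a simple canary for Lambda by shifting a small share of alias traffic
# to a newly published version.
import boto3

lam = boto3.client("lambda")

function_name = "orders-service"    # hypothetical function
alias_name = "live"                 # alias assumed to be the API Gateway integration target

# Publish the code currently in $LATEST as an immutable version.
new_version = lam.publish_version(FunctionName=function_name)["Version"]

# Send 10% of the alias traffic to the new version (canary),
# while the alias' main version keeps serving the remaining 90%.
lam.update_alias(
    FunctionName=function_name,
    Name=alias_name,
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.1}},
)
```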
Domain 3 — Monitoring and Logging
CloudWatch is your absolute bread & butter in this domain and you will need hands-on experience to understand all your integration options.
- CloudTrail — understand how to enable log file integrity validation (a small sketch follows at the end of this list).
- Kinesis — Kinesis Data Streams and Kinesis Data Firehose came up in a couple of the questions. Understand consumers and producers very well. If real-time processing is important, then Data Streams is your choice, whereas Kinesis Data Firehose is a fully managed service and reduces operational complexity.
- CloudWatch Events — you can use it together with CloudTrail and intercept AWS API calls via CloudTrail. This does not support read-only APIs such as List, Get and Describe. You can use EC2 actions as a target, for example to create EC2 snapshots or to reboot, stop and terminate instances. Don’t forget that CodeDeploy cannot be a target.
- Elasticsearch — the exam refers to Elasticsearch as “ES”, and it was a potential option for indexing your log files.
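For the CloudTrail point above, here is a minimal boto3 sketch that enables log file integrity validation on an existing trail; the trail name is made up.

```python
# Sketch: turn on CloudTrail log file integrity validation for an existing trail.
# Once enabled, CloudTrail delivers signed digest files that can be checked
# with `aws cloudtrail validate-logs`.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.update_trail(
    Name="management-events-trail",     # assumed trail name
    EnableLogFileValidation=True,       # start producing signed digest files
)
```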
Domain 4 — Policies and Standards Automation
This domain had many questions on how to automate your patching or automatically remediate configuration drift.
- Config — you need to be familiar with some of the managed rules, such as “s3-bucket-public-read-prohibited”, and how to trigger remediation actions (a sketch follows at the end of this list). There was also a multi-account Config Aggregator scenario.
- SSM — there were questions about how to set it up for a hybrid environment. For patch management, don’t forget that the “Patch Group” tag is case sensitive and that you can separate your patch groups by a variety of tags, for example per environment or operating system. SSM Automation, as well as Parameter Store for storing secrets and AMI IDs, were covered as well.
- Inspector — you needed to know the use cases for Inspector — e.g. EC2 security assessments as part of your CI/CD pipeline or for regular checks in your environments.
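Here is a hedged boto3 sketch of the Config remediation pattern mentioned above: deploy the managed rule and attach an automatic remediation backed by an AWS-owned SSM Automation document. The role ARN is a placeholder, and the exact document parameters should be checked against the AWS documentation.

```python
# Sketch: deploy the managed Config rule for public-read S3 buckets and attach
# an automatic remediation action backed by an SSM Automation document.
import boto3

config = boto3.client("config")

rule_name = "s3-bucket-public-read-prohibited"
remediation_role = "arn:aws:iam::123456789012:role/ConfigRemediationRole"  # assumed

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": rule_name,
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"},
    }
)

config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": rule_name,
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "AWS-DisableS3BucketPublicReadWrite",
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            "AutomationAssumeRole": {"StaticValue": {"Values": [remediation_role]}},
            "S3BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
        },
    }]
)
```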
Domain 5 — Incident and Event Response
This domain is tightly coupled to domain 6. For example, in the case of an RDS outage you need to automatically detect the incident (domain 5) and then fail over to another region (domain 6).
- CloudTrail — there were questions about setting up custom metrics and using them for alerts (a metric-filter sketch follows at the end of this list).
- Logging — know the difference between access logs, system logs and application logs. You need to know how to ship log files from EC2 instances or ECS containers (AWS log driver) in real time to a store of your choice and come up with an analytics solution, such as QuickSight, or a search capability such as Elasticsearch or Athena.
- S3 — a cross-region S3 bucket replication scenario asked how to set up the permissions. Remember, when the source and destination buckets are owned by different accounts, you can change the ownership of the replica to the AWS account that owns the destination bucket by adding the AccessControlTranslation element.
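The custom-metric point above typically means turning CloudTrail events delivered to CloudWatch Logs into a metric and alarming on it. Below is a hedged boto3 sketch; the log group, SNS topic and filter pattern (root account usage) are placeholders.

```python
# Sketch: create a metric filter on a CloudWatch Logs group fed by CloudTrail
# and alarm on the resulting custom metric.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

log_group = "CloudTrail/management-events"        # assumed log group fed by the trail
sns_topic = "arn:aws:sns:eu-west-1:123456789012:security-alerts"

logs.put_metric_filter(
    logGroupName=log_group,
    filterName="RootAccountUsage",
    filterPattern='{ $.userIdentity.type = "Root" }',
    metricTransformations=[{
        "metricName": "RootAccountUsageCount",
        "metricNamespace": "Custom/Security",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="root-account-usage",
    Namespace="Custom/Security",
    MetricName="RootAccountUsageCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[sns_topic],
)
```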
Domain 6 — High Availability, Fault Tolerance and Disaster Recovery
As mentioned earlier — this domain goes hand in hand with the previous domain:
- ASG (auto-scaling group) — auto-scaling lifecycle hooks were important in the exam — for example how to take an EC2 instance out of an ASG for troubleshooting and re-attach it later on.
- Databases — there were questions around global deployments for DynamoDB and Aurora, as well as keeping data in one region for regulatory reasons. Remember that Amazon RDS uses SNS to provide notifications when an Amazon RDS event occurs, and you can subscribe a Lambda function to the SNS topic to promote the read replica to primary (see the sketch at the end of this section).
- Lifecycles — you will need to know the lifecycles for Lambda, EC2, ECS, ASG and CodeDeploy in depth. This is crucial to pass the exam. The table below shows the lifecycle event hook availability for CodeDeploy:
Source: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
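For the RDS failover point above, here is a hedged sketch of a Lambda handler subscribed to the RDS event SNS topic. The replica identifier and the failure check are simplified assumptions; a real handler should inspect the actual RDS event payload before acting.

```python
# Sketch of a Lambda handler subscribed to the RDS event SNS topic. On a failure
# event it promotes a pre-defined read replica to a standalone primary.
import json
import os

import boto3

rds = boto3.client("rds")

REPLICA_ID = os.environ.get("REPLICA_ID", "orders-db-replica")  # assumed replica name


def handler(event, context):
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        # Very naive check: only act when the event message mentions a failure.
        if "failure" in str(message).lower():
            rds.promote_read_replica(DBInstanceIdentifier=REPLICA_ID)
            return {"promoted": REPLICA_ID}
    return {"promoted": None}
```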
Summary
The DevOps Professional exam is challenging and requires a lot of in-depth knowledge about integrating AWS services and automating literally everything. There are some trade-off questions where you need to balance cost against simplicity, or against performance degradation during deployments. Take your time when preparing to ensure you get all the hands-on experience that you will need for the exam.