Creating a Full CI/CD Pipeline on AWS with Jenkins, Slack, and GitHub: Part 6 — The Jenkinsfile Pipeline
In this awesome final part of our journey, we’re getting down to business! We’re going to bring our CI/CD pipeline to life using a Jenkinsfile. As mentioned in the first part of this guide, this pipeline has different components that kick into action whenever there’s a push on the GitHub repository. We’ll also set up a webhook to let GitHub tell Jenkins to start a new build whenever there’s a push. 🚀🛠️👋
Part 1 (here) → We’ll kick things off by setting up our project. We’ll download a Web App to test our infrastructure and pipeline. We’ll also create and test some Dockerfiles for the project and upload it all to GitHub.
Part 2 (here) → We’ll get Slack in on the action. We’ll create a Bot for Jenkins to keep us posted on how the pipeline’s doing.
Part 3 (here) → It’s time to build the AWS Infrastructure with Terraform. We’ll whip up some EC2 instances, set up SSH keys, create the network infrastructure, and lay the foundation for IAM roles.
Part 4 (here) → We’re not done with AWS yet. In this step, we’ll make S3 buckets and ECR repositories, and finish defining the IAM roles with the right policies.
Part 5 (here) → We’ll fine-tune our Jenkins and Web App instances by making sure the user data is just right.
Part 6 (Right now) → We’ll put the icing on the cake by implementing the pipeline in a Jenkinsfile. We’ll run the pipeline and see everything come together smoothly. Then, we’ll wrap things up with some final thoughts.
Let’s get started!
Open a new web page and navigate to:
http://<your_public_dns_name>:8080
After hitting enter, you should be prompted with the Jenkins Login:
Enter your credentials to log in to the Jenkins dashboard.
After logging in, go to Manage Jenkins -> Manage Plugins -> Available plugins, then search for and install the Multibranch Scan Webhook Trigger plugin.
After the plugin is installed, go to Dashboard -> CI-CD Pipeline -> Configure, check the Scan by webhook option, and provide a token name. Save and apply.
Now one thing has to be disabled: host key verification. Go to 'Manage Jenkins' -> 'Configure Global Security' -> 'Git Host Key Verification Configuration' and configure host key verification. For ease, select No verification, then save and apply:
Go to the GitHub nodejs-web-app repository -> Settings -> Webhooks and click on Add webhook.
In the Payload URL field, paste the following URL:
JENKINS_URL/multibranch-webhook-trigger/invoke?token=myToken
Replace JENKINS_URL with your Jenkins IP address, so that it looks like the following:
Example -> http://34.195.65.40:8080/multibranch-webhook-trigger/invoke?token=myToken
Select the content type application/json and click on Add webhook.
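Before relying on GitHub to trigger it, you can smoke-test the endpoint yourself from any terminal with curl -X POST "http://<jenkins_ip>:8080/multibranch-webhook-trigger/invoke?token=myToken" (using your own IP and token); the plugin should answer with a short JSON confirmation that a scan was scheduled.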
Also, make sure that your GitHub account is set up for SSH access and that the key you referenced in Terraform is configured on the GitHub account.
Let’s first lay down the skeleton of our pipeline. Create a file named Jenkinsfile in the root directory of nodejs-web-app and write the following:
pipeline {
agent any
stages {
stage("Set Up") {
steps {
echo "Logging into the private AWS Elastic Container Registry"
script {
sh """
echo "Hello World"
"""
}
}
}
stage("Build Test Image") {
steps {
echo 'Start building the project docker image for tests'
script {
sh """
echo "Hello World"
"""
}
}
}
stage("Run Unit Tests") {
steps {
echo 'Run unit tests in the docker image'
script {
sh """
echo "Hello World"
"""
}
}
}
stage("Run Integration Tests") {
steps {
echo 'Run Integration tests in the docker image'
script {
sh """
echo "Hello World"
"""
}
}
}
stage("Build Staging Image") {
steps {
echo 'Build the staging image for more tests'
script {
sh """
echo "Hello World"
"""
}
}
}
stage("Run Load Balancing tests / Security Checks") {
steps {
echo 'Run load balancing tests and security checks'
script {
sh """
echo "Hello World"
"""
}
}
}
stage("Deploy to Fixed Server") {
steps {
echo 'Deploy release to production'
script {
sh """
echo "Hello World"
"""
}
}
}
stage("Clean Up") {
steps {
echo 'Clean up local docker images'
script {
sh """
echo "Hello World"
"""
}
}
}
}
}
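Optionally, before pushing, you can run the skeleton through Jenkins’ built-in declarative linter for a quick syntax check (assuming the endpoint is reachable with your security setup): curl -X POST -F "jenkinsfile=<Jenkinsfile" http://<jenkins_ip>:8080/pipeline-model-converter/validate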
Finally, push to GitHub and see the magic:
After the push, you can see the pipeline in Blue Ocean:
If this does not work, check the Scan Multibranch Pipeline Log:
If it gives an access error, recreate the credentials and configure Jenkins to use them.
With all of that in order, let’s now build out the real Jenkins pipeline and see the real magic:
Stage 0 — Environment Variables
Before starting to code the different stages, we will need some variables to keep our builds consistent and keep track of the Docker images that we are going to build. Before the start of the pipeline, in the Jenkinsfile, we are going to define the following variables:
Variables in a Jenkinsfile can be defined using the def keyword. Such variables should be defined before the pipeline block starts. Once defined, a variable can be referenced from the Jenkins declarative pipeline using the ${...} syntax.
def testImage
def stagingImage
def productionImage
def REPOSITORY
def REPOSITORY_TEST
def REPOSITORY_STAGING
def GIT_COMMIT_HASH
def INSTANCE_ID
def ACCOUNT_REGISTRY_PREFIX
def S3_LOGS
def DATE_NOW
def SLACK_TOKEN
def CHANNEL_ID = "C0554FYNSA3"
Stage 1 — Set Up
First of all, we are going to set the environment variables. The syntax is the following:
VARIABLE = sh (script: "echo 'hello'", returnStdout: true)
What this does is store hello in the variable VARIABLE. The returnStdout: true option is necessary to tell Jenkins to capture the command’s stdout and return it; note that the captured string includes a trailing newline, which is why each variable below is trimmed right after being set.
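A minimal sketch (the variable name GREETING is just for illustration):
// returnStdout captures stdout including the trailing newline,
// so the result is usually trimmed immediately.
def GREETING = sh(script: "echo 'hello'", returnStdout: true).trim()
echo "Captured: ${GREETING}" // prints: Captured: hello
With that in hand, the full Set Up script becomes: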
script {
// Set environment variables
GIT_COMMIT_HASH = sh (script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
REPOSITORY = sh (script: "cat \$HOME/opt/repository_url", returnStdout: true)
REPOSITORY_TEST = sh (script: "cat \$HOME/opt/repository_test_url", returnStdout: true)
REPOSITORY_STAGING = sh (script: "cat \$HOME/opt/repository_staging_url", returnStdout: true)
INSTANCE_ID = sh (script: "cat \$HOME/opt/instance_id", returnStdout: true)
S3_LOGS = sh (script: "cat \$HOME/opt/bucket_name", returnStdout: true)
DATE_NOW = sh (script: "date +%Y%m%d", returnStdout: true)
// To parse and extract the Slack Token from the JSON response of AWS
SLACK_TOKEN = sh (script: "aws secretsmanager get-secret-value --secret-id nodejs-web-app6 --region us-east-1 | python -c \"import sys;import json;print(json.loads(json.loads(sys.stdin.read())['SecretString'])['slackToken'])\" ", returnStdout: true)
REPOSITORY = REPOSITORY.trim()
REPOSITORY_TEST = REPOSITORY_TEST.trim()
REPOSITORY_STAGING = REPOSITORY_STAGING.trim()
S3_LOGS = S3_LOGS.trim()
DATE_NOW = DATE_NOW.trim()
SLACK_TOKEN = SLACK_TOKEN.trim()
ACCOUNT_REGISTRY_PREFIX = (REPOSITORY.split("/"))[0]
// Log into ECR
sh """
/bin/sh -e -c 'echo \$(aws ecr get-login-password --region us-east-1) | docker login -u AWS --password-stdin $ACCOUNT_REGISTRY_PREFIX'
"""
}
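As a side note, if jq happens to be installed on the Jenkins instance, the same token extraction can be done without the inline Python (a sketch under that assumption, using the same nodejs-web-app6 secret id):
// Sketch, assuming jq is available on the Jenkins instance
SLACK_TOKEN = sh (script: "aws secretsmanager get-secret-value --secret-id nodejs-web-app6 --region us-east-1 --query SecretString --output text | jq -r .slackToken", returnStdout: true).trim()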
Stage 2 — Build Test Image
In this stage, we just build the Docker test image and push it to the remote AWS ECR repository:
stage("Build Test Image") {
steps {
echo 'Start building the project docker image for tests'
script {
testImage = docker.build("$REPOSITORY_TEST:$GIT_COMMIT_HASH", "-f ./Dockerfile.test .")
testImage.push()
}
}
}
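A note on the API: docker.build comes from the Jenkins Docker Pipeline plugin and returns an image object. Calling push() with no arguments pushes the tag given at build time, and push() also accepts a tag name if you ever want to publish an additional tag (a sketch, not required by this pipeline):
script {
    // push() with no argument pushes $REPOSITORY_TEST:$GIT_COMMIT_HASH;
    // push('latest') would additionally push the same image as :latest
    testImage.push('latest')
}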
Stage 3 — Run Unit Tests
We need to spin up a Docker container from the image created before and run the mocha unit tests inside it. Since we would also like to save the test logs, we mount a volume in the Docker container so that a portion of the file system is shared and the reports can be saved to it. The Jenkins Docker plugin allows us to do that with the following syntax:
testImage.inside('-v $WORKSPACE:/output -u root') {
…
}
stage("Run Unit Tests") {
steps {
echo 'Run unit tests in the docker image'
script {
def textMessage
def inError
try {
testImage.inside('-v $WORKSPACE:/output -u root') {
sh """
cd /opt/app/server
npm run test:unit
# Save reports to be uploaded afterwards
if test -d /output/unit ; then
rm -R /output/unit
fi
mv mochawesome-report /output/unit
"""
}
// Fill the slack message with the success message
textMessage = "Commit hash: $GIT_COMMIT_HASH -- Has passed unit tests"
inError = false
} catch(e) {
echo "$e"
// Fill the slack message with the failure message
textMessage = "Commit hash: $GIT_COMMIT_HASH -- Has failed on unit tests"
inError = true
} finally {
// Upload the unit test results to S3
sh "aws s3 cp ./unit/ s3://$S3_LOGS/$DATE_NOW/$GIT_COMMIT_HASH/unit/ --recursive"
// Send Slack notification with the result of the tests
sh"""
curl --location --request POST 'https://slack.com/api/chat.postMessage' \
--header 'Authorization: Bearer $SLACK_TOKEN' \
--header 'Content-Type: application/json' \
--data-raw '{
"channel": \"$CHANNEL_ID\",
"text": \"$textMessage\"
}'
"""
if(inError) {
// Send an error signal to stop the pipeline
error("Failed unit tests")
}
}
}
}
}
Stage 4 — Run Integration Tests
This part is completely analogous to the previous one; the only change is that we run npm run test:integration instead of the unit tests. We also upload the integration test reports to a different folder in the bucket. This stage will then be:
stage("Run Integration Tests") {
steps {
echo 'Run Integration tests in the docker image'
script {
def textMessage
def inError
try {
testImage.inside('-v $WORKSPACE:/output -u root') {
sh """
cd /opt/app/server
npm run test:integration
# Save reports to be uploaded afterwards
if test -d /output/integration ; then
rm -R /output/integration
fi
mv mochawesome-report /output/integration
"""
}
// Fill the slack message with the success message
textMessage = "Commit hash: $GIT_COMMIT_HASH -- Has passed integration tests"
inError = false
} catch(e) {
echo "$e"
// Fill the slack message with the failure message
textMessage = "Commit hash: $GIT_COMMIT_HASH -- Has failed on integration tests"
inError = true
} finally {
// Upload the integration test results to S3
sh "aws s3 cp ./integration/ s3://$S3_LOGS/$DATE_NOW/$GIT_COMMIT_HASH/integration/ --recursive"
// Send Slack notification with the result of the tests
sh"""
curl --location --request POST 'https://slack.com/api/chat.postMessage' \
--header 'Authorization: Bearer $SLACK_TOKEN' \
--header 'Content-Type: application/json' \
--data-raw '{
"channel": \"$CHANNEL_ID\",
"text": \"$textMessage\"
}'
"""
if(inError) {
// Send an error signal to stop the pipeline
error("Failed integration tests")
}
}
}
}
}
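Since the unit and integration stages share exactly the same Slack-notification boilerplate, one optional refactor (a sketch, not part of the original pipeline) is to pull the curl call into a small helper defined next to the def variables, before the pipeline block. Top-level def variables are not visible inside functions in a Jenkinsfile, so the token and channel are passed in explicitly:
// Hypothetical helper: posts a message to Slack via chat.postMessage
def notifySlack(String token, String channel, String text) {
    sh """
        curl --location --request POST 'https://slack.com/api/chat.postMessage' \\
        --header 'Authorization: Bearer ${token}' \\
        --header 'Content-Type: application/json' \\
        --data-raw '{ "channel": "${channel}", "text": "${text}" }'
    """
}
Inside the finally blocks, the whole curl call then collapses to notifySlack(SLACK_TOKEN, CHANNEL_ID, textMessage).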
Stage 5 — Build Staging Image
At this point, we can build and push the Staging image. This step will be analogous to the building of the test image. It will look like:
script {
stagingImage = docker.build("$REPOSITORY_STAGING:$GIT_COMMIT_HASH")
stagingImage.push()
}
Stage 6 — Run Load Balancing tests / Security checks
This stage can be broken into two parts.
First, we run the load tests inside the container:
stagingImage.inside('-v $WORKSPACE:/output -u root') {
sh """
cd /opt/app/server
npm rm loadtest
npm i loadtest
npm run test:load > /output/load_test.txt
"""
}
After that, we need to run some security checks with Arachni. For this, the Web App must be running, but the Arachni commands need to execute outside the Docker container. To do that, we use the stagingImage.withRun
method, which allows us to execute commands on the host while the Docker container is running (and so the server is actually serving the Web App). To expose the Web App, which runs on port 8000 inside the Docker container, we map that port to the external port 8000 of the Jenkins instance using the syntax -p 8000:8000
. The code will then be:
stagingImage.withRun('-p 8000:8000 -u root'){
sh """
# run arachni to check for common vulnerabilities
\$HOME/opt/arachni-1.5.1-0.5.12/bin/arachni http://\$(hostname):8000 --check=xss,code_injection --report-save-path=simple-web-app.com.afr
# Save report in html (zipped)
\$HOME/opt/arachni-1.5.1-0.5.12/bin/arachni_reporter simple-web-app.com.afr --reporter=html:outfile=arachni_report.html.zip
"""
}
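One caveat: withRun returns control as soon as the container starts, so the Node server may not be listening yet when Arachni fires. A small readiness poll before the scan makes this less flaky (a sketch; the 30 × 2s budget is an arbitrary choice):
stagingImage.withRun('-p 8000:8000 -u root') { c ->
    // Poll until the app answers (or give up after ~60 seconds) before scanning
    sh '''
        for i in $(seq 1 30); do
            curl -sf http://$(hostname):8000 && break
            sleep 2
        done
    '''
    // ... then run the Arachni commands shown above ...
}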
The final complete stage should then look like this:
stage("Run Load Balancing tests / Security Checks") {
steps {
echo 'Run load balancing tests and security checks'
script {
stagingImage.inside('-v $WORKSPACE:/output -u root') {
sh """
cd /opt/app/server
npm rm loadtest
npm i loadtest
npm run test:load > /output/load_test.txt
"""
}
// Upload the load test results to S3
sh "aws s3 cp ./load_test.txt s3://$S3_LOGS/$DATE_NOW/$GIT_COMMIT_HASH/"
stagingImage.withRun('-p 8000:8000 -u root'){
sh """
# run arachni to check for common vulnerabilities
\$HOME/opt/arachni-1.5.1-0.5.12/bin/arachni http://\$(hostname):8000 --check=xss,code_injection --report-save-path=simple-web-app.com.afr
# Save report in html (zipped)
\$HOME/opt/arachni-1.5.1-0.5.12/bin/arachni_reporter simple-web-app.com.afr --reporter=html:outfile=arachni_report.html.zip
"""
}
// Upload the Arachni tests' results to S3
sh "aws s3 cp ./arachni_report.html.zip s3://$S3_LOGS/$DATE_NOW/$GIT_COMMIT_HASH/"
// Inform via slack that the Load Balancing and Security checks are completed
sh"""
curl --location --request POST 'https://slack.com/api/chat.postMessage' \
--header 'Authorization: Bearer $SLACK_TOKEN' \
--header 'Content-Type: application/json' \
--data-raw '{
"channel": \"$CHANNEL_ID\",
"text": "Commit hash: $GIT_COMMIT_HASH -- Load Balancing tests and security checks have finished"
}'
"""
}
}
}
Stage 7 — Deploy to Fixed Server
This is absolutely not the ideal way to make a deployment. During this ‘deployment,’ the server will be down and this is not optimal for a Web App (it may be okay for other kinds of services). What we are doing in this stage is:
- Build the production image;
- Push the production image to the AWS ECR;
- Reboot the EC2 instance serving the Web App.
stage("Deploy to Fixed Server") {
steps {
echo 'Deploy release to production'
script {
productionImage = docker.build("$REPOSITORY:release")
productionImage.push()
sh """
aws ec2 reboot-instances --region us-east-1 --instance-ids $INSTANCE_ID
"""
}
}
}
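Since reboot-instances returns immediately, this stage can report success while the instance is still coming back up. One possible refinement (a sketch using the standard AWS CLI waiter) is to block until the instance passes its status checks before moving on:
script {
    // Poll until the rebooted instance passes its EC2 status checks
    sh "aws ec2 wait instance-status-ok --region us-east-1 --instance-ids $INSTANCE_ID"
}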
Stage 8 — Clean Up
At the end of the pipeline, we should do some housekeeping. In particular, we need to get rid of old images so they don’t stack up and occupy disk space:
stage("Clean Up") {
steps {
echo 'Clean up local docker images'
script {
sh """
# Change the :latest with the current ones
docker tag $REPOSITORY_TEST:$GIT_COMMIT_HASH $REPOSITORY_TEST:latest
docker tag $REPOSITORY_STAGING:$GIT_COMMIT_HASH $REPOSITORY_STAGING:latest
docker tag $REPOSITORY:release $REPOSITORY:latest
# Remove the images
docker image rm $REPOSITORY_TEST:$GIT_COMMIT_HASH
docker image rm $REPOSITORY_STAGING:$GIT_COMMIT_HASH
docker image rm $REPOSITORY:release
# Remove dangling images
docker image prune -f
"""
}
echo 'Clean up config.json file with ECR Docker Credentials'
script {
sh """
rm $HOME/.docker/config.json
"""
}
}
}
First, we tag our images with :latest and then we remove the old ones. This leaves only the latest images available. The previous images that were tagged :latest become dangling images with TAG <none>, and to get rid of these we use docker image prune -f.
We also delete the config.json file in the .docker folder, which stores the Docker credentials. This is just a security precaution so that we are not leaving any keys on the instance.
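An equivalent, slightly more surgical option (a sketch) is docker logout, which removes only the ECR entry from config.json instead of deleting the whole file:
script {
    // Removes only the ECR credential entry from ~/.docker/config.json
    sh "docker logout $ACCOUNT_REGISTRY_PREFIX"
}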
The final Jenkinsfile looks like the following:
def testImage
def stagingImage
def productionImage
def REPOSITORY
def REPOSITORY_TEST
def REPOSITORY_STAGING
def GIT_COMMIT_HASH
def INSTANCE_ID
def ACCOUNT_REGISTRY_PREFIX
def S3_LOGS
def DATE_NOW
def SLACK_TOKEN
def CHANNEL_ID = "#ci-cd-pipeline"
pipeline {
agent any
stages {
stage("Set Up") {
steps {
echo "Logging into the private AWS Elastic Container Registry"
script {
// Set environment variables
GIT_COMMIT_HASH = sh (script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
REPOSITORY = sh (script: "cat \$HOME/opt/repository_url", returnStdout: true)
REPOSITORY_TEST = sh (script: "cat \$HOME/opt/repository_test_url", returnStdout: true)
REPOSITORY_STAGING = sh (script: "cat \$HOME/opt/repository_staging_url", returnStdout: true)
INSTANCE_ID = sh (script: "cat \$HOME/opt/instance_id", returnStdout: true)
S3_LOGS = sh (script: "cat \$HOME/opt/bucket_name", returnStdout: true)
DATE_NOW = sh (script: "date +%Y%m%d", returnStdout: true)
// To parse and extract the Slack Token from the JSON response of AWS
SLACK_TOKEN = sh (script: "aws secretsmanager get-secret-value --secret-id nodejs-web-app7 --region us-east-1 | python -c \"import sys;import json;print(json.loads(json.loads(sys.stdin.read())['SecretString'])['slackToken'])\" ", returnStdout: true)
REPOSITORY = REPOSITORY.trim()
REPOSITORY_TEST = REPOSITORY_TEST.trim()
REPOSITORY_STAGING = REPOSITORY_STAGING.trim()
S3_LOGS = S3_LOGS.trim()
DATE_NOW = DATE_NOW.trim()
SLACK_TOKEN = SLACK_TOKEN.trim()
ACCOUNT_REGISTRY_PREFIX = (REPOSITORY.split("/"))[0]
// Log into ECR
sh """
/bin/sh -e -c 'echo \$(aws ecr get-login-password --region us-east-1) | docker login -u AWS --password-stdin $ACCOUNT_REGISTRY_PREFIX'
"""
}
}
}
stage("Build Test Image") {
steps {
echo 'Start building the project docker image for tests'
script {
testImage = docker.build("$REPOSITORY_TEST:$GIT_COMMIT_HASH", "-f ./Dockerfile.test .")
testImage.push()
}
}
}
stage("Run Unit Tests") {
steps {
echo 'Run unit tests in the docker image'
script {
def textMessage
def inError
try {
testImage.inside('-v $WORKSPACE:/output -u root') {
sh """
cd /opt/app/server
npm run test:unit
# Save reports to be uploaded afterwards
if test -d /output/unit ; then
rm -R /output/unit
fi
mv mochawesome-report /output/unit
"""
}
// Fill the slack message with the success message
textMessage = "Commit hash: $GIT_COMMIT_HASH -- Has passed unit tests"
inError = false
} catch(e) {
echo "$e"
// Fill the slack message with the failure message
textMessage = "Commit hash: $GIT_COMMIT_HASH -- Has failed on unit tests"
inError = true
} finally {
// Upload the unit test results to S3
sh "aws s3 cp ./unit/ s3://$S3_LOGS/$DATE_NOW/$GIT_COMMIT_HASH/unit/ --recursive"
// Send Slack notification with the result of the tests
sh"""
curl https://slack.com/api/chat.postMessage -X POST -d "channel=$CHANNEL_ID" -d "text=$textMessage" -d "token=xoxb-5182477046070-5189093730674-ZlFDWcMwvJgkdueJJoVrdccq"
"""
if(inError) {
// Send an error signal to stop the pipeline
error("Failed unit tests")
}
}
}
}
}
stage("Run Integration Tests") {
steps {
echo 'Run Integration tests in the docker image'
script {
def textMessage
def inError
try {
testImage.inside('-v $WORKSPACE:/output -u root') {
sh """
cd /opt/app/server
npm run test:integration
# Save reports to be uploaded afterwards
if test -d /output/integration ; then
rm -R /output/integration
fi
mv mochawesome-report /output/integration
"""
}
// Fill the slack message with the success message
textMessage = "Commit hash: $GIT_COMMIT_HASH -- Has passed integration tests"
inError = false
} catch(e) {
echo "$e"
// Fill the slack message with the failure message
textMessage = "Commit hash: $GIT_COMMIT_HASH -- Has failed on integration tests"
inError = true
} finally {
// Upload the integration test results to S3
sh "aws s3 cp ./integration/ s3://$S3_LOGS/$DATE_NOW/$GIT_COMMIT_HASH/integration/ --recursive"
// Send Slack notification with the result of the tests
sh"""
curl --location --request POST 'https://slack.com/api/chat.postMessage' \
--header 'Authorization: Bearer $SLACK_TOKEN' \
--header 'Content-Type: application/json' \
--data-raw '{
"channel": \"$CHANNEL_ID\",
"text": \"$textMessage\"
}'
"""
if(inError) {
// Send an error signal to stop the pipeline
error("Failed integration tests")
}
}
}
}
}
stage("Build Staging Image") {
steps {
echo 'Build the staging image for more tests'
script {
stagingImage = docker.build("$REPOSITORY_STAGING:$GIT_COMMIT_HASH")
stagingImage.push()
}
}
}
stage("Run Load Balancing tests / Security Checks") {
steps {
echo 'Run load balancing tests and security checks'
script {
stagingImage.inside('-v $WORKSPACE:/output -u root') {
sh """
cd /opt/app/server
npm rm loadtest
npm i loadtest
npm run test:load > /output/load_test.txt
"""
}
// Upload the load test results to S3
sh "aws s3 cp ./load_test.txt s3://$S3_LOGS/$DATE_NOW/$GIT_COMMIT_HASH/"
// stagingImage.withRun('-u root'){
// sh """
// # run arachni to check for common vulnerabilities
// \$HOME/opt/arachni-1.5.1-0.5.12/bin/arachni http://\$(hostname):8000 --check=xss,code_injection --report-save-path=simple-web-app.com.afr
// # Save report in html (zipped)
// \$HOME/opt/arachni-1.5.1-0.5.12/bin/arachni_reporter simple-web-app.com.afr --reporter=html:outfile=arachni_report.html.zip
// """
// }
// // Upload the Arachni tests' results to S3
// sh "aws s3 cp ./arachni_report.html.zip s3://$S3_LOGS/$DATE_NOW/$GIT_COMMIT_HASH/"
// Inform via slack that the Load Balancing and Security checks are completed
sh"""
curl --location --request POST 'https://slack.com/api/chat.postMessage' \
--header 'Authorization: Bearer $SLACK_TOKEN' \
--header 'Content-Type: application/json' \
--data-raw '{
"channel": \"$CHANNEL_ID\",
"text": "Commit hash: $GIT_COMMIT_HASH -- Load Balancing tests and security checks have finished"
}'
"""
}
}
}
stage("Deploy to Fixed Server") {
steps {
echo 'Deploy release to production'
script {
productionImage = docker.build("$REPOSITORY:release")
productionImage.push()
sh """
aws ec2 reboot-instances --region us-east-1 --instance-ids $INSTANCE_ID
curl --location --request POST 'https://slack.com/api/chat.postMessage' \
--header 'Authorization: Bearer $SLACK_TOKEN' \
--header 'Content-Type: application/json' \
--data-raw '{
"channel": \"$CHANNEL_ID\",
"text": "Deployed successfully"
}'
"""
}
}
}
stage("Clean Up") {
steps {
echo 'Clean up local docker images'
script {
sh """
# Change the :latest with the current ones
docker tag $REPOSITORY_TEST:$GIT_COMMIT_HASH $REPOSITORY_TEST:latest
docker tag $REPOSITORY_STAGING:$GIT_COMMIT_HASH $REPOSITORY_STAGING:latest
docker tag $REPOSITORY:release $REPOSITORY:latest
# Remove the images
docker image rm $REPOSITORY_TEST:$GIT_COMMIT_HASH
docker image rm $REPOSITORY_STAGING:$GIT_COMMIT_HASH
docker image rm $REPOSITORY:release
# Remove dangling images
docker image prune -f
"""
}
echo 'Clean up config.json file with ECR Docker Credentials'
script {
sh """
rm $HOME/.docker/config.json
"""
}
}
}
}
}
Wonderful! We should have everything set up, and we can now try the completed pipeline for the first time! We just have to add, commit, and push the changes. Enter the Jenkins instance at http://<jenkins_instance_dns>:8080, log in, and click on Open Blue Ocean (as at the beginning of this part). Enter the CI-CD pipeline, and then, in the terminal, let’s push the changes to GitHub:
You can check the web application by visiting your web-app server’s URL:
COOL!
Enhancements:
We’ve got some nifty ideas to make our project even better. Take a shot at these to test your skills:
1. Clean Up After a Fail: When things take a downturn in the pipeline, we need a clean-up crew to remove those unused Docker images that can pile up.
2. Clean Up Web App Instance Docker Images: After a reboot, let’s ensure the old images don’t clutter our space.
3. Ditch the Hard-Coded Secret Name: Let’s make things flexible by defining the secret’s name as a variable. No more hard coding.
4. ECR Repositories Clean Up: Are old images lying around? Time to tidy up. You can store them in an S3 Glacier, a cost-effective option for artifacts.
5. S3 Bucket Clean Up: Logs in the S3 bucket should also have an expiry date. Move old logs to an S3 Glacier.
6. Send Reports to Slack: Get those unit, integration, and load balancing test results to your developers quickly. Set up your bot to upload reports to Slack.
7. Securing Slack Bearer Token: Let’s keep secrets secret. Wrap that Slack Bearer Token command to avoid it showing up in logs.
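For enhancement 7, here is a sketch using the Jenkins Credentials Binding plugin: store the token as a Secret text credential (the id slack-token below is hypothetical) and Jenkins will mask its value in the console log:
withCredentials([string(credentialsId: 'slack-token', variable: 'SLACK_TOKEN')]) {
    // $SLACK_TOKEN is resolved by the shell and shown as **** in the build log
    sh '''
        curl --location --request POST 'https://slack.com/api/chat.postMessage' \
        --header "Authorization: Bearer $SLACK_TOKEN" \
        --header 'Content-Type: application/json' \
        --data-raw '{ "channel": "#ci-cd-pipeline", "text": "token stays masked" }'
    '''
}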
Final Considerations:
Our project isn’t a complete web app setup, but it’s a solid introduction to these technologies. If you’re up for more, start with the above tweaks. Then, dive into enhancing the infrastructure with load balancers, auto-scaling groups, Blue/Green deployments, CloudWatch, branch handling, Jira integration, and more.
If you made it through this whole tutorial, hats off to you! It’s been a journey filled with tech and techniques. You can find the completed project here:
And that’s a wrap! 🚀🛠️😎
If you’ve got feedback, spotted an error, or have any ideas to make this tutorial even better, please, pretty please, share your thoughts with me!
Thanks for sticking around, and catch you later! 😊👋