In this project we’ll set up a continuous integration pipeline on AWS. The first benefits we get are a short MTTR (mean time to repair), agility, and no human intervention in the process.
Any fault can be isolated very quickly, and because we are building on managed cloud services, there is virtually no operational overhead.
Okay, so let’s look at the services we are going to use to set up this continuous integration pipeline.
CodeCommit : Version control system (VCS)
CodeArtifact : Maven repository for dependencies
CodeBuild : Build service from AWS
CodeDeploy : Artifact deployment service
SonarCloud : Hosted code analysis platform
Checkstyle : Code style analysis
CodePipeline : Service to integrate all the jobs together
Before we proceed, let’s examine the architecture of the continuous integration pipeline.
Initially, our developers make code changes in their preferred Integrated Development Environment (IDE), such as IntelliJ or any other of their choice. These changes live in a local Git repository that is linked to a remote repository on CodeCommit, AWS’s equivalent of GitHub. Whenever a commit is pushed, it triggers the pipeline. As soon as the new commit is detected, the first CodeBuild job starts: it runs the Sonar Scanner for code analysis and executes Checkstyle, downloading any required dependencies from the CodeArtifact repository. The job uploads its reports to SonarCloud and retrieves the results, which in turn triggers a second CodeBuild job. This job builds the artifact, assigns it a version, and stores it in an S3 bucket. Should the Maven build need any dependencies, they are again downloaded from CodeArtifact. In summary, this is a relatively straightforward architecture.
Flow of Execution
I’m currently in my management console, and my first step is to navigate to the CodeCommit service. From there, I’ll open the CodeCommit repository page and select the option to create a new repository, assigning it a name and, optionally, a description and a few tags for reference. When it comes to accessing this repository, there are two available methods: HTTPS and SSH. It’s important to note that HTTPS transmits your data with a username and password, which carries a certain level of security risk, so we’ll opt for SSH to enhance security and minimize potential password exposure.
To accomplish that, we also need an IAM user, so I’ll navigate to the IAM service. The user we’re about to create will be granted access to a specific service. Select “Add User” to create a new user and assign a username. Choose “Programmatic access” for now; note, though, that we won’t be using access keys. Instead, we’ll be uploading SSH credentials and attaching a policy. If desired, you can directly grant full access to CodeCommit.
To enable SSH keys for Code Commit, we need to upload our SSH public key. To get started, open Git Bash or your preferred terminal to generate the necessary keys. Use the command “ssh-keygen” to initiate the key generation process. You can keep the keys at their default location but assign them a different name if necessary. After the keys have been successfully created, navigate to your directory, and you will be able to locate the private key.
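As a minimal sketch, the key generation can be done non-interactively (the file name “coderepo_rsa” here is just an example, and the empty passphrase is for brevity; set a real one in practice):

```shell
# Generate a 4096-bit RSA key pair for CodeCommit.
# -f sets the file name, -N "" means an empty passphrase (use a real one in practice).
mkdir -p ~/.ssh
[ -f ~/.ssh/coderepo_rsa ] || ssh-keygen -q -t rsa -b 4096 -f ~/.ssh/coderepo_rsa -N ""

# This is the public key you will upload for the IAM user:
cat ~/.ssh/coderepo_rsa.pub
```

After uploading the public key in the IAM console, AWS shows an SSH key ID; keep it handy for the next step.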
Next, we will create an SSH configuration file (~/.ssh/config). When we connect to the CodeCommit service, SSH refers to the information in this config file: if the host matches the CodeCommit endpoint, it uses the specified user and identity file for authentication. Make sure the “User” value is the SSH key ID of the intended IAM user: copy the key ID from the IAM console into the file, and point “IdentityFile” at the private key created earlier. It’s important to double-check the filename to ensure accuracy.
Host git-codecommit.*.amazonaws.com
User ***********************
IdentityFile ~/.ssh/coderepo_rsa
We also have to make sure that the config file’s permissions are set to 600.
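For example (the “touch” just ensures the file exists; the verification line uses GNU stat, so on macOS use “stat -f %Lp” instead):

```shell
# Create the config file if it does not exist yet, then restrict it
# so only the owner can read or write it.
mkdir -p ~/.ssh
touch ~/.ssh/config
chmod 600 ~/.ssh/config

# Verify: should print 600.
stat -c "%a" ~/.ssh/config
```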
Once you are comfortable with these steps, the next task is to remove the existing remote repository, which is currently linked to GitHub as “origin.” In its place, we will add our Code Commit repository using the command “git remote add origin” followed by the repository’s URL. Afterward, you can inspect your configuration file to ensure that the Code Commit repository is correctly listed under remote repositories. Finally, you can push all the branches to the Code Commit repository using the command “git push origin master.” This will ensure that all branches are now located in the Code Commit repository.
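The remote swap can be sketched like this (the demo uses a scratch repository so it is safe to run anywhere; in practice you would run the same remote commands inside your own project clone):

```shell
# Demo in a scratch repository; in practice run this inside your project clone.
mkdir -p /tmp/coderepo-demo && cd /tmp/coderepo-demo
git init -q

# Drop the old GitHub remote if one exists (ignore the error if it does not).
git remote rm origin 2>/dev/null || true

# Point "origin" at the CodeCommit repository instead.
git remote add origin ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/coderepotest

# Confirm the new remote; pushing is then "git push origin master".
git remote -v
```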
root@ubuntu2010:~/awsci/coderepo# cat .git/config
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/coderepotest
fetch = +refs/heads/*:refs/remotes/origin/*
Now, let’s create a repository in CodeArtifact, which our Maven build job will use to download dependencies. Navigate to the repositories section; you should see two repositories listed, designated for storing dependencies. Selecting one gives you “View connection instructions,” which explains how to connect to it.
To use Maven as the package manager, generate an access token by running the “aws codeartifact get-authorization-token” command with appropriate IAM user privileges. Ensure you have the AWS CLI installed and correctly configured with credentials. This token is vital for accessing dependencies, so be careful not to expose it. Once you have the token, update your settings file.
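A hedged sketch of the token fetch (the domain name “my-domain” and account ID “111122223333” are placeholders; your CodeArtifact connection instructions show the real values, and the guard lets the script no-op when no AWS credentials are available):

```shell
# Only talk to AWS when credentials are actually configured.
if aws sts get-caller-identity >/dev/null 2>&1; then
  # Fetch a CodeArtifact authorization token (valid for up to 12 hours)
  # and export it so Maven's settings.xml can read it from the environment.
  export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token \
    --domain my-domain \
    --domain-owner 111122223333 \
    --query authorizationToken \
    --output text)
  echo "token acquired"
else
  echo "no AWS credentials configured; skipping token fetch"
fi
```

In settings.xml, the server entry for the CodeArtifact repository then uses the literal username “aws” with the token as the password, typically referenced as ${env.CODEARTIFACT_AUTH_TOKEN}.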
With this setup complete, you can copy and run the provided command. It generates a token, which is stored in the Systems Manager Parameter Store; your build job will retrieve this token when needed. Now update your settings file accordingly.
Now, we’ll proceed to set up a code analysis job, which will perform Sonar code analysis within the CodeBuild environment. To accomplish this, we need a SonarCloud account. Start by visiting sonarcloud.io in your browser; you can log in using your GitHub or Bitbucket credentials, or create a new account if you prefer. The first step in this process is to generate an access token.
At this point, we should have gathered some details. We’ll store them as variables in the Parameter Store: search for the Systems Manager service, open the Parameter Store section, and proceed with creating the parameters.
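As a sketch, storing the details could look like this (the parameter names under “/ci/sonar/” are invented for illustration, and the token value is a placeholder; the token should be a SecureString so it is encrypted at rest, and the guard makes the script a no-op without AWS credentials):

```shell
# Only talk to AWS when credentials are actually configured.
if aws sts get-caller-identity >/dev/null 2>&1; then
  # Plain settings can be ordinary String parameters...
  aws ssm put-parameter --name /ci/sonar/host-url --type String \
    --value "https://sonarcloud.io" --overwrite
  aws ssm put-parameter --name /ci/sonar/organization --type String \
    --value "my-org" --overwrite
  # ...but the access token should be a SecureString, encrypted with KMS.
  aws ssm put-parameter --name /ci/sonar/token --type SecureString \
    --value "REPLACE_WITH_SONAR_TOKEN" --overwrite
else
  echo "no AWS credentials configured; skipping parameter creation"
fi
```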
Now it’s time to create the build project. Navigate to the build project section; the initial task is to set up a code analysis job for our code. Create a project by selecting “Create build project.” The process is much like creating a Jenkins job.
To proceed, we need a buildspec file. You can point the project at a buildspec file in version control, but in this case we’ll insert the build commands directly. The buildspec file is located in our source code; retrieve it and copy its content.
With this setup, we are almost there, but we need to adjust the parameter values so that the build process can access the Parameter Store and retrieve them.
GitHub: https://github.com/arunma076/ci-aws/tree/master
Let’s click on Start build. This will initiate the code scanning process for the branch. Be patient, as it may take some time to complete. Once it’s finished, go to “Phase Details” to review the results.
To access the results, go to Sonar Cloud, navigate to your projects, and you will find your project listed there.
Now, it’s time to create our next job, the one that builds the artifact. Begin by navigating to the build project section. The setup is almost identical to the previous one; the only significant difference is the buildspec file used. This job requires the CodeArtifact token parameter for access. You can find this buildspec file in the source code under “build_buildspec.yml.” Simply copy its entire content.
With all the necessary details copied, select the same log group that you used for the previous project, and click on “Start build.” Just like previous processes, this will also take some time to complete. Once it’s finished, you will see that this job has been successfully completed.
Now, there’s one more thing we need to do before setting up our pipeline. We should also configure notifications for it, and we’ll use Amazon SNS for this purpose. To proceed, let’s navigate to the SNS service. First, access the “Topics” section and create a new topic. Once the topic is created, we can establish a subscription for this topic. In this case, I’ll use an email subscription, and I’ll provide the email address where I wish to receive my notifications.
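The same setup can be scripted as a sketch (the topic name and email address are placeholders; SNS sends a confirmation email whose link must be clicked before notifications flow, and the guard makes the script a no-op without AWS credentials):

```shell
# Only talk to AWS when credentials are actually configured.
if aws sts get-caller-identity >/dev/null 2>&1; then
  # Create the topic and capture its ARN.
  TOPIC_ARN=$(aws sns create-topic --name ci-pipeline-notifications \
    --query TopicArn --output text)
  # Subscribe an email address to the topic.
  aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email \
    --notification-endpoint you@example.com
else
  echo "no AWS credentials configured; skipping SNS setup"
fi
```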
Let’s return to the Code Build service, where we’ll bring everything together by setting up the pipeline. Afterward, we’ll proceed to upload the artifact to an S3 bucket. This step will come after we’ve created the pipeline.
To initiate the process, I’ll navigate to the “Pipeline” section and select “Create Pipeline.” The pipeline’s role will encompass pulling the code, running tests, and executing the build job to generate the artifact.
For the deployment stage, I’ll add a stage name and edit the action group. The action name will be “deploy to S3,” and the action provider will be Amazon S3. The input artifact for this action will be the one generated by the build job.
It’s essential to ensure the S3 bucket is created in the same region as the pipeline. Our build job generates the artifact, which serves as the input for this action, and the output is directed to the specified S3 bucket.
Okay, there’s one last thing remaining on our checklist — setting up notifications. To do this, navigate to the settings, then proceed to the “Notifications” section. Here, you’ll create a notification rule. You’ll need to provide some basic information, and you can also choose to provide additional details based on your preferences. Don’t forget to give this notification rule a name. This rule will specify when notifications should be sent — for job failures, when jobs start, and when they successfully complete.
And now, it’s showtime. Click on “Release Change.” After a few minutes, the process will be completed.
The test job has been completed, and it has successfully deployed the artifact to the S3 bucket. Let’s now go ahead and check the directory within the bucket. There, you’ll find the artifact with a timestamp.
I hope you have enjoyed this and learned a lot.
Thank you.