Jenkins automation I have been using lately
I have been using Jenkins for automation (in a customer environment) for a while, with the following use cases:
- Continuous deployment
- Recurrent network and API verification
- API warm up
Continuous deployment has been the major use case. The old way of deploying was mainly manual work, and it was hard to avoid human mistakes.
A deployment could involve the following actions:
- Get source code from repository
- Compile and link (for some programming languages)
- File copy
- Change of configurations (depends on destination or build)
- Restart some services
My setup of a Jenkins project for deployment
First, under the General section, I set up some parameters, such as:
- the git tag to deploy
- the login and destination server to be used
- the path where the files should be deployed on the destination servers.
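The same kind of parameters can also be declared in pipeline code. This is a hypothetical sketch, not the real job: the server names and paths are placeholders, and the tag drop-down in the actual job comes from the Git Parameter plugin (shown here as a plain string parameter to keep the sketch simple):

```groovy
pipeline {
    agent any
    parameters {
        // The real job uses the Git Parameter plugin for a tag drop-down;
        // a plain string parameter stands in for it here.
        string(name: 'GIT_TAG', defaultValue: 'master', description: 'Git tag to deploy')
        choice(name: 'DEST_SERVER',
               choices: ['app01.example.com', 'app02.example.com'],
               description: 'Destination server (placeholder names)')
        string(name: 'DEPLOY_PATH', defaultValue: '/opt/myapp',
               description: 'Path on the destination server to deploy to')
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${params.GIT_TAG} to ${params.DEST_SERVER}:${params.DEPLOY_PATH}"
            }
        }
    }
}
```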
For source code management, we reuse the git tag parameter defined above (presented as a drop-down list the user can choose from).
Finally, the main build step runs a script (included in the source code, though one can also write the script directly here).
The deployment script performs several tasks:
- manage the destination server's file system (create folders)
- manage configuration (using secure-env to encrypt the .env file in the source)
- copy files (since Jenkins prepares the source and runs the script on the Jenkins server, the files need to be copied over to the destination server)
- post-copy action: running the service with PM2
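The tasks above can be sketched as a shell script. This is a minimal sketch, not the real script: the user, server, paths, and the exact secure-env invocation are placeholder assumptions, and a DRY_RUN switch prints the remote commands instead of executing them so the flow can be inspected without a real destination server:

```shell
#!/bin/sh
set -e

# Placeholders standing in for the Jenkins parameters
DEST_USER=deploy
DEST_HOST=app01.example.com
DEST_PATH=/opt/myapp

# DRY_RUN=1 prints each command instead of running it
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

# 1. manage the destination file system: make sure the target folder exists
run ssh "$DEST_USER@$DEST_HOST" "mkdir -p $DEST_PATH"

# 2. manage configuration: encrypt the .env file with secure-env
#    (hypothetical invocation; check the secure-env docs for the real flags)
run npx secure-env .env -s "$ENV_SECRET"

# 3. copy the prepared files from the Jenkins workspace to the destination
run scp -r ./dist "$DEST_USER@$DEST_HOST:$DEST_PATH"

# 4. post-copy action: (re)start the service with PM2
run ssh "$DEST_USER@$DEST_HOST" "cd $DEST_PATH && pm2 startOrRestart ecosystem.config.js"
```

With DRY_RUN removed (or set to 0), the same script performs the real deployment.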
Running this Jenkins project with parameters:
Introducing Jenkins pipeline project
Since the Jenkins project defined above is parameterized, if there are multiple destination servers we would need to run the same project repeatedly, choosing a different server each time (which is another source of human error).
So I automate this with a Jenkins pipeline project.
When configuring a pipeline, the focus is the Pipeline section, which lets us use a “Pipeline script” to define what should be done.
There is a handy “Pipeline Syntax” link below it, which allows us to generate the necessary script.
Copy the generated script and paste it into the Pipeline section on the previous screen. What I did was prepare multiple “build job” steps with different parameters (e.g. destination servers).
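For example, a scripted pipeline that calls the parameterized deploy job once per server might look like this (a sketch only: the job name 'deploy-app', the parameter names, and the server list are placeholders, not the real values):

```groovy
// Run the parameterized deploy job once for each destination server
node {
    stage('Deploy to all servers') {
        for (server in ['app01.example.com', 'app02.example.com']) {
            build job: 'deploy-app', parameters: [
                string(name: 'GIT_TAG', value: params.GIT_TAG),
                string(name: 'DEST_SERVER', value: server)
            ]
        }
    }
}
```

One run of the pipeline then deploys to every server, instead of a manual run per server.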
Another use case for me is working with the network team to check network connectivity between servers.
First, I defined a multiline string parameter as follows:
The script performs the following:
- Set +xe on the first line stops each executed line from being printed in the console output and lets the script keep running even when a command fails
- Build an array of servers by splitting the values defined in the web_servers parameter (comma separated)
- Change the internal field separator (IFS) to “\n” to break the multiline parameter into an array of lines, then restore the old IFS
- For each web server, run an SSH command that telnets to it (taking the hostname/IP and port from each line)
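Putting those steps together, a minimal sketch of the script (the addresses and ports are placeholders; the real job reads them from the web_servers parameter, and the actual check runs telnet over SSH rather than a plain echo):

```shell
#!/bin/sh
# set +xe: do not trace each executed line, and keep going on errors
set +xe

# Stand-in for the Jenkins multiline web_servers parameter:
# one "host,port" pair per line (placeholder addresses)
WEB_SERVERS="10.0.0.1,8080
10.0.0.2,443"

# Switch IFS to newline to break the multiline parameter into lines,
# then restore the old IFS afterwards
OLD_IFS=$IFS
IFS='
'
for line in $WEB_SERVERS; do
  host=${line%%,*}   # text before the comma
  port=${line##*,}   # text after the comma
  # The real job runs the check over SSH from the Jenkins server, e.g.:
  #   ssh user@jumphost "echo quit | telnet $host $port"
  echo "would check $host on port $port"
done
IFS=$OLD_IFS
```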
API Check and Warm up
API checks are similar, but instead of running telnet over the SSH command, we use curl.
One problem is that curl, with its standard switches, can be more tolerant (e.g. of certificate errors) and so may give a different result from a programming-language library (e.g. axios).
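A minimal sketch of such a check (the URL is a placeholder): --fail makes curl exit non-zero on HTTP 4xx/5xx responses, and leaving out -k keeps TLS certificate validation strict, which is closer to what a library like axios does by default:

```shell
#!/bin/sh
# Hypothetical API health check; the URL below is a placeholder.
check_api() {
  if curl --fail --silent --max-time 10 "$1" > /dev/null 2>&1; then
    echo "API OK: $1"
  else
    echo "API check failed: $1"
  fi
}

check_api "https://api.example.com/health"
```

A non-zero exit from curl can also be used directly to fail the Jenkins build instead of just printing a message.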
For the warm-up scenario, my use case was handling the IIS idle timeout (20 minutes), so I run a regular job that curls the API endpoint. To configure a regular run in Jenkins, use the Build Triggers section with “Build periodically”.
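For example, with a 20-minute idle timeout, a schedule that fires every 10 minutes keeps the app pool warm. In Jenkins cron syntax (the H token spreads the exact minute across the hour to balance load):

```
H/10 * * * *
```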
Jenkins has a lot more uses than I first anticipated, but the use cases above cover many deployment and support tasks that need to run periodically. Building the scripts and jobs takes time, but it pays off well once they are run many times.