Mainframe HLASM Continuous Integration Testing with GitHub and Drone
Here I’ll show a simple example of Continuous Integration (CI) testing for a mainframe HLASM project. The project itself is organized similarly to this Metal C project. That is, it uses:
- Zowe CLI for mainframe interaction
- npm scripts to encapsulate allocation, build, deployment, and execution of HLASM source
- Jest snapshots to verify control blocks and other output
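To make that concrete, the npm scripts wrapping Zowe CLI calls might look roughly like this (a hypothetical sketch — the data set names and script names are illustrative, not taken from the actual project):

```json
{
  "scripts": {
    "allocate": "zowe zos-files create data-set-partitioned IBMUSER.HLASM.SOURCE",
    "upload": "zowe zos-files upload dir-to-pds ./src IBMUSER.HLASM.SOURCE",
    "build": "zowe zos-jobs submit local-file ./jcl/build.jcl --wait-for-output",
    "test": "jest",
    "cleanup": "zowe zos-files delete data-set IBMUSER.HLASM.SOURCE --for-sure"
  }
}
```

Wrapping each mainframe interaction in an npm script means the same commands run identically from a developer workstation and from CI.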
We’ll work through the setup and configuration of CI toward one end goal:
For every push of HLASM source to GitHub, an automated process should perform a clean build and test of the code. (Bonus goal: get a cool badge on the repo 😎)
There are two core tools needed to accomplish this goal:
- Source management → git and GitHub Enterprise
- CI platform → Drone Enterprise
Why Drone?
Drone, like Concourse CI, Jenkins, and TeamCity, offers an on-premises CI/CD platform. Having tried the others (and many hosted solutions like CircleCI, Travis CI, and AppVeyor), I ventured into learning something new.
Drone Setup
I opted to run Drone on Linux Lite under VirtualBox on my Windows-hosted developer machine since I’m only working toward a proof of concept (PoC). In “production” I’d run Drone on Linux servers on my network.
Prior to starting the Linux Lite virtual machine (VM) in VirtualBox, I disabled my wireless network connections and set up a bridged connection.
Drone is installable via a Docker image. Once I installed Linux Lite and booted my VM, I followed these instructions to install Docker there.
After Docker, I went along with the single-machine setup for Drone. The only specific info I needed was the IP address on which to host my Drone server. To get this, I opened a terminal in my VM and issued `ifconfig`:
The IP address is used to configure the Drone server as a GitHub OAuth application.
Lastly, I started Drone on my VM, using the GitHub Client ID and Client Secret along with the IP address from my VM (ellipses are used for hidden values). Here is the [huge] `run` command:
docker run \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --volume=/var/lib/drone:/data \
  --env=DRONE_GITHUB_SERVER=https://github.../ \
  --env=DRONE_GITHUB_CLIENT_ID=6e... \
  --env=DRONE_GITHUB_CLIENT_SECRET=99... \
  --env=DRONE_RUNNER_CAPACITY=2 \
  --env=DRONE_SERVER_HOST=138... \
  --env=DRONE_SERVER_PROTO=http \
  --env=DRONE_TLS_AUTOCERT=false \
  --env=DRONE_OPEN=true \
  --publish=80:80 \
  --publish=443:443 \
  --restart=always \
  --detach=true \
  --name=drone \
  drone/drone:1.0.0-rc.3
Once the container started, I navigated to the IP address in my browser, logged in, and eventually got to a UI like this:
Gotchas
There were two GitHub configurations I initially missed. The first was that I needed to adjust the `/hook` route for `http` (not `https` in my case):
The second was to enable status checks for my `master` branch:
Credentials
Most CI tools provide a credential management solution, and Drone is no different (see Drone’s Secrets). In the Drone UI, I added `MF_USER` and `MF_PASSWORD` secrets whose values hold an automation ID for mainframe access.
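In the pipeline config, those secrets can then be surfaced to a step as environment variables. A sketch of the Drone 1.x `from_secret` syntax (the step name is illustrative; the secret names match those created above):

```yaml
steps:
  - name: build-and-test
    image: dkelosky/zowe-cli
    environment:
      MF_USER:
        from_secret: MF_USER
      MF_PASSWORD:
        from_secret: MF_PASSWORD
```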
Continuous Integration
Drone is now set up and configured with GitHub. The last piece to kick things off is a `.drone.yml` config file to describe a CI pipeline (a.k.a. “steps”).
A `.yml` file is pretty standard with CI tools (excluding a Jenkinsfile, which is Groovy-based 😞). For other examples, Travis uses `.travis.yml` and CircleCI uses `.circleci/config.yml`.
Here’s a minimal config for Drone with the HLASM project:
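(The config itself appears as an embedded gist/image in the original post and doesn’t survive in this text. As a rough, hypothetical sketch of what such a Drone 1.x pipeline could look like — the profile name, host, and npm script names are illustrative — it might resemble:)

```yaml
kind: pipeline
name: default

steps:
  - name: build and test
    image: dkelosky/zowe-cli
    environment:
      MF_USER:
        from_secret: MF_USER
      MF_PASSWORD:
        from_secret: MF_PASSWORD
    commands:
      - zowe profiles create zosmf-profile ci --host ... --port 443 --user $MF_USER --password $MF_PASSWORD --reject-unauthorized false
      - npm install
      - npm run allocate
      - npm run build
      - npm test
      - npm run cleanup
```

Note the line references in the prose that follows refer to the original embedded config, not to this sketch.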
Drone is “Container Native,” meaning the pipeline steps execute inside containers. The container used here is identified on line 6: `dkelosky/zowe-cli`. It’s a small image that I created to contain Node.js, `npm`, the core Zowe CLI, and nothing else. When this pipeline runs, the `dkelosky/zowe-cli` container is pulled, and the commands starting on line 13 begin to run.
The commands use the `MF_USER` and `MF_PASSWORD` Drone secrets as environment variables. Line 13 uses these to create a Zowe CLI profile. Longer term, I’d refine this process to eliminate creating a profile altogether, since profiles aren’t needed and aren’t particularly ideal in the CI use case. For example, you can fully qualify a Zowe command with a specific host, user, and password (without a profile), e.g.:
$ zowe jobs submit lf ./build/custom.jcl --directory ./output --user $MF_USER --password $MF_PASSWORD --host some.domain.to.access.zosmf
(Drone masks `MF_USER` and `MF_PASSWORD` from appearing in the UI console output; see the recording below.)
Lines 14-25 are the last pieces of the Drone config. These lines create a `local.ts` (for use by the `npm` scripts) and then issue commands to allocate z/OS data sets, upload source, build, test, and clean up.
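As an illustration of the `local.ts` step, a pipeline command could generate the file from environment variables at runtime. The exported fields below are purely hypothetical — the post doesn’t show the file’s actual contents:

```shell
# Sketch: generate a local.ts for the npm scripts from CI environment
# variables. The exported fields are hypothetical; the real project's
# local.ts contents are not shown in the post.
MF_USER="${MF_USER:-ibmuser}"   # injected by Drone from secrets in CI

cat > local.ts <<EOF
// generated by the CI pipeline; do not commit
export const user = "$MF_USER";
export const hlq = "$(echo "$MF_USER" | tr '[:lower:]' '[:upper:]').HLASM";
EOF

cat local.ts
```

Generating the file in the pipeline keeps workstation-specific configuration out of source control while letting the same npm scripts run unchanged in CI.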
Final Result
Below is a recording showing assembler source being edited within VS Code on my workstation (on the right-hand side of the recording).
The Drone UI (left-hand side) initially shows a failure on commit #66 (yes, it took me 66 tries to get this right 😞). The commit is titled “cause assembler error”.
In the recording, I correct the assembler error in VS Code and push my changes to GitHub, which kicks off the build and test pipeline for my HLASM project.
Summary
In the end, I can easily determine the current build status of my HLASM project:
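And for the bonus goal: Drone serves a status badge per repository at `/api/badges/<owner>/<repo>/status.svg`, which can be embedded in a README (the server address and repository path below are placeholders):

```markdown
[![Build Status](http://<drone-server>/api/badges/<owner>/<repo>/status.svg)](http://<drone-server>/<owner>/<repo>)
```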
There are a lot of steps and plenty more refinement necessary to build out a robust CI pipeline, but hopefully you can see how you might get started with CI in a very basic way and, eventually, iteratively improve your development process (even for HLASM).
The complete project and HLASM source can be found here: https://github.com/dkelosky/assembler-metal-template/tree/v4