Mainframe HLASM Continuous Integration Testing with GitHub and Drone

Dan Kelosky · Zowe · Feb 13, 2019
Drone logo over HLASM (not a Poke ball 😉)

Here I’ll show a simple example of Continuous Integration (CI) testing for a mainframe HLASM project. The project itself is organized similarly to this Metal C project. That is, it uses:

  • Zowe CLI for mainframe interaction
  • npm scripts to encapsulate allocation, build, deployment, and execution of HLASM source (sketched just after this list)
  • Jest snapshots to verify control blocks and other output
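As a rough sketch of how those npm scripts might be invoked (the script names here are hypothetical; see the linked Metal C project for the real ones):

# hypothetical script names; the real project defines its own
$ npm run allocate   # wraps "zowe zos-files create data-set-classic ..." to allocate z/OS data sets
$ npm run upload     # wraps "zowe zos-files upload file-to-data-set ..." to push HLASM source
$ npm run build      # submits assembly JCL via "zowe jobs submit ..."
$ npm test           # runs Jest and diffs output against stored snapshots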

We’ll work through the setup and configuration of CI towards one end goal:

For every push of HLASM source to GitHub, an automated process should perform a clean build and test of the code. (Bonus goal: get a cool badge on the repo 😎)

There are two core tools needed to accomplish this goal:

  1. Source management → git and GitHub Enterprise
  2. CI platform → Drone Enterprise

Why Drone?

Drone, like Concourse CI, Jenkins, and TeamCity, offers an on-premises CI/CD platform. Having tried the others (and many hosted solutions like CircleCI, Travis CI, and AppVeyor), I wanted to learn something new.

Drone Setup

I opted to run Drone on Linux Lite under VirtualBox from my Windows-hosted developer machine since I’m only working towards a PoC. In “production” I’d run Drone on some Linux servers on my network.

Prior to starting the Linux Lite virtual machine (VM) in VirtualBox, I disabled my wireless network connections and set up a bridged connection.

Now I can access servers on my VM from my host machine (and elsewhere on my network).
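For reference, the same bridged setup can be scripted with VBoxManage from the Windows host (the VM and adapter names here are hypothetical, and the VM must be powered off):

# VM name and adapter name are hypothetical
$ VBoxManage modifyvm "LinuxLite" --nic1 bridged --bridgeadapter1 "Realtek PCIe GBE Family Controller"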

Drone is installable via a Docker image. Once I installed Linux Lite and booted my VM, I followed these instructions to install Docker there.
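One common shortcut, assuming a supported distribution, is Docker’s documented convenience script:

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh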

After installing Docker, I went along with the single machine setup for Drone. The only specific piece of info I needed was the IP address at which to host my Drone server. To get it, I opened a terminal in my VM and issued ifconfig:

It’s blacked out in the screenshot, but I used the “inet addr” value (it started with “138.”) for the GitHub OAuth application Homepage URL and Authorization callback URL (with /login appended).
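For reference, the relevant portion of ifconfig output looks something like this (the address below is a placeholder):

$ ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:ab:cd:ef
          inet addr:138.0.0.10  Bcast:138.0.0.255  Mask:255.255.255.0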

The IP address is used to configure the Drone server as a GitHub OAuth application.

Snippet of my GitHub Application Settings
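Concretely, with a placeholder address, the two OAuth application fields end up as:

Homepage URL:               http://138.0.0.10
Authorization callback URL: http://138.0.0.10/login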

Lastly, I started Drone on my VM, using the GitHub Client ID and Client Secret along with the IP address from my VM (ellipses are used for hidden values). Here is the [huge] run command:

docker run \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --volume=/var/lib/drone:/data \
  --env=DRONE_GITHUB_SERVER=https://github.../ \
  --env=DRONE_GITHUB_CLIENT_ID=6e... \
  --env=DRONE_GITHUB_CLIENT_SECRET=99... \
  --env=DRONE_RUNNER_CAPACITY=2 \
  --env=DRONE_SERVER_HOST=138... \
  --env=DRONE_SERVER_PROTO=http \
  --env=DRONE_TLS_AUTOCERT=false \
  --env=DRONE_OPEN=true \
  --publish=80:80 \
  --publish=443:443 \
  --restart=always \
  --detach=true \
  --name=drone \
  drone/drone:1.0.0-rc.3
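Before heading to the browser, a quick docker ps confirms the container is up (output abbreviated and illustrative):

$ docker ps
CONTAINER ID   IMAGE                    STATUS         PORTS                NAMES
1a2b3c4d5e6f   drone/drone:1.0.0-rc.3   Up 2 minutes   0.0.0.0:80->80/tcp   drone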

Once the container started, I navigated to the IP address in my browser, logged in, and eventually got to a UI like this:

Screenshot after logon to Drone and activating a sample repo “metalc-drone” (it contains all HLASM; I just didn’t get around to renaming the repo)

Gotchas

There were two GitHub configurations I initially missed. The first was that I needed to adjust the webhook /hook route to use http (not https, in my case):

The default protocol was https — changed to http in the GitHub UI
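After the fix, the webhook entry in the repository settings reads something like this (placeholder address):

Payload URL: http://138.0.0.10/hook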

The second was to enable status checks for my master branch:

Enable Drone / GitHub integration from the GitHub UI

Credentials

Most CI tools provide a credential management solution, and Drone is no different (see Drone’s Secrets). In the Drone UI, I added MF_USER and MF_PASSWORD secrets whose values hold an automation ID for mainframe access.

Example defining “secrets” of a user name and password
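If you prefer scripting over clicking, the Drone CLI can create the same secrets (the repository slug and values here are placeholders, and the CLI needs DRONE_SERVER and DRONE_TOKEN set):

# repository slug and secret values are placeholders
$ drone secret add --repository dkelosky/metalc-drone --name MF_USER --data AUTOMATION_ID
$ drone secret add --repository dkelosky/metalc-drone --name MF_PASSWORD --data <password>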

Continuous Integration

Drone is now set up and configured with GitHub. The last piece to kick things off is a .drone.yml config file that describes a CI pipeline (a.k.a. “steps”).

A .yml file is pretty standard with CI tools (excluding a Jenkinsfile, which is Groovy-based 😞). For other examples, Travis uses .travis.yml and CircleCI uses .circleci/config.yml.

Here’s a minimal config for Drone with the HLASM project:

.drone.yml for my HLASM project
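The exact file isn’t reproduced here, but a minimal sketch of its shape looks like this (the host name and npm script names are hypothetical; the secrets are the ones defined in the Drone UI above):

kind: pipeline
name: default

steps:
- name: build and test
  image: dkelosky/zowe-cli
  environment:
    MF_USER:
      from_secret: MF_USER
    MF_PASSWORD:
      from_secret: MF_PASSWORD
  commands:
  # hypothetical host and script names; the real pipeline also writes a local.ts consumed by the npm scripts
  - zowe profiles create zosmf-profile ci --host my.mainframe.host --port 443 --user $MF_USER --password $MF_PASSWORD --reject-unauthorized false
  - npm install
  - npm run allocate
  - npm run upload
  - npm run build
  - npm test
  - npm run cleanup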

Drone is “Container Native”, meaning the pipeline steps execute inside of containers. The container used here is named by the image key: dkelosky/zowe-cli. It’s a small image that I created to contain Node.js, npm, the core Zowe CLI, and nothing else. When this pipeline runs, the dkelosky/zowe-cli container is pulled, and the entries under commands begin to run.

The commands use the MF_USER and MF_PASSWORD Drone secrets as environment variables. The first command uses these to create a Zowe CLI profile. Longer term, I’d refine this process to eliminate creating a profile altogether, since profiles aren’t needed and aren’t particularly ideal in the CI use case. For example, you can fully qualify a Zowe command with a specific host, user, and password (without a profile), e.g.:

$ zowe jobs submit lf ./build/custom.jcl --directory ./output --user $MF_USER --password $MF_PASSWORD --host some.domain.to.access.zosmf

(Drone masks MF_USER and MF_PASSWORD so they don’t appear in the UI console output; see the recording below.)

The remaining commands are the last pieces of the Drone config. They create a local.ts (for use by the npm scripts) and then allocate z/OS data sets, upload source, build, test, and clean up.

Final Result

Below is a recording showing assembler source being edited within VS Code on my workstation (on the right-hand side of the recording).

The Drone UI (left-hand side) initially shows a failure on commit #66 (yes, it took me 66 tries to get this right 😞). The commit is titled “cause assembler error”.

In the recording, I correct the assembler error in VS Code and push my changes to GitHub, which kicks off the build and test pipeline for my HLASM project.

git push causes my HLASM to be built and tested automatically

Summary

In the end, I can easily determine the current build status of my HLASM project:

This badge also shows yellow for active builds & red for failed builds
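The badge itself is just an SVG served by the Drone server, so dropping something like this into the repo’s README is all it takes (placeholder address):

[![Build Status](http://138.0.0.10/api/badges/dkelosky/metalc-drone/status.svg)](http://138.0.0.10/dkelosky/metalc-drone)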

There are a lot of steps here, and plenty more refinement is necessary to build out a robust CI pipeline, but hopefully you can see how you might get started with CI in a very basic way and then iteratively improve your development process (even for HLASM).

The complete project and HLASM source can be found here: https://github.com/dkelosky/assembler-metal-template/tree/v4
