Advent of Atomist Automations

Atomist · Published in The Composition · 34 min read · Dec 8, 2018

While children count down the days until Christmas with small chocolates and pictures of snowmen, geeks prefer to mark each day by writing code. I am no exception, but as a recent Atomist employee I decided to ignore the Advent of Code challenges, preferring instead to choose my own Atomist Adventure. Atomist believes in helping developers scratch an itch and solve delivery problems in code, so each day I want to discover a new way to use a Software Delivery Machine (SDM). Here goes!

Note: You can follow along and try these for yourself in SDM local mode or with the Atomist service (request a trial at https://app.atomist.com if you don’t yet have an account). If you run the SDM in local mode for these examples, instead of invoking Slack bot commands (@atomist [command]) you will invoke the Atomist CLI ($ atomist [command]). And if you get stuck for any reason, use the chat on https://atomist.com or ask in our Atomist community Slack.

Day 1: Hello Atomist!

I’ll start by creating a command, as I’m most familiar with how those work. The simplest demonstrable command is one that allows me to talk to the Atomist bot and get a response. When I type @atomist hello into a Slack channel, I want to get a response from the Atomist bot. Sounds easy, right?

First, I need a file lib/handlers/commands/HandleHello.ts:

import { CommandHandlerRegistration } from "@atomist/sdm";

export const SayHello: CommandHandlerRegistration<{ name: string }> = {
    name: "Hello",
    description: "say hello",
    intent: "hello",
    parameters: { name: { required: false, defaultValue: "stranger" } },
    listener: async cli => {
        return cli.addressChannels(`Hello, \`${cli.parameters.name}\`!`);
    },
};

There is an alternative syntax that allows us to package up the parameter(s) in a HelloParameters class (note the use of paramsMaker instead of parameters):

import { CommandHandlerRegistration } from "@atomist/sdm";
import {
    Parameter,
    Parameters,
} from "@atomist/automation-client";

@Parameters()
export class HelloParameters {

    @Parameter({ required: false })
    public name: string = "stranger";

}

export const SayHello: CommandHandlerRegistration<HelloParameters> = {
    name: "Hello",
    description: "say hello",
    intent: "hello",
    paramsMaker: HelloParameters,
    listener: async cli => {
        return cli.addressChannels(`Hello, \`${cli.parameters.name}\`!`);
    },
};

Then I need to add this command to my SDM in my main file lib/machine/machine.ts. Here is the full machine.ts file that I will be adding to as the days go by:

import {
    SoftwareDeliveryMachine,
    SoftwareDeliveryMachineConfiguration,
} from "@atomist/sdm";
import {
    createSoftwareDeliveryMachine,
} from "@atomist/sdm-core";
import { SayHello } from "../handlers/commands/HandleHello";

/**
 * Initialize an sdm definition, and add functionality to it.
 *
 * @param configuration All the configuration for this service
 */
export function machine(
    configuration: SoftwareDeliveryMachineConfiguration,
): SoftwareDeliveryMachine {

    const sdm = createSoftwareDeliveryMachine({
        name: "Advent of Atomist Automations",
        configuration,
    });

    sdm.addCommand(SayHello);

    return sdm;
}

To get my hello command working, I import SayHello, then add it to my SDM using sdm.addCommand(SayHello). Done!

Now, once I run my SDM using atomist start, I can have an (admittedly dull) conversation with the Atomist bot.

Danny>   @atomist hello
Atomist> Hello, stranger!

Danny> @atomist hello name="Father Christmas"
Atomist> Hello, Father Christmas!

Day 2: How many days?

Not technically something new this time, but I’ve got a couple of days to catch up on and there is only so much time I can spend on this! For Day 2, I’m going to create a command that tells us how many days until Christmas, so I can ask the Atomist bot @atomist days to go and get the answer.

Here is my command, in lib/handlers/commands/HandleDaysToChristmas.ts:

import { CommandHandlerRegistration } from "@atomist/sdm";

function getDaysToChristmas(): number {
    const today = new Date();
    const cmas = new Date(today.getFullYear(), 11, 25);
    if (today.getMonth() === 11 && today.getDate() > 25) {
        cmas.setFullYear(cmas.getFullYear() + 1);
    }
    const oneDay = 1000 * 60 * 60 * 24;
    return Math.ceil((cmas.getTime() - today.getTime()) / oneDay);
}

export const DaysToChristmas: CommandHandlerRegistration = {
    name: "DaysToChristmas",
    description: "give days until Christmas",
    intent: "days to go",
    listener: async cli => {
        const daysToGo = getDaysToChristmas();
        return cli.addressChannels(`There are ${daysToGo} days until Christmas!`);
    },
};

This command has no parameters, so in this regard it is simpler than the first one.

Finally we add the command to machine.ts by adding the following lines in the appropriate locations:

import { DaysToChristmas } from "../handlers/commands/HandleDaysToChristmas";

sdm.addCommand(DaysToChristmas);

Now we can restart the SDM and try it out:

Danny>   @atomist days to go
Atomist> There are 21 days until Christmas!

If you do the calculation, you’ll notice that I’m running this on day 4, not day 2. Still some catching up to do then!

Day 3: Mapped parameters

Another command today, but this time one that shows the use of mapped parameters. This is a feature of Atomist that provides the SDM with some useful context when responding to a command. This context might be either Slack-related (e.g. the Slack channel where the command was posted) or Git-related (e.g. the Github repository linked to the Slack channel).

Atomist supports a small set of mapped parameters, each with a fixed, unique name. I’m going to try out all of them! My command today will report the values of all mapped parameters in response to @atomist report values:

import { CommandHandlerRegistration } from "@atomist/sdm";
import {
    MappedParameter,
    MappedParameters,
    Parameters,
} from "@atomist/automation-client";

@Parameters()
export class ReportValuesParameters {

    @MappedParameter("atomist://correlation_id")
    public correlation_id: string;

    @MappedParameter(MappedParameters.GitHubApiUrl)
    public github_api_url: string;

    @MappedParameter(MappedParameters.GitHubUrl)
    public github_url: string;

    @MappedParameter(MappedParameters.GitHubWebHookUrl)
    public github_webhook_url: string;

    @MappedParameter(MappedParameters.GitHubDefaultRepositoryVisibility)
    public default_repo_visibility: string;

    @MappedParameter(MappedParameters.GitHubRepository)
    public github_repository: string;

    @MappedParameter(MappedParameters.GitHubOwner)
    public github_owner: string;

    @MappedParameter(MappedParameters.GitHubRepositoryProvider)
    public github_provider: string;

    @MappedParameter(MappedParameters.GitHubUserLogin)
    public github_username: string;

    @MappedParameter(MappedParameters.SlackChannel)
    public slack_channel_id: string;

    @MappedParameter(MappedParameters.SlackChannelName)
    public slack_channel_name: string;

    @MappedParameter(MappedParameters.SlackTeam)
    public slack_team_id: string;

    @MappedParameter(MappedParameters.SlackUser)
    public slack_user_id: string;

    @MappedParameter(MappedParameters.SlackUserName)
    public slack_user_name: string;
}

export const ReportValues: CommandHandlerRegistration<ReportValuesParameters> = {
    name: "ReportValues",
    description: "report values",
    intent: "report values",
    paramsMaker: ReportValuesParameters,
    listener: async cli => {

        const message =
            `Correlation Id: ${cli.parameters.correlation_id}\n` +
            `Github API URL: ${cli.parameters.github_api_url}\n` +
            `Github URL: ${cli.parameters.github_url}\n` +
            `Github Webhook URL: ${cli.parameters.github_webhook_url}\n` +
            `Default Repo Visibility: ${cli.parameters.default_repo_visibility}\n` +
            `Github Repository: ${cli.parameters.github_repository}\n` +
            `Github Owner: ${cli.parameters.github_owner}\n` +
            `Github Provider: ${cli.parameters.github_provider}\n` +
            `Github Username: ${cli.parameters.github_username}\n` +
            `Chat Channel Id: ${cli.parameters.slack_channel_id}\n` +
            `Chat Channel Name: ${cli.parameters.slack_channel_name}\n` +
            `Chat Team Id: ${cli.parameters.slack_team_id}\n` +
            `Chat User Id: ${cli.parameters.slack_user_id}\n` +
            `Chat User Name: ${cli.parameters.slack_user_name}\n`;

        const slackMessage = {
            text: `Report values`,
            attachments: [{
                fallback: `Report values`,
                mrkdwn_in: ["value"],
                text: message,
            }],
        };

        return cli.addressChannels(slackMessage);
    },
};

Note we are also sending a more sophisticated message back to Slack, using an attachment rather than a simple text message.

With an edit to machine.ts as on previous days and an SDM restart, we can try it out (note I've elided the output to avoid including the identifiers):

Danny>    @atomist report values
Atomist>

Correlation Id: 25d4e3a7-2519-4457-af3a-56a68bb1fe00
Github API URL: https://api.github.com/
Github URL: https://github.com
...

Day 4: Respond to new Git repositories

After three days our SDM now responds to three commands, but that is all. No other events will trigger the SDM to take action. What if we want our SDM to respond to another type of event, for example a new repository being created?

Let’s add something to our machine.ts file:

sdm.addFirstPushListener(fpl => {
    const message = `Got a push, with a commit from ${fpl.push.commits[0].author.name}.`;
    return fpl.context.messageClient.addressChannels(message, "danny-02");
});

Now when we create a new repository in Github, we get this in the ‘danny-02’ channel in Slack:

Atomist> Got a push, with a commit from Danny Smith.

In order for this to work, however, we must first have:

  1. added our Github organisation to our Atomist workspace (which we can do via the Atomist Dashboard), and
  2. created a Slack channel called “danny-02”.

Day 5: Catch those commits

On Day 4 we got our SDM to send a message when a new repository was created. However, I really wanted a message on every commit. I’ve since learned that while we can use addFirstPushListener, there isn't a corresponding addPushListener. Instead, we have to learn about Goals.

From my limited knowledge, I consider Goals to be the most significant concept to grasp when writing SDMs. They are also one of the more novel concepts, and therefore require some effort to fully understand. It might be helpful to explain my current (basic) understanding.

In order to add a push listener on Day 4, we used a pattern that is very familiar to many developers, one that has the form addEventHandler(myEventHandlerFunction). If we introduce Goals into our code, a Goal takes the place of the function "myEventHandlerFunction", so we get something like addEventHandler(myGoal). A Goal combines an event handler function with some additional metadata. Atomist uses this metadata to give our SDM far more control over the execution of our handler, so that we can do things like:

  1. keep track of the state (eg. pending, running, completed etc.),
  2. re-run failed goals,
  3. create dependency relationships between goals,
  4. display current goal progress to users, either via their chat system or the Atomist web dashboard,
  5. parallelise execution across multiple SDMs

So how do we use goals to respond to our Git commits?

First we need to add a few goal-related imports to machine.ts:

import {
    SoftwareDeliveryMachine,
    SoftwareDeliveryMachineConfiguration,
    // add these imports for Goals
    onAnyPush,
    createGoal,
    GoalInvocation,
} from "@atomist/sdm";

Next we create a goal that reports the commit details:

const commitReporterGoal = createGoal({
    displayName: "CommitReporter",
}, async (inv: GoalInvocation) => { // this is the goal execution
    // get the commit from the GoalInvocation
    const commit = inv.sdmGoal.push.commits[0];
    // create a message with the commit attributes
    const message = `Commit: ${commit.message} (${commit.sha}). Author ${commit.author.name} (${commit.author.login}).`;
    // send the message to a Slack channel "danny-02"
    await inv.context.messageClient.addressChannels(message, "danny-02");
});

As you can see, we create a goal by supplying some metadata (in this case just a displayName) along with a handler function.

Finally, we specify that this goal should run on any push (i.e. commit):

sdm.withPushRules(
    onAnyPush().setGoals(commitReporterGoal));

Now when we make a commit to any repository in our linked Github organisation, we get a message like this:

Atomist> Commit: updating README. (2bacae259c84024333839373dcf4041149eb208d). Author Danny Smith (dansmithy).

Day 6: Conditional run

Yesterday we got something running on every commit, but one of the key advantages of Atomist is that we have fine control over when and what we run. Today we’re going to change our SDM so that our goal only runs when a particular file is present in the source repository.

First we need to create a predicate to determine if a file named build.properties exists. I made this name up - it has no other significance right now. Here is our predicate:

// add the following imports to the others we already have
import {
    hasFile,
    PredicatePushTest,
} from "@atomist/sdm";

export const HasBuildPropertiesFile: PredicatePushTest = hasFile("build.properties");

Now we can modify the code that adds our goal, so that instead of running onAnyPush(), we introduce our predicate using whenPushSatisfies:

// note: whenPushSatisfies is another import from "@atomist/sdm"
sdm.withPushRules(
    whenPushSatisfies(HasBuildPropertiesFile).setGoals(commitReporterGoal));

Now when we make a commit to the repository, the SDM decides not to run any goals, because our repository does not contain a build.properties file. As soon as we make a commit that adds a build.properties file, however, the SDM runs our goal. The SDM will then run our goal for all subsequent commits where that file is present in the repository.
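As an aside, push tests like this compose. My understanding is that @atomist/sdm also exports combinators such as allSatisfied (and we will meet not tomorrow), so a hedged sketch requiring two marker files before scheduling the goal might look like this (the combination itself is my own example, not part of the original setup):

import {
    allSatisfied,
    hasFile,
    whenPushSatisfies,
} from "@atomist/sdm";

// hypothetical rule: only report commits when both marker files are present
sdm.withPushRules(
    whenPushSatisfies(allSatisfied(hasFile("build.properties"), hasFile("Makefile")))
        .setGoals(commitReporterGoal));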

Day 7: Use the content of a file

On Day 6 we chose to only run our Goal if the file build.properties existed. What if we wanted the content of build.properties to inform which Goals are run?

We can change our predicate function so that it uses the contents of the file. In this case, it is going to look for a ‘verbose’ property in the properties file, which is expected to have a boolean value. Here is the new predicate function ProjectHasVerboseOutput, along with a helper function to extract a value from the properties file:

import {
    // add these imports
    PushTest,
    pushTest,
} from "@atomist/sdm";

const ProjectHasVerboseOutput: PushTest =
    pushTest(`project has verbose output configured in properties file`,
        async pci => {
            const file = await pci.project.getFile("build.properties");
            if (!file) {
                return false;
            }
            const content = await file.getContent();
            const verboseOutput = getValueFromPropertiesFile(content, "verbose") || "";
            return verboseOutput.toLocaleLowerCase() === "true";
        });

const getValueFromPropertiesFile = (propertiesFileContent: string, propertyKey: string): string => {
    const lines = propertiesFileContent.split("\n");
    for (const line of lines) {
        const parts: string[] = line.split("=");
        if (parts[0] === propertyKey) {
            return parts[1];
        }
    }
    return null;
};

Now we can wrap our goal in a function that takes an isVerbose parameter. The parameter will change the goal's behaviour a little.

const createCommitReporterGoal = (isVerbose: boolean) => {
    return createGoal({
        displayName: `CommitReporter (${isVerbose})`,
    }, async (inv: GoalInvocation) => {
        const commit = inv.sdmGoal.push.commits[0];
        let message: string;
        if (isVerbose) {
            message = `Commit: ${commit.message} (${commit.sha}). Author ${commit.author.name} (${commit.author.login}).`;
        } else {
            message = `Commit from ${commit.author.name}`;
        }
        await inv.context.messageClient.addressChannels(message, "danny-02");
    });
};

Finally we can modify our push rules, so that we run a different version of the goal depending on whether the project has the verbose flag set:

import {
    // add this import
    not,
} from "@atomist/sdm";

sdm.withPushRules(
    whenPushSatisfies(ProjectHasVerboseOutput).setGoals(createCommitReporterGoal(true)),
    whenPushSatisfies(not(ProjectHasVerboseOutput)).setGoals(createCommitReporterGoal(false)));

Now if we make a commit we get the new default, non-verbose output:

Atomist> Commit from Danny Smith

But if we edit the build.properties file in our project so that it has this content:

verbose=true

Then we get this output:

Atomist> Commit: Adding verbose flag to project. (ef3f3021e941162d7392dad81334e73e7a556179). Author Danny Smith (dansmithy).

Using this technique, we can imagine having many repositories, each with a small set of properties that could drive very different build pipelines.
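As a hedged sketch of that idea (not part of the original setup), we could key a push test off a hypothetical pipeline property in build.properties, so each repository opts in to a named goal set by name:

// hypothetical push test: does build.properties request a particular pipeline?
const ProjectHasPipeline = (pipelineName: string): PushTest =>
    pushTest(`project requests the '${pipelineName}' pipeline`, async pci => {
        const file = await pci.project.getFile("build.properties");
        if (!file) {
            return false;
        }
        const content = await file.getContent();
        return getValueFromPropertiesFile(content, "pipeline") === pipelineName;
    });

// e.g. whenPushSatisfies(ProjectHasPipeline("verbose-reporting")).setGoals(createCommitReporterGoal(true));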

Day 8: A dependency graph of goals

When I listed some features of goals on Day 5, I included the ability to create dependency relationships between goals. We are going to explore this feature today by creating a dependency graph that looks very much like a build pipeline that you might see in the current crop of Continuous Integration systems.

In order to focus on the dependency graph aspect, we aren’t going to concern ourselves with actually building any software. Instead, our goals will simply sleep for a number of seconds before completing. First we must write a function that creates a sleepy goal:

const createSleepGoal = (name: string, sleepInSeconds: number = 5) => {
    return createGoal({
        displayName: name,
    }, async (inv: GoalInvocation) => {
        await timeout(sleepInSeconds * 1000);
    });
};

// helper function to create a timeout Promise
const timeout = (ms: number) => {
    return new Promise(resolve => setTimeout(resolve, ms));
};

Then we are going to create seven instances of these goals, each with a unique name and with a specific number of seconds to sleep:

const goal1 = createSleepGoal("Sleep goal 1", 2);
const goal2 = createSleepGoal("Sleep goal 2", 4);
const goal3 = createSleepGoal("Sleep goal 3", 2);
const goal4 = createSleepGoal("Sleep goal 4", 10);
const goal5 = createSleepGoal("Sleep goal 5", 2);
const goal6 = createSleepGoal("Sleep goal 6", 4);
const goal7 = createSleepGoal("Sleep goal 7", 3);

Now we want to specify dependencies between these goals so they form a graph like this:

      +---+                          +---+         +---+
  +-->| 1 |----+                 +-->| 5 |-------->| 7 |
  |   +---+    |                 |   +---+         +---+
  |            v                 |
  |          +---+               |
  |          | 3 |---------------+
  |          +---+               |
  |            ^                 |   +---+
  |   +---+    |                 +-->| 6 |
  +-->| 2 |----+                 |   +---+
  |   +---+                      |
  |   +---+                      |
  +-->| 4 |----------------------+
      +---+

The above diagram shows us that goals 1, 2 and 4 will run in parallel, and goal 3 will run when and only when goals 1 and 2 have completed. Furthermore, because goal 4 takes longer than goals 1 or 2, goal 4 should still be running when goal 3 starts. Goals 5 and 6 also run in parallel, but only when goals 3 and 4 have completed. Goal 7 only waits for goal 5, so will begin while goal 6 is still running.

Let’s see how we can specify this in our SDM code:

import {
    // just one new import
    goals,
} from "@atomist/sdm";

const phase1 = goals("sleep goals phase 1")
    .plan(goal1, goal2)
    .plan(goal3).after(goal1, goal2)
    .plan(goal4);

sdm.withPushRules(
    onAnyPush().setGoals(goals("all sleep goals")
        .plan(phase1)
        .plan(goal5, goal6).after(phase1)
        .plan(goal7).after(goal5)));

Notice that there are two ways to say “run goal A after goals B and C have both completed”:

  1. We can simply write .plan(goalA).after(goalB, goalC), or
  2. We can first group goals B and C together using const myGrouping = goals("a group").plan(goalB, goalC), then use myGrouping in the after clause, i.e. .plan(goalA).after(myGrouping) (see the sketch just after this list). The advantage of this approach is that we don't need to know about the individual goals in order to compose them. This helps us create and compose sharable libraries of goals.
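Here is a minimal sketch of the second approach, using hypothetical goals goalA, goalB and goalC:

// group goals B and C, then depend on the group without naming its members
const prerequisites = goals("a group")
    .plan(goalB, goalC);

const allGoals = goals("goal A after the group")
    .plan(prerequisites)
    .plan(goalA).after(prerequisites);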

If we now make a commit to our Git repository, our SDM will run these goals, honoring the dependency relationships we specified.

But how do we see those goals running? What I haven’t mentioned yet is that Atomist will give us a visual representation of the running goals in Slack, provided that we have linked a Slack channel with our Git repository. See the recording below for how Atomist displays these goals in Slack.

Day 9: Failing goals

Today we’re going to make some modifications to yesterday’s code. We’ll find out how to fail a goal, and see how that might affect downstream goals.

To start with, we’ll change our createSleepGoal function in a couple of ways:

  1. Instead of giving it a unique name, we’ll give it a unique number. The number is then used to create a unique name.
  2. Rather than always succeeding, we fail the goal if the commit message contains the text [goal {number}]. This means we can choose to fail any single goal with our commit message.
  3. We suppress failure logs, to avoid Node.js error messages appearing in Slack.

This is what createSleepGoal looks like now:

import {
    // one new import
    LogSuppressor,
} from "@atomist/sdm";

const createSleepGoal = (goalNumber: number, sleepInSeconds: number = 5) => {
    return createGoal({
        displayName: `Sleep goal ${goalNumber}`,
    }, async (inv: GoalInvocation) => {
        await timeout(sleepInSeconds * 1000);
        const commitMessage = inv.sdmGoal.push.commits[0].message;
        if (commitMessage.includes(`[goal ${goalNumber}]`)) {
            return { code: 1, message: "Goal failed." };
        } else {
            return { code: 0 };
        }
    },
    {
        logInterpreter: LogSuppressor,
    });
};

We then have to change our goal creation to pass a number instead of a name:

const goal1 = createSleepGoal(1, 2);
const goal2 = createSleepGoal(2, 4);
const goal3 = createSleepGoal(3, 2);
const goal4 = createSleepGoal(4, 10);
const goal5 = createSleepGoal(5, 2);
const goal6 = createSleepGoal(6, 4);
const goal7 = createSleepGoal(7, 3);

To see this in action, we make a commit with a message [goal 3]. We can see from the Slack message below that Atomist skipped all downstream goals.

Day 10: Approval required

Asking a user to give their approval before continuing is a common requirement in any delivery pipeline. Today we are going to see how we can do that using Atomist, by inserting a manual approval step into the set of goals that we built over the last couple of days.

First we are going to create a goal that requires approval. As before, this goal won’t do anything except sleep for a second. To require approval all we need are two new attributes when creating our goal.

const goalWithApproval = createGoal({
    displayName: `approve me`,
    approvalRequired: true,
    waitingForApprovalDescription: "Please click to approve",
}, async (inv: GoalInvocation) => {
    await timeout(1 * 1000);
});

We will insert this goal into our dependency graph so that it runs just after phase 1:

sdm.withPushRules(
    onAnyPush().setGoals(goals("all sleep goals")
        .plan(phase1)
        .plan(goalWithApproval).after(phase1)
        .plan(goal5, goal6).after(goalWithApproval)
        .plan(goal7).after(goal5)));

Now when we make a commit, this is what we see in Slack when we get to the approval goal:

Once we click the button, the goal execution resumes to completion.

Day 11: Make something

It occurred to me that we’ve completed 10 days without actually doing anything useful — quite an achievement! To address that, we’ll see how to run make in an Atomist goal.

What we want is a goal that a) clones our Git repository and then b) runs the make command, sending its output to Atomist so that it is available to view on the Atomist dashboard. We want our goal to run on any repository that contains a Makefile.

Here is the goal:

import {
    // one new import
    spawnAndLog,
} from "@atomist/sdm";

const runMakeGoal = createGoal({
    displayName: "Run Make",
}, async (inv: GoalInvocation) => {
    const { configuration, credentials, id, context } = inv;
    await configuration.sdm.projectLoader.doWithProject({ credentials, id, context, readOnly: false }, async gitProject => {
        const projectDir = gitProject.baseDir;
        await spawnAndLog(inv.progressLog, "make", [], { cwd: projectDir });
    });
});

This goal contains two new functions:

  1. doWithProject will clone our Git repository, and gives us a GitProject object that has parameters and functions to help us work with it. For our purposes, we only need to know its baseDir, i.e. where the local clone can be found.
  2. spawnAndLog is used to run an external shell command, in our case make. We need to give it our working directory and an instance of ProgressLog. The ProgressLog ensures that any output from our command can be viewed in the dashboard. When this goal is shown in Slack, Atomist will make the goal link to the dashboard log view.

Now all we need is a predicate function to detect projects with a Makefile (exactly like we did on Day 6):

const HasMakefile: PredicatePushTest = hasFile("Makefile");

We use this predicate in order to trigger our goal from a Git commit:

sdm.withPushRules(whenPushSatisfies(HasMakefile).setGoals(runMakeGoal));

To try this out, we just need to add a Makefile to one of our Git repositories. The simplest possible one is something like this:

build:
	echo "Running build"

(Note: ensure you have a tab to indent the second line. Makefiles are fussy about tabs.)

When we commit this Makefile, we get something like this in Slack:

and if we click on the Complete: Run Make link, we can view these logs:

Day 12: Make something fail!

Yesterday we ran make with a Makefile so simple it couldn't possibly fail. Of course in the real world, it would fail at some point, and what happens to our goal then?

I changed the Makefile so that it returns a non-zero exit code, and was disappointed to find out that the goal still succeeded. That's not going to help me spot failed builds!

However, I soon found that we could fix the issue with a couple of small changes. Here is the new goal definition:

const runMakeGoal = createGoal({
    displayName: "Run Make",
}, async (inv: GoalInvocation) => {
    const { configuration, credentials, id, context } = inv;
    return await configuration.sdm.projectLoader.doWithProject({ credentials, id, context, readOnly: false }, async gitProject => {
        const projectDir = gitProject.baseDir;
        return await spawnAndLog(inv.progressLog, "make", [], { cwd: projectDir });
    });
},
{
    logInterpreter: LogSuppressor,
});

The changes to yesterday’s code are:

  1. Adding two return statements, so that the Promise returned by spawnAndLog becomes the return value of the goal. With this change, the exit code from make is included in the Promise in a code property.
  2. Adding a LogSuppressor to the goal, exactly as we did on Day 9, to prevent failed output being sent to Slack.

Now our goal will fail if make fails, and the goal failure will be evident in Slack. With the addition of this small power, we may know enough to put Atomist to work on building our software!

Day 13: Give a link

We saw previously that we can click on goals in Slack to view the associated logs. One handy feature is the ability to add additional URLs to goals, which are displayed alongside the goal on completion. This can be used to link to artifacts that the goal has produced, or maybe a running service if the goal was responsible for a deployment.

To demonstrate this feature, we can create a goal that has a link to a randomly selected xkcd post. We create the goal like this:

const goalWithXkcdLink = createGoal({
    displayName: `Create xkcd link`,
}, async (inv: GoalInvocation) => {
    const randomNumber = Math.floor((Math.random() * 2082) + 1);
    return { externalUrls: [{ label: `xkcd ${randomNumber}`, url: `https://xkcd.com/${randomNumber}/` }] };
});

The trick here is to include an externalUrls attribute in our response, which takes an array of URLs, each with a corresponding label.

Once we add this goal to our goal set (just as we added goals previously) then our next commit will produce a goal in Slack that contains both a link to the logs and an additional link to the xkcd post. Something like this:

✅ Complete: Create xkcd link | xkcd 1338

Day 14: Version it!

When we run software builds, we almost always want to generate a unique version number that we can associate with any artifacts we produce, such as binaries or Docker containers.

Today we will explore how to achieve this with Atomist, using the Version goal that ships with the Atomist SDM library.

So far we have constructed all our goals ourselves (using the createGoal function in all instances), but we can also make use of the off-the-shelf goals contained within the Atomist SDM library and the extension packs. Like any code library, these goals prove useful provided that we can customize them to our needs.

The Version goal is created like this:

// NOTE: import from sdm-core, NOT sdm
import {
    Version,
    ProjectVersioner,
} from "@atomist/sdm-core";

const versionGoal = new Version().with({ versioner: MyProjectVersioner });

We need to give the goal a versioner (an instance of ProjectVersioner), which is a function that takes an SdmGoalEvent and a GitProject, and should return a string representing the version. Here is ours:

export const MyProjectVersioner: ProjectVersioner = async (sdmGoalEvent, gitProject, log) => {
    const baseVersion = "1.0.0";
    const branch = sdmGoalEvent.branch.split("/").join(".");
    const branchSuffix = (branch !== sdmGoalEvent.push.repo.defaultBranch) ? `${branch}.` : "";
    return `${baseVersion}-${branchSuffix}${dateFormat(new Date(), "yyyymmddHHMMss")}`;
};

This function fixes the baseVersion to “1.0.0”, but in practice this value would be read from a file in the repository, e.g. package.json or equivalent, or maybe just a version file you invent. In either case, this can be achieved using the gitProject object that gives us access to all the repository files and their content.
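For example, a minimal sketch of a versioner that reads the base version from package.json might look like the following (PackageJsonVersioner is a made-up name, it reuses the dateformat helper described just below, and the details are my own assumption rather than part of the original series):

// hypothetical versioner: take the base version from package.json, if present
export const PackageJsonVersioner: ProjectVersioner = async (sdmGoalEvent, gitProject) => {
    const packageJsonFile = await gitProject.getFile("package.json");
    const baseVersion = packageJsonFile
        ? (JSON.parse(await packageJsonFile.getContent()).version || "0.0.0")
        : "0.0.0";
    const branch = sdmGoalEvent.branch.split("/").join(".");
    const branchSuffix = (branch !== sdmGoalEvent.push.repo.defaultBranch) ? `${branch}.` : "";
    return `${baseVersion}-${branchSuffix}${dateFormat(new Date(), "yyyymmddHHMMss")}`;
};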

Our example above also makes use of the dateformat NPM library, so we must first add a dependency to our package.json and import it into our TypeScript file:

// dependency in package.json
"dateformat": "^3.0.3"
// import in machine.ts
import * as dateFormat from "dateformat";

Our ProjectVersioner function produces version strings like this:

  • 1.0.0-20181217094010 (on the default Git branch, which is usually master)
  • 1.0.0-my-branch.20181214224027 (on a non-default branch called my-branch)

Note: Atomist ships with ProjectVersioners for both Java and Node, saving us the trouble of creating our own.

When you use the Version goal, Atomist stores the version string and associates it with the corresponding Git commit. To be useful to us, however, we need to read this version in future goals. How do we do that?

To illustrate the solution, here is a goal called sayVersionGoal that will write the version string back to our Slack channel:

import {
    // another import needed
    readSdmVersion,
} from "@atomist/sdm-core";

// helper function to get the version for a GoalInvocation
const storedVersion = async (inv: GoalInvocation): Promise<string> => {
    const sdmGoal = inv.sdmGoal;
    const version = await readSdmVersion(
        sdmGoal.repo.owner,
        sdmGoal.repo.name,
        sdmGoal.repo.providerId,
        sdmGoal.sha,
        sdmGoal.branch,
        inv.context);
    return version;
};

const sayVersionGoal = createGoal({
    displayName: `Say Version`,
}, async (inv: GoalInvocation) => {
    const version = await storedVersion(inv);
    return inv.addressChannels(`Version is: ${version}.`);
});

We can add these goals in the normal fashion, ensuring that sayVersionGoal runs after versionGoal:

sdm.withPushRules(onAnyPush().setGoals(goals("version goals")
    .plan(versionGoal)
    .plan(sayVersionGoal).after(versionGoal)));

A commit to our repository with this in place gives output in Slack something like this:

✅ Versioned 1.0.0-20181215215840

Version is: 1.0.0-20181215215840.

We could now choose to use this version throughout our build pipeline to correctly label the artifacts we produce and identify the correct resources when running tests or deployments.

Day 15: Build and upload

Today we’re looking at doing a “real” build, where the output of my goal is a tar.gz file that is uploaded to an S3 bucket. We can achieve this using the knowledge we've built up over previous days.

The project we are building is a static website, where the static content is generated from Markdown files by the Hugo framework. We want a goal that:

  1. Generates the static website content by running the hugo command.
  2. Archives the content into a tar.gz file.
  3. Uploads the tar.gz file to an S3 bucket, using the AWS CLI.

Our SDM must be running somewhere that has both Hugo and the AWS CLI installed, complete with our AWS credentials, whether that be on our laptop OS or inside a Docker container.

We will also make use of the Version goal that we explored on Day 14, and our Hugo build goal will look up the generated version string. Here is the goal that builds and uploads our static website:

import {
    // import new convenience function
    doWithProject,
} from "@atomist/sdm";

const hugoBuildGoal = createGoal({
    displayName: `Build Hugo`,
}, doWithProject(async inv => {
    const artifactVersion = await storedVersion(inv);
    const tarFilename = `${inv.id.repo}-${artifactVersion}.tgz`;
    const s3Bucket = "dan-upload-test";
    const buildResult = await inv.spawn("hugo");
    if (buildResult.code !== 0) {
        return buildResult;
    }
    const tarArgs: string[] = ["-czf", tarFilename, "-C", "public", "."];
    const tarResult = await inv.spawn("tar", tarArgs);
    if (tarResult.code !== 0) {
        return tarResult;
    }
    const uploadArgs: string[] = ["s3", "cp", tarFilename, `s3://${s3Bucket}/`];
    const uploadResult = await inv.spawn("aws", uploadArgs);
    if (uploadResult.code !== 0) {
        return uploadResult;
    }
    return {
        ...uploadResult,
        externalUrls: [{ label: `S3 Tar file`, url: `https://s3.amazonaws.com/${s3Bucket}/${tarFilename}` }],
    };
}));

We want this goal to be run only on repositories that are built using Hugo. However, it is difficult to determine whether a Git repository is built using Hugo by looking at the files and code it contains, since Hugo does not mandate any specific files. To solve this, we will ask that Hugo repositories include a file named hugo in their root directory, and our SDM will look for this file when scheduling goals. With this in mind, the remainder of our SDM changes look like this:

sdm.withPushRules(
    whenPushSatisfies(hasFile("hugo"))
        .setGoals(goals("hugo goals")
            .plan(versionGoal)
            .plan(hugoBuildGoal).after(versionGoal)));

Day 16: See build progress

Yesterday we built an application that ran three commands, representing three phases: build, package and upload. It is common for builds to consist of many more than three phases, and for some phases to be long-running. In those circumstances our goal could be in a “Working” state for many minutes, giving us no sense of progression until it completes. Often developers solve this by watching scrolling logs, but our SDM can provide us with a nice alternative: we can append some text to the goal in Slack that indicates which phase we are currently running.

Atomist ships with a handy way to do this, which looks for regular expression matches in the log output. Using this approach we can identify the current phase even when running a single long-running script.

Here is an updated version of yesterday’s build goal, with support for phases:

import {
    // one new import
    testProgressReporter,
} from "@atomist/sdm";

const hugoBuildGoal = createGoal({
    displayName: `Build Hugo`,
}, doWithProject(async inv => {
    const artifactVersion = await storedVersion(inv);
    const tarFilename = `${inv.id.repo}-${artifactVersion}.tgz`;
    const s3Bucket = "dan-upload-test";
    inv.progressLog.write("phase:hugo-build");
    const buildResult = await inv.spawn("hugo");
    if (buildResult.code !== 0) {
        return buildResult;
    }
    const tarArgs: string[] = ["-czf", tarFilename, "-C", "public", "."];
    inv.progressLog.write("phase:create-tar");
    const tarResult = await inv.spawn("tar", tarArgs);
    if (tarResult.code !== 0) {
        return tarResult;
    }
    const uploadArgs: string[] = ["s3", "cp", tarFilename, `s3://${s3Bucket}/`];
    inv.progressLog.write("phase:s3-upload");
    const uploadResult = await inv.spawn("aws", uploadArgs);
    if (uploadResult.code !== 0) {
        return uploadResult;
    }
    return {
        ...uploadResult,
        externalUrls: [{ label: `S3 Tar file`, url: `https://s3.amazonaws.com/${s3Bucket}/${tarFilename}` }],
    };
}), {
    progressReporter: testProgressReporter({
        test: /phase:hugo-build/i,
        phase: "Running Hugo build",
    }, {
        test: /phase:create-tar/i,
        phase: "Creating .tar file",
    }, {
        test: /phase:s3-upload/i,
        phase: "Uploading tar file to S3.",
    }),
});

The changes we have made to yesterday’s version are:

  1. We write to the log before running each command, using inv.progressLog.write, indicating which phase is running.
  2. We add a third ‘options’ argument to our createGoal function. The options argument allows us to specify a progressReporter, which is a function that takes the log as an argument and returns a phase (as a string). Instead of writing our own, we use a testProgressReporter that takes mappings of regular expressions to phases.

Now when we trigger a build, instead of seeing this output in Slack:

▶ Working: Build Hugo

we see this:

▶ Working: Build Hugo | Creating .tar file

Day 17: Identify errors

Yesterday we saw how we could use our log output to clearly highlight progress in a long-running build. Today we are looking at another common use of log output: identifying and diagnosing errors in our builds.

When we encounter a failed build we want to fix it quickly, and we don’t want to expend time and effort identifying the root cause. Hopefully over time our builds become more reliable as we encounter new situations and respond with fixes and mitigations, but errors are impossible to eliminate entirely. Some errors point to fresh mistakes in our code that need fixing, and some errors are outside of our control, such as an external binary repository being unavailable. Teams acquire familiarity with these errors, building knowledge on how to identify them and what steps can be taken to fix them, but often that valuable knowledge is only in people’s heads.

Today we will explore how to take that knowledge from developers’ heads and put it into our SDM codebase. With this approach, the SDM becomes the team member that always seems to know what the error is and what to do about it!

I know that if I make a mistake in one of my Markdown files, Hugo will output an error like this:

Error: Error building site: "/code/hugo-wiki/content/_index.md:4:1": failed to unmarshal YAML: yaml: line 3: could not find expected ':'

We want our SDM to spot an error with the phrase Error building site: and post an error to our Slack channel with the line in question, and a message that tells us what type of error it is. To do this we create a logInterpreter, which is an instance of InterpretLog. Here is the code:

import {
    // one new import
    InterpretedLog,
} from "@atomist/sdm";

const hugoBuildLogInterpreter = (log: string): InterpretedLog => {
    const hugoBuildRegex = /.*Error building site.*/;
    const found = log.match(hugoBuildRegex);
    if (found) {
        return {
            relevantPart: found[0],
            message: "Failed to build Hugo site. We must have made a mistake.",
        };
    } else {
        return {
            relevantPart: "",
            message: "Unknown error. See log.",
        };
    }
};

The logInterpreter is a function that takes log output and returns a summary error message, highlighting the relevant part of the log. We can choose to include as little or as much of the log as we want, plus a message to clearly describe the error.

Then we need to add it to our goal:

const hugoBuildGoal = createGoal({
    displayName: `Build Hugo`,
}, doWithProject(async inv => {
    const artifactVersion = await storedVersion(inv);
    const tarFilename = `${inv.id.repo}-${artifactVersion}.tgz`;
    const s3Bucket = "dan-upload-test";
    inv.progressLog.write("phase:hugo-build");
    const buildResult = await inv.spawn("hugo");
    if (buildResult.code !== 0) {
        return buildResult;
    }
    const tarArgs: string[] = ["-czf", tarFilename, "-C", "public", "."];
    inv.progressLog.write("phase:create-tar");
    const tarResult = await inv.spawn("tar", tarArgs);
    if (tarResult.code !== 0) {
        return tarResult;
    }
    const uploadArgs: string[] = ["s3", "cp", tarFilename, `s3://${s3Bucket}/`];
    inv.progressLog.write("phase:s3-upload");
    const uploadResult = await inv.spawn("aws", uploadArgs);
    if (uploadResult.code !== 0) {
        return uploadResult;
    }
    return {
        ...uploadResult,
        externalUrls: [{ label: `S3 Tar file`, url: `https://s3.amazonaws.com/${s3Bucket}/${tarFilename}` }],
    };
}), {
    progressReporter: testProgressReporter({
        test: /phase:hugo-build/i,
        phase: "Running Hugo build",
    }, {
        test: /phase:create-tar/i,
        phase: "Creating .tar file",
    }, {
        test: /phase:s3-upload/i,
        phase: "Uploading tar file to S3.",
    }),
    logInterpreter: hugoBuildLogInterpreter,
});

Only one line was added today — the one that reads logInterpreter: hugoBuildLogInterpreter.

We can demonstrate this in action by introducing an error to our Hugo wiki repository, then committing the change. In Slack we get an error like this:

Failed to build Hugo site. We must have made a mistake.

Error: Error building site: "/private/var/folders/5v/4dmws3rj4ydcjn09txggqq580000gn/T/atm-4116-41166AOk6mDJx2x7/content/_index.md:4:1": failed to unmarshal YAML: yaml: line 3: could not find expected ':'

If each time our team encounters a new error we add it to our logInterpreter, then over time we will assemble a good knowledge base of errors that benefits everyone.

Day 18: Generate a project

Today we are going to introduce another Atomist feature: generators. We can use a generator to create a new repository based on an existing one, making changes in the process.

We will use the Hugo site repository we have been using previously as our ‘seed’ project, and for today we will make a straight copy — no modifications.

Here is how it is done:

import {
    // one new import from sdm
    GeneratorRegistration,
} from "@atomist/sdm";
import {
    // one new import from automation-client
    GitHubRepoRef,
} from "@atomist/automation-client";

const HugoSiteGenerator: GeneratorRegistration = {
    name: "Hugo Site generation",
    intent: "create hugo site",
    startingPoint: GitHubRepoRef.from({
        owner: "my-github-org",
        repo: "hugo-wiki",
    }),
    transform: [],
};

sdm.addGeneratorCommand(HugoSiteGenerator);

That’s it!

The intent attribute determines how this generator will be run. We have chosen create hugo site, so this is what we type in Slack: @atomist create hugo site. The bot responds by asking us for the name of the target repository, then it tells our SDM to make a copy of the hugo-wiki repository. A short wait and we have a whole new Hugo repository to play with.

Not only that, but over the previous three days of this series we have created a set of goals that builds a Hugo site and uploads the result to S3 for any repository containing a ‘hugo’ file! A new commit event is fired when the project is copied, which our SDM responds to, building the project despite never having seen it before. This, to me, feels like a new superpower!

Day 19: Generate and transform

Yesterday we were introduced to generators, and today we are going to continue with the same theme. A straight copy of a Git repository is useful, but the real power comes when we apply transforms to make changes to our new repository.

Generators and transforms together deliver a similar capability to project creation tools like Yeoman or Maven Archetypes. There are, however, some differences in approach. When using Atomist, seed projects are still ‘real’ projects that can be built and run, and contain no template code. To enable this, Atomist has rich support for AST-based transforms, which can also be used to good effect independently of generators, for example to apply auto-fixes to projects.
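As an aside, my understanding is that the same transform machinery can be attached to an Autofix goal, so that matching pushes receive a corrective commit. A rough, hedged sketch might look like this (the registration shape, the goal wiring and the LICENSE example are all my own assumptions, not something covered in this series):

import { Autofix } from "@atomist/sdm";
import { Project } from "@atomist/automation-client";

// hypothetical auto-fix: add a LICENSE file to any pushed repository that lacks one
const addLicenseAutofix = {
    name: "add-license-file",
    transform: async (p: Project) => {
        if (!(await p.getFile("LICENSE"))) {
            await p.addFile("LICENSE", "Copyright (c) 2018 my-github-org\n");
        }
        return p;
    },
};

const autofixGoal = new Autofix().with(addLicenseAutofix);
// scheduled like any other goal, e.g. onAnyPush().setGoals(autofixGoal)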

We want to apply a transform that updates the H1 header of our homepage Markdown file. In this instance, we don’t have to create our own transform as there is one in the nascent Atomist Markdown extension pack. We import it by adding it to our package.json:

"@atomist/sdm-pack-markdown": "^0.1.2",

The transform is called updateTitle, and takes two arguments: a path to the Markdown file we wish to change, and the new heading text. Once we add this transform to the array of transforms, we're done.

import {
    updateTitle,
} from "@atomist/sdm-pack-markdown";

const hugoSeedRepoDetails = { owner: "my-github-org", repo: "hugo-wiki" };

const HugoSiteGenerator: GeneratorRegistration = {
    name: "Hugo Site generation",
    intent: "create hugo site",
    startingPoint: GitHubRepoRef.from(hugoSeedRepoDetails),
    transform: [updateTitle("README.md", "Merry Christmas")],
};

Next time we run our command @atomist create hugo site, the README.md file of our new repository will have the heading "Merry Christmas".

Day 20: Custom transform

We saw how to add a pre-built transform to our generator, but today we will write our own.

Our seed project is called “Sample Wiki”, and this name appears in three different files. Yesterday we used an off-the-shelf Markdown transformation to update one of them to “Merry Christmas”, but now we will create a custom transformation to update the remaining two instances.

A custom transformation is an instance of a CodeTransform, which in turn is a function that takes a Project and returns a result. The result also contains the project, plus two flags to indicate a) whether the transform was successful and b) whether we edited any files in the process.

Our code transform uses straightforward string replacements of the file content. There are a few steps involved in updating a file (read the content, change the content, write the content back), so to help us we create a helper function replaceInFile. Here is our new code:

import {
    // one new import from sdm
    CodeTransform,
} from "@atomist/sdm";
import {
    // two new imports from automation-client
    Project,
    NoParameters,
} from "@atomist/automation-client";
import { File } from "@atomist/automation-client/lib/project/File";

const hugoSeedRepoDetails = { owner: "my-github-org", repo: "hugo-wiki" };

async function replaceInFile(file: File, updateFileFn: (original: string) => string): Promise<void> {
    const fileContent = await file.getContent();
    const newFileContent = updateFileFn(fileContent);
    await file.setContent(newFileContent);
}

const HugoSiteTransform: CodeTransform<NoParameters> = async (p: Project) => {
    const replaceTitle = (content: string) => content.replace("Sample Wiki", "Merry Christmas");
    await replaceInFile(await p.getFile("config.toml"), replaceTitle);
    await replaceInFile(await p.getFile("content/_index.md"), replaceTitle);
    return { edited: true, target: p, success: true };
};

const HugoSiteGenerator: GeneratorRegistration = {
    name: "Hugo Site generation",
    intent: "create hugo site",
    startingPoint: GitHubRepoRef.from(hugoSeedRepoDetails),
    transform: [
        updateTitle("README.md", "Merry Christmas"),
        HugoSiteTransform,
    ],
};

Notice we add our new transform, HugoSiteTransform, to the array of transforms. Our project generation is now much improved, with the same title appearing in all relevant places. However, having the title hard-coded to "Merry Christmas" is far from ideal. Maybe we'll address that in tomorrow's post!

Day 21: Generate with parameters

At the end of the previous post I suggested that today we would stop using “Merry Christmas” as the title of our newly generated project, and instead ask the person creating the project to provide the title.

To do this, we need to introduce parameters. Or more accurately, we need to refresh our memory of parameters, as we covered them back on the very first day of this series! On days 1, 2 and 3, we looked at commands, and we’ve been creating our generators using a GeneratorRegistration, which is simply a specialized command. Consequently, the generator command accepts all the command attributes we've seen previously, including name, intent and parameters.

For our purposes, we want to add a single parameter, title. We will make this a required parameter and give it a description and a displayName.

In order to use parameters in our transform, we discover that a CodeTransform is actually a function that takes three arguments (not one as previously suggested), with parameters as the third argument. We will ignore the second argument for this post.

The code changes we need to make today are:

  1. Add a parameters attribute to our GeneratorRegistration, defining our title parameter.
  2. Add parameters as an argument to our CodeTransform, and give it a type to get type-safety.
  3. Replace the hard-coded “Merry Christmas” with params.title.
  4. Move the updateTitle transform inside our custom transform, thus combining our two transforms into one. We do this because we cannot use our parameter in the updateTitle transform. Note: having one big transform or many smaller ones is our choice, and depends on how re-usable we want to make the individual transforms.

Here is the full code:

const hugoSeedRepoDetails = { owner: "my-github-org", repo: "hugo-wiki" };

type HugoGenerateParams = { title: string };

const HugoSiteTransform: CodeTransform<HugoGenerateParams> = async (p: Project, papi, params: HugoGenerateParams) => {
    const seedTitle = "Sample Wiki";
    const replaceTitle = (content: string) => content.replace(seedTitle, params.title);
    await replaceInFile(await p.getFile("config.toml"), replaceTitle);
    await replaceInFile(await p.getFile("content/_index.md"), replaceTitle);
    await updateTitle("README.md", params.title)(p, papi);
    return { edited: true, target: p, success: true };
};

const HugoSiteGenerator: GeneratorRegistration<HugoGenerateParams> = {
    name: "Hugo Site generation",
    intent: "create hugo site",
    parameters: { title: { required: true, description: "The title of your Hugo wiki", displayName: "Title" } },
    startingPoint: GitHubRepoRef.from(hugoSeedRepoDetails),
    transform: [HugoSiteTransform],
};

When we next run @atomist create hugo site and are prompted to supply parameters, we are asked for an additional 'title' parameter, which is then used to customise our new repository. Now we have something really useful!

Day 22: Build a Docker image

For today’s installment, I decided to leave generators behind and return to builds. Back on day 15, we created a goal that built and uploaded a TAR file full of web content to our S3 bucket. One way to serve our web content would be to include it in a Docker image that runs Nginx. Let’s create a new goal to do that.

Doubtless we could achieve our aim by running Docker commands using techniques we’ve already encountered, but given there is an Atomist SDM Docker extension pack, I feel we should try and make use of it. We can add the pack to our package.json dependencies just as we did with the markdown pack:

"dependencies": {
"@atomist/sdm-pack-docker": "1.0.2",
},

With the pack in place, all we need to create a Docker build goal is this:

import {
    DockerBuild,
} from "@atomist/sdm-pack-docker";

const hugoDockerBuildGoal = new DockerBuild().with({ options: { push: false } });

For now we’ll set the push option flag to false so there is no attempt to push our image to a registry.

With this terse configuration we get the default behavior and therefore must accept certain conditions. Firstly our Git repository must contain a Dockerfile in its root, and with no modifications to the repository we must be able to run a docker build. Secondly, the Docker image name will be derived from our repository name, and the sole image tag will be the build version generated and stored by the Version goal (see Day 14).

If we accept these conditions, we can add this goal to our goal set:

sdm.withPushRules(
    whenPushSatisfies(hasFile("hugo"))
        .setGoals(goals("hugo goals")
            .plan(versionGoal)
            .plan(hugoBuildGoal).after(versionGoal)
            .plan(hugoDockerBuildGoal).after(hugoBuildGoal)));

When we try it out, we will find that three goals run (Version, Hugo build and Docker build). If all goals succeed, then we will find a new entry in our list of Docker images. Tomorrow we will look at how to make changes to the repository before we run the Docker build, which is almost always necessary in practice.

Day 23: Build a Docker image, part 2

Yesterday we ran a Docker build in a goal, with almost no customization of the default behavior. Today we are going to add our Hugo site files to the Docker image, but to do that we have to download the TAR file from S3.

This is the Dockerfile we will use:

FROM nginx
COPY build /usr/share/nginx/html

The Dockerfile lives in the root of our Git repository, but the contents of the build directory will come from our TAR file.

Note: You may wonder why we chose to transfer the files to and from S3 in our example build process. In fact we didn’t need to — it would be simpler in this case to create a single goal that performed both the Hugo build and the Docker build (it could not remain as separate goals, as each goal gets its own fresh clone of the Git repository). However, there are use cases where uploading a TAR file would be necessary, for example if downstream build processes wanted to use the TAR for other purposes.

The DockerBuild goal we created yesterday allows us to modify the content of the repository on disk before the Docker build runs. We do this by adding a project listener, which is a function that looks like this:

async function myProjectListener(gitProject: GitProject, goalInvocation: GoalInvocation, event: GoalProjectListenerEvent): Promise<void | ExecuteGoalResult> {
    // implementation here
}

Fortunately we don’t need to learn anything new to complete the implementation. Here it is:

import {
    // new imports
    ExecuteGoalResult,
    GoalProjectListenerEvent,
    spawnLog,
    SpawnLogOptions,
} from "@atomist/sdm";
import { GitProject } from "@atomist/automation-client";

// pull these out so they can be shared with our upload goal
const s3Bucket = "dan-upload-test";
const hugoTarFilename = (repo: string, version: string) => `${repo}-${version}.tgz`;

async function hugoS3DownloadProjectListener(gitProject: GitProject, goalInvocation: GoalInvocation, event: GoalProjectListenerEvent): Promise<void | ExecuteGoalResult> {
    const opts: SpawnLogOptions = {
        cwd: gitProject.baseDir,
        log: goalInvocation.progressLog,
    };
    const version = await storedVersion(goalInvocation);
    const tarFilename = hugoTarFilename(gitProject.id.repo, version);
    await gitProject.addDirectory("build");
    const downloadArgs: string[] = ["s3", "cp", `s3://${s3Bucket}/${tarFilename}`, tarFilename];
    goalInvocation.progressLog.write("phase:s3-download");
    const result = await spawnLog("aws", downloadArgs, opts);
    if (result.code !== 0) {
        return result;
    }
    const untarArgs: string[] = ["xvfz", tarFilename, "-C", "build"];
    goalInvocation.progressLog.write("phase:untar");
    return await spawnLog("tar", untarArgs, opts);
}

Now we need to register this project listener with the DockerBuild goal:

import {
    // new import
    HasDockerfile,
} from "@atomist/sdm-pack-docker";

const hugoDockerBuildGoal = new DockerBuild().with({ options: { push: false } })
    .withProjectListener({
        name: "s3-download",
        listener: hugoS3DownloadProjectListener,
        pushTest: HasDockerfile,
    });

At this point we have an end-to-end build from source to runnable Docker image.

Day 24: Using configuration

This is the final post in this series. When I started it I knew very little about writing SDM code. I knew it would be a challenge to get through all 24 days, but in doing so I hoped to learn plenty, and to feel capable of creating a sophisticated build process. I’ve certainly achieved the former objective, and am on my way to the latter.

Today we will look at a feature we haven’t covered yet — making use of the Atomist client configuration file. The file lives at ~/.atomist/client.config.json and is required in order to run an SDM. It should be used for data that we don't want to put in the source of our SDM.

When we built our Docker image, we chose not to push it to a registry. In order to do so, we will certainly need at least a registry address, plus likely a set of credentials.

Let’s make a small change to our SDM code to set the push flag to true:

const hugoDockerBuildGoal = new DockerBuild().with({ options: { push: true } });

With this change our goal will fail with an error that reads Failure in executing goal: Result code 1 Required configuration missing for pushing docker image. Please make sure to set ‘registry’, ‘user’ and ‘password’ in your configuration.

We don’t want to put these details in our SDM, so we’ll put them in our client.config.json file like this:

{
    "sdm": {
        "docker": {
            "hub": {
                "registry": "quay.io",
                "user": "atomist",
                "password": "mypa55word"
            }
        }
    }
}

Now we can change the DockerBuild goal to use these values by including them in the options:

import {
    DockerOptions,
} from "@atomist/sdm-pack-docker";

const hugoDockerBuildGoal = new DockerBuild()
    .with({
        options: {
            push: true,
            ...configuration.sdm.docker.hub as DockerOptions,
        },
    })
    .withProjectListener({
        name: "s3-download",
        listener: hugoS3DownloadProjectListener,
        pushTest: HasDockerfile,
    });

The configuration variable is an argument that is passed into the machine function, and gives us access to the entire contents of client.config.json.

If you are familiar with Docker, you will be aware that in order to push to a registry, our image name must include the registry address. Fortunately this is taken care of without any additional effort on our part — the default Docker image name creator will include the registry in the image name if it finds one set in the options.

That’s it — all 24 days complete! Phew! There’s still plenty we didn’t cover, including auto-fixes, code inspections, presenting buttons to users in Slack, fingerprints, making custom GraphQL queries … and no doubt much more I’m yet to discover. I think I’ll leave all that to explore in 2019. Merry Christmas.
