Getting Started with Travis, NPM, linting and GreenKeeper — RUN__ON Part 1

CS Weekly 6 — Run__ON Part 1

Gregory ‘Grey’ Barkans
17 min read · Aug 23, 2018

This is Part 1 of a mini-series that discusses several topics in creating a full-stack web application called RUN__ON.

For the next several weeks I’m going to create a full stack application, broken down into a series of substantial sub-topics for each week. Naturally, I don’t think I’ll be able to cover every line of code for a full-scale application, but rather I want to remain in the CS Weekly spirit: present a deep(er) dive into substantial topics of interest.

For those who haven’t read about CS Weekly, I presented the motivation for it (a manifesto committing oneself to weekly programmatic explorations) in an article found here:

Last week’s CS Weekly covered string pattern-matching with R-Way Tries:

Introducing RUN__ON: A Project Mini Series

This mini series will cover building a full stack application. I didn’t build the application beforehand; I’ll chip away at topics each week, following the natural development life cycle.

The topics I aim to cover, approximately:

  • CI
  • PostgreSQL and using PgAdmin
  • Node/Express
  • Developing an API
  • TDD, Modules, Software Engineering
  • PG-Promise, Data Layers and Controllers
  • Express Router
  • Building A Client with Vue
  • Vue Components & AJAX Requests
  • Redis and Caching
  • Git and Github Configurations for Open Source Projects

What Will Be Built?

The project-to-be is a variant of MadLibs, a silly fill-in-the-blank game. The initial plan is to have the following functionalities:

  • Users can create new “stories”
  • Stories consist of multiple phrases, which can contain fill-in-the-blanks.
  • Users can fill in a blank on an existing story
  • Users can add sentences to an existing story
  • A NewsFeed of recent activity

CI — What and Why

Travis, Greenkeeper and other services such as AppVeyor are tools that aid in continuous integration (CI for short). The basic premise of continuous integration is to craft an architecture or pipeline for adding to, testing and deploying a code base.

To give motivation, imagine the following:

  • You and several others work on a project together using a central repository
  • Each developer uses personalized development environments and code editors
  • The project utilizes various dependencies that are constantly being patched
  • The software is hosted on a server with a very specific environment

It doesn’t take a stretch of the imagination to think of some of the problems that could very easily arise. To point out a few:

  • unix vs windows line endings in commits
  • tests not running properly on personalized development environments (or taking a colossal time to process)
  • a mismatch in development environment vs deployment environment
  • updating dependencies causing unexpected bugs
  • deploying a recent patch improperly, or downright forgetting to deploy it

With CI, the goal is to automate many of these steps in a centralized way that ensures a smooth development-to-deployment cycle, while minimizing the above bugs and inconsistencies. In particular, Travis can run both test and deploy scripts in very specific environments on a remote server. Developers no longer need to configure and run tests locally, so environment mismatches are less likely to cause problems. Further, the code base can be tested on a distribution the development team otherwise does not use or have access to.

Furthermore, Github branches can be configured to disallow merges until Travis passes with a green light. This can even include simple things like code-linting. Branches can also be individually configured to trigger specific builds and automatic deploys every time there’s a push or merge (ex: master or release branch(es)).

Greenkeeper monitors package dependencies for updates. For example, if a project depends on vue and vue releases a new patch, Greenkeeper will automatically create a pull request with the updated vue dependency. By doing so, it invokes a Travis build which is used to demonstrate if the upgraded dependency will cause any breakages. Thus it keeps one’s dependencies fresh or ‘green’, which in the long-run is helpful for project maintenance and mitigating security vulnerabilities in older versions.

Of course, all of these tools also automatically send notifications and emails in order to make everything easy to monitor.

Essentially, everything just kind of happens ‘magically’, which makes our lives just a tad easier.

Project Setup

Create a new project on Github and enable Travis as well as Greenkeeper. I like to keep everything empty so that my initial commit comes from my first push.

If you’ve never used Travis or Greenkeeper, chances are those options don’t appear for you. Not to sweat — before creating your new repo, head over to the Github Marketplace. There, you can do a search for each and setup the free plans for your account.

Once you’ve created the repository and given access to Travis CI and Greenkeeper, you’ll need to head to each of their websites and do some additional setup.

Initializing Travis

It’s best to start with Travis since Greenkeeper relies on it. Head to https://travis-ci.org and in the top right hit ‘sign in with Github’, then go to your profile. Here you’ll want to click sync account.

Once everything is sync’d, find the repository you just created (pro-tip: just type it out into the filter box). Move the toggle switch to ‘on’.

Great, now you have access to the project’s dashboard:

build: unknown!!!

Initializing Greenkeeper

The process for Greenkeeper depends upon Travis. At this point, Travis is enabled but our repository is empty. Thus we have to actually configure Travis and make a first commit, then we’ll configure Greenkeeper.

Setting Up A Local Dev Environment

If you’re on Windows like me, open up Git Bash (otherwise, a terminal on a *nix system) and head to a directory of choice. Make a new directory for the project and point it to the remote repository that was just created on Github.

cd ~/Documents/Repositories/cs-weekly
mkdir run__on
cd run__on
git init
git remote add origin <your url>
# to verify
git remote -v
git fetch origin

Let’s add 2 files:

  • README
  • .travis.yml (ps: do note the dot (.))

echo "# RUN__ON" > README.md
touch .travis.yml

Travis Scripts — A Brief Intro

A first Travis script is, line-by-line, not overly difficult to write. The real challenge is understanding how it all works together.

For the complete newcomer, there are a few core concepts/terms that need to be hashed out prior:

  • Run Cycle
  • Terms: Jobs, Phases, Builds, Stages

Travis Run Cycle - Installations and Scripts

There are two major parts of the cycle: installing dependencies/environmental setup, followed by executing scripts.

The entire cycle is as follows:

  1. Install addons — Travis runs an Ubuntu environment, with access to the Advanced Package Tool (apt). Here, packages related to the environment are installed by running commands like apt-get. We’ll be setting up a PostgreSQL database.
  2. Install cached components — In order to speed up Travis builds, specific addons and dependencies can be cached for a quicker download. At this stage, any prior cached components are installed.
  3. Before Install — Commands to run before installing project dependencies. We’ll install global npm modules here.
  4. Install — The project dependencies are installed (ie: npm install).
  5. Before Script — Commands to run before your test scripts. We’ll run database creation here.
  6. Script — Test Suites (ie: npm test)
  7. Before Cache — Run commands just before uploading a new cache archive. The documentation lists managing log files as a potential use case.
  8. After Success or After Failure — Depending on whether or not the prior phases passed without errors, you’ll have access to run commands after success or after failure. In the future we’ll run code coverage after success.
  9. Before Deploy — If you’re using Travis to automatically deploy, the before deploy, deploy and after deploy hooks are run only on success.
  10. Deploy — in the future, we’ll run a Heroku deploy here.
  11. After Deploy
  12. After Script — Finally, if there are any other tasks to do after everything, you can define them here. Perhaps you send information to a Slack channel or increment a counter on an Arduino running on the moon. Whatever else you fancy, you can do it here.
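Mapped onto an actual .travis.yml, these phases appear as top-level fields in run-cycle order. The script below is only a skeleton to show where each phase lives; every command and package name in it is a placeholder, not part of this project’s script:

```yaml
language: node_js
addons:                        # 1. apt addons
  apt:
    packages:
      - some-apt-package       # placeholder
cache: npm                     # 2. cached components
before_install:                # 3
  - npm install -g some-cli    # placeholder
install:                       # 4 (defaults to npm install)
  - npm install
before_script:                 # 5
  - createdb some_test_db      # placeholder
script:                        # 6 (defaults to npm test)
  - npm test
after_success:                 # 8
  - npm run report-coverage    # placeholder
deploy:                        # 9-11
  provider: heroku
after_script:                  # 12
  - echo "all done"
```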

Builds, Jobs, Stages — Oh My

  • Job — A run-through of a series of phases (installation → scripts)
  • Phase — Sequential steps in the job (parts of the Run Cycle). For example, there could be multiple phases for installation in the cycle.
  • Build — A group of jobs. The simplest example of a build with multiple jobs is running the same tests in two different environments (ex: Node latest version and Node LTS version)
  • Stage — A build stage is a way to group jobs such that the jobs of each stage run in parallel, but the stages are sequential. An example is several test stages that can run simultaneously and a deploy stage that should only run once all of the test stages finish without error.
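These terms can be illustrated with a hypothetical fragment (not this project’s script): listing two Node versions produces one build containing two parallel test jobs, while a deploy stage runs only once both test jobs succeed:

```yaml
language: node_js
node_js:            # two entries → one build with two test jobs
  - "node"          # job 1: latest stable
  - "lts/*"         # job 2: latest LTS
jobs:
  include:
    - stage: deploy # runs only after all test-stage jobs pass
      node_js: "node"
      script: skip
      deploy:
        provider: heroku
```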

Let’s make these terms concrete by diving into a first build that runs a single job. A stage will not yet be declared as there’s only one job.

A First Travis Script for Node with Postgres

The script is written top-down in the order of the run cycle.

Start by declaring the environment, installation and add-ons (steps 1–2 in the run cycle).

Define the language and version(s) used to run the application. Node’s latest stable version will be used as Heroku supports it and thus the application will be deployed with it as a target. As shown in the docs, “node” (in quotation marks) declares the latest stable version.

language: node_js
node_js:
  - "node"

A PostgreSQL database will be used, which is considered a service/add-on. By default, PostgreSQL 9.2 is installed, but any of the supported versions in the apt can be installed. Thus, I’m going to declare version 10. However, doing so is not as straightforward as it should be, as discussed in this popular open issue https://github.com/travis-ci/travis-ci/issues/8537.

I’ll present the solution discovered in that thread.

services:
  - postgresql
addons:
  postgresql: "10"
  apt:
    packages:
      - postgresql-10
      - postgresql-client-10

As is clear, we’re using the apt to install postgresql as well as the postgresql client (which includes psql).

Next install project dependencies (steps 3–4 in the run cycle).

It might be tempting to do the following:

install:
  - npm install

However, by virtue of listing language: node_js, the above install script is the default behaviour that will run if nothing is specified. Therefore omit the install field for now.

Next is running scripts (steps 5–6).

PostgreSQL was installed earlier, but a user and database need to be configured in order to create tables, run tests, etc. The before_script is the place to run these kinds of configurations, as they’re certainly not a matter of installation but also need to occur prior to running test suites.

It might be tempting to construct a command like the following:

psql --command="CREATE DATABASE runon_test WITH OWNER = postgres;"

However, think about how database credentials are passed to the application in production and even in development: usually through environment variables.

Thus it’s better practice to do the same in the Travis environment. The env flag can be used to declare environment variables. When Travis executes this script, these variables are just exported as such: export PGPORT=5433.

env:
  global:
    - PGPORT=5433
    - DB_NAME=runon_test
    - DB_USER=runon
before_script:
  - psql --command="CREATE USER ${DB_USER};"
  - psql --command="CREATE DATABASE ${DB_NAME} WITH OWNER = ${DB_USER};"
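The payoff of this approach is that application code can read the exact same variables. A minimal sketch (the file path and fallback values are hypothetical local-development defaults, not part of the Travis script):

```javascript
// config/db.js (hypothetical path) — reads the same environment
// variables declared in .travis.yml, falling back to made-up
// local-development defaults when they are unset.
const config = {
  port: Number(process.env.PGPORT) || 5432,
  database: process.env.DB_NAME || 'runon_dev',
  user: process.env.DB_USER || 'postgres'
}

module.exports = config
```

In the Travis environment the exported variables win; locally, the fallbacks do.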

Finally, the test script can be run. Again, it might be tempting to write the following:

script:
  - npm run test

but much like the installation, this is the default behaviour if not specified by virtue of setting the language to node. Thus omit declaring the script field and we’re actually done!

The script in its entirety:

language: node_js
node_js:
  - "node"
services:
  - postgresql
addons:
  postgresql: "10"
  apt:
    packages:
      - postgresql-10
      - postgresql-client-10
env:
  global:
    - PGPORT=5433
    - DB_NAME=runon_test
    - DB_USER=runon
before_script:
  - psql --command="CREATE USER ${DB_USER};"
  - psql --command="CREATE DATABASE ${DB_NAME} WITH OWNER = ${DB_USER};"

NPM

For a first commit and Travis build, a package is needed.

npm init

Now, I actually use this command as a lazy way to create the package.json file, getting the ‘repository’, ‘homepage’ and ‘issues’ fields filled in for free. It isn’t strictly necessary, as the command doesn’t do anything special like installations, so one can omit running it entirely. For the prompts that follow after running npm init, I just rapidly hit [enter] until the file is created.

As for justification: the CLI doesn’t prompt for all of the fields I wish to specify, I dislike typing long strings into a command-line interface, and I don’t like how the generated file is structured.

Once the default is created, I open it up in my preferred text editor (Sublime) and fill out all relevant fields in the order they’re defined in the npm docs:

{
  "name": "run__on",
  "version": "0.0.0",
  "description": "A variant of mad libs.",
  "homepage": "https://github.com/vapurrmaid/run__on#readme",
  "keywords": [],
  "bugs": {
    "url": "https://github.com/vapurrmaid/run__on/issues"
  },
  "license": "Apache-2.0",
  "author": {
    "name": "Vapurrmaid",
    "email": "vapurrmaid@gmail.com",
    "url": "https://github.com/vapurrmaid"
  },
  "contributors": [],
  "repository": {
    "type": "git",
    "url": "git+https://github.com/vapurrmaid/run__on.git"
  },
  "scripts": {},
  "dependencies": {},
  "devDependencies": {},
  "engines": {
    "node": ">=10.0"
  },
  "private": true
}

Additionally, I think it’s worth putting in the time to get the package set correctly from the get-go.

Of special note:

  • version set to 0.0.0 instead of the default 1.0.0. I don’t think it makes sense to start at 1 (I mean, we count indices starting from 0, why would a new project already be set to version 1?). I like to consider 0.1.0 the first major development publish — something that could run standalone error-free but isn’t necessarily production ready. In my opinion, 1.0.0 is the first publicly released API. In the case of an application, the first version of a real deploy after release candidates.
  • no main is listed because we’re not exporting a library/module, rather defining an application. In other words, run__on is not being imported or required into another project.
  • engines matches the deployment environment or otherwise the intended environment where the package should be executed
  • private is set to true indicating no intention of making this a public NPM package (and it wouldn’t make sense to do so, as once again it’s not something others can import)
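One aside worth knowing about engines: by default, npm only warns when the running Node version doesn’t satisfy the field. If you want that mismatch to be a hard failure at install time, an .npmrc with engine-strict does it (optional, not something this project requires):

```
# .npmrc
engine-strict=true
```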

Here’s a really useful NPM package validation tool to verify that everything is filled out correctly:

Sorting out Project Infrastructure: Mono Repos or Not?

At this stage, before installing dependencies and making any further strides, it’s important to come to an understanding of how the project will be structured. Quite specifically I’m referring to the directory structure and packages.

One architecture is the so-called monorepo. Essentially the entire application codebase lives in a single repository and is considered a package of packages. Let’s assume, for example, that you’re creating a desktop and mobile client of a site which uses a single server. A package-of-packages monorepo structure could look like the following:

root
|_ .travis.yml
|_ package.json
|_ mobile_client/
   |_ package.json
|_ server/
   |_ package.json
|_ web_client/
   |_ package.json

If there’s just one client, a condensed version is to simply make the server the root and add a client sub-directory:

root
|_ .travis.yml
|_ package.json
|_ server.js
|_ client/
   |_ package.json
|_ controllers/
|_ routes/
|_ services/

Often, however, packages are kept in separate repositories. In the first example there would be one repository for the server/API, one for a web client, one for a mobile client and perhaps even another one solely for bugs/reporting and documentation.

It wasn’t actually possible until recently to use Greenkeeper in a monorepo with multiple packages unless all dependencies were listed in the root. However, they do now (and thankfully) support monorepo architectures.

Because RUN__ON is a toy/hobby project and there will not be more than one client nor any intention of a native mobile app, I’ll opt to use a monorepo to keep everything in one place. Further, for simplicity, I’ll use the condensed structure: the server is the root package, as opposed to shipping a package of packages. Again, I’m doing this for simplicity but please keep in mind that separate repositories or shipping a package of packages is likely the better option for serious endeavours, or at a very minimum complies with semantic versioning in a more intuitive manner.

Another note with separate repositories is that history related to each package is self-contained. If, for example, one switches from a react web client 👎 to a vue web client 👍(a change for the better), there will be some messy commits in the history. These commit histories will have nothing in common with work on the API. However in a separate repository model, there are many options to keep things ‘clean’. The existing client repository could be archived in favour of a new one, it could be archived in a separate branch or simply the history could be deleted/re-written without compromising other unrelated histories.

Setting up a Linter

Alright, at this point we have:

  • initialized a Github project
  • decided on a project architecture
  • initialized Travis and wrote a basic .travis.yml script
  • initialized the package.json
  • initialized a README.md

Before the first commit, I like to add linting. For those that are unfamiliar with linting, these are tools to enforce and ensure code styles. Lint tests are added to the test suite so that any code prior to merge is guaranteed to be consistent with line endings, tabs/spaces and general code aesthetic. Beyond these factors, linting helps discover bugs during development (such as unused variables, etc).

I use both ESLint and Standard. ESLint configuration will be seen in the future, as that will be more relevant to the Vue client. Standard is a self-contained ESLint configuration that will work for the server JavaScript files. We can use it as a global or drop it in as a development dependency without any manual configuration or installing any other dependency.

It’s recommended to install standard globally if using certain text editor plugins. Thus, run:

npm install -g standard

However, we have to be careful now — by installing as a global and not as a development dependency, Travis may not have standard in its path. So let’s add standard into the .travis.yml script in the before_install field, as that mirrors the development situation:

before_install:
  - npm install -g standard

Note: before_install will appear between env and before_script.

If like me you use Sublime Text, make sure you installed and read the documentation of the following with package control:

The latter package automatically runs standard --fix from your global path in order to lint JavaScript files live as you write them. It’s quite handy.

Alright, now that standard is installed, in your path, and optionally integrated with your code editor, simply create a lint script in package.json and run that script from a test script:

"scripts": {
  "test": "npm run lint",
  "lint": "standard"
}

To verify if things work, create some temp file foo.js and make some kind of obvious error.

console.log('error

If your code editor successfully integrated with standard, you’ll get some error highlighting with error messages on hover (as well as messages in the console).

Linting Is Quite Useful in Reducing Simple Blunders

Next test all 3 of the following:

  1. standard
  2. npm run lint
  3. npm run test

In all three cases you should see something like:

Parsing error: Unterminated string constant

and for the npm commands, some error logs will be additionally printed.

Verify everything works, then remove the temporary file.

Adding First Dependencies, Gitignore and First Commit

Last step before the initial commit. I promise.

At this point, it is already known that express, postgresql and a few testing libraries will be used. Add the following:

npm install --save express pg-promise
npm install --save-dev assert cross-env mocha nodemon

Running npm installations generates a local node_modules folder which is to be ignored from source control. Add an ignore file and list that folder:

echo -e "#dependencies\nnode_modules" > .gitignore
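If you prefer a slightly fuller ignore file, a sketch follows; the entries beyond node_modules are common suggestions, not requirements, and printf is used because `echo -e` is not portable across shells:

```shell
# Write a multi-line .gitignore; each argument becomes one line.
printf '%s\n' \
  '# dependencies' \
  'node_modules' \
  '# logs' \
  '*.log' \
  '# local environment variables' \
  '.env' > .gitignore
cat .gitignore
```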

Finally, fire up the commit pipeline.

Verify everything:

git status
> .gitignore
> README.md
> package-lock.json
> package.json
> .travis.yml

Stage everything:

git add .

Write a first commit message:

git commit
# in the vim editor
Initial Commit
:wq

Finally, push and let Travis run for the first time:

git push origin master

With Travis open in the browser the build is seen in real time:

TIP: click the grey circle to follow the log prints in real time, with auto scroll

Github — Loose Ends

Before leaving things hanging at this point, let’s cover two loose ends on Github.

First, a license was declared in the root package.json, and thus a license should be added to the repository. Github offers a really easy way to add this without much effort:

  • Insights > Community > Add License

Don’t sweat adding an open source license — let Github handle it

From there choose a license from the left menu and click “review and submit”. Depending on the license, it might prompt you for some additional information to add to the license such as your name.

Finally, commit the license file directly to master. Don’t forget to pull this commit locally.

No need for a separate branch

As a last loose end, protect the master branch given that CI is all set.

  • Settings > Branches > Add Rule

To the master branch, add a rule to ensure status checks. This way, merges into master will be blocked if Travis fails or if new code has been pushed to master without running tests on the pull request. As the repository owner/administrator, you always have the ability to override status checks, but it’s a little bit of insurance, especially if others start collaborating on the project, or for an all-too-human moment when you select the wrong base branch for a pull request without noticing.

Basic branch protection

Note — Once properly set a lock 🔒 will appear next to master under the “branches” tab.

Activating GreenKeeper

Go back to your account on https://greenkeeper.io/. In my experience, it is highly likely that you will see red highlighting for the project, indicating that Greenkeeper still needs to be enabled. This is because we installed Greenkeeper before there was a Travis build for it to operate on.

Just click the ‘fix repo’ button, which should trigger GreenKeeper to open a pull request in your repository.

If you open the PR, you’ll see a real-time display of CI status checks. Only when all of them pass will the merge button turn green.

As an optional aesthetic preference, I’m going to also add the Travis badge to this PR.

On the Travis dashboard, click the badge at the top (it might say ‘build unknown’ still) and copy the markdown code. Add it to the README file next to the greenkeeper badge.

After saving this change, you’ll see the merge button go red and the status checks re-trigger immediately, which is pretty neat.

Merge once ready, and call it a day for now.

Wrap Up

The idea for project run_on was introduced alongside preliminary setup for continuous integration (CI). Travis is a tool that automatically runs builds to test software in specified environments. Travis scripts consist of setting an environment, running installations, test scripts and deploy scripts. Greenkeeper monitors updates for project dependencies. Lint checks ensure line endings and code style are consistent across commits. Finally, Github offers ways to protect branches from commits that do not pass status checks such as failing Travis builds.

The project lives here on Github, and the commit history follows exactly what was shown in this article:
