Task runner or npm run-scripts?
A viable replacement for flavor of the month task runner? 🏃
This one’s been on the back burner for a while so here goes 🛠
“What’s all this hype about run-scripts?!”
I’ve seen articles pop up about ditching “flavor of the month” task runners 🍭 Can `run-scripts` really be a long term solution for ditching my task runner?
“You don’t need X, Y or Z. Just use npm.” — someone, somewhere… likely.
I’ll say up front that if you have a build setup that works well for you, I’m by no means suggesting you go out and rewrite the whole thing in npm `run-scripts`.
For those in camp TL;DR: `run-scripts` can be a viable solution (strong emphasis on “can”) in the right situations, and here’s an example.
So what’s an npm run-script?
The `npm run` command is powerful. It enables you to run any instructions defined under the `scripts` key of your `package.json` file. These scripts have access to locally installed packages within your project, namely any CLI that a package exposes.
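That access works because npm prepends the local `node_modules/.bin` directory to the `PATH` while a script runs, so locally installed CLIs can be called by name without a global install. A hypothetical example, assuming `eslint` is installed as a dev dependency:

```
"scripts": {
  "lint": "eslint src"
}
```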
Let’s start with a very stripped down package file.
{
  "name": "package",
  "version": "0.0.0"
}
A basic script
For an example, let’s assume we are using `bower` in a project. Like `npm`, `bower` requires its own `install` command to be run in order to pull in dependencies. For a very simple `run-script`, we could create a `setup` script.
{
  "name": "package",
  "version": "0.0.0",
  "scripts": {
    "setup": "bower install && npm install"
  }
}
To run this:
npm run setup
That’s a run script 👍 It’s really that simple and often overlooked.
A big benefit of using `run-scripts` is that you use packages directly, which means you’re always up to date. This is something that can be problematic with flavor of the month task runners. For example, consider a scenario where I’m using `package-X` in a project that uses `gulp`. I pull in `gulp-package-X` to process my source. A few weeks later, a new version of `package-X` lands with some great feature I would like to use. I can’t use that feature until `gulp-package-X` is updated. If it isn’t maintained regularly, I could get caught using an out of date version of `package-X` for some time 👎
Pre/Post hooks
You can define `pre` and `post` hooks for `run-scripts`. These run before and after a `run-script` respectively. To create a `pre`/`post` hook, just define a `run-script` with the exact same name as the main `run-script` but with `pre`/`post` prefixed to the script name:
"setup": "bower install && npm install",
"presetup": "say Commencing setup",
"postsetup": "say Setup complete!"
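npm resolves these hooks purely by name prefix. A tiny sketch of that lookup, simulating the scripts object above (this is an illustration, not npm’s actual internals):

```javascript
// The scripts object, as it would appear in package.json
const scripts = {
  presetup: 'say Commencing setup',
  setup: 'bower install && npm install',
  postsetup: 'say Setup complete!'
};

// For `npm run setup`, npm looks for pre<name>, <name>, post<name>
const name = 'setup';
const order = [`pre${name}`, name, `post${name}`].filter(s => s in scripts);

console.log(order); // → [ 'presetup', 'setup', 'postsetup' ]
```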
In this scenario we could add audio cues to let the user know the setup is complete 🤓
Config options
We don’t want to keep repeating ourselves across `run-scripts`. That’s fine, as we can define variables to be shared across `run-scripts`. Let’s assume for our `setup` example we wanted to create message variables for the audio cues:
"config": {
  "commence_setup_msg": "commencing setup",
  "complete_setup_msg": "setup complete"
},
"scripts": {
  "presetup": "say $npm_package_config_commence_setup_msg",
  "setup": "npm install",
  "postsetup": "say $npm_package_config_complete_setup_msg"
}
Using the prefix `$npm_package_config_` we can access any key values under the `config` key. File paths for IO operations are likely the most fitting use for these config variables. The only issue here is the amount of bloat in the JSON from repeatedly writing `$npm_package_config_` 🐡
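Under the hood, npm exposes each `config` key as an environment variable, so a node task script can read them too. A small sketch; here we set the variable manually to simulate what npm does when the script runs:

```javascript
// npm sets this automatically from package.json's "config" key;
// we simulate it here so the sketch runs standalone.
process.env.npm_package_config_commence_setup_msg = 'commencing setup';

// Inside a task script, read the value back:
const msg = process.env.npm_package_config_commence_setup_msg;

console.log(msg); // → commencing setup
```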
Pass through options
If one `run-script` invokes another, we can pass options through to that sub `run-script` using two hyphens before our list of options. Consider the following:
"a": "someAwesomePackage",
"b": "npm run a -- --awesome --amazing"
In this example, running `run-script` `b` would invoke `run-script` `a`, passing the options `awesome` and `amazing` to `someAwesomePackage`.
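Everything after the `--` lands in the sub-script’s `process.argv`. A minimal sketch of pulling those flags out, with a simulated `argv` (the real values depend on how the script is invoked):

```javascript
// Simulated process.argv for: npm run a -- --awesome --amazing
const argv = ['node', 'someAwesomePackage', '--awesome', '--amazing'];

// Strip the leading '--' from each passed flag
const flags = argv.slice(2)
  .filter(arg => arg.startsWith('--'))
  .map(arg => arg.slice(2));

console.log(flags); // → [ 'awesome', 'amazing' ]
```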
That’s it for the basic anatomy of `run-scripts`. More information can be found in the npm documentation 📖
Npm run scripts for real tasks
How about something useful? A common scenario would be compiling some source and outputting it somewhere. How about `Babel` transpilation and watching?
"watch:scripts": "babel src/scripts --watch --out-dir public/js",
"build:scripts": "babel src/scripts --out-dir public/js",
Here we invoke the command line interface of the `babel` package to process any script files under `src/scripts` 👍
To run our transpilation task:
npm run build:scripts
How about a script for setting up a local static server with live reload goodness?
"serve": "browser-sync start --port 3000 --files public/ --server public",
A `pre` hook would be advisable too, in order to ensure there is content to serve. Maybe something that compiles all of your content:
"preserve": "npm run build:site"
Keep on developing run scripts and soon you will have covered all bases. Are `run-scripts` really usable? Sure. They can be convenient and more than enough for some projects.
The dark side of just using npm run-scripts
They are not perfect, but is any task runner? There can be issues when you want to use just `run-scripts`. They aren’t major; some are barely issues at all.
Differing package behaviour and CLI adjustment
Somewhat of a non-issue, but worth a mention.
There isn’t a set of rules on how everyone should write their `node` packages. As a result, most `node` packages behave and interact differently. It’s an exceptionally large ecosystem in comparison to those of task runners such as `gulp` and `grunt`.
I recently encountered a scenario where one package would throw an error if the output directory did not exist. A different package doing the same job before had not thrown the error and would create the directory itself if it did not exist. I found myself having to write a `pre` hook to make sure the directory existed.
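As a sketch, such a hook can be a one-liner (assuming a Unix-like shell and a hypothetical `public/js` output directory):

```
"prebuild:scripts": "mkdir -p public/js",
"build:scripts": "babel src/scripts --out-dir public/js"
```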
It’s not something that should deter you from using `run-scripts`; it’s just something to be aware of. Different packages have different interfaces and behaviors. Consider that same scenario when using a task runner like `gulp`. It would actually be a lot simpler because it abstracts us away from having to know and deal with these small quirks. For example, the following `gulp` task:
gulp.task('compile:styles', function() {
  return gulp.src(src.styles)
    .pipe(plugins.sass(opts))
    .pipe(gulp.dest(dest.css));
});
This compiles our `.sass` files. We then decide to switch to `Stylus`. This would require a one line change, plus altering any necessary options:
gulp.task('compile:styles', function() {
  return gulp.src(src.styles)
    .pipe(plugins.stylus(opts))
    .pipe(gulp.dest(dest.css));
});
Yes, there will still be some adjustment required if certain options are needed, but the most common case will be: take some code, process it, then pipe the output somewhere. With `run-scripts` we have to account for differing behavior in addition to potentially differing CLI flags.
Comprehension
I believe the ideal scenario for using `run-scripts` is when the build logic footprint is minimal. I made an effort to use purely `run-scripts` on one of my own projects. It’s a small project… and if I didn’t know how `run-scripts` worked, or was unfamiliar with the project, I wouldn’t say they were the easiest thing to look at. IMHO, `run-scripts` are not the most user friendly to read; comprehension isn’t great. To make matters worse, it’s a JSON file. This means NO comments, and no way of aiding others’ understanding of the scripts.
You can run:
npm run
But this merely echoes the available scripts and their content in the terminal. Admittedly, it is easier to read, but it still doesn’t explain anything (maybe an opportunity to hook into the `npm run` command and do some pretty printing). This means you will likely have to write some documentation in order for people to use and compile your code.
This is true for most projects. However, more commonplace task runners have some cushion in that they implement default tasks or helpers. When I see a `gulpfile` or a `gruntfile` in a project, if I’m not sure what I need to do, I just run `gulp`/`grunt` and see what happens. Either that, or I can have a look into those files.
With `run-scripts`, there is no standard naming convention for a default task. For now, I’ve adopted `npm run develop` as my default development task. This would need explicit documentation for users. Yes, you don’t have to write any task runner code, but this could balance out when you need to write a document explaining how your `run-scripts` work.
A self-documented `Makefile` is ideal for this problem, and I actually believe it could be a better solution.
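For reference, a common self-documenting pattern is to tag each target with a `##` comment and have a `help` target scrape them out. A sketch under those conventions, not a drop-in file:

```make
help: ## List available targets and what they do
	@grep -E '^[a-zA-Z_-]+:.*?## ' $(MAKEFILE_LIST) | \
		awk 'BEGIN {FS = ":.*?## "}; {printf "%-8s %s\n", $$1, $$2}'

setup: ## Pull in bower and npm dependencies
	bower install && npm install

serve: ## Serve public/ with live reload
	browser-sync start --port 3000 --files public/ --server public
```

Running `make help` then prints a little task menu for anyone new to the project.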
Tighter control
How about more complicated projects? What if we need tighter, more low level control over certain tasks? What about when the CLI for a package doesn’t quite offer what we need?
For one, the number of `run-scripts` could grow and become hard to maintain. And I’ve already encountered scenarios with `run-scripts` where I couldn’t quite do what I wanted, even though I knew the package provided the functionality. Consider the following example with `browser-sync`.
If I’m using `browser-sync` and I make changes to my `CSS`, I don’t want to refresh the whole page, because I may lose the state of the page on refresh. At the time of writing, there is no way with the `browser-sync` CLI to say “hey, when you pick up on a CSS change, just inject that, OK?” 😢
A happy medium, scripts + scripts?
Maybe writing all of your build logic into `package.json` isn’t ideal. So why not write some build tasks in separate `javascript` modules and invoke them from `run-scripts`?
For a lot of projects we do similar things in our build tasks.
- Watch some files
- Compile some files
- Optimise some files
- Lint some files
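Those jobs could map one-to-one onto task modules, something like the following (the file names are hypothetical):

```
"scripts": {
  "watch": "node tasks/watch.js",
  "build": "node tasks/build.js",
  "optimise": "node tasks/optimise.js",
  "lint": "node tasks/lint.js"
}
```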
We can invoke node scripts from within our `run-scripts`. For example, we could write a task within a file:
"task": "node tasks/task.js"
and invoke it from a `run-script`:
npm run task
This actually solves one of the shortfalls of sticking strictly to the CLIs of `npm` packages.
A real example
Let’s start with a real example, working through one of the previous problems regarding injecting `CSS` changes with `browser-sync`. With the CLI this is not possible (as far as I’m aware), but if we define a task within a `run-script` we can tackle it. Consider the following file, `serve.js`:
const sync = require('browser-sync'),
      opts = require('../bolt-config').pluginOpts,
      source = require('vinyl-source-stream'),
      buffer = require('vinyl-buffer'),
      vinyl = require('vinyl-file');

const serve = function() {
  const server = sync.create();
  server.init(opts.browsersync);
  server.watch('public/**/*.*', function(evt, file) {
    if (evt === 'change' && file.indexOf('.css') === -1) {
      server.reload();
    }
    if (evt === 'change' && file.indexOf('.css') !== -1) {
      vinyl.readSync(file)
        .pipe(source(file))
        .pipe(buffer())
        .pipe(server.stream());
    }
  });
};

if (require.main === module) {
  serve();
} else {
  module.exports = serve;
}
This is the entirety of the script. We set up a local static server with `browser-sync` and watch for file changes in the output directory. If a file changes and is not `CSS`, we reload the server. If a `CSS` file changes, we read the file and stream its buffer contents to the server for injection.
To invoke the script, we could set up something like:
"serve": "node tasks/serve.js"
We will most likely want to use a `pre` hook to ensure that our content is built before being served:
npm run serve
Passing options to tasks
Simple tasks most likely don’t need any runtime options. But what about when we want our task to act slightly differently? We can write a `utils` module to help us out.
Let’s assume we are working with some scripts:
"build:scripts": "node tasks/scripts.js"
A common scenario might be wanting to minify and optimise some compiled scripts. Using pass-through options, we could pass a `minified` flag to our task script:
"deploy:scripts": "node tasks/scripts.js -- --minified"
How do we read these options? We could parse them manually, or call on the trusty help of a node package like `commander.js`.
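For a single flag, the manual route is only a couple of lines; a sketch with a simulated `argv` (commander earns its keep once you want aliases, defaults and `--help` for free):

```javascript
// Simulated process.argv for: node tasks/scripts.js --minified
const argv = ['node', 'tasks/scripts.js', '--minified'];

// Manual check; no parsing library needed
const minified = argv.includes('--minified') || argv.includes('-m');

console.log(minified); // → true
```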
In our `utils` module, we could expose the following function:
const program = require('commander');

const getArgs = function(args) {
  program
    .option('-m, --minified', 'Minify output')
    .parse(args);
  return program;
};
Then, from within our `scripts` module, we can read the options by invoking this utility function:
const args = utils.getArgs(process.argv);
And then use them to make decisions in our build logic:
if (args.minified) {
  const minified = uglify.minify(scripts, opts.uglify);
  fs.writeFileSync('script.min.js', minified.code);
}
Watching files
Watching files is going to be important if we want to automate things.
If you’re using a package CLI to process your code and it offers watch functionality, then that will most likely be more than suitable. Alternatively, you could use a CLI solution like the `watch` module to monitor changes in files and trigger compilation commands. Something like:
"watch:scripts": "watch \"npm run build:scripts\" src/coffee"
Personally, I didn’t feel like CLI solutions were quite quick enough in comparison to solutions like `gulp`; I always found a slight delay with compilation. This shouldn’t be much of an issue, but it’d be nice to have it run quicker.
We could always write our own watch handler in a task; there actually isn’t much to it. We want to pass a directory to watch, a command or script to execute on change, and a name for the watcher. This means we can do something like:
node ./tasks/watch --dir src/coffee --compiler scripts.js --name CoffeeScript
First, we would need to extend our utility options handler:
getArgs = function(args) {
  program
    .option('-d, --dir [value]', 'Specify a directory')
    .option('-e, --exec [value]', 'Execute function')
    .option('-c, --compiler [value]', 'Node script to fire')
    .option('-n, --name [value]', 'Name')
    .option('-m, --minified', 'Minify output')
    .parse(args);
  return program;
}
And then we create a watcher using `fs.watch`:
// assumes winston for logging, shelljs for shell commands and our utils module
const fs = require('fs'),
      shell = require('shelljs'),
      winston = require('winston'),
      utils = require('./utils');

const compiler = {},
      files = fs.readdirSync(__dirname);

const watch = function() {
  const args = utils.getArgs(process.argv);
  if (typeof args.name === 'string')
    winston.info(`${args.name} watcher started!`);
  if (args.dir && (args.exec || args.compiler)) {
    fs.watch(args.dir, { persistent: true, recursive: true }, function(e, file) {
      if (file) {
        winston.info(`${file} changed!`);
        if (args.compiler && compiler[args.compiler])
          compiler[args.compiler]();
        else
          shell.exec(args.exec);
      }
    });
  } else {
    throw new Error('Something went wrong');
  }
};

// register any sibling task modules so they can be fired as compilers
for (const file of files) {
  if (file.indexOf('.js') !== -1)
    compiler[file] = require(`./${file}`);
}

watch();
The resulting script and watcher is actually pretty fast.
Linting etc.
For tasks such as linting files, running analytics, setup, etc., it’s likely you won’t need to write any task files, as you’re not dealing with output files; in most cases the output just lands in the terminal.
I could keep writing about different types of task, `run-script` and approach. That’s one of the things about using `run-scripts`: it’s a bit of a free-for-all, with multiple approaches you can take and various scenarios you might come across.
To conclude
`run-scripts` can be a viable solution for running tasks, but for me, only when it’s appropriate. Appropriate scenarios are most likely projects with a minimal build logic footprint. When things get a little complex and the majority of your build logic lives in a `JSON` file, it might not feel ideal.
It’s personally not something I would use on any project of substantial size. You can alleviate some of the pain by putting build logic into JS files and invoking them with `node`. However, you are still defining your build configuration in `package.json`, and this doesn’t feel right to me. I like my build logic/configuration to live in its own space. If you really want to ditch task runner X, Y, or Z, then maybe a self-documented `Makefile` would be a better solution.
As always, any questions or suggestions, please feel free to leave a response or tweet me 🐦!