Speeding Up ESLint

Vinson Chuong
Published in Scripting Bits
Jul 29, 2020 · 3 min read

ESLint is pretty slow, especially after adding a bunch of plugins.

Today, I happened to notice that on my project of 8 files, it’s equally slow with the whole project as it is with a single file — 3 seconds. This led me to believe that most of the time is spent loading plugins into memory.

So, I tried doing multiple runs in a Node.js REPL:

$ node --experimental-repl-await
> s = Date.now(); xo = require('xo'); console.log(Date.now() - s)
> s = Date.now(); await xo.lintFiles('./index.js'); console.log(Date.now() - s)
> s = Date.now(); await xo.lintFiles('./index.js'); console.log(Date.now() - s)
> s = Date.now(); await xo.lintFiles('./index.test.js'); console.log(Date.now() - s)

I found:

  • Importing the linter takes about 900ms.
  • Running the linter for the first time takes about 2000ms.
  • Running the linter again, on this or other files, takes about 300ms.

So, this means that if I keep a process running with the linter warmed up, I can speed up linting by 10x, at least for a small project.

Most tools interact with ESLint (and its derivatives) through its CLI. So, I'll have to figure out a way to speed up the CLI.

I’ll have to keep a Node.js process running in the background so that subsequent CLI commands can take advantage of ESLint already being warmed up. This means that I need a shell script that does the following:

  • Serializes any arguments
  • Causes code to be executed in a separate process
  • Receives any output and prints it to the screen

And, I want to accomplish all of this with minimal additional or duplicated code.

My first idea is to intercept an import or require at runtime and replace the code with an RPC client that talks to a separate process where the real code runs. Data and references to objects can be passed back and forth. There's prior art for this kind of runtime interception.

JSON-serializable data can be stringified and passed over a socket, which is easy enough. But, handling objects and supporting various ways of interacting with them is complicated.

Another idea is to replace the CLI command. CLI commands typically only deal with strings, making shuttling data between processes much easier. I could write a Bash script, in my case named xo, that passes arguments to a persistent Node.js process. The Node.js process then mutates process.argv and evaluates the real xo.

I tried it out in the Node.js REPL:

$ node
> process.argv[2] = 'index.js'
> require('.bin/xo')
> delete require.cache[require.resolve('xo/cli.js')]
> delete require.cache[require.resolve('xo/cli-main.js')]
> process.argv[2] = 'index.test.js'
> require('.bin/xo')

Because Node.js caches previously required files, and those CLI files do their work immediately upon being required, they need to be removed from the cache before each run.

So, I decided to try implementing this approach.
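
In outline, the daemon side ends up looking something like the sketch below. The socket path and the JSON-line handshake are placeholder choices, and error handling is omitted:

// daemon.js — an outline, not a full implementation
const net = require('net')

const SOCKET_PATH = '/tmp/lint-daemon.sock'

const server = net.createServer(connection => {
  let buffer = ''
  connection.on('data', chunk => {
    buffer += chunk
    if (!buffer.includes('\n')) return

    // The client sends its CLI arguments as one JSON line.
    const args = JSON.parse(buffer)

    // Make xo's CLI believe it was invoked with these arguments.
    process.argv = [process.argv[0], require.resolve('xo/cli.js'), ...args]

    // Route anything the CLI prints to stdout back over the socket instead.
    process.stdout.write = text => connection.write(text)

    // The CLI files do their work as a side effect of being required, so they
    // have to be evicted from the cache before every run; everything else
    // (ESLint, the plugins) stays warm in memory.
    delete require.cache[require.resolve('xo/cli.js')]
    delete require.cache[require.resolve('xo/cli-main.js')]
    require('xo/cli.js')

    // Knowing when this run has finished, so the connection can be closed,
    // is the tricky part; more on that below.
  })
})

server.listen(SOCKET_PATH)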

Along the way, I ran into a couple of challenges.

Knowing when the linter is finished is difficult. It just prints to stdout and then stops; there's no explicit signal that it's done. Usually, Node.js exits on its own when there's no work left to be done, but, since I don't want the process to exit, I needed to find an alternative. Luckily, Node.js has a beforeExit event that lets me know when Node.js would have exited.
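
In the daemon sketch above, that looks roughly like this; it goes inside the connection handler, right after the CLI is required:

    // The listening server and the open socket would normally keep the event
    // loop alive, so unref them; the lint run is then the only remaining work.
    server.unref()
    connection.unref()

    // 'beforeExit' fires once Node.js runs out of work, i.e. once the run is done.
    process.once('beforeExit', () => {
      connection.end() // tell the client we're finished
      server.ref()     // but keep the daemon itself alive for the next request
    })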

Many CLI tools show visual flourishes, like colors, only if they're run in a TTY. Since I'm running the tool in a background process, I need an alternative way to enable these features. Fortunately, a lot of libraries that deal with printing to the terminal respect the FORCE_COLOR environment variable. However, some CLI tools rely on knowing the height and width of the terminal to format text; there's no easy way to support that.
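
In the daemon, that just means setting the variable before the CLI and its dependencies are first loaded, since many of these libraries check it when they're imported:

// At daemon startup, before the first lint run.
// '1' forces basic 16-color output; some libraries also accept '2' or '3' for higher color depths.
process.env.FORCE_COLOR = '1'

(Terminal dimensions are usually read from process.stdout.columns and process.stdout.rows, which are undefined when stdout isn't a TTY, so formatters typically fall back to a default width.)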

In short, I was able to get this working to some degree with a bunch of hacks.

The most interesting part for me was experimenting with UNIX sockets, which are well-supported by Node.js. A client-server model is assumed, but everything else is up to me to design and implement:

  • The format of the data transferred
  • How much data is sent by the client vs. the server
  • When communication ends and which side ends it
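
The client half of the sketch makes those three decisions concrete: the client sends exactly one JSON line, the server streams plain text back, and the server is the side that ends the connection. The socket path and message format are, again, placeholder choices:

// client.js — a hypothetical thin wrapper around the daemon sketched earlier
const net = require('net')

// Connect to the warmed-up lint process over its UNIX socket.
const socket = net.createConnection('/tmp/lint-daemon.sock', () => {
  // Decisions 1 and 2: the client sends a single JSON line, its CLI arguments.
  socket.write(JSON.stringify(process.argv.slice(2)) + '\n')
})

// The server streams plain lint output back; print it verbatim.
socket.pipe(process.stdout)

socket.on('error', error => {
  // Most likely the daemon isn't running yet.
  console.error(error.message)
  process.exit(1)
})

// Decision 3: the server ends the connection when the run is done; once it
// does, this process has no work left and exits on its own.

A real wrapper would also have to relay the exit status somehow, since lint failures are reported through a non-zero exit code.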

Overall, while this is a workable approach, it’s probably prone to having many bugs. A more reasonable approach would be to take the underlying package, split up its logic, run the bulk of the work on a server process, and run the presentation logic on a client process.
