State of the Union at the JavaScript World Intelligence Center

State of the Union.js

Note: This is a LONG post/rant/exploration that involves some happy and some bleak outlooks on the state of development, and where I believe we as an industry are heading.
Thoughts are mine and mine alone. :-)

2014 and 2015 came with a blast. If you’ve been writing software, and writing about software, as long as I have (and many of you longer), you’ll understand when I say that the time felt reaaaalllly long while it was passing, but feels extremely quick and eruptive in retrospect.

Hot lava. Eruptive. You get what I mean…

It started slow, and largely unnoticed by the masses, and then it sort-of erupted into tool and theory overload on a community not quite used to (at least, in my opinion) the onslaught of academically-charged tools, languages, and practices that were set forth on the languages of the web.

So here we are at the cusp of 2015 and 2016, marred by the paradox of choice, and wondering “what’s next?”.


I have really good news for you: you don’t have to be part of the rat-race of learning everything that the web development community — and the JS community especially — has released into the world just to feel valuable, or valid.

“What!?” I think I heard you just think, loudly, “how can that possibly be?”.

Let’s walk through a quick, incomplete, ever-growing braindump of nouns, pronouns, buzzwords, open-source tools, programming language names, and other items that have become far more popular in the past year:

JavaScript (ES 6 / 7) → which is renamed to ES2015 and ES2016, Babel, Closure, Clojure, React, Ember, Backbone, Oboe, Angular (1 & 2), Web Workers, Service Workers, RAIL, Rails, Hot Loading, Node.js & iojs, Promises, Observables, GraphQL, CSP & Channels, core.async, immutable, HTML5, npm, build tools, brunch, gulp, grunt, npm scripts, broccoli, ember-cli, mocha, chai, karma, sinon, jQuery, vanilla.js, Mithril, virtual doms, routing, isomorphic & universal code, Express, Koa, Elm, CLJS, PureScript, TypeScript, Flow, static type checking, compilers, compile-to-js languages, browserify, rollup, commonjs, amd, umd, SystemJS, js-to-native packages, Haskell, type systems, lambda calculus, functional programming, Meteor, transpilers, variadic behavior, context vs. contextless approaches, offline first, deployment tools, docker, macros, language extensions, sweet.js, containers, container management services, dev-ops & web-ops, WebRTC, JSCS, style guides, linting, WebGL, Unity engine, benchmark suites, online coding environments, and so, so, so many other things that I am missing here…

The list above isn’t the best feeling to read if you’ve been fighting this mountain for years, only to watch it double in size as you read.

There’s so much to learn; and it’s especially hard for a newcomer to feel confident that they are conquering the next topic/library at the right time, at the right pace, in the right order.

Again, you don’t have to be part of the rat-race to learn everything. Let’s start with some coffee, and an overview of some paradigms to give you a better mental map of what I’d like to review with you:

  • Compilers (of compilers (of compilers (…)))
  • Functional programming, and the popularization of academia in web development
  • “The Year npm blew up” — a serious over-saturation of “me too” projects
  • The fight between the contextual and contextless
  • ES6/7, Sweet.js, CLJS, Elm, PureScript, TypeScript, Flow…
  • Abstractions (of abstractions (of abstractions (…)))
  • “Dirtual Voms” — React, Mithril, Bobil, Vue, Mercury, virtual-dom, incremental-dom, https://medium.com/@dan_abramov/react-components-elements-and-instances-90800811f8ca
  • Death to promises, long live promises, CSP, observables, Asynquence…
  • Hashimoto’s Revenge (Hashicorp and the fight for automation) (AWS vs others… JAWS… Sails… npm-scripts, etc)
  • Let’s Encrypt
  • Let’s Parallel and Offline work (http://www.pocketjavascript.com/blog/2015/11/23/introducing-pokedex-org)
  • The Web Platform Matures: web workers, graphics, SVG support, CSS modules, service workers, indexedDB, storage and caching mechanisms, WebRTC, Webcams/haptic-feedback/vibration/orientation APIs…

Getting started

Tools and “compile-to-JS” languages (like Elm, or CLJS) don’t systematically help you if you have difficulty understanding some of the oldest parts of JavaScript in the first place — the features that exist in trusty ol’ ES3.

This means that, before really mastering advanced application patterns, reactive extensions, virtual DOM frameworks, and so forth, you should build comfort with the simple principles of JavaScript in its simplest, purest forms:

  • JavaScript is a language that can be executed in the terminal, or in the browser. In the browser, it has special access to the browser environment, like HTML elements in a web page. In the terminal, it can have access to Operating System resources, like network ports, files on your disk, and database programs that run simultaneously with the program.
  • JavaScript, no matter how simple it is, represents a program. When you write a console.log() statement, you are telling a compiler/interpreter to take your code (as a big string of letters), break it into pieces, interpret it, and do something based off of those commands.
  • To my own understanding, programming languages are all processed in some form as text that is broken apart into tokens by lexical analysis. These tokens are then analyzed, optimized, and distilled into a representation called an Abstract Syntax Tree (AST), which is basically a tree describing how a program is structured. That tree is then either compiled into a lower-level language, or interpreted/executed by the language’s runtime. In JavaScript’s case, there have been a lot of improvements in the past year or so to the JS runtimes embedded in Node and browsers that speed up the parsing/lexical analysis — and the execution — of your JS programs.
    Eventually, the buck stops with machine code / x86-level code that is run directly on the CPU.
    The reasons these taxonomies and compilers exist are many, but a compiler can provide guarantees about the safety, speed, and features of a language that a program can either target specifically or remain happily unaware of.
    We’ve lived far too distant from these basics in the web field, and it’s important for junior-to-mid-level developers to be comfortable with these paradigms.
  • Some of the hardest parts of implementing and using a library, framework, or tool are understanding the ins and outs of ES3, such as using functions as building blocks; understanding `this`, `arguments`, and variadic behavior; IIFEs and anonymous functions; Function call/apply/bind; and a plethora of repeatable patterns surrounding functions and how they can hide or abstract away functionality. Asynchronous and synchronous callback functions and other patterns are also important to master, such as knowing how to solve a large number of programming problems with Array map/reduce/filter.
    Being comfortable with these covers 75 to 85% of the prerequisites for comfortably mastering topics like Observables, CSP, Promise chaining, async, etc.
  • Up until recently, and even still now, there have been popular libraries/frameworks that involve configuration and/or leaky abstractions, which can be taxing for the programmer to mentally juggle alongside the actual problem they are trying to solve.
    Remember to always question what the type of a function’s input is, and the type of its output. If a function is not pure, why isn’t it, and what are its possible side-effects?
    This is why type systems and functional programming are becoming more popular in our daily run-ins… type systems can provide safety, security, clarified reasoning, and optimizations for compilers, while the syntax, structure, and limitations on how you write function bodies still enable terse, readable, and extensible code.
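As a concrete refresher on a few of those ES3-era building blocks (a hypothetical sketch of my own, not from any library): variadic `arguments`, borrowing Array methods with `call`, re-binding `this`, and declarative map/filter pipelines.

```javascript
// A variadic sum: `arguments` is array-like, so borrow Array methods via call()
function sum() {
    return Array.prototype.slice.call(arguments)
        .reduce(function (acc, n) { return acc + n }, 0)
}

// Re-target `this` explicitly with bind (call/apply work similarly)
var counter = { count: 0 }
function incrementBy(n) { this.count += n; return this.count }
var inc = incrementBy.bind(counter)

// Declarative pipelines with filter/map instead of manual loops
function evensDoubled(list) {
    return list
        .filter(function (n) { return n % 2 === 0 })
        .map(function (n) { return n * 2 })
}
```

Every one of these patterns predates the new toolchains, and they show up constantly inside modern libraries.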

Compilers (of compilers (of compilers (…)))

If you use any modern JavaScript toolkit, you likely use a compiler — or more specifically, a transpiler. This simply means turning code written in one language into another language at the same runtime level.

https://goo.gl/0Auoey — The screenshot above shows ES6 on the left, and its transpiled ES5 on the right.

By making use of transpilers and compilers, we can make use of language features and terse syntax that didn’t previously exist.
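As a rough illustration (not Babel’s literal output), here is an ES6 arrow function with a template literal, and the kind of ES5 a transpiler might emit for it:

```javascript
// ES6 source
const greet = name => `Hello, ${name}!`

// Roughly what a transpiler emits: same runtime level, older syntax
var greetES5 = function (name) {
    return 'Hello, ' + name + '!'
}
```

Both versions behave identically; the transpiled one just runs in older engines.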

This solar system of JS — the encapsulating layers of languages that compile to JavaScript — has grown larger in recent months/years as developers discover patterns and language features that would better serve the purposes of the browser/Node environment. Some languages/platforms, like ClojureScript (CLJS), are accompanied by large libraries originally written for multi-threaded, highly-available servers. Other languages and platforms, like Elm, are carving out a niche in the game-development arena.

All the while these languages are being compiled to JavaScript, other teams are embedding JavaScript runtimes inside other runtimes. Unity3D’s game engine and the Unreal engine SDK are bringing JavaScript interoperability with canvas, mobile, or desktop games; React Native is embedding JavaScript runtimes inside mobile and desktop apps; and numerous other platforms are taking JavaScript to new heights, depths, and platforms like medical care, robotics, space, airplanes, and so forth.
This, in my opinion, isn’t a result of JavaScript’s awesome language features (despite being a pseudo-Lisp and capable of implementing features from higher-order languages quite easily), but rather the platform’s portability and “lightness”. JavaScript is ubiquitous because it was likely never meant to be an all-in-one solution.
With all that said, JavaScript runtimes are only just now receiving multi-thread/multi-core features, and the Virtual Machine itself still needs improvement. Serverside environments, type-safety and type-systems, and other advancements — when added to the core interpreter — could make JS nearly as fast and scalable as Elixir. If and when this happens, JavaScript’s portability would only add to its ability to compete with the developer experience and cost-efficiency of other languages/platforms.

Functional programming, and the popularization of academia in web development

With the addition of new languages and compilers to the JavaScript toolchain, there’s an exponentially growing number of toolsets and configurations available on GitHub and npm that set up a certain boilerplate structure for you.

As these toolsets and approaches mature, the yak-shaving needed to get results diminishes.

The effects on my own experience have been dramatic; as I learned to put my own scaffolding project together, I came to better understand the lifecycle of these pre- and post-processing tools and toolchains in my development environment, and to configure them to work for me, not the other way around. ;-)

As my tool-fu grew, I learned not to write code for one particular project, instead recognizing abstractions and patterns that could be used nearly anywhere JavaScript is used.

These newly-popular toolchains and projects arose from sound reasoning and wisdom from the world of academia, where languages are developed to push human reasoning, understanding, and capabilities.

“The Year npm blew up” — a serious over-saturation of “me too” projects

With that, there’s also been a plethora of copycat projects and a lack of progress in terms of “which boilerplate/language/platform/build-tool to use”.

The only true answer (when you’ve had enough time to struggle, learn, and master the paradigms) is to configure your own. However, we don’t all have that luxury, and we shouldn’t all be publishing stuff “just ‘cause” to npm.

Screenshot from https://www.npmjs.com homepage.

npm — as of Wednesday Dec 16th, 2015 — has over 200,000 packages. A search for boilerplate yields over 2400 packages.

And that was just a search for “boilerplate”… :-)

What. The. Fuck. Do. I. Use.

2016 will likely involve a serious, focused conjoining of projects, tools, and language features to merge the best and brightest packages/tools/boilerplates into more formalized projects. But let me make the distinction that fully featured and formalized are not the same. If you want to thrive in the industry as a developer — whether you’re using JavaScript or any other language — you must learn to use the language and its features extremely effectively. You’ll need to be familiar with the ins and outs of the language and syntax to fully employ what functional programming can provide. Abstractions and tiny libraries exist because they are pseudo-SOLID, fast, have a memorable and tiny API footprint, and can be re-used in many projects. Many developers in the functional and JS communities have become quite empowered by these tiny/micro libs and a focus on paradigms/patterns over minutiae, leaky abstractions, and simple implementation details.

Bleak: likely many more “me too” projects will exist in the short- and long-term future of npm / JavaScript, which obfuscates the selection of projects/packages for junior developers. How do we know what is the right thing to focus on?

Happy: likely many more “effin’ brilliant” projects will also exist in the short and long-term future of npm / JavaScript. :-)

The fight between the contextual and contextless

Another artifact of functional languages is removing context from code. If you have pure functions, why would you want to use `this` in your JavaScript?

By removing `this` from the code you write — despite the advent of ES6 class syntax and the features still to come in ES7 — you can write more re-usable, modular, and abstract functions that care only about their input and output data/data-types.

It seems backwards, but I think someone put it best…

‘this’ is infinity additional arguments

Indeed, why introduce unknowns into something you could otherwise reason about, test, and optimize more easily? Some may argue the opposite, but I’ve never felt more secure than when I removed the notion of `this` from my libraries and focused on variadic behavior and simple, pure, side-effect-free functions.

In JavaScript, `this`— to me — is a leaky abstraction.
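A contrived sketch of that trade-off (the names here are my own invention): the method’s result depends on whatever `this` happens to be at call time, while the contextless version depends only on its arguments.

```javascript
// Contextual: behavior changes with the receiver
const account = {
    balance: 100,
    withdraw(amount) { return this.balance - amount }
}
// Detaching the method loses the receiver; `this` is no longer `account`
const detached = account.withdraw

// Contextless: the "context" is just one more explicit argument,
// so the function is trivially testable and reusable
const withdraw = (balance, amount) => balance - amount
```

The pure version can be moved anywhere, partially applied, or tested with nothing but its arguments.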

ES6/7, Sweet.js, CLJS, Elm, PureScript, TypeScript, Flow…

Besides, with all these language extensions, why should we depend on particular, older parts of JavaScript? :-)

Abstractions (of abstractions (of abstractions (…)))

The most beautiful part of using functions as building blocks, instead of objects or simple data types, is that functions can be composed, curried, abstracted, and partially applied until there are no more customization needs.

This is one primary reason why I argue that, if you write code that has some constant or global variable, you should just wrap it in a function. You never know when you’ll actually need to customize it, until it’s too late…
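As a tiny, hypothetical example (the URLs here are made up): wrapping a “constant” in a function leaves room to customize it later without touching every call site.

```javascript
// Before: a hard-coded constant, impossible to vary per environment
// const API_ROOT = 'https://api.example.com'

// After: a function with a default, so callers can override later
const apiRoot = (env = 'prod') =>
    env === 'prod' ? 'https://api.example.com' : 'http://localhost:3000'

// Call sites stay unchanged when a new environment appears
const userUrl = (id, env) => `${apiRoot(env)}/users/${id}`
```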

For example, if you’re familiar with the fetch() API, you’ll know that it returns a Promise.

It’s simple in that it makes a network request, and returns an ES6 Promise. But consider these new needs:

  • How do you cancel a network request, or at least stop its corresponding Promise from continuing? (XMLHttpRequests and jQuery Deferreds can be cancelled, but this feature does not exist directly in the ES6 Promise API.)
  • How do you make two, entirely random network requests from fetch(), in two entirely different and disjoint pieces of code, automatically de-duplicate themselves into a single network request?
  • How could you make N unique, simultaneous network requests with fetch() in disjoint pieces of code, but have them all multiplex themselves into an automatically-combining-and-demuxing request, so 100 network requests could trigger only one single XMLHttpRequest?

Let’s consider all three, each of which has potential implementations that involve wrapping functions inside functions.

https://goo.gl/2HhruZ — A cancellable implementation that takes a “fetcher” as input, or rather, any function f that returns a Promise.
https://goo.gl/hZRO9T — A batch implementation that takes a fetch-like function and returns a function that can batch together simultaneous, inflight requests to the same URL as a single HTTP request
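In case those links rot, here’s a rough, simplified sketch of what `cancellable` and `batch` could look like; these are hypothetical reconstructions, not the linked implementations verbatim.

```javascript
// cancellable: wraps any f that returns a Promise; adds an .abort() that
// short-circuits the chain (the underlying request may still complete)
const cancellable = f => (...args) => {
    let aborted = false
    const p = f(...args).then(v =>
        aborted ? new Promise(() => {}) : v) // never settles once aborted
    p.abort = () => { aborted = true }
    return p
}

// batch: de-duplicates simultaneous, in-flight requests to the same URL
const batch = f => {
    const inflight = {}
    return url => {
        if (!inflight[url]) {
            inflight[url] = f(url).then(v => {
                delete inflight[url] // allow fresh requests afterwards
                return v
            })
        }
        return inflight[url]
    }
}
```

Both take a “fetcher” (any function returning a Promise) and return a function with the same shape, which is what makes them composable.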

We can even, at this point, compose both functions so that requests are both batched and cancellable:

fetch = cancellable(batch(fetch))

let x = fetch('https://api.github.com/users/matthiasak'),
    y = fetch('https://api.github.com/users/matthiasak')

x.then(x => log(1, x))
y.then(y => log(2, y)) // --> never logged
y.abort()

Abstractions.

Now, let’s multiplex a cancellable-batching function so that all simultaneous requests can be sent to a server as a single request (would require a server-side implementation to de-multiplex and handle the individual requests).

https://goo.gl/1TZNsw — The “mux” abstraction produces a function that sends simultaneous network requests as a single HTTP request with a payload of URLs to request.

The muxer, yet again, takes a “fetcher” function and some other options. In return, it gives us a function that acts like fetch, taking a URL and returning a Promise. However, this cancellable muxer now batches the URLs together for the server to process.

This is one engineering feat that Facebook’s GraphQL and Netflix’s Falcor try to accomplish; however, each of those projects also has a query language or server-side interop that requires much more legwork to integrate with existing JSON APIs.

So in summary:

  • Functions make powerful, modular building blocks that let you make abstractions when you need those to occur
  • Most libraries available on npm won’t be as terse, or have as small an API footprint
  • Functions that take functions as input and give functions as output are an advanced concept that makes code extremely fast and modular; so get comfortable with those “tough functional and ES3 concepts”!
  • A little yak-shaving while toying with functional building blocks can go a long way
  • Functions as building blocks can help prevent those damned “leaky abstractions” from finding their way into your code. The only thing that should matter to your function is its input and output types.
  • GraphQL and Falcor are great projects, but that doesn’t mean you should jump at the chance to adopt them into projects without knowing their ins and outs

“Dirtual Voms” — React, Mithril, Bobil, Vue, Mercury, virtual-dom, incremental-dom, …

Dan Abramov (creator of Redux, React Hot Loader, and other projects) recently began investigating and reasoning about the structure of Virtual DOMs (specifically, React’s implementation).

Even this can be a step too far when you’re still trying to understand what a Virtual DOM is in the first place.

So, let’s again take to Arbiter to break down what JSX, virtual elements, and virtual DOM Objects are.

https://goo.gl/RROheS — A simple demonstration of what JSX is.

Most people learn React by starting immediately with JSX, too. This isn’t always the best approach, but 90% of the tutorials out there start from the same premise.

JSX is simply writing some HTML (or more specifically, for React/React Native, XML) in your JavaScript. In reality this HTML is transpiled by your build tool into the function call on the right-hand side. React’s library has functions for creating virtual elements.

https://goo.gl/rBPivR — Demonstrating what `React.createElement()` returns

The above screenshot/link demonstrates what the Virtual Element is… an Object!!! If we nest more elements inside the div, then the Object becomes somewhat of a recursive structure.

Let’s “prettify” this output so we can see it more clearly.

React stores this nested tree of Objects, and then renders them to the screen. This is that mythical magic of VDOM libraries — each one has figured out how to parse your JSX or API calls into a VDOM, and then has implemented a way to turn this tree back into HTML in the browser or in Node.

There’s no magic past this concept. The rest is implementation detail that involves a lot of creative problem solving to figure out where the previous and current version of a tree differ, and rendering the fewest and smallest updates to the DOM that would optimize its rendering speed.
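To demystify that, here’s a toy sketch (nothing like React’s actual internals) of a `createElement`-style function that returns a plain Object, plus a naive renderer that walks the tree back into an HTML string. A real library would diff against the previous tree and patch only what changed.

```javascript
// A virtual element is just data: a type, props, and children
const h = (type, props, ...children) => ({ type, props: props || {}, children })

// Naive render: walk the tree and emit HTML (attributes omitted for brevity)
const render = node =>
    typeof node === 'string'
        ? node
        : `<${node.type}>${node.children.map(render).join('')}</${node.type}>`

const tree = h('div', null, h('h1', null, 'Hello'), 'world')
```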

You can check out my explorations of virtual DOM implementations if you’d like to dive into that, especially the VDOM implementation I’ve worked on at greater length for learning purposes.


“Death to promises, long live promises”, CSP, observables, transducers, Flux, Redux…

There’s been a lot of talk about “state” and how to architect/organize your code and your data structures so that your code is easier to reason about.

Most notably, Facebook released Flux, and then several alternative and simplified Flux implementations came about, such as Alt, Reflux, Redux, and many others… hundreds of others, in fact.

The popular framework / flux implementation as of late has been Dan Abramov’s Redux. With such simple principles, it’s not hard to see why:

  1. Single source of truth — the state of your whole application is stored in an object tree within a single store.
  2. State is read-only — the only way to mutate the state is to emit an action, an object describing what happened and the smallest amount of information needed to morph the existing state into a new value.
  3. Changes are made with pure functions — To specify how the state tree is transformed by actions, you write pure reducer functions.
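Those three principles fit in a few lines. Here’s a minimal hand-rolled sketch (not the real Redux, which also provides subscriptions, middleware, and more):

```javascript
// A pure reducer: (state, action) -> new state, with no mutation
const counter = (state = 0, action) =>
    action.type === 'INCREMENT' ? state + action.by : state

// A single store holding the whole state tree; the only way to
// change state is to dispatch an action through the reducer
const createStore = (reducer, state) => ({
    getState: () => state,
    dispatch: action => { state = reducer(state, action) }
})

const store = createStore(counter, 0)
store.dispatch({ type: 'INCREMENT', by: 2 })
```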

There are bonuses and limitations to these approaches.

  • A singly-sourced state could mean that your application, especially at larger scale, might not be able to fit all of its information into one tree, or will exhaust the available memory. What matters is that you take great care in recognizing the difference between what should be stored in state and what should not, and learn the ins and outs of Redux well enough to plan for those engineering/design concerns.
  • Read-only state is a really smart & clean approach — by forcing all code to use the API as it was intended, there are no stray side-effects, so the library (Redux) can guarantee how state is changed. Immutable libraries that make use of tries are capable of creating “clones” of Objects (so we can compare by identity, instead of by actual data), yet they do not need to create complete copies of an immutable Object each time a value is edited. This is why immutable structures can be faster — and easier to reason about in application code — than their mutable counterparts.
  • Pure reducer functions can — through functional composition — be used to create modular, “abstractable” state modifiers. However, when creating “actions” (simple Objects with a name and some data about what is happening), there’s still a host of boilerplate code needed to get things working. In addition, asynchronous actions/code embedded in the flow of application logic can become muddled and unclear; which is why they live under the “advanced” section. :-)

Other platforms / projects are competing for the top spot as well, such as Elm with its built-in Model-View-Update architecture.

Paradox of choice, again, rears its gnarly teeth.

Instead of freezing over implementation details, let’s learn to choose the best for our projects — or our personal preference — by asking “what should the ideal store/state API look/act like?”

We’ll all come up with different answers, so I’ll just show some of my own reasoning here.

https://goo.gl/A66OfX — an ideal API, for my own use-cases
  • Creating an immutable store should be a simple, single-line call; and the option to create & use multiple stores should be available for memory intensive applications and scaling situations
  • Getting a clone of the state from that store should be similarly easy
  • Actions and Reducers should be simpler: I shouldn’t have to remember the name/type of an Action, or learn a new set of procedures to write asynchronous reducers/actions.

You’ll notice that the store changes its internal state by “dispatching” a function/reducer that takes the previous state and a middleware-esque callback function called `next`. The new version of the state is simply passed to `next(newState)` as the first argument.

For the full implementation of this, check out universal-utils (my learning repo), but here’s the down-low:

The use/design of the API stayed the same, and I fit my implementation to match the tiny footprint and API, instead of the other way around. This makes the code itself extremely easy to reason about.

Notice, also, that I was able to return a Promise from `Store#dispatch()`. Whether I was dispatching synchronous or asynchronous reducers, the function signatures were exactly the same, so I could simply extend the implementation to use and return Promises as well.
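Here’s a minimal sketch of that shape (a hypothetical reconstruction, not the universal-utils code itself): `dispatch` takes a reducer receiving the previous state and a `next` callback, and returns a Promise.

```javascript
// A tiny store: dispatch(reducer) where reducer gets (prevState, next);
// calling next(newState) commits the state and settles the Promise
const store = initial => {
    let state = initial
    return {
        state: () => state,
        dispatch: reducer =>
            new Promise(resolve =>
                reducer(state, newState => {
                    state = newState
                    resolve(state)
                }))
    }
}

const s = store({ count: 0 })
// sync and async reducers share the exact same signature
s.dispatch((prev, next) => next({ count: prev.count + 1 }))
```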

Composition is king.


There have been all sorts of additional developments surrounding Flux and subsequent methods and paradigms of state/message/concurrency management. JavaScript’s dominance of the web, the web’s dominance of software, and software’s dominance of the growing/maturing US job market have really helped mature our development practices and approaches.

For instance, the functional community (Clojure, especially) has been dealing with concurrency and parallelism in recent years. Clojure’s core.async library has been pivotal, through the ClojureScript community’s contributions, to JavaScript development. ClojureScript — referred to from now on as CLJS — is the “compile-to-JS” package that compiles Clojure to JS for browser and Node development, among other platforms like React Native (a JS runtime embedded in Objective-C/Swift apps for iOS, and in Java apps for Android).

One thing Lisps (like Clojure, Scheme, etc. — the languages with “lots of parens and tabbing”) and other functional/type-system-based languages like Haskell have really forced people to figure out is how to abstract and mature problem-solving methods and algorithms for a broad range of uses.

“What happens if I want to filter a potentially infinite series of events for those that involve clicking on a mouse?”

An infinite Array doesn’t exist, and even if it did, we couldn’t easily or quickly write a simple map/reduce function to handle infinite sequences of values.

Can we be declarative (like a “reducer”, “mapper”, or “filterer” callback function), and handle infinite series at the same time?

Yes, we can! The best part of it all is that these approaches have existed for a while, such as Observables and Reactive Extensions.
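To make that concrete, here’s a toy observable (a hedged sketch, nowhere near the full Rx API) showing how `filter` stays declarative over a potentially endless stream of events:

```javascript
// An observable is just a function that accepts an observer;
// operators build new observables that wrap the subscription
const observable = subscribe => ({
    subscribe,
    filter: pred => observable(observer =>
        subscribe(x => pred(x) && observer(x))),
    map: f => observable(observer =>
        subscribe(x => observer(f(x))))
})

// The source can push values forever; consumers stay declarative
const clicks = []
const src = observable(observer => {
    [{ button: 'left' }, { button: 'right' }, { button: 'left' }].forEach(observer)
})
src.filter(e => e.button === 'left').subscribe(e => clicks.push(e))
```

Nothing about the consumer changes if the source pushes three events or three million.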

All the while, people have been figuring out the other half of the equation: how to map/reduce/filter infinite series (transducers), and how to combine such processes (Communicating Sequential Processes).
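A transducer, roughly, is a function that transforms a reducer into another reducer, independent of the data source. A small sketch under that formulation:

```javascript
// Transducers transform reducers; the data source (Array, channel,
// stream) never appears in the transformation itself
const mapT = f => reducer => (acc, x) => reducer(acc, f(x))
const filterT = pred => reducer => (acc, x) => pred(x) ? reducer(acc, x) : acc
const compose = (f, g) => x => f(g(x))

// filter evens, then multiply by 10: one pass, no intermediate arrays
const xform = compose(filterT(n => n % 2 === 0), mapT(n => n * 10))
const into = (acc, x) => acc.concat(x)
const result = [1, 2, 3, 4].reduce(xform(into), [])
```

The same `xform` could feed a CSP channel or an event stream; only the final reducer changes.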

Again, there’s a paradox of choice, but less so, since these are either unattractively “academically-charged” or just really fresh, so there hasn’t been the same level of contribution yet.

You can have a look at my endeavors to learn about transducers and CSP in universal-utils.


Hashimoto’s Revenge

(Hashicorp and the fight for automation) (AWS vs others… JAWS… Sails… npm-scripts, etc)

There are soooooo so so so many options out there for:

  • continuous deployment and integration
  • hosting providers
  • domain name registrars and routing
  • IaaS and PaaS solutions
  • Database and Push services
  • pre and post-processing tools
  • build tools
  • style linting
  • static type checking
  • compilation and transpilation
  • packagers, bundlers, “concatter-minifiers”
  • … and many other topics here

What’s arguably more important is understanding each tool’s limitations and features. If you can script it yourself, do so, because in two months you’ll need yet another build step or tool to integrate into your deployment chain.

Just make sure you investigate tools with skepticism and figure out the “gaps” as quickly as possible; otherwise you’ll lose plenty of time on the wrong task. :-)

For instance, I’ve been investigating setting up Hashicorp’s Otto project in a clean and dependable way. While the project itself has a number of missing features (such as redeploying to an existing server-id or reconnecting the Elastic IP address on AWS), the team is working towards fixing those issues so that Otto serves as a one-stop-shop scriptable tool that you can use to deploy VMs, images, and Docker containers to a number of infrastructure services. The focus is being able to deploy apps and infrastructure as a cooperating architecture to the Infrastructure as a Service that you desire, without needing to open any management console in your browser or terminal.

https://github.com/matthiasak/universal-js-boilerplate/blob/master/package.json#L66 — A node boilerplate project that focuses on tools and developer happiness, now with some preconfigured scripts for managing otto and other services/projects.

That’s about all I have to say for now. I will make edits, definitely, so please leave comments and I’ll make amendments to this write-up as needed.

Thank you for reading, and please check out some of my latest endeavors :-)