State of the Union.js
Note: This is a LONG post/rant/exploration that involves some happy and some bleak outlooks on the state of development, and where I believe we as an industry are heading.
Thoughts are mine and mine alone. :-)
2014 and 2015 came with a blast. If you’ve been writing software, and writing about software, as long as I have (and many of you longer), you’ll understand when I say that the time felt reaaaalllly long while it was passing, but feels extremely quick and eruptive in retrospect.
It started slow, and largely unnoticed by the masses, and then it sort-of erupted into tool and theory overload on a community not quite used to (at least, in my opinion) the onslaught of academically-charged tools, languages, and practices that were set forth on the languages of the web.
So here we are at the cusp of 2015 and 2016, marred by the paradox of choice, and wondering “what’s next?”.
I have really good news for you: you don’t have to be part of the rat-race of learning everything that the web development community — and the JS community especially — has released into the world in order to feel valuable, or valid.
“What!?” I think I heard you just think, loudly, “how can that possibly be?”.
Let’s walk through a quick, incomplete, ever-growing braindump of nouns, pronouns, buzzwords, open-source tools, programming language names, and other items that have become far more popular in the past year:
It isn’t the best feeling to read the list above if you’ve been fighting this mountain for years, only to watch it double in size as you read.
There’s so much to learn, and it’s hard, especially for a newcomer, to feel confident that you’re conquering the next topic/library at the right time, at the right pace, in the right order.
Again, you don’t have to be part of the rat-race to learn everything. Let’s start with some coffee, and an overview of some paradigms to cover to give you a better mental-map of what I’d like to review with you:
- Compilers (of compilers (of compilers (…)))
- Functional programming, and the popularization of academia in web development
- “The Year npm blew up” — a serious over-saturation of “me too” projects
- The fight between the contextual and contextless
- ES6/7, Sweet.js, CLJS, Elm, PureScript, TypeScript, Flow…
- Abstractions (of abstractions (of abstractions (…)))
- “Dirtual Voms” — React, Mithril, Bobril, Vue, Mercury, virtual-dom, incremental-dom, https://medium.com/@dan_abramov/react-components-elements-and-instances-90800811f8ca
- Death to promises, long live promises, CSP, observables, Asynquence…
- Hashimoto’s Revenge (Hashicorp and the fight for automation) (AWS vs others… JAWS… Sails… npm-scripts, etc)
- Let’s Encrypt
- The Web Platform Matures: web workers, graphics, SVG support, CSS modules, service workers, indexedDB, storage and caching mechanisms, WebRTC, Webcams/haptic-feedback/vibration/orientation APIs…
Eventually, the buck stops with machine-code / x86-level code that is run directly on the CPU.
The reasons these taxonomies and compilers exist are many, but a compiler can provide guarantees about the safety, speed, and features of a language; a program can either target those guarantees deliberately, or remain happily ignorant of what the compiler is doing for it.
We’ve lived far too distant from these basics in the web field, and it’s important for junior-to-mid-level developers to be comfortable with these paradigms.
- Some of the hardest parts of implementing and using a library, framework, or tool are comprehending the ins and outs of ES3: using functions as building blocks; understanding `this`, `arguments`, and variadic behavior; IIFEs and anonymous functions; Function call/apply/bind; and a plethora of repeatable patterns for how functions can hide or abstract away functionality. Asynchronous and synchronous callback patterns are also important to master, such as knowing how to solve a large number of programming paradigms and problems with Array map/reduce/filter.
Being comfortable with these is 75 to 85% of the prerequisite work to comfortably master topics like Observables, CSP, Promise chaining, async, etc.
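To make those ES3 fundamentals concrete, here is a small grab-bag sketch (all names here are illustrative, not from any particular library):

```javascript
// `arguments` and variadic behavior: sum any number of arguments
function sum() {
  return Array.prototype.reduce.call(arguments, function (a, b) {
    return a + b;
  }, 0);
}
sum(1, 2, 3); // → 6

// call/apply/bind: three ways to fix `this` (and arguments)
function greet(greeting) { return greeting + ', ' + this.name; }
greet.call({ name: 'Ada' }, 'Hello');  // → 'Hello, Ada'
greet.apply({ name: 'Ada' }, ['Hi']);  // → 'Hi, Ada'
var heyAda = greet.bind({ name: 'Ada' }, 'Hey');
heyAda();                              // → 'Hey, Ada'

// IIFE: an anonymous function that hides state behind an abstraction
var nextId = (function () {
  var id = 0;
  return function () { return ++id; };
})();

// map/filter/reduce as problem-solving building blocks
[1, 2, 3, 4]
  .filter(function (n) { return n % 2 === 0; })  // [2, 4]
  .map(function (n) { return n * 10; })          // [20, 40]
  .reduce(function (a, b) { return a + b; }, 0); // → 60
```

Each of these is small on its own, but they compound: nearly every "advanced" async pattern is some combination of the above.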
- Up until recently, and even still now, there are popular libraries/frameworks that involve heavy configuration and/or leaky abstractions, which can be taxing on the programmer to mentally juggle alongside the actual problem they are trying to solve.
Remember to always question what the type of input to a function is, and the type of output is. If a function is not pure, why isn’t it, and what are the possible side-effects of that function?
This is why type-systems and functional programming are becoming more popular in our daily run-ins… type-systems can provide safety, security, clearer reasoning, and optimizations for compilers, while the syntax, structure, and constraints on how you write function bodies themselves still enable terse, readable, and extensible code.
Compilers (of compilers (of compilers (…)))
By making use of transpilers and compilers, we can make use of language features and terse syntax that didn’t previously exist.
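For a tiny concrete example: the ES6 below didn’t run in older engines, so a transpiler like Babel rewrites it into roughly the ES5 shown underneath (this is a hand-translation for illustration, not actual Babel output):

```javascript
// ES6: arrow function, default parameter, template literal
const greet = (name = 'world') => `Hello, ${name}!`;

// Roughly the ES5 a transpiler might emit instead:
var greetES5 = function (name) {
  if (name === undefined) { name = 'world'; }
  return 'Hello, ' + name + '!';
};
```

Both behave identically; the transpiler lets us write the terse version today and ship the compatible version everywhere.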
Functional programming, and the popularization of academia in web development
As these toolsets and approaches mature, the yak-shaving needed to get results diminishes.
The effects on my own experience have been astronomical; especially as I have learned to put my own scaffolding project together, I’ve been able to better understand the lifecycle of these pre- and post-processing tools and toolchains in my development environment, and to configure them to work for me, not the other way around. ;-)
These newly-popular toolchains and projects arose because of sound reasoning and wisdom from the world of academia, where languages are developed to push human reasoning, understanding, and capabilities.
“The Year npm blew up” — a serious over-saturation of “me too” projects
With that, there’s also been a plethora of copycat projects and a lack of progress in terms of “which boilerplate/language/platform/build-tool to use”.
The only true answer (when you’ve had enough time to struggle, learn, and master the paradigms) is to configure your own. However we don’t all have that luxury, and we shouldn’t all be publishing stuff “just ‘cause” to npm.
What. The. Fuck. Do. I. Use.
The fight between the contextual and contextless
By removing `this` from the code you write — despite the advent of ES6 class syntax and the features still to come in ES7 — you can write more re-usable, modular, and abstract functions that care only about their input and output data/data-types.
It seems backwards, but I think someone put it best…
‘this’ is infinity additional arguments
Indeed, why introduce unknowns into something you could otherwise reason about, write tests for, and optimize more easily? Some may argue the opposite, but I’ve never felt more secure than when I removed the notion of `this` from my libraries and focused on variadic behavior and simple, pure, side-effect-free functions.
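A tiny, illustrative sketch of the difference (hypothetical names):

```javascript
// Contextual: the result depends on `this`, a hidden extra argument
// that changes with how (and on what) the function is called
const counter = {
  count: 0,
  inc() { return ++this.count; }
};

// Contextless: the same behavior as a pure function of its input.
// Nothing hidden, trivially testable, reusable anywhere.
const inc = (count) => count + 1;
```

The pure version can be passed to map/reduce, composed, and memoized without ever asking “what will `this` be here?”.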
ES6/7, Sweet.js, CLJS, Elm, PureScript, TypeScript, Flow…
Abstractions (of abstractions (of abstractions (…)))
The most beautiful part of using functions as building blocks, instead of objects or simple data-types, is that functions can be composed, curried, abstracted, and partially applied until no customization needs remain.
This is one primary reason I argue that, if you write code that has some constant or global variable, you should just wrap it in a function. You never know when you’ll actually need to customize it, until it’s too late…
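As a quick illustration (the URL and names are hypothetical):

```javascript
// Hard-coded constant: every consumer is stuck with this exact value
const API_ROOT = 'https://api.example.com'; // hypothetical endpoint

// Wrapped in functions instead: now the "constant" can be swapped,
// composed, and partially applied without touching any call site
const apiRoot = () => 'https://api.example.com';
const endpoint = (root) => (path) => root() + path;

const userUrl = endpoint(apiRoot);
userUrl('/users/matthiasak'); // → 'https://api.example.com/users/matthiasak'
```

Swapping in `endpoint(() => 'http://localhost:3000')` for tests or staging now costs one line, not a find-and-replace.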
For example, if you’re familiar with the fetch() API, you’ll know that it returns a Promise.
It’s simple in that it makes a network request, and returns an ES6 Promise. But consider these new needs:
- How do you cancel a network request, or at least stop its corresponding Promise from continuing? (XMLHttpRequests and jQuery Deferreds can be cancelled, but this feature does not exist directly in the ES6 Promise API.)
- How do you make two, entirely random network requests from fetch(), in two entirely different and disjoint pieces of code, automatically de-duplicate themselves into a single network request?
- How could you make N unique, simultaneous network requests with fetch() in disjoint pieces of code, but have them all multiplex themselves into an automatically-combining-and-demuxing request, so 100 network requests could trigger only one single XMLHttpRequest?
Let’s consider implementations for all three, each of which involves wrapping functions inside functions.
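Here is one possible sketch of `batch()` and `cancellable()` (an illustration of the idea, not the original implementation). The only assumption is that a “fetcher” is anything with the shape `(url) => Promise`; in this sketch, duplicate in-flight requests are swallowed, so their Promises never settle:

```javascript
// batch(): de-duplicates simultaneous requests to the same URL.
// The first caller gets the real Promise; duplicates made while that
// request is still in flight get a Promise that never settles.
const batch = (fetcher) => {
  const inflight = new Map();
  return (url) => {
    if (inflight.has(url)) return new Promise(() => {}); // swallowed duplicate
    const p = Promise.resolve(fetcher(url))
      .finally(() => inflight.delete(url)); // allow future re-fetches
    inflight.set(url, p);
    return p;
  };
};

// cancellable(): wraps a fetcher so each returned Promise carries a
// cancel() that stops its .then() chain from ever firing.
const cancellable = (fetcher) => (url) => {
  let cancelled = false;
  const p = Promise.resolve(fetcher(url)).then(
    (res) => (cancelled ? new Promise(() => {}) : res)
  );
  p.cancel = () => { cancelled = true; };
  return p;
};
```

Note that neither wrapper knows or cares that the inner fetcher is the real `fetch()`; they only care about its input and output types.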
We can even, at this point, compose both functions so that fetch is both batched and cancellable:
fetch = cancellable(batch(fetch))
let x = fetch('https://api.github.com/users/matthiasak'),
y = fetch('https://api.github.com/users/matthiasak')
x.then(x => log(1, x))
y.then(y => log(2, y)) //--> never logged
Now, let’s multiplex a cancellable-batching function so that all simultaneous requests can be sent to a server as a single request (would require a server-side implementation to de-multiplex and handle the individual requests).
The muxer, yet again, takes a “fetcher” function and some other options. In return, it gives us a function that acts like fetch, taking a URL and returning a Promise, except this cancellable muxer now batches the URLs together for the server to process.
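Here is one hypothetical sketch of such a muxer. It assumes a “batchFetcher” that takes an array of URLs, makes one combined request, and resolves with an array of responses in the same order (the server-side demuxing is left out):

```javascript
const mux = (batchFetcher, { delay = 0 } = {}) => {
  let queue = [];  // { url, resolve, reject } collected this window
  let timer = null;
  return (url) => new Promise((resolve, reject) => {
    queue.push({ url, resolve, reject });
    if (timer) return; // a combined request is already scheduled
    timer = setTimeout(() => {
      const pending = queue;
      queue = [];
      timer = null;
      // ONE request for every URL collected, then fan results back out
      batchFetcher(pending.map((p) => p.url)).then(
        (results) => pending.forEach((p, i) => p.resolve(results[i])),
        (err) => pending.forEach((p) => p.reject(err))
      );
    }, delay);
  });
};
```

A hundred simultaneous calls to the returned function become a single combined request, yet each caller still receives its own Promise for its own URL.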
This is one engineering feat that Facebook’s GraphQL and Netflix’s Falcor try to accomplish; however, each of these projects also has a query language or server-side interop that requires much more legwork to integrate with existing JSON APIs.
So in summary:
- Functions make powerful, modular building blocks that let you make abstractions when you need those to occur
- Most libraries available on npm won’t be as terse, or have as small an API footprint
- Functions that take functions as input and give functions as output are an advanced concept that makes code extremely fast and modular; so get comfortable with those “tough functional and ES3 concepts”!
- A little yak-shaving while toying with functional building blocks can go a long way
- Functions as building blocks can help prevent those damned “leaky abstractions” from finding their way into your code. The only thing that should matter to your function is its input and output types.
- GraphQL and Falcor are great projects, but that doesn’t mean you should jump at the chance to adopt them into projects without knowing their ins and outs
“Dirtual Voms” — React, Mithril, Bobril, Vue, Mercury, virtual-dom, incremental-dom, …
Dan Abramov (creator of Redux, React Hot Loader, and other projects) recently began investigating/reasoning about the structure of Virtual DOMs (specifically, React’s implementation).
Even this can be a step too far when you’re initially trying to understand what a Virtual DOM is in the first place.
So, let’s again take to Arbiter to break down what JSX, virtual elements, and virtual DOM Objects are.
Most people learn React starting immediately with JSX, too. This isn’t always the best approach, but 90% of the tutorials out there start with the same premise.
The above screenshots/link demonstrate what the Virtual Element is… an Object!!! If we nest more elements inside the div, then the Object becomes somewhat of a recursive structure.
Let’s “prettify” this output so we can see it more clearly.
React stores this nested tree of Objects, and then renders them to the screen. This is that mythical magic of VDOM libraries — each one has figured out how to parse your JSX or API calls into a VDOM, and then has implemented a way to turn this tree back into HTML in the browser or in Node.
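To see both halves without any library, here is a hand-rolled `h()` in the spirit of `React.createElement` (names are illustrative), plus a naive render-to-HTML function:

```javascript
// A virtual element is just a plain Object. JSX like
// <div id="app"><span>hi</span></div> compiles to nested calls like these:
const h = (type, props, ...children) => ({ type, props: props || {}, children });

const tree = h('div', { id: 'app' }, h('span', null, 'hi'));
// tree is a recursive structure of plain Objects:
// { type: 'div', props: { id: 'app' }, children: [ { type: 'span', ... } ] }

// The naive "turn this tree back into HTML" half (no diffing here):
const render = (node) =>
  typeof node === 'string'
    ? node
    : '<' + node.type +
      Object.entries(node.props).map(([k, v]) => ' ' + k + '="' + v + '"').join('') +
      '>' + node.children.map(render).join('') + '</' + node.type + '>';

render(tree); // → '<div id="app"><span>hi</span></div>'
```

Real VDOM libraries add the diffing step between old and new trees, but the data structure itself is no more exotic than this.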
There’s no magic past this concept. The rest is implementation detail that involves a lot of creative problem solving to figure out where the previous and current version of a tree differ, and rendering the fewest and smallest updates to the DOM that would optimize its rendering speed.
“Death to promises, long live promises”, CSP, observables, transducers, Flux, Redux…
There’s been a lot of talk about “state” and how to architect/organize your code and your data structures so that your code is easier to reason about.
- Single source of truth — the state of your whole application is stored in an object tree with a single store.
- State is read-only — the only way to mutate the state is to emit an action, an object describing what happened and the smallest amount of information needed to morph the existing state into a new value.
- Changes are made with pure functions — To specify how the state tree is transformed by actions, you write pure reducer functions.
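In miniature, and hand-rolled for illustration rather than Redux itself, the three principles look like:

```javascript
// A pure reducer: (previousState, action) => newState, never mutating
const reducer = (state = { count: 0 }, action) => {
  switch (action.type) {
    case 'INCREMENT': // the action describes *what happened*
      return { ...state, count: state.count + action.by };
    default:
      return state;
  }
};

// A single source of truth: the whole app state in one object tree
let state = reducer(undefined, {});                    // { count: 0 }
const prev = state;
state = reducer(state, { type: 'INCREMENT', by: 2 }); // { count: 2 }
// prev is untouched: state was never mutated, only replaced
```

Because the previous state object survives intact, time-travel debugging and cheap change detection (compare by identity) fall out almost for free.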
There are bonuses and limitations to these approaches.
- A singly-sourced state, especially in larger, scaled applications, might not be able to fit all the information, or it may exhaust the available memory. What matters is that you take great care in recognizing the difference between what should be stored in state and what should not, and learn the ins and outs of Redux well enough to plan for those engineering / design concerns.
- Read-only state is a really smart & clean approach — by forcing all code to use the API as it was intended, there are no side-effects, so the library (Redux) can guarantee how state is changed. Immutable libraries that make use of tries can create “clones” of Objects (so we can compare by identity instead of by actual data), yet they do not need to create complete copies of an immutable Object each time a value is edited. This is why immutable structures can be faster — and easier to reason about in application code — than their mutable counterparts.
- By using pure reducer functions, state modifiers can — through functional composition — be made modular and “abstractable”. However, when creating “actions” (simple Objects with a name and some data about what is happening), there’s still a host of boilerplate code needed to get things working. In addition, asynchronous actions/code embedded in the flow of application logic can become muddled and unclear, which is why they are under the “advanced” section. :-)
Other platforms / projects are competing for the top spot, as well, such as Elm and its built-in Model-View-Update architecture.
The paradox of choice, again, bares its gnarly teeth.
Instead of freezing over implementation details, let’s learn to choose the best for our projects — or choose a personal preference — by asking “what should the ideal store/state API look/act like?”
We’ll all come up with different answers, so I’ll just show some of my own reasoning here.
- Creating an immutable store should be a simple, single-line call; and the option to create & use multiple stores should be available for memory intensive applications and scaling situations
- Getting a clone of the state from that store should be similarly easy
- Actions and Reducers should be simpler: I should not have to remember the name/type of an Action, or have to learn a new set of procedures to get asynchronous reducers/actions.
You’ll notice that the store changes its internal state by “dispatching” a function/reducer that takes the previous state and a middleware-esque callback function called `next`. The new version of the state is simply passed to `next(newState)` as the first argument.
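Sketched minimally (a simplification for illustration, not the actual universal-utils source), that API might look like:

```javascript
const store = (initialState = {}) => {
  let state = initialState;
  return {
    getState: () => ({ ...state }), // hand out a (shallow) clone, not the real state
    // dispatch a reducer of (state, next); works sync or async,
    // and always returns a Promise of the new state
    dispatch: (reducer) =>
      new Promise((resolve) => {
        const next = (newState) => { state = newState; resolve(state); };
        reducer(state, next);
      })
  };
};

// usage: the reducer decides when to call next(), so sync and async
// reducers share exactly the same signature
const s = store({ count: 0 });
s.dispatch((state, next) => next({ count: state.count + 1 }))
 .then((newState) => console.log(newState.count)); // logs 1
```

Because `next` is just a callback, an asynchronous reducer simply calls it later, and `dispatch` still hands back a Promise either way.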
For a full implementation of this, check out universal-utils (my learning repo), but here’s the down-low:
The use/design of the API stayed the same, and I fit my implementation to match the tiny footprint and API, instead of the other way around. This makes the code itself extremely easy to reason about.
Notice, also, that I was able to return a Promise from `Store#dispatch()`. No matter if I was dispatching synchronous or asynchronous reducers, the function signatures were exactly the same, and thus I could simply extend the implementation to use and return Promises as well.
Composition is king.
One thing that Lisps (like Clojure, Scheme, etc… the languages with “lots of parens and tabbing”) and other functional/type-system-based languages like Haskell have really forced people to figure out is how to abstract and mature problem-solving methods and algorithms for a broad range of uses.
“What happens if I want to filter a potentially infinite series of events for those that involve clicking on a mouse?”
An infinite Array doesn’t exist, and even if it did, we couldn’t easily program a simple map/reduce function to handle an infinite sequence of numbers.
Can we be declarative (like a “reducer”, “mapper”, or “filterer” callback function), and handle infinite series at the same time?
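One way to get there in plain JS is with generators, sketched below; transducer libraries generalize the same idea without allocating intermediate sequences at all:

```javascript
// A lazy, potentially-infinite sequence
function* naturals() { let n = 0; while (true) yield n++; }

// Declarative map/filter/take that never realize the whole series
function* map(fn, iter) { for (const x of iter) yield fn(x); }
function* filter(pred, iter) { for (const x of iter) if (pred(x)) yield x; }
function* take(n, iter) { for (const x of iter) { if (n-- <= 0) return; yield x; } }

// first three even squares, pulled lazily from an infinite source
[...take(3, filter((x) => x % 2 === 0, map((x) => x * x, naturals())))];
// → [0, 4, 16]
```

Only as many values are computed as `take` demands; the infinite source is never exhausted because nothing ever asks for all of it.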
All the while, people have been figuring out the other half of the equation: how to map/reduce/filter infinite series (transducers), and how to combine them (Communicating Sequential Processes).
Again, there’s a paradox of choice, but less so, since these are either unattractively “academically-charged” or just really fresh so there’s not been the same level of contribution yet.
Hashimoto’s Revenge (Hashicorp and the fight for automation) (AWS vs others… JAWS… Sails… npm-scripts, etc)
There are soooooo so so so many options out there for:
- continuous deployment and integration
- hosting providers
- domain name registrars and routing
- IaaS and PaaS solutions
- Database and Push services
- pre and post-processing tools
- build tools
- style linting
- static type checking
- compilation and transpilation
- packagers, bundlers, “concatter-minifiers”
- … and many other topics here
What’s arguably more important is understanding each tool’s limitations and features. If you can script it yourself, you probably should, because in two months you’ll need yet another build-step or tool to integrate into your deployment chain.
Just make sure you investigate tools with skepticism and figure out the “gaps” as quickly as possible, otherwise you’ll lose plenty of time on the wrong task. :-)
For instance, I’ve been investigating setting up Hashicorp’s Otto project in a clean and dependable way. While the project itself has a number of missing features (such as redeploying to an existing server-id or reconnecting the Elastic IP address on AWS), the team is working towards fixing those issues so that Otto serves as a one-stop-shop scriptable tool that you can use to deploy VMs, images, and Docker containers to a number of infrastructure services. The focus is being able to deploy apps and infrastructure as a cooperating architecture to the Infrastructure-as-a-Service provider that you desire, without needing to open any management console in your browser or terminal.
That’s about all I have to say for now. I will make edits, definitely, so please leave comments and I’ll make amendments to this write-up as needed.
Thank you for reading, and please check out some of my latest endeavors :-)
- Space City Conference Series — http://spacecity.codes — SpaceCityConfs
- Destination Code Unconference Series — http://destination.codes — Destination Code
- Me — http://mkeas.org — Matt Keas
- The Iron Yard — The code school whose mission I serve. Interested in learning more about how I or my team can help you adopt a career in programming? Reach out, and we’ll help you find your way, whether it’s with us or someone else.