JavaScript Everything?
A high-level evaluation of the benefits of programming in the standard language of the Web
Recently I tweeted this:
Now, admittedly, anyone who knows me even slightly has noticed my non-PC comments, which, in my defence, always come from the heart. But looking past the awkward phrasing, why did I in fact remember this video?
Back in August 2012, when I first saw this interview, I was honestly troubled, and it resurfaced in my mind a number of times over the following months. And in December 2012, when my company came to a crossroads between adopting open web technologies and sticking to more “traditional” pipelines, it became a catalyst for my decision making. But I didn’t give in - I didn’t let authoritative opinion overthrow my instinct. I felt the right thing to do was to trust the web and its openness. And in the following months my feeling crystallised into solid arguments.
Before I start listing them, I’ll have to clear the air by saying that my problem is with the content of the video, not with John Carmack personally. I look up to the guy, and I realize that my admiration will only ever be one-way… I appreciate his early achievements in programming 3D worlds - I still remember how annoyed I was that Wolfenstein 3D on my friend’s 286 wouldn’t play on my Amstrad 8086. I’m also happy with his recent involvement in the development of the Oculus Rift, which gave those guys a push in the right direction and put a breakthrough device in our hands sooner rather than later. I truly believe that he can achieve historic status as one of the grandfathers of virtual reality.
But to get back to the subject of this post: why is JavaScript not that bad, and why should we even consider it for anything? First of all, why is it considered bad?
JavaScript’s greatest advantage is also its biggest weakness. The low barrier to entry for writing code in JavaScript has enabled many people newly introduced to the concepts of programming to produce a whole lot of inefficient code, which in turn has led a number of tech professionals to believe that it’s an amateur sport. That, in combination with its slower execution compared to compiled code, is the main reason why JavaScript has not been taken seriously - until recently…
Recently, “proper” programmers like Ryan Dahl and Jeremy Ashkenas have made serious efforts to create real frameworks for programming in JavaScript, on both the server and the client side. JavaScript is (rightfully) a blank landscape, and it is perfectly fine for the average user to write a few lines of JS in an unstructured manner and get a direct response - but writing JavaScript like this is the equivalent of typing code straight from the command line in Atari BASIC. There is no real application without structure and abstraction. Node.js and Backbone.js have made strides in that regard, paving the way for creating “real” applications in JavaScript.
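To make the contrast concrete, here is a minimal sketch (not from the interview or any particular project) of what that structure looks like - a tiny Node.js server and a Backbone.js model, each giving shape to what would otherwise be loose script:

```js
// Server side: a minimal Node.js HTTP server. The request handler is a
// well-defined unit rather than a loose series of statements.
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'hello from the server' }));
}).listen(8080);

// Client side: a Backbone.js model gives browser code similar structure,
// with defaults and methods instead of ad-hoc globals. The Player model
// here is purely illustrative.
var Player = Backbone.Model.extend({
  defaults: { name: 'guest', score: 0 },
  addPoints: function (points) {
    this.set('score', this.get('score') + points);
  }
});

var player = new Player();
player.addPoints(10); // player.get('score') === 10
```

Nothing here is sophisticated, but it is a long way from typing lines at a BASIC prompt: the behaviour lives in named, reusable units that can grow into an application.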
Even so, the grim outlook on the future of JavaScript, and of the Web for that matter, almost makes sense if you consider the hard facts. JavaScript is slow, can be easily manipulated and is very liberal with user privileges. But a judgement based on those alone would be partial, taking into account neither fundamental parameters of the world we live in nor the needs of our human nature - a judgement that could lead us to the wrong conclusions.
As product development dictates, people like an easy solution that just works, and as a rule most people will pick the solution that does the most with the least effort. Online software is designed around those principles and has practically no competition in this area. It will seem almost like an anecdote in the future that we had to install software on each machine individually, and that software only ran on one machine at a time, tied to the hardware. Downloadable content, background updates and episodic delivery in video games give us barely an outline of that future. But that’s not even touching the true potential of the Web.
Networks live in the real world, and the real world is fundamentally built on exceptions. When does anything ever go as planned in the real world? New conditions are a constant, and there’s only one unhandled exception - the moment your life ends. There is no such thing as “breaking the world” in real life - but it is very common in a compiled program.
Pre-scheduling all possible responses will never be as efficient as creating new routines “on the fly”. That’s why compiled languages still produce behaviours that are closer to machines than to humans. Any qualified gamer knows that a big part of defeating a video game is taking advantage of the limitations of the enemy AI. You might say that’s a lack of technology on our part, but processing power is not what computers are lacking - it’s the ability to adjust to new conditions.
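Assembling a new routine at runtime, on the other hand, is something JavaScript does almost natively. A minimal sketch (the rule string and the enemy object are purely hypothetical):

```js
// A behaviour that did not exist when the program started can be assembled
// at runtime - the rule below could just as easily arrive over the network.
var rule = 'if (enemy.health < 20) return "retreat"; return "attack";';

// new Function compiles that string into a callable routine on the fly.
var decide = new Function('enemy', rule);

console.log(decide({ health: 15 })); // "retreat"
console.log(decide({ health: 80 })); // "attack"
```

Whether that flexibility is used wisely is another matter, but the capability itself is exactly the kind of “new routine on the fly” that compiled, pre-scheduled code struggles with.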
It’s not all about speed. Nature has taught us so. Our brain has built-in latency and friction; it uses the spine to compensate for delayed reactions. Yet the brain is considered immensely superior to any artificial intelligence we’ve built, because it has higher-level processes that allow dynamic logic patterns to be formed and executed - what we may call imagination or foresight. The ability to handle unpredicted exceptions and to generate new code on demand is a fundamental function of true artificial intelligence.
Here lies the true power of JavaScript - not its syntax, but its positioning. JavaScript can be executed dynamically and is understood across the Web, from the standard browser to “smart” home appliances. Anything connected online needs to speak the same language to establish communication, and along with protocols like HTTP, JavaScript is becoming that common language. And if JavaScript is the interfacing language for machines communicating online, it will be defining their behaviour in the real world - or, shall we say, the application’s “personality”.
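A rough sketch of that common language in practice (the “thermostat” device and its fields are hypothetical, and the port is arbitrary):

```js
// Any connected device that speaks HTTP and JSON can join the conversation.
var http = require('http');

// A hypothetical "smart" appliance exposing its state as JSON over HTTP.
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ device: 'thermostat', temperature: 21.5 }));
}).listen(3000);

// Any other machine on the network can read that state with the same vocabulary.
http.get('http://localhost:3000/', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var state = JSON.parse(body);
    console.log('The ' + state.device + ' reports ' + state.temperature + ' degrees');
  });
});
```

The point is not the specific code but the shared vocabulary: HTTP for transport, JSON for data, JavaScript for the logic at both ends.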
What’s crucial here is not to go overboard with JavaScript. Native code has its place and is ideal for contained routines with predictable I/O. In fact, NOT everything should be in JavaScript. The development of Three.js is a really good example. I am a true supporter of the project and have endorsed it for our in-house projects, like the upcoming Havenbase. But that kind of API should be adopted by the browser vendors, and interfacing with WebGL should be done at native speed.
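For a sense of the abstraction gap, this is roughly what a minimal Three.js scene looks like - a handful of high-level objects standing in for the hundreds of lines of raw WebGL setup they wrap:

```js
// A spinning cube in Three.js: scene, camera, renderer and mesh are
// the entire vocabulary the application developer needs.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);

// The render loop stays in plain JavaScript; the heavy lifting
// happens inside WebGL, below the level the application developer touches.
function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```

The handful of objects above is the level an application developer actually works at - which is why it makes sense for browser vendors to adopt such an API and push the WebGL interfacing down to native speed.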
To use the parallel with biology again, it’s the difference between the focus of a neuroscientist and that of a psychiatrist. The condition of your neurons may affect your personality, but we interact with the world on a higher level of abstraction (consciousness) rather than allowing (biologically) each individual component to interact with the world directly. A neuroscientist will care about the condition of your neural system and will go very low-level, investigating the inner workings of your body, much like a native programmer wants to be as close to the binary code as possible. A psychiatrist will be more interested in how a person behaves and will study any offsets from what’s considered normal behaviour. An application developer working with JavaScript sits very much at that same level. There may be some overlap between the two disciplines, but their practices are definitely complementary rather than competing.
As much as native code matters for the small components where processing speed is paramount, the dynamic nature of JavaScript and its reach across the Web make it ideal for becoming that overarching “entity” that will weigh a plethora of different variables and make intelligent decisions based on real-world conditions. If John Carmack is one of the grandfathers of VR, the fathers of VR will most likely consider working with something like JavaScript.