A game with no winner by Phil Jones

Web applications don’t follow new rules

The last few weeks were full of lots of great articles about an old topic: should things on the internet be dependent on JavaScript and should apps work on the server, the client, or both?

There is a familiar, almost nostalgic feeling about this for me. Over the years, the same big “application” justification for doing things in a way that violates common practice keeps cropping up:

  • “We need Flash, because we build an app and HTML doesn’t have the fidelity we need”
  • “We don’t worry too much about accessibility, because we build applications and not web sites”
  • “Our users know what they want, they expect the application to work that way and are OK with things being different”
  • “Facebook is super successful with this, that’s why our 10-page wiki should be a SPA, too”
  • “We need to use solution $XYZ, because we build apps and web technology totally doesn’t give us what we need”

This is boring, and it is not helping. Worst of all, the argumentation about the topic is circular at best and an utter waste of time at worst. Often there is not even a discussion; instead, the people advocating for a certain approach simply assume that the other side is a total hard-liner who doesn’t even want to understand.

For example, the other day I stated the following on Twitter as a response to one of these “application articles”:

“All modern websites, even server-rendered ones, need JavaScript” — No, they do not. They all can become better when enhanced with JS.

Notice, I did not say that JavaScript is the spawn of the devil and nobody needs it to create amazing experiences. Seeing that large parts of my career and many of my books are about JavaScript, that would be quite the pivot.

A lot of the feedback, however, gave me the following impression:

When you question the use of JavaScript for everything, you automatically hate on the “modern web”.

One of the great things about JavaScript is that you can do everything with it: you can do computations, create HTML, dynamically style elements, manipulate images, play and create music and video, and nowadays do all the HTTP work of an app, too. JavaScript is no longer just the Leatherman of the client-side web; it has taken over the server as well.

That is also one of the terrible things about JavaScript. Just because you can cover all the parts of an app with it doesn’t mean you should. Not everybody who groks it knows how to create a beautiful app experience, and those who know how to do that don’t necessarily grok JavaScript. Living proof of that is the vast number of “JavaScript developers” who are totally lost without jQuery. And you have “CSS developers” who need Bootstrap to get started and sooner or later will manipulate element.style in their JavaScript for presentation that is neither dynamic nor problematic across browsers, where it simply isn’t needed. The same goes for the server: just because you use JavaScript doesn’t mean you’re free of security concerns or that your server will perform magically.
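To make the element.style point concrete, here is a minimal sketch. The fakeElement stand-in is purely illustrative, so the snippet runs outside a browser; in real code the element would come from document.querySelector():

```javascript
// Anti-pattern: static presentation hard-coded in JavaScript. These values
// never change at runtime, so they belong in a stylesheet instead.
function highlightBad(element) {
  element.style.color = "#c00";
  element.style.fontWeight = "bold";
}

// Better: toggle a class and keep the presentation in CSS, where it can be
// themed and overridden without touching any code.
function highlightGood(element) {
  element.classList.add("highlight");
}

// Minimal stand-in for a DOM element so this sketch runs without a browser.
function fakeElement() {
  const classes = new Set();
  return {
    style: {},
    classList: {
      add: (name) => classes.add(name),
      contains: (name) => classes.has(name),
    },
  };
}

const el = fakeElement();
highlightBad(el);
highlightGood(el);
console.log(el.style.color);                     // "#c00"
console.log(el.classList.contains("highlight")); // true
```

The class-based version keeps the “what it looks like” question in the stylesheet, where a CSS developer can answer it without reading JavaScript.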

My favourite part of these arguments is the escape into edge cases. Whenever you talk about progressive enhancement and using JavaScript to make existing functionality swifter and more enjoyable, sooner or later someone will dig up an example that would be totally useless without JavaScript. In this case, it was Google Maps:

@codepo8 good points & I agree with many, but sites like Google Maps just won’t fly without JS. Doesn’t mean you shouldn’t build them.

Maps were the poster child of AJAX. Granted, the Outlook Web Client was the first use case (and invention ground) of XHR, but when Adaptive Path gave us Ajax: A New Approach to Web Applications (yay, apps!!!), Maps was the thing that had the ooohhh and ahhhh factor the new tech needed.

And with good reason. I love maps. Hell, I worked on Yahoo Maps. They are damn useful. But would they really be impossible without JavaScript? I don’t think so. They used to be based on an API that ran on the backend, with a lot of the geo-logic happening on the server (partly because of IP issues). Google even offers a static maps API, and I often take screenshots of maps so I can use them offline.

Well, who knew? Google Maps works without JavaScript!
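That static, no-script path is worth spelling out: a static map is just an image whose URL encodes the view. The parameter names below (center, zoom, size) follow Google’s Static Maps API, but treat the details as an illustrative sketch rather than a complete client:

```javascript
// Build a static map URL. The result can go straight into a plain
// <img src="..."> — no JavaScript needed on the page that shows it.
// Parameter names are assumptions based on Google's Static Maps API.
function staticMapUrl({ lat, lng, zoom = 12, width = 600, height = 400 }) {
  const params = new URLSearchParams({
    center: `${lat},${lng}`,
    zoom: String(zoom),
    size: `${width}x${height}`,
  });
  return `https://maps.googleapis.com/maps/api/staticmap?${params}`;
}

console.log(staticMapUrl({ lat: 51.5074, lng: -0.1278 }));
```

A server-rendered page can emit a grid of such images for panning links, which is essentially how pre-AJAX map sites worked.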

Do I think people should build maps like that and not add the interactive goodness Google Maps has right now? Do I think Streetview was not needed because it used Flash? No, on the contrary. I love it. It allows us to push the web; it allows us to find use cases that can be standardised and become native to the web. But it doesn’t mean that everything we build needs the same features or has the same technical cost attached to it. I don’t want to navigate Wikipedia as tiles; do you? For this problem, the solution fits. That doesn’t mean it is better or worse than others. All it means is that we got lucky and married the right tech with the right use case.

I am quite sure that if the maps had been architected with all these features from the get-go it would not look like it does now. It’d be a much more closed system. It is now a great solution because it went through an evolution. Things got added over time.

A flexible platform for a constantly changing set of demands

That is the great thing about web technology. It isn’t clean or well designed by a long shot — but it is extensible and it can learn from many products built with it. This is how we got the History API, this is how we got Web Sockets and many other APIs that offer a handle on very common web functionality, even if we replace them.

If we do everything client-side, we not only need to deliver innovative new interfaces. We also need to replicate the functionality the web already gives us. When a site takes too long to respond, the browser shows a message that it is not available and lets the user retry. While it loads, I see a spinner. Every time we replace this with a client-side call, we need to do a lot of UX work to give the user exactly the same functionality.

We’re not quite there with perfect answers to our questions, as the questions keep changing.

And boy do we fail at that. How many times do you see a spinner, open your developer tools, and find an error that means nothing will ever happen, no matter how long you stare at it?

Covering the faulty cases isn’t fun. Look around at technology demos and “hello world” examples of frameworks. You hardly ever see an error case, and if you do there’s probably a comment saying

//error case

Sorry, that’s not good enough. This stuff matters — probably more than the success case. Failing gracefully and with good information for the end user is a good thing. Many of our fancy apps fail either by locking up or by telling the user they’ve done something wrong.
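A sketch of what handling that “//error case” actually involves follows. The `ui` object of hooks (showSpinner, render, showError, hideSpinner) is hypothetical, and fetchFn is injected so the sketch can be exercised without a network; the point is the shape of the work the browser normally does for us on a full page load: a timeout, a visible failure state, and a retry.

```javascript
// Load data with a timeout, a rendered success state, and a graceful
// failure state that offers the user a retry instead of an eternal spinner.
async function loadWidget(fetchFn, url, ui, timeoutMs = 5000) {
  ui.showSpinner();
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetchFn(url, { signal: controller.signal });
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    ui.render(await response.json());
  } catch (err) {
    // The part demos skip: tell the user what happened and offer a retry.
    ui.showError(`Could not load data (${err.message}). Please try again.`,
      () => loadWidget(fetchFn, url, ui, timeoutMs));
  } finally {
    clearTimeout(timer);
    ui.hideSpinner();
  }
}

// Exercising the failure path with a stubbed fetch, recording the UI calls:
const calls = [];
const ui = {
  showSpinner: () => calls.push("spinner"),
  hideSpinner: () => calls.push("hide"),
  render: () => calls.push("render"),
  showError: () => calls.push("error"),
};
loadWidget(() => Promise.reject(new Error("network down")), "/api/data", ui, 100)
  .then(() => console.log(calls)); // the error hook fired; render never did
```

Even this sketch is more code than most demos show, and it still ignores offline detection, partial failures and accessibility of the error message.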

Our discussions are based on assumptions looking at final products

The issue with our arguments for and against dependencies (JavaScript, libraries, fonts, browser/environment — you name it) is that we always look at final products and applications as they are implemented, then backtrack and make assumptions about their technological needs.

That’s silly. We don’t even know the decisions that led to the final product. Is it based on React, Polymer, Sencha, Angular, Ember or whatever else because this was the superior solution? Or is it because this team of developers felt confident using that environment and had to deliver against an important deadline, because there was a press release that had to win TechCrunch that day? How reliable is that solution? Is the company praising it really dedicated to it, or is it a flash-in-the-pan thing?

It’s new, it’s better, it’s gone.

If you wanted to repeat that feat, you’d also have to become confident using that tool first, and add that to your time allocation. We assume that JavaScript was needed to build a solution because we like the product and it breaks apart when the script can’t be loaded. That doesn’t mean JS was needed; it could mean the error case was never even considered, or fell through the cracks when the deadline loomed.

Why not analyse what went wrong? Why not question whether the move to fully client-side rendering really is a great step, or whether it just appears to be one because it made it easier for a certain company to release a certain product at a certain time? Twitter back-tracked once before; this can happen to anyone else, too.

The fact is that a lot of stuff on the web is the result of weird, non-technical decisions we’ll never hear about.

People used shortcuts to make a certain deadline. Ninja rockstar developers applied a great state-of-the-art new paradigm without documenting their work before getting hired by the competition. Maintainers added more things without understanding the structure of what was going on, as they never had a proper handover. Who cares? Move fast and break things! But, hey, let’s never, ever blame the great innovations we made to fix the lame platform we try to build on while benefiting from its user numbers. Always blame the web and its slow innovation cycle.

The web is messy, and we try to cover up for this by repeatedly inventing new ways of working and magical solutions to its inherent problems.

What we forget about is the history and why some things went belly-up. We are very quick to blame the technology stack. “Open web standards are not good enough” is the rallying cry. As we are intelligent developers who are expected to have superhuman ninja 10× skills, surely we can come up with a better framework to build web apps in, right?

When Etsy announces their switch to SCSS to make their styles more maintainable, the finding should not be “Sass beats CSS”; it would be more interesting to learn what led to over 400,000 lines of CSS in over 2,000 files for a site that is actually not that complex in the first place. The article is very interesting and shows some great differences in how people deal with code:

CSS enthusiasts love the fact that CSS doesn’t stop executing when it encounters errors — it just skips them, which allows, for example, for browser prefixes. For Etsy, this was an issue, as it was impossible to pinpoint where things went wrong in the CSS. Instead of digging, people just wrote more CSS with a higher number of selectors to overwrite existing styles. I love that the article also mentions the dangers of SCSS, especially @extend allowing the final CSS to bloat again. It doesn’t mention the gains that came from switching to SCSS; all it explains is that they are now thinking properly about how to write the styles for the product to avoid bloat.

So the bloat is not really the fault of CSS as a technology. To me, it hints at people creating CSS who didn’t quite grasp it or know how to debug it. This is no surprise. It happens all the time. Do we, as developers, have to understand everything in the stack? It seems to me that this is what is expected of us. Well, good luck with that.

Applications are not a result of a single technical solution

Maybe it is time to take the term application literally — a result of applying the right ingredients.

What is an application? To me, it is a tool that allows people to reach a certain goal in the most effective fashion. What matters is not what language or technology you build it in. What matters most is:

  • that it is the right tool for the right audience,
  • that it does what is expected of it and not more,
  • that it is safe to use,
  • that it works in the environment it is most used in,
  • that it can be easily maintained without dependencies that only a few people know how to use,
  • that it is built with components that are reliable to use and not an “alpha”, “beta” or “not production ready” experimental technology,
  • that we know how to maintain the thing, how to add new functionality and above all, fix security issues in the future without replacing it as a whole.

These are the things we should concentrate on. To find the answer as to what format this “application” will be, we need a mixture of skills of people working on the product:

  • researchers,
  • designers,
  • UX people,
  • content writers,
  • trainers to show people how to use the tool and how to put content in it afterwards, and
  • yes, of course, developers.

And this is the scary part: this costs money and a lot of effort. It also means that we have to think about communicating and building teams that are good at bouncing ideas off one another and finding a good consensus. It also means it will take longer to build this.

All of this is anathema to people who have to show off to venture capital companies and stakeholders. We have to move faster, we have to be better. Fewer people, more products, quicker iterations, more features. It doesn’t matter what the product does: the most important part is that you show that it evolves and changes constantly.

This doesn’t work when you have to have a group of people with various skills involved. Apps are hard — let’s go shopping.

And the shopping we do is around frameworks, languages and products. A lot of them are not really fixing fundamental problems of the web. What they do is add developer convenience. They promise to get a handle on how your app will perform and work across all the devices in all the permutations in all the world — or at least the most important one this month, in the market where you can make the most money.

This would be totally OK, if we were honest about it. We use these things because we have a short-term goal to reach. We invent them to show off that we are working on “the web problem” and that we are probably the ones that can solve it where everybody else has failed. We release them to impress people and to attract developers to work for us.

Having fanboi fights over which one is better to use, solves the issues more easily and gives you much more power as a developer is not getting us anywhere. In the end, all these solutions are products in themselves — even the open source and free ones. And these products need support and love and maintenance. This can become an issue really quickly when the company that released them to solve a short-term goal is not interested in them any longer, and they turn from a convenience into a cost. Who will maintain them? A magical community that will rise up and spend their full time on them? That doesn’t happen too often.

A lot of our work on the web goes pear-shaped in maintenance. This is not a technology issue, but a training one. And this is where I am worried about the nearer future when maintainers need to know all the abstractions used in our products that promise to make maintenance much easier. It is tough enough to find people to hire to build and maintain the things we have now. The more abstraction we put in, the harder this will get.

Apps are software products with human interfaces. Web sites are that, too. The WYSIWYG dream of the 90s told people that all they needed to do was buy Dreamweaver and they’d be able to build the same success as Amazon. This was nonsense. And so is pretending that an app needs a certain technology or framework to be a success.

What a good app needs is a team of dedicated people behind it. Good apps are good because talented people cared about them and pooled their skills.

We should not try to replace this with convenience methods, syntactic sugar or frameworks. All we’d achieve is to release cookie-cutter solutions that leave end users underwhelmed and us unfulfilled as we didn’t do the work.