A Zen (but Still Frustrated) Description of Modern Web Development

Ben Donaldson
13 min read · Jan 12, 2016

--

I know everyone loves to pick sides, attacking and defending each other, and blaming their daily problems as developers on their tools.

“It’s a poor craftsman that blames his tools” is a common comeback for that, and if you click on the link you’ll read an even better “counter-counterpoint”.

But for just this article, just this umpteen-hundred words, let’s pause all the cursing, personal attacks, and knee-jerk reactions (defined in the OED as “butthurt”), and describe the recent “pain points” of web development in a friendly way.

I don’t care about traffic (or I’d write this on some ad-filled blog), I don’t care about your individual philosophies and dogmas regarding process X or library Y. I just want there to be at least one article about this stuff that doesn’t get renamed as “a rant” by Hacker News.

So here it goes, folks! Hold on to your standing desks.

Quick Intro: 256 Shades of Grey

Arguing is fun! Anyone who disagrees is clearly an imbecile and we’ll let them know in the comments, right? That’s the joy of the Internet, a superficially free and open place where you can speak your mind and probably not get physically hurt. You can be like an old beagle, barking at everyone from the comfort of your own home, knowing they can’t get you.

And because arguing is fun, we make everything we want into an argument, with nice black vs white, red vs blue sides to choose from. But as the famous Yin Yang symbol depicts, those two sides are more intertwined than you think:

Trolls are necessary to bring balance to the Force.

Taoism holds that good and bad are inseparable; without bad, there is no good. It also helps explain why we can argue about anything: just by stating an opinion, we’ve drawn a line in the sands of thought, one that people can jump over and stand behind while they curse you out.

Personally I believe that there is no real line between “right” and “wrong” in any argument, but that we’re all points in a philosophical space with countless dimensions. We only choose “right” and “wrong” for two reasons:

  1. Someone who likes to argue drew a line, and now we’re all on one side or the other whether we like it or not
  2. There’s a deadline

So I’m going to try and explain things without picking sides. Take that with a grain of salt, or sand.

When I Was Your Age, 5 Dollars Got You an Image Carousel!

Because I wrote code in the ’90s, I can pretend there was a Golden Age of web development, when all you needed was a simple book and a dial-up modem. When writing a few HTML tags and uploading a very small image meant that you were building professional web sites.

For some professional web sites, yes that was unfortunately true. But for the classy ones, you needed to understand more. Especially server-side code, because that’s where 99% of the logic was! And how to handle physical hardware problems without the redundancy of The Cloud.

And even the best results were hampered by your users, whose browsers were all different and could barely handle images, especially when their time online depended on one of these:

That’s right! We used to get our Internet from vinyl records!

“There’s Gold in Them Thar Hills”

In the 2000s things gradually improved, in no small part because the Internet exploded in popularity and economic potential. Soon you had your choice of Firefox, Safari, Internet Explorer, and Opera, none of which really agreed on new standards. At one of my jobs we had an old laptop with IE6, an old laptop with IE7, and a new laptop with IE8, because that’s how you had to test IE back then. Nowadays we have services like Browserling. Is that a better solution than before? ABSOLUTELY!

Of course, your simplest option for cross-browser compatibility was to join the dark side and program in Flash. Flash was proprietary, insecure in ways that old websites couldn’t even comprehend, and unable to perform well on these new things called “iPhones”.

Although new devs might not know about them (and old devs would rather forget), frontend work was severely held back by big issues:

  1. Connection speed
  2. Cross-browser compatibility
  3. Device performance

Device performance was especially nasty, because by the time desktop computers were getting reliably fast we switched to smaller laptops, then smartphones. Form factors were shrinking as quickly as CPU speeds grew, so devs could never count on their users handling the computational load.

Corporate Dark Ages, Meet the Open Source Inquisition

Those three frontend issues I just listed? The last several years have changed things immensely, thanks to improvements in network speeds around the world and CPU speeds in devices of every size. And cross-browser compatibility? That one is being solved by open source software.

In 2008 Google released its own browser, one that would be updated constantly and help drag the web forward. And it did, slowly wresting the dominant browser share from Internet Explorer, to the point where only a few percent of users were arriving at web sites via IE6. To all of the web developers of the last decade, trudging through the mud of old IE versions on separate laptops, this meant they were closer to writing code once and running it in all browsers! And now if the web standards community wanted to add a new feature, Chrome and Firefox and Safari (usually) and Opera (bless their heart) could support it! Internet Explorer would have to join in or be left in the dust!

Oh, and like Firefox’s Gecko browser engine, the Chromium project underneath Chrome was open source! GitHub also arrived in 2008, and open source projects were turning from a formally-built exception into a community-contributed rule! With HTML5, CSS3, and the newly-rediscovered Ajax, we’re taking back the future of the web!

For a great analogy to this whole situation, let’s go back 107 years to a quote from Henry Ford:

“A customer can have a car painted any color he wants as long as it’s black.”

The Ford Model T was the first affordable car, revolutionizing American manufacturing and transportation. Although Internet Explorer wasn’t the first browser, it was still the primary browser when the web exploded in popularity. The Model T completely dominated the market for many years, just like IE, and it refused to change, just like IE. And what did it finally take to overcome the monotonous consistency of the Model T?

Style.

General Motors knew there was no practical reason to replace a car that still worked, so the only way to gain market share was to convince people that they were missing out on something new. So they created a variety of cars that were more expensive than the Model T but with new styles and features, and they released new models every year. And GM’s sales went through the roof! Suddenly cars, which Ford sold as a practical “gets the job done” item, became fashionable and interesting. The appeal of a new version and new features was enough to make people drop things that still worked fine. The secret to beating IE was already in use a century ago.

So the browser market’s near-monopoly was busted, and the lure of new versions and features helped users flock away from IE. Most users didn’t care which browser they used, until old IE’s market share was low enough that developers could justify neglecting it. It still took many years, but it wouldn’t have happened without a carrot on a stick, and without convincing us developers that new features were worth the code churn.

I Don’t Know About You, But I’m Comfortable

It’s notoriously difficult to estimate how long a bug or feature will take, and anyone who has sat through planning poker will tell you how rarely a developer hits that estimate. I knew an engineering manager who kept secret multipliers for every employee, so when someone said “That’ll take 5 hours!” she’d write down “(5 hours x 2.5 multiplier for Bob) = 12.5 hours”. It’s not my fault as a developer that there are so many variables (ugh) that go into development, but I do sometimes envy professions where a task almost always takes as long as you expect.

The problem is, if I can’t estimate how long my work takes, how can I be trusted to estimate the time I’ll save from a new feature? If using the old version of a tool takes me 4 minutes and using the new version takes 3 minutes, then I save 1 minute every time I do the task!

Let’s upgrade right now, it should be easy!

(15 minutes later)

Phew! Sorry, I had to check the README to make sure nothing would break. Anyway it’s done now, and…

(20 minutes later)

The new version had a bug when combined with our dev setup, so I had to reference a PR until they re-release. Not a big deal. OK, so…

(10 minutes later)

Fine, I updated the wiki so everyone knows how to use the new version. It’s mostly just syntax changes.

(2 hours later)

Why are you still complaining about this, Bob? It’s just a syntax change! Read the wiki!

So in a “theoretical” hour (closer to three, by the tally above), you’ve upgraded the tool that saves you one minute per use. The team uses it once a week, so the hour pays off a little over a year from now. That’s probably worth it. But how many upgrades have you done where the reading, refactoring, upgrading, patching, and discussing end up taking much longer than you estimated? And how many upgrades never pay themselves off?
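The arithmetic above can be sketched as a tiny break-even calculator; the numbers are just the ones from this story:

```typescript
// Weeks until the time spent upgrading a tool pays for itself.
function breakevenWeeks(
  upgradeCostMin: number, // minutes spent doing the upgrade
  savedPerUseMin: number, // minutes saved each time the tool is used
  usesPerWeek: number,    // how often the team uses it
): number {
  return upgradeCostMin / (savedPerUseMin * usesPerWeek);
}

// The optimistic estimate: a one-hour upgrade, saving 1 minute per weekly use.
console.log(breakevenWeeks(60, 1, 1));  // 60 weeks -- "over a year from now"

// The actual tally from the story: 15 + 20 + 10 + 120 = 165 minutes.
console.log(breakevenWeeks(165, 1, 1)); // 165 weeks -- more than three years
```

The formula ignores exactly what the story illustrates: the hidden costs of README-reading, patching, and wiki-explaining that inflate the numerator after the fact.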

As developers we (understandably) choose whatever makes us more comfortable, but we can’t always quantify and justify the time savings. If we’re coding for fun, comfort is the only priority, but at a job we have to take time savings into account, even when our manager just says “I don’t know what the hell you’re talking about, so I trust your judgment” and the other developers who might challenge us are also choosing comfort over time savings.

Sometimes you have to choose between time savings and comfort. Your comfort and your time savings will be different from someone else’s. And there is no “right” choice, so don’t waste too much time choosing!

The Onion

Open Source software has unleashed the creative freedoms (and ambitions) of developers around the world, and the web is embracing progress. So let’s make that future!

The only things holding us back are old computers, old browsers and stubborn users.

First we’re going to make ourselves more comfortable (and potentially save time) by using the new cross-browser CSS features for graphics, text, and layout. We can save time writing all of the browser-specific implementations of those features by using a prefixer: a tool that sits between our code and the final CSS, doing the prefixing for us. That tool needs somewhere to run, so we add another local dev server to our computer, but now we can write CSS in a nice comfortable way, and it gets transpiled into the uncomfortable cross-browser-compatible CSS!

That is a perfectly reasonable thing to do, and it makes things much more comfortable. It also saves a decent amount of time while writing CSS.
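As a toy illustration of what a prefixer does (the prefix table below is invented for the example; real tools like Autoprefixer derive theirs from browser-support data), the transformation is roughly:

```typescript
// Invented table of which vendor prefixes a property needs -- a real prefixer
// looks this up in a browser-support database instead of hardcoding it.
const PREFIXES: Record<string, string[]> = {
  "user-select": ["-webkit-", "-moz-", "-ms-"],
};

// Expand one comfortable declaration into its cross-browser form.
function prefixDeclaration(prop: string, value: string): string {
  const lines = (PREFIXES[prop] ?? []).map((p) => `${p}${prop}: ${value};`);
  lines.push(`${prop}: ${value};`); // the standard property always goes last
  return lines.join("\n");
}

console.log(prefixDeclaration("user-select", "none"));
// -webkit-user-select: none;
// -moz-user-select: none;
// -ms-user-select: none;
// user-select: none;
```

Hiding this expansion behind a build step is exactly the comfort being described: you write the one-line version and never think about the other three.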

But here are the time savings from not doing it (which is also perfectly reasonable, even if it looks like a bad career move):

  • Not having to regularly learn CSS features and browser compatibility;
  • Not having to change your dev setup (or an old company’s dev setup) to support a dev build server for transpiling CSS assets;
  • Not having to deal with bugs in the tool, breaking changes, or time spent making sure the rest of your team is taught or self-learning.

It all depends on the site you’re building, and the size (and experience) of your team. I work on some products where I believe we do need the newest libraries and frameworks for what we’re making. Those products involve many hours of keeping up to date, bug fixes, and build tool frustrations, but I still believe the overall time savings are in my favor. I also work on some products where jQuery or Backbone is all the interactivity we’ve needed on the frontend, and I haven’t had to spend any extra non-coding time on them for years!

The new versions and features are exciting and enticing, just like they were 100 years ago! Things like greenkeeper.io show that staying on the bleeding edge is clearly so much fun that it’s worth the pain of being a first responder to any bugs. Abstractions like Sass, PostCSS, CoffeeScript, and Babel can make coding more comfortable, and more productive! But every abstraction adds another layer to the onion, another bulb in the string of Christmas lights between your desk at work and your couch at home. Build tools like Gruntpack, Webify, Browserulp, and Speedball are like your own personal dev ops team! But some developers are very passionate about the visual results and not passionate about the build process. The fact that beautiful UIs require a growing amount of CS knowledge is absolutely and understandably frustrating for people who are not motivated by the process.

Even if these new tools aren’t required, they feel like they are. And so every non-programmer who wants to build a web site looks online, sees the new styles and features of Frontend 2016, and believes that’s exactly what they need. It’s partially their fault for being “gullible” to web fashion trends, but only in the same way that people are “gullible” for buying the newest iPhone, the newest Subaru, or the newest version of anything (Flat UI design, Air Jordans, etc.). It’s desirable.

And for those of us older developers (Hell, I’m 32 and I feel like a Diplodocus) who used to sketch out a frontend and then write code as a means to an end, we’ve spent the last decade becoming “real” programmers whether we liked it or not, because the layers of the onion keep multiplying, and the versions are updating as fast as they can. We developers have no one to blame but ourselves for chasing these new tools whether we need them or not.

In the fable The Tortoise Dev and the Hare Dev, the tortoise ends up winning, and the hare doesn’t even notice because he was too busy speaking at conferences about his deployment speeds.

The Ugly Ducklings of Progress

Progress is absolutely wonderful, but there are a few things that haven’t been improved along the way and, for some tools and libraries, maybe never will be.

LTS

Node and Ember are two JavaScript communities embracing the idea of an LTS (Long Term Support) release. For the low cost of critical bug and security fixes, they give the less-flexible developers of the world a reliable snapshot of their amazing tools. It’s not going to fragment the world as badly as Internet Explorer versions did, because you still have an open source community; you just give the enterprise folks a chance to join and contribute.

Docs

Docs are hard, especially when they change so often. Ember now has great versioned docs, but for a very long time it did not, so everyone had to rely on Google, Stack Overflow, GitHub, and various other parts of the web while manually checking the timestamp of every comment. I understand that in the name of progress most open source projects are lacking in up-to-date docs, because writing them takes time away from new features. But churn is a bad thing for anyone using a tool in a large company. If your library’s supported version changes faster than my IT department, I won’t use your library, and any big company should probably avoid it as well for substantial projects.

For another old example, take the Willow Run bomber plant, the biggest factory ever created at the time. If you want to talk about a build process, talk about 3.5 million square feet of factory floor, building giant B-24 airplanes with hundreds of thousands of parts so fast that they deployed “live” (i.e. they literally flew off the assembly line) every hour! But it took a very long time to get production going at all, because the military kept changing small specifications, and every time, the factory had to be reconfigured: build tool upgrades, adapters, and thousands of workers around the clock.

With software we can upgrade our process without thousands of workers, and without rebuilding lathes by hand. But every small syntax ‘upgrade’, every small specification tweak, every build tool that changes slightly makes us pause all of our deployments. We have to check the new changes even if they’re irrelevant, and if there isn’t a manual then we have to learn someone else’s codebase to understand their diffs. We have to explain to our coworkers, or managers, or angry customers, why we can’t fix something because a distant specification in a repository (a.k.a. a “supplier”) might have changed. It’s our fault for trusting a third party that gives us things for free, but if a repo’s documentation was treated with the same fervor and fashion as its code, I think everyone would be much happier. The only people who wouldn’t be happy are the people who hate writing about their code, in which case maybe we need a “Doc Coverage” Github badge like Code Coverage to whet their appetites.

Debugging

In a recent article that was more blunt about build tools, the author mentioned Elm and its efforts to make debugging easier, with error messages that are as clear as possible.

Here is the amazing article they linked to.

If we can invest in build processes with a dozen layers of transpilation, compilation, minification, etc. to make our coding experience more comfortable, we should invest even more in making the learning and debugging process more comfortable. I know it seems like “you’re a programmer, you should just learn how to do it. Some things just aren’t easy”. We can’t say that to beginners (or anyone) and then spend a bunch of time configuring support for Babel and PostCSS. Everything in programming was hard at one point, but we’ve made most of it easier by orders of magnitude. That’s what we do, and that’s what we should keep doing even for docs and debugging.

In Other Words

We have come an incredibly long way. The Open Source universe is expanding and accelerating, and we could make a COSMOS episode describing all the stars and constellations on Github. But we’re going so fast we don’t think we have time to pave any roads or put up signs. We do have time, but the open source community needs to make the road pavers and map builders look as cool as the people who lay the foundations.
