A response to a popular article.
The appropriate response to a perceived competence gap between web development and application development is not to fake admiration for the worst tools of web developers, but to make a concerted effort to improve tools and knowledge in both communities. If someone’s complaints about CSS make you feel like your skills are being belittled, that’s probably an indication that you need to improve your skills, and learning why your preferred tool is bad is a good first step.
The fact that some people are capable of making impressive things with a tool does not make the tool good. Making impressive things with bad tools (or with good tools intended for a completely different purpose) is a tradition in the tech community; it’s called hacking. Writing a text adventure in PostScript is impressive only because doing so is a terrible idea. Likewise, modern web development is impressive because HTML and CSS are limited enough to make most things that would be easy in other domains very difficult in a browser. This is not a point in favor of CSS; it is a point against it. A good tool makes easy things easy and hard things only slightly harder; CSS fails this test.
Normalizing a poor tool, one in which common problems take a great deal of effort to solve, has knock-on effects. If an absolute beginner can’t perform extremely common tasks (in other words, if new users are buried under an avalanche of gotchas), those tasks get pushed onto intermediate users; more difficult tasks are relegated to advanced users; and difficult tasks that also need to behave consistently and reliably become something nobody has time for. And if you cut corners and bring on somebody slightly less skilled than the work demands, you’re more likely to get inconsistent and unreliable results even on simple tasks, because the difficulty curve is all screwed up and beginners don’t know which snags they haven’t researched yet. This pattern recurs with any poorly-designed, over-complicated, inconsistent tool: anything created with it will be systematically slightly worse than anything created by someone of similar competence using a well-designed tool.
Pretending a bad set of tools is good lowers the bar for good tooling. It fosters an environment where bad tools are the norm, and encourages people to learn only bad tools. Just as the web is an absolute horror show (ultimately because Tim Berners-Lee cut a bunch of corners in 1992), we have large groups of people who think using Hadoop and Hive is a good idea when a single Unix command line running on one core will do the same amount of processing in 1/80th of the time, and we have academic fields where significant numbers of statistical errors in published papers result from bugs in Microsoft Excel. Bad tools should be shamed, and the use of bad tools should be limited and careful.
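The Hadoop comparison above is exactly the kind of job a plain shell pipeline handles well. As a sketch only (corpus.txt is a hypothetical input file, and the task, word-frequency counting, is a representative aggregation rather than one taken from the original comparison), a single-core version might look like:

```shell
# Hypothetical stand-in for a "big data" job: word-frequency counts over a
# text file. corpus.txt is a placeholder name. Every stage streams, so the
# pipeline runs in roughly constant memory no matter how large the input is.
tr -cs 'A-Za-z' '\n' < corpus.txt |  # split input into one word per line
  tr 'A-Z' 'a-z' |                   # normalize case
  sort | uniq -c |                   # count occurrences of each word
  sort -rn | head -10                # print the ten most frequent words
```

Each stage is a separate process connected by a pipe, so the work streams end to end; for aggregations of this shape, skipping cluster startup, scheduling, and serialization overhead is where most of the claimed speedup comes from.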
Computer programmers have spent much of their history in the sandbox. In the 60s and 70s, most of the interesting things being done on computers didn’t have to be stable or reliable; our modern programmer culture derives mostly from the group that shot from the hip, rather than from the serious, conservative professionals who were crunching numbers on IBM boxes during the same era. From the late 70s through the mid 90s, personal computers were mostly not networked, and for part of that time permanent storage was limited; the usual cost of a mistake was that the end user had to reboot the machine, and even though hardware memory protection facilities existed on PCs after 1987, they went unused for the next ten years. Meanwhile, those who used the internet were almost universally technical and could be expected to fix their own problems. Sloppy development, and development tools that made non-sloppy development difficult, became normal. But we aren’t in the sandbox anymore; poor decisions made for toy projects in the early 90s come back to bite us daily. Poor tools and sloppy decisions are no longer acceptable.