5 Reasons why ‘the year of the Linux desktop’ never happened

And isn’t going to happen any time soon.

Michael Biggins

Sometime around 2010, the people proclaiming that the next year would be the ‘year of the Linux desktop’ suddenly went quiet. Maybe it was the constant mocking, or the realisation that those doing the mocking were proved right year after year, but whatever the cause, that little spark of hope that you might one day walk into a high-street retailer and see systems aimed at Joe Public being sold running Ubuntu simply faded out with no fanfare. I work with Linux daily in both desktop and server environments, and despite the fact that I could install Gentoo without the handbook, I find myself frustrated by its solvable shortcomings.

“Open Source” isn’t a feature.

I could probably write a short book on the virtues of open source software (despite being a developer of closed source software), but the reality of the situation is simple: nobody cares. If you’re reading this there’s a good chance you’re a developer, a Linux user, or an open source enthusiast in the mould of Stallman, but outside this niche community the people who ultimately use the software could not possibly care less. They’re not going to contribute code to the project, they’re not going to do security audits or write documentation, and you’d count it a minor miracle if even a tiny number of them submitted a semi-coherent bug report (which I’d argue is the smallest action that counts as a contribution to a project). If your users have even heard of open source, to them it is essentially a byword for free in the ‘I don’t have to pay for it’ sense, so there’s no use pointing to it when asked why anyone should consider Linux.


Choice is bad. Very very bad.

From an engineering perspective choice is of course good: you can pick whichever projects and systems best meet your needs. But having choices only helps if you actually understand the differences between your options, and what the impact of those differences will be, well enough to make an informed decision.

Different distributions can differ enormously from one another. Some differences are immediately obvious, such as the default desktop environment; others are not, such as the choice of package manager or of core system libraries that may affect which applications you can run (glibc vs. musl, for example). Ultimately your typical user’s choice will likely be based on aesthetics alone: whoever has the prettiest out-of-the-box UI wins. The Linux Mint project seems to have taken this most to heart, shipping a reasonably sleek, different-but-familiar interface as standard.

[Screenshot: Linux Mint live CD, latest release, sticking to the standard desktop model with a Windows-esque menu.]
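To make those ‘not immediately obvious’ differences a little more concrete, here is a minimal Python sketch (standard library only) that probes which C library and which package-manager front-end a running system uses. The list of tools it checks is just the common front-ends, not an exhaustive survey.

```python
# A minimal sketch (not an exhaustive detector) of two "invisible" distro
# differences: which C library the system is built on and which package
# manager front-end is available on the PATH.
import platform
import shutil

def detect_libc() -> str:
    # platform.libc_ver() reports e.g. ('glibc', '2.35') on glibc systems;
    # on musl-based distros (e.g. Alpine) it typically returns empty strings.
    name, version = platform.libc_ver()
    return f"{name} {version}" if name else "unknown (possibly musl)"

def detect_package_manager() -> str:
    # Check for the common front-ends; order and list are illustrative only.
    for tool in ("apt", "dnf", "pacman", "zypper", "apk", "emerge"):
        if shutil.which(tool):
            return tool
    return "unknown"

if __name__ == "__main__":
    print("C library:      ", detect_libc())
    print("Package manager:", detect_package_manager())
```

Two systems that look identical on the surface can give completely different answers here, which is exactly the kind of distinction a typical user has no way to reason about.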


Dual-Boot vs. Virtualize: a moot duel.

In a scenario oft-seen on forums, Reddit and the like, a curious user expresses an interest in running and experimenting with Linux but is wary of replacing their current OS (likely Windows) outright, and is looking for options to mitigate the risk. More often than not they’re presented with two seemingly obvious choices, dual-boot or virtualize, and neither of these is the right answer.

While setting up a dual-boot environment is much easier than it used to be, it only takes an unusual partitioning scheme, courtesy of an OEM’s odd choice of sizing and placement for recovery partitions, to screw it all up and leave the automated tools staring idly at their feet. And even if it does go off without a hitch at first, a single Windows update can leave your Linux installation unable to boot. And while you and I, dear reader, know you should take a full backup of your system before making major alterations to your partitioning scheme, college students can’t even be persuaded to keep multiple copies of their final-year dissertations. So that’s not going to happen.
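To make that concrete, here is a rough sketch of the kind of pre-flight check an installer (or a nervous user in a live session) could run first: list every partition and flag anything that looks like an OEM recovery partition. It assumes util-linux’s lsblk is available, and the ‘recovery’ heuristic is deliberately naive.

```python
# Rough pre-flight check before repartitioning for dual-boot: walk the block
# device tree reported by lsblk and flag partitions whose label suggests a
# vendor recovery partition. The heuristic is intentionally simplistic.
import json
import subprocess

def list_block_devices():
    # lsblk -J emits JSON; keys match the requested columns in lowercase.
    out = subprocess.run(
        ["lsblk", "-J", "-o", "NAME,SIZE,FSTYPE,LABEL"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["blockdevices"]

def walk(devices, depth=0):
    for dev in devices:
        label = (dev.get("label") or "").lower()
        suspicious = "recovery" in label or "restore" in label
        flag = "  <-- looks like a vendor recovery partition" if suspicious else ""
        print("  " * depth + f"{dev['name']:12} {dev['size']:>9} {dev.get('fstype') or '-':8}{flag}")
        walk(dev.get("children", []), depth + 1)

if __name__ == "__main__":
    walk(list_block_devices())
```

Nothing here is hard, which is rather the point: the information is available, yet the typical user is never shown it before the installer starts moving partitions around.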

Virtualization nominally solves the problem but introduces caveats of its own. While risk-free, it fundamentally means a crippled experience, usually in the form of knackered GPU performance (if the virtual GPU even wants to play ball in the first place), which leaves heavy productivity applications (GPU-accelerated video editing and encoding) and games running slowly, incorrectly or not at all. Linux is best enjoyed full fat, and hosting it this way makes for a very watered-down experience.

If you’re using a modern system with USB 3, however, a decent-capacity, high-speed flash drive is a viable option, and it’s what I generally recommend for these kinds of experiments. Something like a 64GB SanDisk Extreme, with its almost SSD-like 245MB/sec read speed and practically zero seek time, results in a very fast and responsive system. I’d definitely not recommend doing this with a USB 2 drive of any kind.
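If you want to sanity-check whether a drive is in that league before committing to it, a back-of-the-envelope sequential read test is enough. The sketch below is only illustrative: the file path is a placeholder for a large, freshly written test file on the drive, and the OS page cache will flatter repeat runs.

```python
# Back-of-the-envelope sequential read test for a flash drive.
# TEST_FILE is a placeholder; point it at a large file on the drive under test.
# Repeat runs will be inflated by the page cache, so use a fresh file for a
# fair number.
import time

TEST_FILE = "/media/usb/testfile.bin"   # placeholder path
CHUNK = 4 * 1024 * 1024                 # read in 4 MiB chunks

def sequential_read_mb_per_s(path: str) -> float:
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print(f"{sequential_read_mb_per_s(TEST_FILE):.1f} MB/s")
```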

When things go wrong, the ability to ‘get under the hood’ isn’t helpful if it’s a necessity.

One of the great strides Microsoft has made over the last 15 years or so is in handling situations where Windows completely falls over. This started with Safe Mode in Windows 95, which loaded the most minimal set of components possible to get a desktop up and allow repairs, but that didn’t help your typical end user; it just made it easier for techs to get in and solve the problem. In 2000, Windows Me added System Restore, which allowed you to snapshot important parts of the system state so that, if the system were damaged, those files could easily be restored without affecting the user’s documents. The system was far from perfect, but it was the first step in partially automating recovery and making it accessible to the user.

Windows 7 brought Startup Repair to every installation, storing a recovery environment on its own partition so that certain classes of problem can be fixed automatically (and so that advanced users can reach features like System Restore or a command prompt) in the event that the primary installation becomes unbootable.

Windows 8, for all its faults, did introduce the ‘Reset this PC’ feature: an almost one-click way to do what IT professionals usually resort to as a last measure (format and reinstall), on demand, without even requiring installation media, and it can automatically keep all your documents if you want. When all else fails, users can do this themselves with no technical knowledge and be left with a working system at the end of it.


The theme over time has been automating the recovery process and reducing the technical knowledge needed to keep a working system. At no point have end users asked for more access to the guts of the OS (though the ability for those with the relevant knowledge and experience to get stuck in remains unhindered), so touting the ability to get your hands dirty in a Linux environment is not a good argument; it’s not so much an ability as an absolute necessity when a bad update leaves you with a non-booting system.

Where are all the designers?

In my first point I touched on the fact that user contributions to open source are essentially non-existent. On top of that, there is rarely much scope for non-coder designers to contribute. The meat of a typical Linux application’s user interface is rarely kept in external XML or another markup language, properly isolated from the real functionality, in a way that would allow UI changes without anyone touching code. In the Windows world Microsoft has made great strides in this direction with the combination of the XAML markup language and design-centric tools like Expression Blend, which let non-coder designers put together functional user interfaces, complete with animations and transitions, with no coding knowledge. If the backend code exposes its fields properly, the UI just magically works and interacts with everything. This kind of isolated, segmented design is extraordinarily rare, bordering on non-existent, in the Linux world. Qt Designer gets part of the way there, but then we’re straight back at my second point (Qt vs. GTK centric desktop environments).
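For anyone unfamiliar with the pattern, here is a minimal sketch of that markup/code split using the Qt stack mentioned above. It assumes PyQt5 is installed and that a designer has produced a main_window.ui in Qt Designer containing a push button named helloButton; both names are hypothetical.

```python
# Minimal example of keeping the UI in designer-authored markup (.ui file)
# while the code only wires up behaviour. A designer can restyle or rearrange
# main_window.ui in Qt Designer without touching this Python file.
import sys
from PyQt5 import QtWidgets, uic

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # Load the entire layout from the markup file at runtime.
        uic.loadUi("main_window.ui", self)
        # Connect behaviour to a widget defined purely in the markup.
        self.helloButton.clicked.connect(self.on_hello)

    def on_hello(self):
        QtWidgets.QMessageBox.information(self, "Hello", "Wired up from markup.")

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())
```

The plumbing exists; what’s missing in most of the ecosystem is the culture and the designer-facing tooling to make this the default way applications get built.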

The result is that we largely end up with user interfaces built by engineers: people who can code and produce something functional, but who have little understanding of affordances and UX principles, which leads to an unfriendly and inconsistent experience.

Honourable Mentions

  • No distro ships today with across-the-board high-DPI support.
  • GPU-driven vector-based UIs are only just starting to make an appearance, but are critical to solving the previous issue.
  • There’s no hardware compatibility check during installation. Why not identify hardware that isn’t supported by the available drivers and warn the user before installing, saving headaches later?
  • Package managers suck. This article was almost six reasons long because of this. Users don’t need to know about library packages; if a dependency doesn’t add a menu entry, users don’t need to know it exists. A URI scheme that lets vendors link straight to their app would also be useful (a toy sketch of the idea follows this list).
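On that last point, here is a toy sketch of what such a vendor link could resolve to. The pkg:// scheme and the mapping table are made up purely for illustration; they are not an existing standard.

```python
# Toy sketch of a hypothetical "pkg://" URI scheme: a browser hands the link
# to a desktop handler, which maps it onto whatever package manager the
# distribution actually uses. Scheme name and table are illustrative only.
from urllib.parse import urlparse

INSTALL_COMMANDS = {
    "apt":    ["apt", "install", "-y"],
    "dnf":    ["dnf", "install", "-y"],
    "pacman": ["pacman", "-S", "--noconfirm"],
}

def resolve(uri: str, package_manager: str) -> list:
    """Turn e.g. pkg://vlc into the install command for this system."""
    parsed = urlparse(uri)
    if parsed.scheme != "pkg":
        raise ValueError(f"unsupported scheme: {parsed.scheme}")
    package = parsed.netloc or parsed.path.lstrip("/")
    return INSTALL_COMMANDS[package_manager] + [package]

if __name__ == "__main__":
    # A vendor's "Install our app" link could then just be <a href="pkg://vlc">.
    print(resolve("pkg://vlc", "apt"))   # ['apt', 'install', '-y', 'vlc']
```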

Written by Michael Biggins: gamer, coder, Tesla owner and EV and Renewable Energy evangelist. Director of @CubeCoders.
