Why we build GUIs again and again

Brett Uglow
DigIO Australia

(…and will continue to do so for the foreseeable future)

Hi! My name is Brett Uglow and I’d like to talk about an uncomfortable truth: graphical user interfaces (GUIs) for many applications are being redeveloped over & over again, at great expense. Thankfully, there are some ways that organisations can minimise the cost of redeveloping GUIs while still achieving their business objectives.

For the purposes of this series, we will define a GUI as what you see on the screen of an electronic device. A tablet, a smartphone or a desktop computer — anything that has a screen and an input method (keyboard, mouse, touch, gesture, voice, etc.) — has a GUI on its screen. While the focus of this series is on graphical user interfaces, most of it applies to other user interfaces too (vocal/aural, gestural, haptic, etc.). In contrast to a GUI (a human-to-computer interface), an application programming interface (API) is a computer-to-computer interface that software programs use to communicate with each other.

In Part 1 of this series (this article), we will look at the main reasons why organisations rewrite their GUIs. In Part 2 we will explore different factors for organisations to consider when rewriting their GUIs, and strategies to allow organisations to “future-proof” their GUIs (or at the very least, increase the longevity of their GUIs). In Part 3, we will look at some of the changes coming to GUI development, and how organisations can prepare for those changes.

Part 1 — Why we build GUIs again and again

Meet Bob. Bob works for a large organisation as a product manager. His tech teams have been using Angular to build client-facing applications for a few years, but now he’s experiencing more project delays and requests to write new features using something called “React”. Upon reflection, he remembers the same thing happening a few years ago, only it was to replace jQuery with Angular. So Bob is wondering, “Why are we spending money re-writing these applications every few years? Why can’t we build on top of what we already have, like the API teams do?”

Is Bob right? Are organisations really re-writing their client-facing applications every few years? And if so, why?

Firstly, “Bob” is not a real person. But his story is all too real. As a software engineer in Australia (and avid follower of web and UI trends around the world), I’ve seen first-hand the situation described above: large organisations have spent millions of dollars re-writing client-facing applications multiple times using different technologies. And if you were to compare the initial version to the latest version, you would notice massive improvements in the user experience (UX) over that time. However, there would only be a 10–20% increase in functionality between the initial & latest versions, which raises the question: is UX a sufficient reason to rewrite a GUI?

For many organisations, the answer is, “Absolutely!”. Improved UX leads to more revenue through increased conversion & brand experience. But that’s not the only factor driving GUI rewrites. Aside from business drivers, which sometimes necessitate GUI changes (e.g. new business rules), there are 3 other factors:

  • Hardware — in the form of incremental changes (when existing devices are improved) through to the introduction of completely new GUI devices
  • Software — when the software used to create GUIs changes, as well as software engineering practices
  • Fashion — as technology has allowed GUIs to become more customisable, organisations that invest in their brand consider their GUIs a key part of their brand

These factors provoke continual changes in customer-facing GUIs as organisations compete for customers by offering ever-improving user experiences. If we were to graph the effect of these factors over time, it would look like this:

Let’s deep-dive into each of these factors, which will hopefully explain why we keep rewriting GUIs.

Influence of Hardware

In the 1980s, computer hardware was limited and expensive. Initially computers only had a keyboard and a display device (monochrome CRT). The user interface was text-based via early operating systems like CP/M and MS-DOS. When computer mice were added as an input device, it facilitated the introduction of basic graphical user interfaces based on the WIMP-model: Windows, Icons, Menus and Pointers. Although WIMP interfaces had been designed 10+ years before (by Xerox and others), they were only available to the masses when the Macintosh and Windows 1.0 were introduced.

The introduction of the mouse (or the notion of a pointer) signified a generational advancement in GUI development. The advances that came afterwards — high-colour displays, increased resolutions & refresh rates, touch devices, retina displays, the progression from TFT to LCD to LED to OLED panels — represent incremental hardware changes that have had incremental effects on GUI development. GUIs still essentially use the WIMP model, although sometimes they are disguised to look like real-world objects (natural or skeuomorphic design) or like a series of rectangles (Windows 10).

Two additional hardware “bumps” that are worth mentioning are AR/VR/MR and mobile. AR/VR/MR (augmented/virtual/mixed reality) devices are designed to provide immersive experiences. Walking down the street with a headset on while using a mouse & keyboard is not going to work. Neither is showing a 2D GUI in a 3D world. So these UIs are currently created through a combination of voice control, joysticks and 3D models (plus other things which are not in scope of this series).

Mobile devices introduced gestures and multi-touch as additional input methods, as well as screen orientation changes. This impacted GUI design by popularising responsive design and mobile-first design, and by forcing designers to think about how to map gestures to actions instead of using menus. This has been a major reason for the redevelopment of GUIs in the last 10 years.

Influence of Software

Hardware changes provide the platform for software changes. Increases in hardware performance allow software to do more things in the same amount of time. In Windows 3.1, when a window was moved, only an outline of its position was drawn until the window was “dropped” into a new location. But by the time Windows 7 was released, computers were performant enough for the contents of the window to be redrawn in real time as it was relocated. As hardware became more powerful, GUIs were rewritten to take advantage of that power by providing more frequent screen updates, animations and transitions from one state to another (e.g. maximising a window) — in the name of better user experience (UX — see below).

The look & feel of GUI applications was initially limited by the APIs exposed by operating systems. Mac OS v1 applications had the same “window chrome”, used the same colours, had the same text size, etc. (This was not a bad thing from a usability perspective, as it promoted learnability & transferability: if you could learn to use one application, you could transfer those learnings to the next application.) But by the mid-1990s, operating systems provided APIs that allowed GUI developers complete freedom over the appearance of their applications (see image below).

Example of the creativity of GUI developers once operating systems permitted complete graphical freedom.

The introduction of web browsers heralded a new way to create GUIs. Though browsers were initially limited in their graphical capabilities, Adobe Flash and the Dynamic HTML (DHTML) APIs led to an explosion in creativity and GUI design. Developers began to see the web as a way to provide “serious” applications to customers without the need to physically distribute the software. The advent of smart phones presented organisations with a dilemma — should they target applications for mobile devices or for desktops? Should they create web apps or OS-native (native) apps? Organisations with lots of money often chose to develop 3 or even 4 versions of a single application — iOS, Android, mobile-web and desktop-web. And then, when a new feature was needed, they would build it in 4 GUIs. Ouch.

In the midst of these changes, patchy & inconsistent implementations of critical web browser APIs led to the creation of JavaScript (JS) libraries, which tried to abstract away the differences between browsers to make programming easier. As more GUIs were written using web technologies (HTML, JS & CSS), it became clear that the page paradigm used for web sites was not suitable for web applications, which were really designed around components. This situation led to the most recent software-driven GUI redevelopment factor: JS frameworks.

JS frameworks are designed to provide an abstraction that makes writing web-based GUIs similar to writing native GUIs. The basic unit of a GUI is a component, instead of a page. These frameworks also brought more software engineering discipline into the development and testability of web GUIs. As developers learned the “right” way to build GUIs, the imperative to rewrite older GUIs increased — as did the number of JS GUI frameworks.
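
To make the component idea concrete, here is a minimal, framework-agnostic sketch using the browser’s built-in Custom Elements API (the element name and markup are made up for illustration). A component bundles its own markup, state and behaviour, and can be reused across many screens, which is exactly what the JS frameworks formalise:

// A self-contained GUI component: its markup, state and behaviour live together,
// and it can be dropped into any screen as <greeting-card name="Bob"></greeting-card>.
class GreetingCard extends HTMLElement {
  static observedAttributes = ['name']; // re-render when this attribute changes

  connectedCallback(): void {
    this.render();
  }

  attributeChangedCallback(): void {
    this.render();
  }

  private render(): void {
    const name = this.getAttribute('name') ?? 'world';
    this.innerHTML = `<p>Hello, ${name}!</p>`;
  }
}

// Register the component once; every GUI that loads this script can reuse it.
customElements.define('greeting-card', GreetingCard);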

The ability to implement a GUI using any one of 20+ different frameworks has produced GUI-churn as developers & organisations try to pick “the best” framework while the frameworks are rapidly iterating and innovating. Few organisations want to invest in any technology that might only be around for 2 years. Similarly, few developers want to spend a year working with an unpopular technology while their peers at other organisations are learning the latest technology. (Note that I’m talking about popularity of the technology, not the merits of one technology over another. The latest technology may be terrible, but if enough people jump on board, it’s harder to resist. A bit like “happy pants” in the early 90s).

Yes, we actually thought happy pants were “a good idea” in the late 1980s and early 90s. Likewise Java Server Pages, monolithic apps, Facebook, CoffeeScript …

Influence of Fashion

Each operating system (OS) has its own look & feel. OS-native applications are “encouraged” to follow the OS’s user interface guidelines. But each new OS version contains updates to the guidelines. Essentially, each OS has its own set of UI fashions. A classic example is the Windows “Start” button (see image below).

Windows start button changes from 1995 to 2013. The button’s functionality has barely changed over the years, but the button must remain consistent with the latest OS guidelines/fashion (source).

Because applications running in web browsers are no longer constrained by OS conventions, it has become viable for organisations to create their own UI guidelines (often referred to as style guides, brand guidelines or design patterns) (see below).

Style guide heaven or hell? (source)

You may think that once an organisation defines its own style guide, it can finally stop redeveloping its GUIs, because it has defined the appearance and use case for every component. History shows us that this is not the case. Brands still get refreshed & tweaked occasionally. Logos change. Fonts change. In 2015, 20 major US organisations updated their logos. These “small tweaks” are expensive because when the style guide is updated, all of the customer-facing applications should be updated too. Usually the style information has not been implemented in a way that allows it to be changed centrally and be reflected in each GUI.
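
One way to reduce that cost is to implement the style information centrally, as “design tokens” that every GUI consumes. Here is a sketch of the idea in TypeScript (the token names and values are hypothetical); a brand refresh then becomes an update to one module rather than to every application:

// Central design tokens: the single source of truth for brand styling.
export const tokens = {
  colour: {
    primary: '#0057b8',   // change the brand colour here, and every GUI follows
    surface: '#ffffff',
  },
  font: {
    family: '"Helvetica Neue", Arial, sans-serif',
    baseSize: '16px',
  },
  spacing: { sm: '8px', md: '16px', lg: '24px' },
};

// Example consumer: expose the tokens as CSS custom properties so stylesheets
// can reference var(--colour-primary), var(--font-family), etc.
export function applyTokens(root: HTMLElement = document.documentElement): void {
  root.style.setProperty('--colour-primary', tokens.colour.primary);
  root.style.setProperty('--colour-surface', tokens.colour.surface);
  root.style.setProperty('--font-family', tokens.font.family);
  root.style.setProperty('--spacing-md', tokens.spacing.md);
}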

Influence of UX

The notion of user experience is as old as customer surveys. Organisations know that if they can give customers a great experience, then customers will keep coming back and buying from them. The tricky part has always been trying to accurately identify what customers really want. Historically, this data has been gathered through surveys, market research and trialling new products in a few locations before rolling them out further.

In the digital marketplace, the same historical techniques are used but the data can be gathered far more easily, cheaply and accurately than ever before. Many native applications have their own survey or feedback mechanisms built in. Operating system vendors like Microsoft and Apple routinely collect customer-usage data, which they claim to use to improve their products. It is now an expected part of GUI applications that customer-usage data — usually mouse clicks, actions & errors — is collected and sent to an analytics platform.
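
For illustration, here is a minimal sketch of that kind of collection in TypeScript, assuming a hypothetical /analytics/events endpoint (real platforms such as Google Analytics supply their own snippets):

// Describe the usage data we care about: clicks, actions and errors.
type UsageEvent = {
  type: 'click' | 'error';
  target: string;     // which element or component was involved
  timestamp: number;
};

// navigator.sendBeacon queues the data in the background without slowing the UI.
function sendEvent(event: UsageEvent): void {
  navigator.sendBeacon('/analytics/events', JSON.stringify(event));
}

// Report every click, tagged with the element that was interacted with.
document.addEventListener('click', (e) => {
  const el = e.target as HTMLElement;
  sendEvent({
    type: 'click',
    target: el.tagName + (el.id ? `#${el.id}` : ''),
    timestamp: Date.now(),
  });
});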

Analytics platforms (such as Google Analytics or SiteCatalyst) are able to show customer journeys through an application (or even across multiple applications), how long customers spend on different screens, where they came from and where they went after leaving the application. They can also facilitate things like A/B testing, which compares the performance of one GUI against another. This is really useful information which UX designers can use to identify the good and bad parts of a GUI. But this information only tells half the story — what the customer does.

In the last 20 years, as online organisations began to focus on improving the UX of their applications, they increasingly sought to understand why customers behaved the way they did. Getting to the “why” requires additional UX techniques broadly covered by the term “user research”. By understanding the underlying motivations of users and potential customers, organisations have tuned their GUI applications (and other products & services) to address the needs of their audience. And as many online organisations have found, creating GUIs with better UX directly leads to increases in revenue, brand loyalty and return business.

UI software is fundamentally different to all other software

Bob also wondered why the GUI teams couldn’t just build on top of what was already there, like the API teams do. The shorter answer is that you can do that, but it will cost you. The longer answer is that all UI software is fundamentally different from other software, because of users. You. Me. We are the consumers of these interfaces, unlike APIs which are consumed by other software.

Human Diversity

Humans are diverse. Our requirements are diverse. There’s a whole field of research — anthropology — which studies all aspects of our behaviour including speech, movement, preferences and biases. In contrast, APIs (or any computer-to-computer interaction) can define the actions that will produce particular results. Computer-computer interfaces can be well defined and strict. Human-computer interfaces must be tolerant and somewhat flexible to try to cater for the wide variety of humans that will use the interface.
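
A small sketch of what that tolerance looks like in practice (the phone-number formats are illustrative only): an API can demand one canonical format, but the GUI has to accept the many ways a human will type the same thing and translate between the two:

// The strict computer-to-computer contract: one canonical format only.
type ApiPayload = { phone: string }; // e.g. must be "+61412345678"

// The tolerant human-to-computer interface: accept the many ways people type
// the same number, and explain the problem instead of failing silently.
function normalisePhone(input: string): string | null {
  const digits = input.replace(/[\s\-().]/g, ''); // "0412 345-678" -> "0412345678"
  if (/^04\d{8}$/.test(digits)) return '+61' + digits.slice(1); // local mobile format
  if (/^\+614\d{8}$/.test(digits)) return digits;               // already international
  return null; // let the GUI show a helpful message to the user
}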

This need to cater to a wide variety of people is most evident when talking about accessible interfaces. Accessibility is the field of engineering and design that looks at how to make interfaces (including visual, aural, interactive, etc.) usable by everyone. For example, did you know that people who are completely blind can use an iPhone to read the news and make calls? Or that a deaf person can detect when their phone is ringing? Or that someone with limited control over their hand movements can still navigate a web site effectively? None of this would be possible without engineers considering how to make a human-computer interface work for as many humans as possible. Some computer-computer interactions do offer flexibility too. For example, Firebase provides client libraries for multiple platforms (e.g. JavaScript, Swift and Android). But the diversity of humans is greater than the diversity of computer clients.
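
As a small example of the engineering involved, here is one way a custom control (say, a clickable <div>) might be made to work for screen-reader and keyboard-only users, using standard ARIA attributes and keyboard events:

// Make a generic element behave like a button for as many humans as possible.
function makeAccessibleButton(el: HTMLElement, label: string, onActivate: () => void): void {
  el.setAttribute('role', 'button');    // screen readers announce it as a button
  el.setAttribute('aria-label', label); // and give it a spoken name
  el.tabIndex = 0;                      // keyboard users can reach it with Tab
  el.addEventListener('click', onActivate);
  el.addEventListener('keydown', (e) => {
    // Keyboard users activate buttons with Enter or Space, not a mouse.
    if (e.key === 'Enter' || e.key === ' ') {
      e.preventDefault();
      onActivate();
    }
  });
}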

Out of sight, out of mind

The second key difference between UIs and other software is that humans don’t notice non-UI software. When a person is using Google to search for something, they see an input box and a “Search” button. They don’t see the PageRank algorithm, or the HTTP requests that take their input and provide a list of results in JSON format. They don’t see the analytics service silently capturing every keystroke and mouse movement they make. And this is intentional — there’s no need to complicate the user interface with information about what is happening in the other layers of the application.

The consequences of this tendency to notice only what we see are:

  • Non-UI software receives much less scrutiny, since only engineers see it. If an API is really slow, the slowness will be noticed in the UI and investigated, and only then will the API receive more scrutiny.
  • People care disproportionately about the appearance of the UI at the expense of other qualities (such as reliability, utility or performance). The UI is the application, as far as they are concerned. But software must be both usable and useful.

Summary

GUI software has become more expensive due to the wider variety of use cases that need to be supported. GUIs are going to continue to change due to advances in hardware & software which facilitate new GUI fashions while increasing opportunities to improve usability. The complexity in GUI software is only going to increase in the future as the number of use cases and device/OS capabilities increases.

So what can you do in your organisation to help manage the cost of GUI redevelopment? When should Bob allow his teams to upgrade to React (or whatever the next GUI fashion is)? In part 2 we will look at some strategies you can use to answer these questions.

Originally published at digio.com.au on February 21, 2018.
