How to Minimise the Cost of GUI Development

Brett Uglow
Published in DigIO Australia
18 min read · Apr 4, 2018


This is part 2 of a 3-part series looking at GUI development. In Part 1 we looked at why GUIs keep getting redeveloped. In Part 3 we will look at some of the changes coming to GUI development, and how organisations can prepare for those changes.

In Part 1 we met Bob, a fictional product manager working for a large organisation. He now understands that GUI development is becoming more expensive because GUIs have to work in a wider variety of situations and different devices. So what can Bob do to get the most value for his organisation’s GUI development budget? Will upgrading to React or Vue or “X” make a material difference in development costs?

Most of what is written here applies to software development in general, but this series is focusing on GUI software development.

Minimising the cost of GUI development — do we spend more than we need to?

In my experience, absolutely yes! Regardless of the size of the GUIs being developed, organisations spend more than they need to for a few basic reasons:

  • Lack of planning
  • Lack of leadership
  • Lack of feedback

Lack of planning

This does not refer to planning at a project level, or big up-front design; it refers to not having a plan to migrate to newer technologies as they become available. This statement presupposes two things: that new technology will arrive, and that new technology will be better than older technology. Let’s quickly look at those two suppositions.

History has shown that change in GUI tooling, frameworks, languages and devices is ongoing. No matter how popular a GUI technology is right now, it will be usurped in 5 years by something else. Even in the relatively slow-moving world of browsers adopting W3C standards [1], the approach to writing a web app 5 years ago is very different from the approach we use today.

Given that technology is always changing, we can put an end to arguments about technology “A” being better than “B”. The most we can ask of a technology is, “Does it satisfy our current business and technology requirements?”. In many cases, multiple technologies will be satisfactory. Some will be better than others for non-technical reasons (popularity, ease-of-use). But things change, so we need a plan to deal with constantly-changing, ever-improving technology.

Not having a plan to deal with this change means that the following situations will occur and continue occurring:

  • Technology teams will ask for money to re-write GUIs that are less than 2 years old because there is no technology plan to guide them as to what they should invest in and when.
  • Project teams will unexpectedly receive advice that they need to change to use a different technology because somebody, somewhere, made a decision that affects them.

Lack of leadership

Related to not having a plan is not having technical leadership at an organisational level. There are many different ways of organising people to get work done. But when technical decisions need to be made, an organisation needs to have leaders making those technical decisions. And then they need to assign a “use-by” date to those technology decisions so that they can be reviewed when the technology landscape has shifted.

In my experience, many organisations have technical leaders who are experts at writing database queries, data modelling and server-side applications (e.g. Java or .NET). Ask them to design a fault-tolerant serverless architecture for an enterprise — no problem. But ask them which GUI tech stack to invest in or how to evolve the stack over time, and many will struggle due to their inexperience in this field. In many cases GUI decisions are left to individual teams or delegated to a less experienced developer to set the direction for an organisation. This leads to lower-quality, short-term decisions being made, resulting in the re-development of GUIs once the consequences of the short-term decisions are identified.

For example, I know of a company where there was no single, GUI technical leader. There were lots of senior developers, but no-one was singularly responsible for making technical decisions related to GUI technology. When Angular 2 came out, no-one said to developers, “Maybe wait a few months until it stabilises”, or “We are switching to React in 3 months’ time — don’t use Angular”. So some teams went ahead with Angular 2 and spent about 20% of their time figuring out how to get the stack to work, rather than solving business problems. Had they waited another 6 months, the same stack would have been easier to use and faster. But there was no qualified leader to ask (and no upgrade plan to follow).

Lack of feedback

It has never been easier to measure the performance of a GUI — both in a quantitative and qualitative sense — than it is now. Analytics tools with dashboards are part of most GUI development platforms, and they can provide valuable feedback on how your GUI is used. The hard part is defining what you want to measure. Recently I’ve had the pleasure of working with a product manager by the name of Simon Darcy from Punters. He explained that companies should “…monitor what matters to their customers” and measure API usage rather than screen view impression counts. “It will help you build out performance metrics and focus on what gets used and how, including error rates”.
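As a minimal sketch of what this could look like on the web, the snippet below wraps fetch so that every API call reports its usage, duration and error status. The `sendMetric` function is a hypothetical stand-in for whatever analytics backend you use:

```typescript
// A thin wrapper around fetch() that records what actually gets used:
// which API was called, how long it took, and whether it failed.
// `sendMetric` is a hypothetical sink; replace it with your analytics SDK.
function sendMetric(event: string, data: Record<string, unknown>): void {
  // e.g. navigator.sendBeacon('/metrics', JSON.stringify({ event, ...data }));
  console.log(event, data);
}

async function trackedFetch(url: string, init?: RequestInit): Promise<Response> {
  const started = performance.now();
  try {
    const response = await fetch(url, init);
    sendMetric('api_call', {
      url,
      status: response.status,
      ok: response.ok,
      durationMs: performance.now() - started,
    });
    return response;
  } catch (error) {
    // Network-level failures matter for error-rate metrics too.
    sendMetric('api_error', { url, durationMs: performance.now() - started });
    throw error;
  }
}
```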

Getting early feedback is a key idea in many agile software development approaches too. “Fail fast” is the idea of running experiments (proofs of concept) to test out ideas and see if they work. Deploying small changes regularly allows real feedback to be gathered and incorporated into the next versions of software in a matter of days, rather than months.

One reason some organisations have not adopted true agile software development processes is a fear of failure. They don’t want to release any software that hasn’t undergone 6 months of system integration testing and been blessed by the CTO personally, in case it has bugs and they look bad. This fear actually increases the risk that the software will not meet customer expectations, because in the 12 months it took to get the software from design into production, customer expectations will have changed. And there will still be bugs in the software no matter how long the testing process runs [2].

How to minimise the cost of GUI development

We’ve looked at the reasons why GUI development costs more than it needs to (despite the underlying reasons that drive GUI redevelopment as discussed in Part 1). What this means for Bob and his technology teams is that the choice of technologies for their GUIs matters much less than how they manage technology changes over time.

The choice of technologies matters much less than how technology changes are managed over time.

Let’s now look at some steps that Bob (and perhaps, you) can take to minimise the cost of GUI development.

What kind of organisation are you in?

Step 1 — Identify the kind of organisation you are in, or that you want to become

How are decisions made in your organisation? Do they come from above (autocratic), are there committees where everyone gets a say (democratic), or do people just talk to their colleagues and make decisions themselves (laissez-faire)? (It’s interesting to note Conway’s Law at this point — an observation that the way decisions are made and communicated in an organisation will be encoded within the designs of the software written by the organisation).

Or put another way, where is your organisation on the Spectrum of Homogeneity? How similar (homogeneous) or diverse (heterogeneous) is the organisation, in terms of its use of technology? Can people, teams or departments choose their own technology stacks, or is there a technology roadmap which dictates what people must use?

Let’s look at some characteristics of 3 kinds of organisations:

Homogeneous organisations:

  • Have organisation-wide standards & processes.
  • Tend to be authoritarian (top-down leadership).
  • Produce consistent documentation (using templates).
  • Change slowly (due to the need to co-ordinate changes across the whole organisation).
  • Have some form of tech-governance that decides what tech to adopt and when.
  • Produce GUIs that tend to look uniform because uniformity is highly valued.

Free-for-all (Heterogeneous) organisations:

  • Allow teams and people to choose the tech that they wish to use.
  • Deliver rapidly because there are fewer organisational constraints (friction) and fewer dependencies.
  • Have less code-reuse and less uniformity across teams because the focus is on delivery. Teams end up writing the same components multiple times.
  • Don’t have a single place to refer to for organisational standards. This leads to increasing divergence of GUIs over time.

Stable trunk with innovative shoots (50/50) organisations:

  • Use a common tech stack by default.
  • Look for opportunities to trial newer technology — often with newly-formed projects.
  • Encourage new-tech teams to demonstrate the merits of the new-tech to “mainstream” teams and influence the adoption of the new-tech into the organisation’s common stack.
  • Trade-off technology (and GUI) uniformity for innovation. Sometimes this works, sometimes not.

There is no “perfect” organisational shape for producing great software. Depending on the size of Bob’s organisation, he may have little ability to influence the organisation’s shape anyway. But there are steps that Bob can take which will minimise the cost of GUI development.

Strategize

Step 2 — Choose a development strategy

Most organisations try to organise their GUI software development using one of three strategies: best of breed, single codebase + single runtime, or single codebase + multiple runtimes.

Best of Breed Strategy

This strategy is about choosing “the best” technology for each platform. For brand new platforms that emerge, it is simple to identify “the best” technology as there is usually only one option. As platforms get older, more options emerge. For example, initially Java was the only way to write Android apps. But today there are Java, Xamarin, Kotlin and Flutter — what is “the best” tech for Android now?

The main advantage of this strategy is that organisations can take full advantage of the platform they are writing their GUI for. There is no intermediary layer (e.g. a virtual machine) between the application code and the operating system code. The application can access all the capabilities exposed by the platform (such as geolocation APIs & sensors). Additionally, GUI developers can use the platform’s native GUI conventions, components and styles.

If an organisation is creating GUIs for a single platform, the best-of-breed strategy is no more expensive than other strategies. But it becomes expensive when developing GUIs for multiple platforms because the underlying technology used for each platform will be different (e.g. Swift or Objective-C for iOS, Java for Android, HTML/CSS/JS for web browsers). This results in multiple development teams implementing the same GUI using different technologies.

Cons:

  • Cost — multiple teams each building the same GUI.
  • Release coupling — teams operating at different speeds may still be required to synchronize releases.
  • Design coupling — designs may still need to be synchronized across platforms before commencing each development cycle.
  • Duplication of code & effort — no ability to re-use common GUI code.

Pros:

  • Low/No coupling — teams can work independently.
  • Isolation — less likely for a GUI bug on one platform to appear on another platform (due to different implementations on each platform).
  • Testing scope — each team is focussed on building & testing a single platform.
  • Performance — GUIs are able to access all platform APIs and run at native speed.

Single codebase + single runtime

The main aim of this strategy is to minimise duplication of effort and code. This is achieved by having a common codebase and a single runtime (or virtual machine) for each platform. There are still decisions to be made about which technologies to use for the single codebase, but there is practically only one runtime that is available on almost every platform: the web browser. If a company is targeting fewer platforms, there may be other runtimes available (e.g. React Native or NativeScript).

The use of an intermediary layer — the web browser — on each platform means that extra work is required for GUI apps to access platform APIs which are not exposed to the browser. It also means runtime performance is only as good as the performance of the platform’s browser. However, browser performance in 2018 means that for 90% of GUIs written, users would not notice a difference in performance between a platform-native GUI and a browser GUI. And the gap narrows with each new generation of hardware.

Development effort is minimised because developers are really targeting a single platform (the browser). However, not all browsers are equal, so extra effort is required to write code that works correctly across different browser implementations. Having a single codebase also means that code can be shared — if one team implements a component for one feature, another team can re-use that component later on.
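In practice, that extra effort usually means feature detection with a graceful fallback. Here is a minimal sketch using the Web Share API as an example of a capability that some browsers expose and others don’t:

```typescript
// Feature-detect a platform capability before using it, and fall back
// gracefully on browsers that don't expose it.
async function shareLink(title: string, url: string): Promise<void> {
  if (navigator.share) {
    // Supported (mostly mobile browsers): use the native share sheet.
    await navigator.share({ title, url });
  } else if (navigator.clipboard) {
    // Fallback: copy the link so the user can paste it anywhere.
    await navigator.clipboard.writeText(url);
    alert('Link copied to clipboard');
  } else {
    // Last resort: let the user copy it manually.
    prompt('Copy this link:', url);
  }
}
```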

Cons:

  • Code coupling — teams may be dependent on code written by other teams in the form of shared components.
  • Testing scope — team must test GUI on all platforms.
  • Single codebase — a bug on one platform means a bug on every platform.
  • Non-native — access to platform specific features that are not exposed to browsers (like biometric APIs) requires extra effort.
  • Platform design considerations — designers must take extra care to build GUIs that still adhere to a platform’s GUI conventions, or risk users feeling like something isn’t quite right with the GUI.

Pros:

  • Cost — fewer resources are required to build a single GUI that works on every platform.
  • Single team — one team can build a GUI for multiple platforms, which has obvious cost & efficiency benefits.
  • Release coupling — it is easy to deploy the same GUI to multiple platforms.
  • Single codebase — fixing a bug found on one platform fixes it everywhere.
  • Common language — all GUI developers are writing and speaking the same language.

Single codebase + multiple runtimes

The third strategy is the most technologically complex, as it aims to combine the benefits of a single codebase with the benefits of platform-native code. An example of this approach is Xamarin. Google’s Flutter also belongs to this category if Android and iOS are the only target platforms. The Dart language is another option, as it can be compiled to native code and to JavaScript too.

The technical complexity of this approach is not in writing applications, but in finding compilers which can compile the source code into the correct code for each platform. Writing a compiler is not trivial; writing N-compilers for N-platforms is even harder, and it takes time for these compilers to become available from third parties. However the benefits of a common codebase and high performance are good reasons to consider this strategy.

Cons:

  • Time — need to wait for compiler technology to become available.
  • Third-party dependency — relying on compiler providers to continue to support multiple platforms, which is not guaranteed.
  • Code coupling — teams may be dependent on code written by other teams in the form of shared components.
  • Testing scope — team must test GUI on all platforms.
  • Single codebase — a bug on one platform may mean a bug on every platform — or it could be a bug with a compiler for a particular platform (which is even worse).
  • Platform design considerations — designers must take extra care to build GUIs that still fit within a platform’s GUI conventions, or risk users feeling like something isn’t quite right with the GUI.

Pros:

  • Cost — fewer resources are required to build a single GUI that works on every platform.
  • Single team — one team can build a GUI for multiple platforms, which has obvious cost & efficiency benefits.
  • Release coupling — it is easy to deploy the same GUI to multiple platforms.
  • Single codebase — fixing a bug found on one platform fixes it everywhere.
  • Common language — all GUI developers are speaking the same language.
  • Performance — GUIs are able to access all platform APIs and run at native speed.

Before moving on, a reality check is necessary. Bob’s boss (Alex) may say that they are really keen on re-using technology and following software engineering practices. But if Alex then implements a project-funding model (fixed amount of funding) instead of a platform-funding model (time and materials), then Alex is living in a fantasy land and will not achieve those goals.

Characterise and categorise

Step 3 — Characterise and categorise your GUIs

Not all GUI development is the same. If Bob considers the following questions, it will help his teams make better development decisions.

1. How important is it that all GUI applications look & feel the same?

Maybe it’s more important for one group of GUIs to be the same than for another group of GUIs. If that’s the case, the design team could produce a lightweight style guide for the less-important apps to follow, one that is easier to adopt and doesn’t change for 5+ years. Heck, it may be as simple as saying, “Here’s the header, logo and footer. Knock yourself out”.

But if it is important that GUI applications look & feel the same, how can an organisation ensure that that happens? Here are some ideas:

  1. Implement a build-system for all GUI applications that detects changes to visual components, and automatically rebuilds and deploys GUIs that use those shared components.
  2. Treat layout and style as separate characteristics (see next question below). Manage layout at the platform or application level, but manage style centrally and push out updates independently of applications (where possible; some platforms, such as iOS, have immature external-styling capabilities). See the sketch after this list.
  3. Implement a series of regular GUI-update releases. These releases are the only ones that may contain significant GUI changes, and are co-ordinated across all platforms.
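To make idea 2 concrete on the web: style can be published centrally as a set of design tokens and applied as CSS custom properties, independently of each application’s layout code. A minimal sketch, with token names invented for illustration:

```typescript
// A centrally-published set of design tokens (names invented for illustration).
// Applications own their layout; style is pushed out by updating these values.
const designTokens: Record<string, string> = {
  '--brand-primary': '#0052cc',
  '--brand-font-family': '"Helvetica Neue", Arial, sans-serif',
  '--spacing-unit': '8px',
};

// Apply the tokens as CSS custom properties. Component stylesheets reference
// them via var(--brand-primary), so updating a token restyles every app that
// consumes the shared tokens without touching any layout code.
function applyDesignTokens(tokens: Record<string, string>): void {
  for (const [name, value] of Object.entries(tokens)) {
    document.documentElement.style.setProperty(name, value);
  }
}

applyDesignTokens(designTokens);
```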

On the other hand, if GUI uniformity has low importance compared to other priorities, consider this: how can customers know that the GUI they are using was actually produced by a bona-fide organisation, not by a hacker trying to steal their account information? GUI uniformity increases trust and makes it easier for customers to detect non-genuine applications. (It increases learnability and transferability too).

2. How frequently do you want to be making changes to GUIs?

Making changes daily may be desirable, but if you also want high uniformity across GUIs, you will need a system (see above) that will allow those changes to be rolled out in a co-ordinated way. Co-ordinating changes across multiple platforms takes extra effort compared to allowing each platform to deliver changes according to their own cadence.

Categorising GUI changes would allow certain kinds of changes to be made regularly without inter-platform co-ordination (such as small visual changes that do not affect the layout of a GUI) while larger changes could be deployed less frequently and co-ordinated separately from other deployments (e.g. changing how an address lookup component works across 3 platforms).

Here’s one possible way of categorising GUI changes, in order of smallest-to-largest impact:

  1. Text changes (labels, static text, legal text).
  2. Style changes (visual changes that have no/low impact on the layout of the GUI, e.g. increasing the padding around menu item text by 1px).
  3. Component interaction changes (e.g. changing the events that a component responds to).
  4. Compound-component interaction changes (e.g. changing the focus order or accessibility behaviour of a component that consists of other components, like a navigation menu or login component).
  5. Layout changes (the order of GUI elements, visually and/or in terms of accessibility/interactivity).
  6. Navigation changes (how views are linked together).
  7. Business logic changes (e.g. validation rules that are implemented in the GUI to improve the user experience).

An additional axis for categorising these changes is the number of GUIs that need to be changed. For example, while changing layout may be the highest-impact change, if it affects only a single GUI it has less impact than changing a shared component that is used by 20 GUIs.
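One way to combine these two axes is a simple scoring rule that flags which changes need cross-platform co-ordination. The sketch below is illustrative only; the category weights mirror the list above, and the threshold is an arbitrary assumption to tune per organisation:

```typescript
// GUI change categories, ordered from smallest to largest impact
// (mirroring the numbered list above).
enum ChangeCategory {
  Text = 1,
  Style,
  ComponentInteraction,
  CompoundComponentInteraction,
  Layout,
  Navigation,
  BusinessLogic,
}

interface GuiChange {
  category: ChangeCategory;
  affectedGuis: number; // the second axis: how many GUIs the change touches
}

// Combine both axes into a rough impact score. The threshold of 10 is an
// arbitrary assumption; tune it to your appetite for co-ordination overhead.
function needsCoordinatedRelease(change: GuiChange): boolean {
  return change.category * change.affectedGuis >= 10;
}

// A layout change to a single GUI scores 5: ship it independently.
needsCoordinatedRelease({ category: ChangeCategory.Layout, affectedGuis: 1 });  // false
// A style change to a component shared by 20 GUIs scores 40: co-ordinate it.
needsCoordinatedRelease({ category: ChangeCategory.Style, affectedGuis: 20 }); // true
```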

Plan for your preferred future

Step 4 — Define a GUI development plan for your team / department / organisation

A GUI development plan is not a project plan or a schedule. It simply contains the answers gathered from the previous steps so that it can inform the decision-making process when writing GUIs.

It should answer the following questions and capture a few guiding principles:

  • What is the current development strategy? (e.g. best-of-breed)
  • How teams are organised to do GUI work (hint: cross-functional)
  • How teams are funded (hint: as platforms or products)
  • How does each kind of GUI change (see Step 3) reach production?
  • What is the current tech stack (if following a homogeneous or main-branch strategy)?
  • When will changes be made to the tech stack?
  • Plan for change. Do not over-invest in GUI technology that will be redundant in 2–3 years.
  • When should teams build components and when should they buy/re-use them?
  • Use libraries and frameworks to minimise the amount of boilerplate code you write.
  • Performance is always a feature, so balance “Use libraries…” with “measure performance regularly” to detect when new code causes performance issues (see the sketch after this list).
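On the web, “measure performance regularly” need not be elaborate. A minimal sketch using the browser’s built-in Performance APIs to report long main-thread tasks and page-load time (`reportMetric` is a hypothetical sink for your monitoring backend):

```typescript
// Report metrics to your monitoring backend; a hypothetical sink.
function reportMetric(name: string, value: number): void {
  console.log(name, value);
}

// Observe long tasks on the main thread (over 50ms by definition), a common
// symptom of new code degrading GUI responsiveness.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    reportMetric('long_task_ms', entry.duration);
  }
});
observer.observe({ entryTypes: ['longtask'] });

// Also capture overall page-load timing once the page has settled.
window.addEventListener('load', () => {
  const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
  if (nav) {
    reportMetric('page_load_ms', nav.duration);
  }
});
```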

Before, during and after this development plan has been defined, involve people. Involve product owners, designers, product managers and stakeholders. Explain why a plan is needed (hint: to minimise GUI development costs). Explain the consequences of having a plan (which could be that no tech-stack changes are permitted for 2 years, or that the company branding guidelines are frozen for 5 years, or everyone can do what they want but don’t expect consistency or code-reuse — whatever you plan to do). Get buy-in from the highest possible level.

Bob’s Plan

After considering the problem, his organisation and some strategies to improve the situation, let’s look at what Bob did next:

He met with the IT department heads and asked them how they planned to do GUI development:

  • There are about 1000 people in the company
  • Their tech stack has started to diverge from the original standard — the IT team wanted to use a best of breed strategy but the business leaders want new features as soon as possible (even if the look & feel is not consistent).
  • The IT department decided to use a main-branch strategy, with offshoots of innovation for fast-to-market features.
  • The “official” tech stack was defined as “what the biggest software dev team is using”.
  • They agreed to revisit the tech-stack roadmap in 12 months, then every 2 years after that.
  • After Bob explained the options around building component libraries or buying them, they agreed not to invest in shared component libraries yet, but to review the situation at the next tech-stack review.

Bob’s development teams were already cross-functional (as GUI design & development is a multi-discipline activity), so he didn’t need to re-organise his teams.

Bob knew he didn’t have the influence to change how the company funded software development (it was project-based), so he accepted the limitations that this funding model imposes (reduced ability to change direction if the designs don’t meet business or customer objectives). This meant it was even more important to have a plan for dealing with any GUI changes.

Bob held a meeting with his development teams, where they:

  • Agreed to categorise the GUI changes that were required for each new feature
  • Prioritised delivery over GUI uniformity. This meant that each team could deploy changes independently of the others, but the GUIs would look & feel a bit different across apps.
  • Agreed to have a design-alignment release every 3 months. This release would be co-ordinated across teams, and contain all of the look & feel changes that were needed to make the apps’ GUIs look & feel uniform.
  • Nominated a single UX design team as responsible for designs across all their apps, rather than different designers for each app doing their own thing.

After these meetings, Bob put together a four-page plan documenting the decisions that had been made, then held a brown-bag session to share the plan with his department.

Six months on, Bob is getting great feedback from his teams. The design-alignment releases have helped people to avoid over-prioritising GUI changes at the expense of new functionality, as everyone knows that any GUI issues will be addressed in the 3-monthly design alignment, for all apps. Also, the design team that was given the responsibility for designing across all platforms has started to put together their own style guide. This makes it easier to have uniformity across GUI apps. Things are looking up for Bob!

Summary

GUI development costs can be minimised by having a plan and sharing it with everyone who is affected by it. This finding is not revolutionary. If we were talking about enterprise architecture, it would be inconceivable that organisations would invest billions of dollars without some sort of road-map or plan for their investment. GUI software development should be no different — every organisation needs a plan for how they are going to manage GUI technology over time.

In the third and final part of this series, we will look at the changes coming to GUI development, and how organisations can prepare for those changes.

Footnotes:

[1] The World Wide Web Consortium (W3C) is mostly responsible for defining how browser APIs should work. This process takes years and has historically meant that browser vendors implement their own versions of an API and then try to get their API ratified as the official W3C standard. The collaboration between vendors has improved in recent years, increasing the consistency of API implementations across browsers. However, if a vendor (such as Apple) decides not to implement an API that most other vendors implement, it effectively means that developers cannot use the API if they want to reach the maximum number of users. The W3C is not to be confused with the ECMA, which is responsible for the recent innovations of the JavaScript language (which has been relatively fast-moving in the last 5 years).

[2] There are some classes of software where the consequences of a bug can result in injury or death. Obviously for this kind of software, human life is more important than meeting customer expectations.

Originally published at digio.com.au on April 4, 2018.
