Stock trading in the future? (source)

User Interface Development in the Next 5–10 years

Brett Uglow
DigIO Australia
10 min read · Sep 2, 2018


This is part 3 of a 3-part series looking at GUI development. In Part 1 we looked at why GUIs keep getting redeveloped. In Part 2 we looked at how organisations can minimise the cost of GUI development by planning for technology change & investing appropriately based on that plan. In this part we will look at what the future holds for UI development in the next 5–10 years.

What should Bob do in the future?

The last time we met Bob (our fictional product manager), he had accepted that GUI technology was going to continue to change. So he worked with the technology leaders in his company to formulate a GUI technology roadmap. The roadmap allowed his organisation (from business units through to project teams) to make better GUI technology choices & investment decisions because it was tailored to their culture and ways of working.

After watching the latest Marvel Cinematic Universe movie (this article will never date 😉), Bob started to wonder how his job might change in the future…

The future… is muted cardigans with long… dramatic… pauses.

No, not the “boldly go” science-fiction future. Let’s consider what Bob might be doing in 5–10 years from now. To do that, we need to consider the trends/factors which influence UI technology within the contexts of technology trends, global trends, and the things that make us human beings:

A conceptual model of the trends affecting UI development in 2018. Credit: Me.

This conceptual model tries to illustrate that each trend is affected by the surrounding set of trends, moving from the specific domain of user interfaces all the way out to the domain of human factors — the things that make us human. Let’s examine these factors, moving from the inside to the outside.

UI Trends

Amazon’s Echo Spot is a multimodal UI: it can be interacted with in more than one mode, by voice as well as through a GUI. As devices become more capable and connected, and as human beings experience this technology more, it will become natural to interact with devices using the most convenient mode for a particular situation. These interfaces will also be contextual, able to “understand” their context of use.
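To make “multimodal” concrete, here is a minimal, purely hypothetical TypeScript sketch of how an app might route the same intent, arriving by voice or by touch, to a single handler. The types and the “setAlarm” action are made up for illustration and don’t come from any vendor’s SDK.

```typescript
// Hypothetical sketch: one handler for the same intent arriving via different modes.
// None of these types come from a real SDK; they are assumptions for illustration.

type InputMode = 'voice' | 'touch' | 'gesture';

interface UserIntent {
  mode: InputMode;    // how the user expressed the intent
  action: 'setAlarm'; // what the user wants to do
  time: string;       // e.g. "07:00"
}

function scheduleAlarm(time: string): void {
  console.log(`(pretending to schedule an alarm for ${time})`);
}

function handleIntent(intent: UserIntent): string {
  // The business logic is identical regardless of input mode;
  // only the confirmation is tailored to the mode that was used.
  scheduleAlarm(intent.time);
  return intent.mode === 'voice'
    ? `OK, alarm set for ${intent.time}.` // spoken back via text-to-speech
    : `Alarm set for ${intent.time}`;     // shown on screen
}

// The same intent, expressed by voice or by tapping a GUI control:
handleIntent({ mode: 'voice', action: 'setAlarm', time: '07:00' });
handleIntent({ mode: 'touch', action: 'setAlarm', time: '07:00' });
```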

Ever since movies like Minority Report and Iron Man 2 featured spatial interfaces with gesture interactions, people have been waiting for these kinds of interfaces to reach mainstream society. Virtual/Augmented/Mixed reality (VR, AR, MR) technologies have started to become available to the technical community, but are yet to enter mainstream society.

Holographic display with gesture detection, eye-tracking, facial recognition, biometric security… in a movie (Iron Man 2).

There are two main (technology) reasons for this:

  • Spatial interfaces (including virtual reality and AR) need a lot of computing power and hardware. Some of that hardware (like holographic displays and AR/VR headsets/goggles) is still being developed & perfected.
  • The tooling to create spatial interfaces is designed for really smart 3D programmers. The demand for spatial interfaces (including 3D GUIs) has traditionally come from the games and entertainment industries, rather than the world of business apps. Tooling that allows business-app programmers to easily create 3D/spatial UIs does not yet exist (Note: I’m happy to be wrong about this; let me know what tools would fit this category in the comments!). However, some people are actively working on tools for creating 3D worlds (see the sketch below for one emerging browser API). Perhaps tools/games like Minecraft can provide insight into how to make 3D design accessible.
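One early sign of more accessible tooling is the emerging WebXR Device API, which lets web developers request an immersive session straight from the browser. The sketch below is a rough, hedged example: it assumes a browser that exposes navigator.xr and a connected headset, and it stubs out the rendering that a real scene would need (in practice you would pair this with WebGL or a library like three.js).

```typescript
// Minimal WebXR sketch (assumes a browser with navigator.xr and a connected headset).
// Rendering is stubbed out; a real app would draw a scene each frame.

async function enterVR(): Promise<void> {
  const xr = (navigator as any).xr;
  if (!xr || !(await xr.isSessionSupported('immersive-vr'))) {
    console.log('Immersive VR is not available on this device/browser.');
    return;
  }

  const session = await xr.requestSession('immersive-vr');
  const refSpace = await session.requestReferenceSpace('local');

  session.requestAnimationFrame(function onFrame(time: number, frame: any) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // A real app would render one view per eye here.
      console.log('Viewer position:', pose.transform.position);
    }
    session.requestAnimationFrame(onFrame);
  });
}

// Browsers require a user gesture before starting an immersive session.
document.getElementById('enter-vr-button')?.addEventListener('click', () => enterVR());
```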

Devices are becoming smarter and offering more input (and output) interfaces than was previously possible. Unfortunately, the tooling to create non-2D UIs is still under active development. In 5 to 10 years’ time, Bob should expect to be able to easily create spatial interfaces.

Technology Trends

Gatebox — AI/ML + Holographic display + IoT device + virtual assistant + anime

Artificial Intelligence (AI) and Machine Learning (ML) dominate the headlines in today’s technology industry, with good reason. The industry has reached the point where computers can give appropriate answers to general questions, translate languages in real-time, and beat humans at computer games.

One of the areas where AI/ML has improved in recent years is contextual awareness: the ability to “understand” the context of use. For example, contextual awareness allows Google Home (an IoT virtual assistant) to understand that when you say, “What time is the flight?”, you are talking about your partner’s upcoming arrival by aircraft at Melbourne airport, because it has “looked” at your calendar to provide some context to your question.
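As a rough illustration (and definitely not Google’s actual implementation), contextual awareness boils down to enriching an ambiguous question with data the assistant already has permission to read. Everything in this sketch, from the calendar entry to the matching logic, is a made-up assumption.

```typescript
// Hypothetical sketch of contextual disambiguation; no real assistant API is used.

interface CalendarEvent {
  title: string;
  start: Date;
}

// Pretend calendar data the assistant is allowed to read.
const upcomingEvents: CalendarEvent[] = [
  { title: 'Pick up partner - flight arrives MEL', start: new Date('2018-09-07T18:25:00+10:00') },
];

function answer(question: string): string {
  if (/what time is the flight/i.test(question)) {
    // The question itself is ambiguous; the calendar supplies the missing context.
    const flight = upcomingEvents.find(e => /flight/i.test(e.title));
    if (flight) {
      return `It looks like you mean "${flight.title}", which is at ${flight.start.toLocaleTimeString()}.`;
    }
    return 'I could not find a flight in your calendar.';
  }
  return 'Sorry, I do not understand.';
}

console.log(answer('What time is the flight?'));
```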

The Internet-of-Things (IoT) is the trend of connecting ordinary devices — like toasters and light bulbs — to the internet. By doing this, those devices can be conveniently controlled from the couch or the workplace… but hackers can control these devices too! The problem for owners is that it’s not obvious that a device has been hacked — there’s no “flashing light” or notification to say, “Hey, this device is not operating normally — it could have been hacked”.
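To show how little stands between “controlled from the couch” and “controlled by someone else entirely”, here is a hedged sketch of switching a smart bulb on over MQTT (a protocol many IoT devices speak), using the MQTT.js client. The broker address, topic and credentials are invented for illustration; anyone who obtains those same credentials can publish the same message.

```typescript
import * as mqtt from 'mqtt';

// Hypothetical broker and topic; a real smart bulb would document its own.
const client = mqtt.connect('mqtts://broker.example.com', {
  username: 'home-owner',
  password: process.env.MQTT_PASSWORD, // often the only thing keeping strangers off your couch
});

client.on('connect', () => {
  // Turn the living-room bulb on from anywhere with network access.
  client.publish(
    'home/living-room/light/set',
    JSON.stringify({ power: 'on' }),
    () => client.end(),
  );
});
```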

For the most part, consumers seem reasonably happy to accept the downsides of IoT devices for the extra convenience they offer. A side effect of this trend is that more “things” are going to need a network connection and a user interface in order to work. More work for Bob!

John Gage from Sun Microsystems is credited with coining the phrase “The network is the computer” back in the 1980s. This was the age of personal computers, when Intel kept producing CPUs that tracked Moore’s Law and mainstream society had yet to discover the internet, so the phrase seemed quite strange at the time. However, services like AWS, Google Cloud and Azure demonstrate how right John was. Arguably, it is the ability to network computers that has laid the foundation for every technological advancement we see today, particularly AI, IoT and distributed systems.

In 5–10 years’ time, graphical user interfaces will no longer be the only way of interacting with most of the technology we have in our homes or workplaces. That’s not to say that GUIs will disappear entirely; rather, GUIs will sit alongside voice-activated UIs that are connected to a network.

Bob needs to think about how his company can support virtual assistant integration, and integrate UIs with distributed systems.
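As a small, concrete example of what virtual assistant integration can look like, here is a sketch of an Alexa skill request handler built with the ASK SDK v2 for Node.js. The “CheckBalanceIntent” name and the spoken response are my own illustrative assumptions, not part of any real skill.

```typescript
import * as Alexa from 'ask-sdk-core';

// Hypothetical intent: the user asks the assistant for their account balance.
const CheckBalanceIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'CheckBalanceIntent';
  },
  handle(handlerInput) {
    // A real skill would call a backend (a distributed system) here.
    return handlerInput.responseBuilder
      .speak('Your balance is one hundred and twenty dollars.')
      .getResponse();
  },
};

// The Lambda entry point that Alexa invokes.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(CheckBalanceIntentHandler)
  .lambda();
```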

Global Trends

Privacy by rosefirerising

Privacy is something that everyone values, but we are continually asked to give it up in exchange for security or convenience. As a society, we now think it’s normal to have cameras & microphones in our homes, GPS trackers for our children, and to share our every thought, photo and opinion (although Facebook et al tricked us into that). However, a number of incidents in the last 15 years (the Cambridge Analytica scandal & the Equifax data breach, to name two recent ones) are beginning to make people question how much privacy is worth relative to the things they trade it for. The European General Data Protection Regulation (GDPR), first drafted in 2012, is designed to protect the privacy of European citizens. Government intervention is necessary because history shows us that some organisations cannot be trusted to use & protect our personal information without some incentive (massive fines). So privacy will become more valuable to people in the coming years.

In a world where everything can be copied, simulated and experienced in a virtual environment, authenticity is becoming more precious. If we are buying products (and services) that affect the health of our family (such as baby formula), we want to know that these products are authentic. Inauthentic products can and do kill people.

Closely related to authenticity is the world’s ongoing need for trust. Governments & corporations around the world have, particularly in the last 20 years, done their very best to make the general public distrust them (e.g. the Global Financial Crisis (GFC), Venezuela, Facebook). Blockchain technology, which was invented around the same time as the GFC, has been held up by some segments of society as the solution to everything (no, really: read this), including the ability to create trust without centralised systems.

While blockchain technology may find a bona fide use-case, it still requires people to trust something (be it a smart contract, an exchange or a network). The challenge for organisations is how to convince people to trust them more than their competitors. This is primarily a marketing and psychology problem rather than a technology problem.

To illustrate the importance of marketing, consider the little green lock icon in your browser’s address bar when you visit a secure site:

Does it make you trust the site more? Why is that? Have you verified that the technology involved is actually securing your internet connection? It would be fair to say that less than 0.001% of the world’s population could actually prove that the technology behind the green lock works. Everyone else relies on the information (and marketing) produced by browser vendors to give them that sense of trust.
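For the curious few, here is a small Node.js sketch that peeks behind the green lock: it opens a TLS connection the same way a browser does and prints who issued the certificate and whether the chain verifies. The host name is just a placeholder.

```typescript
import * as tls from 'tls';

// Connect to a site the same way a browser does and inspect its certificate.
const host = 'example.com'; // placeholder; substitute any HTTPS site
const socket = tls.connect(443, host, { servername: host }, () => {
  const cert = socket.getPeerCertificate();
  console.log('Issued to   :', cert.subject && cert.subject.CN);
  console.log('Issued by   :', cert.issuer && cert.issuer.CN);
  console.log('Valid until :', cert.valid_to);
  console.log('Chain trusted by this machine:', socket.authorized);
  socket.end();
});
```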

In 5–10 years’ time, GUI applications will focus more on assuring customers that the software is trustworthy and that private data is handled securely, while still offering a level of convenience over alternative approaches.

Human Factors

What could possibly be wrong with this design?

Brilliant technology is useless unless it is used. To be successful in the marketplace, new technologies must consider human factors (how people think, feel, behave and perceive the world) as well as the variations in these factors across different cultures.

For jobs that involve working with objects that exist in the real world, AR/VR makes a lot of sense. For example, council workers could use AR to “see” the pipes under a road and avoid digging into them. But the current AR/VR interfaces are more tiring to use than non-virtual interfaces. If someone has to wear a 10kg helmet for an hour, the tech isn’t viable. Thankfully the hardware powering AR and VR is getting smaller and more ergonomic.

Will we see accountants or software developers performing their work using VR goggles or haptic gloves?

Maybe. But not in 5–10 years.

And there’s a simple reason why: it currently takes more concentration and effort to do most kinds of “knowledge” work in VR than using existing input methods (keyboard, mouse, touch screens). Even dictation software (remember Dragon Dictate?) demonstrates that while it might seem faster to create a document by speaking, the extra time it takes us to write words down or type them gives our brain time to form sentences that are coherent and capture the ideas we are trying to express. (However, for people who cannot use ordinary input devices, dictation software is much more efficient than the alternatives.)

Human factors will continue to influence the interfaces that we choose to use. If you consider that tasks are “problems to be solved”, human beings generally want to solve problems as efficiently as possible. If new user interfaces can make problem-solving more efficient, we will use them. The challenge, then, for the VR/AR/alt-interface industry is to create interfaces that are not just entertaining, but also more efficient than the existing ones.

In some areas, these alt-interface devices are already making existing tasks easier. Consider Amazon’s Alexa: you can say, “Alexa, play me hits from the ‘80s”, and if you have a Spotify account, you will hear such hits as “Billie Jean” or “Sweet Child O’ Mine” in about 3 seconds. Compare that to finding a CD and playing it (1 minute) or finding the song on your MP3 player/phone and playing it (10 seconds). Being able to speak a command and get a result in 3 seconds is clearly better. An additional benefit of voice-activated devices is that you can often multi-task while talking to the device (especially with one-shot commands like the “play music” scenario).

The tools we create (including user interfaces) must adapt to our form, rather than us adapting to the tools. The consequences of tools not being designed for the human form are ultimately damaging to our health. But human-centred, ergonomic, efficient & intuitive tools can increase our productivity (and enjoyment) many times over compared to doing things “by hand”.

Conclusion

In the next 5–10 years:

  • UIs will handle private data more securely.
  • UIs will provide a “verify authenticity” mechanism which will indicate if the UI is genuine & can be trusted (or if it has been hacked).
  • Marketing & psychology will continue to be the primary means by which people come to trust different UIs; the technology is only as good as the marketing on top.
  • UIs will incorporate virtual assistants to become more aware of their context-of-use and support conversational interactions.
  • UIs will begin to be multimodal, but single-mode UIs (e.g. 2D GUIs) will still be the most common kind.
  • Tooling will gradually improve to allow spatial UIs to be created more easily. A key challenge will be to make ease-of-use & task efficiency better than existing UIs.

As for Bob, now that he’s become aware of what the future (maybe) holds, he’s feeling quietly positive:

  • He will probably continue to have a job working with GUIs, but will need to consider multimodal UIs in the coming years.
  • His job will get more complex and divergent, allowing for further specialisation in this field.
  • The most important aspects of UI design will remain: usability, intuitiveness, affordances (features in the interface’s appearance that suggest how it can be used) & learnability.

Thank you for reaching the end of this 3-part series, but this is not the end! What do you think UI development will look like in 5–10 years’ time? Write your response below…
