Why Accessibility is a Vital Part of SaaS Innovation

My father never wore a pair of jeans, didn’t play catch with me, and didn’t teach me how to ride a bike. But he taught me something a million times more valuable: he shared his love for technology and his conviction that software should be accessible, immediately understandable and perfectly integrated into our lives.

If you’re trying to market your SaaS solution, it’s really important to understand how hardware and UI have evolved to make software more accessible over time, because accessibility (together with usability) can make or break your software.

And most importantly, you need to know where we’re headed and what trends are currently shaping user experience, because things are moving fast and falling behind means losing customers to the competition, then revenue, and eventually your business…

A positive user experience is primarily connected to accessibility, and this is about much more than how user-friendly your interface is

Now in his late sixties, my dad (the guy who always wears a tie) is one of the few people of his generation I personally know who used a computer for carrying out calculations on statistical data for his dissertation when he was in college.

I love to hear the story of how difficult it was to book computer time back then.

Researchers, PhD candidates, professors and students had to elbow their way through the calendar and often work late at night to have access to the mainframe.

And even more interesting to me is hearing that this gigantic machine, which had less than half the computing power of an average smartphone, had to be operated by a professionally trained technician.

And now…any three-year-old can use YouTube to find an episode of Peppa Pig, and we can perform complex operations simply by using a finger to interact with data as if we were handling actual objects.


The question, though, is: what current or future technologies are going to change the rules of the game and make software even more accessible, easier to understand and to market?

Organic evolution: briefly looking at the past to understand the future

The way I see it, there are basically four interconnected dimensions that define accessibility:

1. Hardware Interface

2. User Interface

3. Availability

4. Complexity

Let’s take a brief journey together: first, the revolutions that had a dramatic impact on accessibility in the past; then, the current trends you need to be prepared for in order to stay competitive; and finally, a glimpse of the possibilities that might be just around the corner.

First element: Hardware constraints AKA “where the hell am I supposed to put this darn cable?”

Once upon a time, computers used vacuum tubes and patch cables to process and route electrical signals. The only way to interact with the machine was through this very awkward interface, which didn’t let users see anything that was going on.

That’s also when the term ‘bug’ took hold in computing: famously, a malfunction in the Harvard Mark II was traced to an actual moth trapped in a relay (though engineers had used ‘bug’ for technical defects long before). We still say that we have bugs in our systems but, depending on how clean our offices are, we rarely use the term literally!

Along came batch computing, and punched cards became the standard way to feed data into the system, together with paper or magnetic tape.

In both eras, machines had to be run by system operators professionally trained by large manufacturers like IBM. And yet…things clogged, jammed, or simply screwed up a lot!

As far as punched cards are concerned, these bad boys had to be created separately on clunky typewriter-like keypunch devices. Decks of cards for programs and datasets had to be fed in manually, while magnetic tape could hold additional datasets or supporting software. Anything resembling our current Big Data trends was simply inconceivable.

At the very beginning of our computing journey, hardware seemed to be the biggest enemy of accessibility. But things soon changed.

Second element: Hardware revolutions that transformed the user interface

Being able to type means being able to use words as input. And this revolution also required a way to monitor the words we were typing: at first on a paper roll, later on screens.

First step toward accessibility: move away from tubes and patch cables and enter binary code with cards and tape.

Second step: let people type words and give the machine instructions in a more “natural” language, with some sort of interpreter built into the system to translate those instructions into electrical signals the processor can actually use.

That’s exactly what happened when command-line interfaces were first introduced, letting users interact with software in “real time” (or with a slight delay when paper was still involved). Programs became interactive. This was the era when Multics, UNIX and time-sharing computers became popular.

Ken Thompson and Dennis Ritchie working on their greatest creation: the UNIX operating system. Source.

But again, gigantic leaps in computing power and hardware capabilities led to video display terminals, which eliminated latency problems by replacing paper output with screens.

Programmers could go back and forth through their lines of code and delete, edit or add text. Basically, they could finally test and fix problems while still working on the program, and make as many mistakes as they wanted.

And we all know that creating a sandbox-like space for messing things up and playing around simply translates into a long series of experiments and innovation.

Coding was suddenly truly interactive, and entering and processing data swiftly became a “cakewalk”…well, almost. There was still the big problem of operating systems and programs with their obnoxiously long lists of commands, variables, attributes and whatnot that had to be learned by heart to interact with the machine.

People were already used to typewriters. Using a keyboard as interface was not an obstacle. And people were used to paper and TV screens. Hence, printers or monitors totally made sense.

At this point, hardware didn’t represent a constraint anymore and software was finally accessible to a larger group of people.

UNIX Command Line Interface

That was the time when we created the first video games and electronic mail systems to share jokes with geeky colleagues. And no, I’m not kidding: jokes were actually one of the first forms of content shared over electronic mail when it was introduced in the late sixties!

A little animal that brought us one step closer to software accessibility

You now have interactive software, games and emails. What more would you need, right?

Apart from obvious aesthetic reasons, old computers, with their eerie-looking, Neo-don’t-follow-the-frickin’-rabbit output systems, weren’t particularly inviting!

Hardware only serves to house software, let users feed data into the machine and see the output. But no matter how easy it is to physically interact with the machine, a command line is not an intuitive way to work. You might argue that it’s the most efficient, fast and flexible way to interact with a machine, and I’d agree with you…but it’s definitely not intuitive or inviting.

Abstract concepts like files and directories, libraries or compilers have no direct counterpart in the physical world.

As we know, though, a good user interface should take inspiration from the real, physical world people are used to and mimic its basic functionality to make the virtual world look natural.

Even though the mouse was invented in the mid-sixties, it took quite a long time to become a standard piece of hardware equipment.

This tiny, animal-like device defined our modern-day computers and changed the way we design software forever.

A very clever Computer Mouse :)

Instead of typing commands, people could move a file from one place to the other by simply dragging something that looked like a real object around.

The opportunity to interact with a machine by pointing at things or grabbing virtual objects created the need for a GUI (graphical user interface) that could represent these virtual objects in some form.

Time to create folders, icons and fancy windows to organize things, and sliders to scroll through the content of an entire window…If this sounds familiar, it means you once used a Xerox Star.

You didn’t but this still sounds familiar?

Well, that’s simply because Apple integrated this system into their first Macintosh computer, and later on Microsoft created their first graphical interface, letting people use MS-DOS through a newly developed shell, aptly named Windows.

Xerox Star 8010–02

A little piece of hardware changed the rules of the game.

Users no longer needed to enter long lines of text to get things done; they could simply drag rejected drafts into a trash can and use sliders, dials and levers that replicated the function of everyday objects.

Hardware was no longer a hurdle between humans and machines, and software finally followed suit.

Keyboards, monitors and mouses (yes, ‘mouses’ not ‘mice’, but let’s not pull at this thread! :D) changed the way users and programmers looked at computers and paved the way for better GUIs and user-friendly programs.

Third element: Availability

Programming and debugging software was finally less painful and cheaper. Companies could market software more easily thanks to practical media (floppy disks and then CD-ROMs), and hardware also became cheaper thanks to basic economies of scale.

My father had a PC in his home office back in 1982. But this wasn’t common at all. By the early nineties, however, a lot of people had access to a personal computer: mostly an Intel 386 with a Windows 3.1 GUI, unless they were into graphic design or music and went for a Macintosh or an Amiga.

Oh…the nineties! :D

But by the time people started debating whether or not Ross and Rachel were on a break, Microsoft launched Windows 95, and people finally became addicted to FreeCell (a fancy Solitaire variant) while being introduced to Netscape or Internet Explorer and to the World Wide Web.

At this point, computers were available around the clock: people could annoy colleagues at work with useless “reply all” emails and then chat with random strangers on AOL from home.

Hardware and software weren’t a big pain in the neck anymore and, finally, people no longer had to wait a month for an hour of computer time (and a week or two to get the results of their calculations!)

Software was easy to use and accessible at any time and from everywhere. Except on the go…

Every step you take, everywhere you go…how better hardware led to mobility

Programs and sites were created to guide people. Developers started working with designers to create better interfaces. And apart from a few missteps, like (in)famously forcing users to click ‘Start’ to actually stop or quit their OS, smart UI design made things much, much easier and contributed to a better user experience.

The next step in the evolution of software design is again connected to new hardware.

When Apple decided to use a multi-touch screen for their iPod and iPhone devices they disrupted the whole human/machine interaction in two ways:

1. They lowered the barriers between humans and software by pushing the envelope in terms of skeuomorphic design (since challenged by the flat design movement).

In the first iOS version, your address book really looked like the one your grandma kept on the phone table in the living room, the calculator looked like a standard pocket calculator, and for quick notes the system offered a handy legal pad.

Interacting with these objects feels natural because we’re familiar with them and because we can use our own hands to manipulate the virtual environment; something that previous portable devices totally missed out on with their keyboards, styluses, and desktop-PC-like interfaces.

2. They bypassed all the limits connected to availability. Software and information suddenly became accessible 24/7 and truly mobile. Mobility undeniably opened the doors to new applications, and this trend is destined to continue thanks to wearables and augmented reality.

Ok…we’ve solved all the constraints now. We interact with our machines organically, using a finger to delete conversations we’re no longer interested in, and we don’t need to learn a single command line to swipe people on Tinder while we wait for the subway.

One dimension to go, then.

Fourth element: Complexity

Quick…tell me what Facebook is used for.

Sorry…I don’t think you got it right.

When Google was launched they mentioned two things:

-That they would focus on one single thing and do it really, really well

-That their mission was to keep users on their site for the shortest possible time and redirect them to the most suitable source of information

Would these statements apply to any other company nowadays? Nope!

To make a profit with a “free” online service, the main point is obviously retention.

The longer people stay within your platform, the more valuable (and expensive) your CPA or CPM ad inventory becomes.
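To make the retention logic concrete, here is a tiny back-of-the-envelope sketch; the impression rate and CPM figure are purely illustrative numbers, not data from any real platform:

```python
# Illustrative only: why session length drives ad revenue on a "free" platform.

def cpm_revenue(impressions: int, cpm_rate: float) -> float:
    """Revenue earned from a number of impressions at a given CPM
    (CPM = cost per 1,000 impressions)."""
    return impressions / 1000 * cpm_rate

# Assume a user sees roughly 2 ad impressions per minute on the platform.
impressions_per_minute = 2

short_session = cpm_revenue(5 * impressions_per_minute, cpm_rate=4.0)   # 5-minute visit
long_session = cpm_revenue(30 * impressions_per_minute, cpm_rate=4.0)  # 30-minute visit

print(f"5-minute session:  ${short_session:.2f} per user")
print(f"30-minute session: ${long_session:.2f} per user")
```

Multiply the difference by millions of daily users and it becomes obvious why every platform fights to keep you inside for as long as possible.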

A few years ago, you would obviously use Twitter to post a public rant about your coffee being served cold, then go to LinkedIn to look for a job, spy on your ex on Facebook and watch funny cat videos on YouTube.

Twitter now integrates pictures, videos and live streaming.

Facebook is a messenger, a recruiting platform, a substitute for a company’s website or online store and the most used video streaming platform.

Google sells products and YouTube produces shows, while Snapchat and Instagram seem to be doing more or less the same thing. Oh…and Vine has become kind of redundant at this point.

The mantra here is: occupy users for as long as possible and keep them from opening other apps that might distract them.

The best example is obviously WeChat. Try to ask a Chinese person what they can do with WeChat…

Think of WeChat as Google (Search, Calendar, Gmail, Picasa, Maps…) meets Facebook, LinkedIn, Yelp, Uber, Expedia, Airbnb, Eat24, WhatsApp, Booking, Stripe, Skyscanner, PayPal, YouTube, Instagram, Periscope and a bunch of other apps you’ve probably used at some point in life.

The more services under a single roof, the better. The very same strategy that killed services like MSN or MySpace a few years ago now seems to be the best way to entice digital natives and millennials.

SaaS, complexity, accessibility and conversion rate

So far we’ve talked about complexity for companies that rely on advertising as a form of revenue.

What about SaaS?

What is currently happening in this sector in terms of complexity?

How can SaaS companies be sure that their product is fully accessible to anybody?

As we’ve seen, intelligent UI design created a language which is more or less universally accessible.

But the problem with accessibility is not just connected to the UI.

APIs allow users to chain applications and services together to accomplish their tasks, even though stitching different tools together can be painful and utterly inefficient.

Therefore, great SaaS solutions interconnect different areas of service within their product so that users can accomplish all their tasks within a single ecosystem.
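The kind of cross-service “glue code” users end up writing when a product doesn’t cover their whole workflow can be sketched roughly like this. Every function and field name below is a hypothetical stand-in; a real integration would call each vendor’s actual REST API:

```python
# A minimal sketch of chaining two services through their (hypothetical) APIs:
# pull open tasks from a project-management tool, push reminders to a chat tool.

def fetch_open_tasks(project_id: str) -> list[dict]:
    """Stand-in for a GET request to a project-management API."""
    return [{"id": 1, "title": "Draft launch email", "assignee": "dana"}]

def post_chat_message(channel: str, text: str) -> dict:
    """Stand-in for a POST request to a team-chat API."""
    return {"channel": channel, "text": text, "ok": True}

def notify_open_tasks(project_id: str, channel: str) -> list[dict]:
    """The glue: for each open task in one service, send a reminder via another."""
    results = []
    for task in fetch_open_tasks(project_id):
        message = post_chat_message(
            channel, f"Reminder for {task['assignee']}: {task['title']}"
        )
        results.append(message)
    return results

print(notify_open_tasks("proj-42", "#launch"))
```

Every one of these little bridges is a seam where users can get stuck, which is exactly why all-in-one ecosystems are so attractive.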

For instance, project management tools already include documentation and instant communication capabilities together with scheduling, planning, brainstorming, sketching, charting, writing, calculating, analyzing and reporting.

Getting lost in the sea of options

By offering different levels of complexity and introducing new layers of service, SaaS companies can segment clients into different pricing tiers and allow users to carry out as many tasks as possible without drifting away from the main working platform.

Average Users and Complexity

The evolution of hardware and user interfaces made software extremely accessible because users need little to no know-how to interact with it.

Back in the late seventies, nobody thought that creating specific software for different professions was viable. So my geeky dad had to code his own software. And while people around him were convinced he was just wasting his time, his solution was totally tailored to his needs because he had programmed it himself.

Little by little, software companies found out that they could develop applications for pretty much any profession and market them easily even though these solutions are not customized to meet each customer’s needs individually.

UX designers developed an easy language that uses familiar objects to carry out complex operations. But processes within software applications are becoming ever more complex, branching out into several possible paths in order to make the user experience fully customizable.

Nonetheless, even though more and more people have access to computers and can operate basic applications, learning a complicated piece of software is still a frustrating experience for many users, because they don’t want to invest time in learning a new tool.

The less time a piece of software requires to learn, the fewer people will give up on it, whether right at the beginning or at a later stage.

We develop products for the average user, but we offer customization tools to support advanced users and novices at the same time, in an attempt to adapt products to the needs of a larger target group.

In terms of accessibility, though, we often seem to forget that not all users are at the same level of proficiency. And since processes and services are becoming ever more intricate, by not guiding users through our services we simply cater to a specific target group and ignore fringe users.

This group of users includes all those who are not self-learners and who become more reluctant to work with a service the longer it takes to become fully proficient with it. For these users, the UX is not just frustrating; the frustration grows the longer they endure the process.

These fringe users include people who might be unfamiliar with the language you develop within your UI, as well as people who immediately understand your UI but suddenly feel lost in the complexity and ramifications of the software’s processes.

Guiding these users means that they can immediately start working with your software and execute any kind of operation without learning a new language.

Because to be accessible, a piece of software needs to be usable from the first interaction.

The present: self-explanatory UI and guided process

Does anybody remember when software used to be distributed with an actual handbook?

Or when instructions were provided as a PDF on a separate CD-ROM?

I don’t know about you, but I feel frustrated just recalling the days when I had to use my flight simulator with a book of shortcuts and commands at my side.

Then video tutorials came along and the learning experience became a bit less frustrating. Video wasn’t interactive and the experience was totally asynchronous, but after a couple of clips about Photoshop, people at least managed to crop their ex out of their holiday pictures.

These solutions don’t really guide users in real time, though. And apart from introducing obnoxious characters like Clippy (MS Word), who just made you want to hit your screen with your most robust golf driver, software developers focused on improving usability through a self-explanatory UI. They didn’t offer any self-contained solution for supporting and guiding users step by step while actually going through a process.

But the more layers we introduce, the higher the risk of losing potential customers to the switching costs of a steep learning curve. And the user experience is tied to both the UI and the process flow.

At the same time, it is highly inefficient for existing users to leave a program and look for instructions and information in forums or other repositories.

In a world where your car can guide you to the next Arby’s and your fitness tracker can kick your ass for eating there, it’s perfectly clear that every piece of software should have a virtual assistant that guides you through all the steps you wish to accomplish.

And this is the first step leading to full accessibility: the ability to operate any kind of software immediately without needing to learn a new language or interface design.

On-screen floating guides that usher users through their work in real time are becoming a standard for every truly innovative browser-based application. Once this norm becomes truly ubiquitous, there will finally be no more obstacles making the interaction between humans and machines more complicated than it should be!
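At their core, these floating guides are simpler than they look: an ordered list of steps, each anchored to a UI element, advanced one at a time as the user acts. Here is a toy model of that idea; the selectors and messages are invented for illustration and this is not how any particular product implements it:

```python
# A toy model of an on-screen guided walkthrough: an ordered sequence of
# steps, each pointing at a (hypothetical) UI element with a tooltip message.
from dataclasses import dataclass

@dataclass
class Step:
    selector: str   # CSS selector of the element to highlight
    message: str    # tooltip text shown next to that element

class Walkthrough:
    def __init__(self, steps: list[Step]):
        self.steps = steps
        self.index = 0

    @property
    def current(self) -> Step:
        return self.steps[self.index]

    def advance(self) -> bool:
        """Move to the next step; return False once the tour is finished."""
        if self.index + 1 >= len(self.steps):
            return False
        self.index += 1
        return True

tour = Walkthrough([
    Step("#new-project", "Click here to create your first project."),
    Step("#invite", "Now invite a teammate."),
    Step("#board", "Drag tasks across the board to update their status."),
])

print(tour.current.message)
while tour.advance():
    print(tour.current.message)
```

The real engineering effort lies elsewhere: detecting when the user has actually completed a step, and anchoring tooltips to a UI that keeps changing underneath them.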

A quick glance at the future

New hardware will again shape the way we design processes and user interfaces.

And this will obviously integrate all our current developments in portability and interaction through wearables, AR, holograms and all sorts of future hardware revolutions.

We currently need to interpret 2D images on our screens, attribute a meaning to them and connect them to a specific function.

The ability to interact with 3D objects that occupy our space will make the UI even more intuitive, and thanks to sensory stimulation, virtual representations of processes will become basically indistinguishable from interactions with actual objects.

Interaction with the digital world will totally resemble our everyday sensory experience.

We’ll lift a pen to write or a phone to call.

The force that is pushing UI toward reality will mean that the digital and real worlds look and behave in exactly the same way, and often overlap.

And while the concept of UI slowly becomes synonymous with reality, processes and their ramifications will grow more complex, while users will be able to reshape software to their individual needs in real time.

But at that point, knowledge will probably be implanted or injected directly into our brains and we won’t need to worry too much about complexity anymore :D

How to prepare for the future and avoid missing out on important opportunities

Evolution solved many problems connected to hardware interfaces while, in terms of UI, we’re developing a common language that becomes more intuitive every day.

Since implanted knowledge isn’t available yet, the best way to guarantee a great user experience, and to make sure users feel comfortable with your product as soon as they start interacting with it, is to offer a virtual assistant that explains each step in detail as the user goes through the process.

Written by Andy Mura, Inbound Marketing Manager with Userlane.

Please, like and share if you found the article interesting and/or useful! :D