The Direct Interface

Our intimacy with robots is growing steadily. We demand attention from our beloved technology and won’t take no for an answer. As interaction experts, we try to accommodate these human-computer demands in increasingly seamless interfaces. Integrating computer interaction into a user’s life and creating the ideal user experience: that’s what we do. But what kind of experiences are we looking for, and how intimate can we get?

Kostas van Ruitenbeek · Published in Beast · 6 min read · Mar 22, 2017


Keeping it simple

From a functional perspective, a computer interface should be easy to understand. Clicking the correct button should be simple. But deciding what ‘simple’ means is quite difficult, as users might use your interface in different situations and with completely different backgrounds. A young travel enthusiast may know what to look for when planning his trip in an app, but grandma might get confused when asked to connect her Uber account to her itinerary. Defining your target groups clearly helps to narrow down your users’ options.

Two-way-bonding

Sometimes we should help users become aware of what their real needs are; of course, we need some kind of user input for this. At the moment, applications use general form input, location and perhaps even face recognition to narrow down the context. This makes it possible to remove clutter from user interfaces and to keep things simple. The first step to love: don’t be a hassle to work with. Quick and easy access to technical functionality is a key success factor for an interface; it makes the user forget the complexity behind it and lets him experience a seamless integration of technology in his life. In addition, we should make the experience addictive and fun, so that the user keeps coming back to our work. That’s probably the second step to love: being lovable.

As much as we love to create the most heart-warming interfaces, we’re still handling the relationship mostly from the outside. Our machines tend to wait for the user’s ‘manhandling’ before acting on it. But this behaviour is changing rapidly. Techniques that scan our emotions are emerging everywhere. This matters because robots are learning to understand our emotional context, which unveils previously hidden needs to a machine. After years of frustration, the time has finally come for robots to (literally) understand our feelings towards them. But reading emotions from audiovisual cues alone is not enough to truly read our minds. We must get even closer to each other.

As intimate as it gets

I’ll just get to the point: one of the best ways of measuring a person’s state of mind is measuring their brainwaves. With a little help from us, a computer can figure out our feelings by analyzing what happens in our heads. Our brain activity varies with our thoughts, actions, emotions and general well-being. The so-called neural oscillations in our central nervous system are characterized by frequencies, amplitudes and phases, and these properties can be measured and visualized with techniques like electroencephalography (EEG). That’s not as scary as it sounds anymore: buy an Emotiv EPOC+ online, place some electrodes on your scalp and connect it to your laptop. That’s it.

Your brainwaves are converted into emotions on your screen in real time. Our mind is like a big bucket of data for robots (worth 2.5 petabytes, if we may believe Professor Paul Reber of Northwestern University). For around 800 bucks, all your emotional secrets appear on screen.
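To make that a bit more concrete, here is a minimal sketch (in Python, and explicitly not the Emotiv SDK) of how a window of raw EEG samples could be reduced to band-power features, the kind of numbers an emotion-reading layer might work with. The sampling rate, the band boundaries and the fake signal are assumptions for illustration only.

```python
# Minimal sketch: turn one window of raw EEG samples into band-power features.
# Not a vendor API; sampling rate and test signal are illustrative assumptions.
import numpy as np

SAMPLING_RATE = 128  # Hz, a common rate for consumer EEG headsets

# Frequency bands classically associated with different mental states
BANDS = {
    "delta": (0.5, 4),   # deep sleep
    "theta": (4, 8),     # drowsiness, meditation
    "alpha": (8, 13),    # relaxed wakefulness
    "beta":  (13, 30),   # active concentration
}

def band_powers(samples: np.ndarray, rate: int = SAMPLING_RATE) -> dict:
    """Return average spectral power per band for one EEG channel window."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)  # frequency of each bin
    return {
        name: float(spectrum[(freqs >= lo) & (freqs < hi)].mean())
        for name, (lo, hi) in BANDS.items()
    }

# Fake one second of data: a 10 Hz "relaxed" oscillation plus noise
t = np.arange(SAMPLING_RATE) / SAMPLING_RATE
window = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(SAMPLING_RATE)
print(band_powers(window))  # alpha power should dominate
```

Headsets like the EPOC+ ship with software that does far more than this, but the principle is the same: frequencies and amplitudes in, a readable mental state out.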

We want more..

OK, so primitive things like emotions are quite easily read by a computer. But those only form the basis of clear communication. At this moment an interface could adapt to your feelings, but at best we’re still demanding things manually from the computer, like we would from a human being. We’ve got a direct line to your cockpit going on here; there should be more possibilities, right?

We want the computer to actually take action based on our brain activity. Just as every human-to-human relationship needs some time to warm up, a computer also needs to learn to understand the way your brain works. Every brain is organized and structured differently; of course, relatively simple primitive core functionality exists (emotions, motor movements, etc.), but demanding the latest Netflix romcom is a bit more complex.

Communication is key

All brain-scanning technology bumps into the same wall: because all brains are different, there is no way to standardize commands. We need to teach the computer how each individual it interacts with thinks. My brain activity while thinking about a cheeseburger can be the complete opposite of yours, so making, for example, a game character move forward based on thought needs some training. You must first tell the computer what you want to achieve and then focus on imagining the action. By imagining something, your brain forms a specific pattern which can be measured. A computer can associate this thought pattern with an action and recognize it later on.

In theory, you could connect thought patterns to many different actions, as long as you’ve trained each pattern to be distinct enough. This would remove the need for an interface entirely in many cases.
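To make the calibration idea concrete, here is a hedged sketch of that train-then-recognize loop using scikit-learn. The action names, the feature layout and the random placeholder data are assumptions; in a real setup the feature vectors would be band-power windows recorded while the user deliberately imagines each action.

```python
# Hedged sketch of per-user "thought pattern" training, not any vendor's API.
# Assumes we already collected one feature vector per EEG window during a
# calibration phase in which the user imagined each action in turn.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

ACTIONS = ["rest", "move_forward", "move_back"]  # hypothetical game commands

# Placeholder calibration data: 60 windows x 16 features (e.g. 4 bands x 4 channels)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 16))
y_train = np.repeat(ACTIONS, 20)  # 20 labelled windows per imagined action

# Learn this user's personal mapping from thought pattern to action
decoder = LinearDiscriminantAnalysis().fit(X_train, y_train)

def decode(window_features: np.ndarray) -> str:
    """Map a fresh feature vector to the most likely trained action."""
    return decoder.predict(window_features.reshape(1, -1))[0]

print(decode(rng.normal(size=16)))  # e.g. "move_forward"
```

Because the decoder is fitted per person, my cheeseburger pattern and yours can look nothing alike and the system still works; the price is that every new user has to sit through that calibration phase first.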

These human-computer interactions work with EEG, but they are still not ideal in terms of accuracy and efficiency. It takes quite a while for your brain and a computer to get ‘in sync’, but the possibilities are definitely there. Other types of brain scanners could work, but they are usually not as mobile or as real-time as EEG. As time progresses, this will surely be overcome.

Interface be gone

Technologies like this give a completely new perspective on human-computer interaction. No audio or motor skills are required for a human to make use of a robot. People with disabilities might enjoy a completely different life in the future, as things like bionic arms are more revolutionary than ever. It even works both ways: the first steps have been taken to return the sense of touch, an important advancement in the usability of this technology and in the way we identify with our extended cyborg body parts. The more examples we find in this field, the more we should realize that robots are becoming one with us and that the need for an external interface will slowly fade.

This brings us to a new way of thinking about the simplicity of an interface. If we don’t need to shape the user experience with external factors, we have skipped the user-interface barrier altogether. The synchronization and learnability of computer commands become the most important factors, as techniques that are quickly compatible with most brains will gain popularity. Our ways of creating ideal audio/visual interaction design might become ‘old-fashioned’ in the future. We will be able to move an exoskeleton, a drone and a coffee machine solely with our ‘brain interface’, all while experiencing the traffic flow of our blog and subconsciously interacting with the flow of Wall Street.

It might even be possible to have two human brain interfaces connect to each other, exchanging concepts and feelings in a new way. Our love for robots could lead us to a better understanding of our fellow human travelers, our friends and even our enemies. The intimacy we create with technology is therefore perhaps not only the next step in overcoming the human-computer interface barrier; it might even be the first step towards a perfected human-to-human interface.

Combining all our experiences on a new level, aiming for a truly efficient way of sharing thoughts and missions… Wouldn’t that be the true user experience we should try to achieve?
