After several years working in humanoid robotics and one in Virtual, Mixed and Augmented Reality (XR), I want to share what it is like to be a UX Designer in emerging technologies from my point of view. I’ll give an overview of the UX Design challenges posed by new technologies and the differences from more established products.
Being a UX Designer in the emerging technologies industry isn’t a comfortable position, yet exciting
First of all, we can’t fully rely on standards the way we can on the web. We have decades of perspective on web users’ behavior; many studies have been done, and we have built guidelines that are taught in school. They are fairly easy to apply, even as they gradually evolve.
That isn’t the case for emerging technologies: they are too young. Each device has its own specificities, waiting for a standard to emerge. We are still figuring out the basics.
Plus, nobody knows the audience, because it hasn’t formed yet. Real user needs and habits aren’t clearly identified. That’s why development in these technologies gropes around, iterating through trial and error in search of the killer device and apps.
It isn’t a comfortable position but everything is new, challenging, really exciting… and fun!
Robotics and XR examples
To speak about UX in emerging technologies, I will talk about what I know: I worked for 3 years at Aldebaran on the Pepper and NAO humanoid robots, and now at Opuscope on Minsar, software for immersively designing 3D experiences in Virtual and Mixed Reality.
Working on robotics and XR products is quite similar on some levels
We have to try to draw parallels with proven usability facts, find what we can use among them and adapt it to our innovative interfaces, while staying aware of ideas that only appeared to be good.
For instance, whatever the interface is, the user needs clear signs and feedback to know: what they can do, how they can do it, and that their action has been taken into account by the machine.
In Minsar, the cursor has different states for the user to know which type of interaction they can do, what they are doing, and what they did (like in regular great interfaces… Nothing new here).
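Cursor feedback like this can be modeled as a tiny state machine. The state names and visuals below are invented for the example, not Minsar’s actual implementation:

```python
from enum import Enum

class CursorState(Enum):
    # One distinct visual per state, so the user always knows what they
    # can do, what they are doing, and what they just did.
    IDLE = "dot"            # nothing interactable under the cursor
    CAN_INTERACT = "ring"   # hovering an actionable object
    INTERACTING = "fill"    # action in progress
    DONE = "check"          # action acknowledged by the app

def cursor_visual(state):
    """Return the visual to render for the current cursor state."""
    return state.value
```

The only requirement is that each state looks different enough that the user never has to guess which one they are in.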
For Pepper, we needed a sign for the user to know when they can speak to the robot. Indeed, the robot can’t listen while it speaks or processes. We chose to light up the shoulder LEDs in blue: they are easy to see while speaking to the robot and they aren’t too intrusive for the interaction.
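The underlying rule is simple enough to sketch: the blue cue must be tied to the only state in which the robot can actually hear the user. This is a hypothetical sketch, not Aldebaran’s actual NAOqi API:

```python
from enum import Enum, auto

class RobotState(Enum):
    SPEAKING = auto()     # robot is talking, can't capture speech
    PROCESSING = auto()   # robot is thinking, not listening yet
    LISTENING = auto()    # the only state where speech is captured

BLUE = (0, 0, 255)  # "you can speak now"
OFF = (0, 0, 0)     # robot is busy, don't talk yet

def shoulder_led_color(state):
    """Light the shoulder LEDs blue only while the robot can hear."""
    return BLUE if state is RobotState.LISTENING else OFF
```

The key design point is that the cue is driven by the robot’s real ability to listen, never by an approximation of it, otherwise the user’s trust in the sign breaks.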
Besides, there aren’t many users yet, since these devices aren’t common. We can’t rely on real user feedback from their daily lives, or benchmark similar products, since there isn’t a real market yet. We are torn between 2 opposite user types: people who have never experienced XR or robots, and fans who own a device or take every chance to test them. There are no regular users for now.
Our only help is the experience we have acquired working on other products, and user testing. Putting your design in front of a set of representative target users is the best way to validate it. You have to put them in conditions as close as possible to reality and pray for it to work ☺. If it doesn’t, iterate!
Both HoloLens and Aldebaran robots are Environmental User Interfaces. They have sensors (e.g. cameras, infrared, microphones) to understand the user and their environment, and provide a contextual experience. These technologies are integrated into the user’s life; they don’t belong to a simple desk or pocket. They are part of the early days of ambient computing. Robots aim to live with humans in their homes or shops, and HoloLens aims to guide them in a new world between reality and virtual reality.
Therefore, designing for such interfaces means managing several new inputs and outputs. The user is no longer using a mouse and a keyboard in front of a screen… Their inputs are more natural: their movements, facial expressions and voice, in a full 3D environment.
By their very nature, humanoid robots and XR products should be fully consistent with real-world knowledge, using it to be more immersive and avoid breaking the experience:
- Humanoid robot design should benefit from established human behaviors to be easily understandable. Body language, for example: nodding to tell the user they have been understood is better than a green light.
- Interactions in XR should respect basic physics rules so as not to confuse the user. For example, touching an object should have a physical impact on it.
As a complement, both technologies need to offer special features, because merely mimicking the real world isn’t very interesting…
Robots can have a tablet to display extra information, like Pepper, or unlimited knowledge thanks to the Internet, compared to a human.
XR can make magical interactions possible, like remotely controlling or grabbing an object, or displaying anything in mid-air.
Designing for devices like the HoloLens is both constraining and infinite
The user has few ways to interact with the application (voice, gaze cursor, airtap and tap & hold gestures), but no limit regarding their position (except the real-world one, obviously). The UI needs to adapt to this new type of use: it should always be available but never oppressive. That’s why we chose a user-locked menu. We went through a lot of iterations to find correct values and formulas for its behavior, and we are still working to perfect them. Globally, the menu sits at the edges of the user’s field of view so that it is always usable without being intrusive.
This behavior has to take 3D into account, which is really new for a UI. The nearest common ground is video games, even though they aren’t mixed with the user’s reality. Our menu is also designed to smoothly follow forward and backward moves.
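One common way to get this kind of “always available but not intrusive” behavior is a lazily-following menu: instead of snapping to the gaze, it drifts toward a point held in front of the user. This is a minimal sketch with made-up tuning values, not Minsar’s actual formulas:

```python
import math

def lerp(a, b, t):
    """Linear interpolation between two 3D points."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def update_menu_position(menu_pos, head_pos, head_forward,
                         distance=1.2, smoothing=3.0, dt=0.016):
    """Drift the menu toward a point `distance` meters along the gaze.
    Exponential smoothing keeps the follow frame-rate independent."""
    target = tuple(h + f * distance for h, f in zip(head_pos, head_forward))
    t = 1.0 - math.exp(-smoothing * dt)
    return lerp(menu_pos, target, t)
```

Run every frame, this handles forward and backward moves for free: the target point travels with the head, and the menu catches up smoothly instead of being rigidly glued to the view.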
Since our app isn’t HoloLens-exclusive, we have to adapt this design to VR devices (like Windows Mixed Reality headsets, HTC Vive and Oculus Rift). The main difference is that the user is fully immersed, but that isn’t very far from the HoloLens in terms of UI for us; the behavior is quite the same. The real difference is the way to interact: no more gaze cursor or vocal commands, but hand controllers (laser, triggers, touchpads and thumbsticks). We keep a common interaction set across all devices to stay consistent, so switching from one device to another is smooth and easy. But we also have to design specific mappings (which button does which action) to provide an optimal experience depending on each device’s possibilities.
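Such mappings boil down to a per-device lookup over a shared action set. The device names below are real, but the buttons and actions are placeholders for the example, not Minsar’s real mapping:

```python
# Shared actions every device must be able to trigger.
COMMON_ACTIONS = {"select", "grab", "open_menu"}

# One physical mapping per device; keys are controls, values are actions.
DEVICE_MAPPINGS = {
    "HoloLens":    {"airtap": "select", "tap_and_hold": "grab", "voice_menu": "open_menu"},
    "HTC Vive":    {"trigger": "select", "grip": "grab", "menu_button": "open_menu"},
    "Oculus Rift": {"trigger": "select", "grip": "grab", "b_button": "open_menu"},
}

def action_for(device, control):
    """Resolve a physical control to the shared action it triggers."""
    return DEVICE_MAPPINGS.get(device, {}).get(control)
```

Keeping one shared action vocabulary is what makes switching devices feel smooth: only the physical layer changes, never the meaning of the actions.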
Menu example: on HoloLens, the menu can be minimized if the user wants to focus on the scene; only the maximize button then remains displayed, with the same behavior as the menu.
Since VR devices have hand controllers, this feature is replaced by “stick to controller”: by clicking a dedicated button, the menu attaches to the controller and sticks to it until the user clicks the same button again to release it. Menu buttons can be used with the other controller’s laser, but also with the controller’s touchpad (HTC Vive) or thumbstick (Windows Mixed Reality and Oculus Rift).
Jump into the challenging UX Design of emerging technologies!
If you are interested in emerging technologies, give yourself the opportunity to work in this challenging field! Don’t be afraid of errors: you can’t always head in the right direction, and you need to take a lot of steps back to ensure your design fits your target users well. Learn from your mistakes, and be open to your co-workers’ ideas, whatever their job is. A diverse team is essential. Everybody can have a great, innovative idea; it’s your job to frame it and adapt it in a relevant way. Maybe one day an idea you shaped together will be the new UX standard.
That said, beware: I think it is quite hard for a junior UX Designer to work in such fields without a dedicated mentor. In my opinion, it is better for them to start with more conventional interfaces, applying their university knowledge in the concrete world, then build real experience to step back from that learning and gain the perspective needed to design never-seen-before UX.
I know the counter-argument: “since it’s new, we don’t need special experience to work on it; a junior will bring a fresh eye”. I’m not saying that a junior can’t do it, but I feel that a strong understanding of usability is a great advantage when designing a relevant experience with emerging technologies. Humans are complex, and so is designing for them. It takes time to integrate usability knowledge, then adapt it to fields with such limited perspective.
This opinion is based on my own experience: I started my career working on video games at Ubisoft, which wasn’t a typical field for a UX Designer back then. During my internship, my teachers were worried about video games as a first product to work on, even though the field has fairly established rules. I had to prove to them that I was still applying the knowledge they had taught me. It would have been way harder with emerging technologies and their almost nonexistent best practices.
At the same time, I was discovering what it is like to work for a company, confronted with concrete, constraining reality. Every junior needs to take a step back from theoretical knowledge during their first work experience. It is a challenge in itself, and I don’t think I would have built such strong foundations, or been as efficient in emerging technologies, at that time.
Anyway, whatever your experience is, keep yourself informed on a daily basis and test as many experiences as you can! These technologies evolve quickly and you can’t just rely on what you have learnt. The community is important.
Thank you for reading my article, please 👏 👏 👏 if you enjoyed it! Feel free to give me some feedback, ask me questions, or share your ideas.
If you want to know more about how to become a UX Designer, how to integrate UX Design in a startup process, or user testing for uncommon products, you can read my two previous articles: