Reporting in from Intelligent User Interfaces conference in Tokyo

This post reports on some noteworthy conference topics, with added thoughts on interfaces, AR and VR.

Johan Ronsse
Mar 13, 2018 · 5 min read

As a conference, IUI was very heavy on the academic side, which was unusual for me. I am used to conferences like Fronteers or Build (RIP) which are pretty much industry-focused. At IUI many speakers were presenting their PhD research, and in general attendees were mostly from the academic world.

I met people from Tokyo University, Osaka University and from private research labs like NTT’s.

Books, books and more books

The conference attracted people from all over the world. IUI is held in a different place each year, with IUI 2019 taking place in Los Angeles, California (see: their website).

I missed parts of the conference due to a severe cold, but I'd like to highlight some of the more interesting ideas I saw.

One presentation that was very interesting to me was about the concept of ether toolbars (paper). The general idea is to provide touch controls that sit outside the tablet's screen.

Demo setup

Researcher Hanae Rateau and her team used a clever technique combining a mirror and audio from the on-device mic. In the demo, you could draw a shape on a sheet of paper to the right of the tablet, and that shape could then be used interactively in the drawing app on the tablet itself.

One of my interests is making computing more human. When I spent a good month researching VR, I was interested in its capability to make computing more physical. I was super interested in applications that let you create in a physical manner, like 3dSunshine.

3dSunshine in action — my favorite VR application

I find the concept of ether toolbars quite related; I can imagine an AR-based solution in the future that basically allows you to carry a small tablet, where the center area of the tablet displays content only, and any controls are off-screen, but visible through your AR glasses.

Sorry for the poorly exposed photo, but this should give you a better idea about the concept.

Another memorable study — presented by Renate Häuslschmid — was about research surrounding heads-up displays (HUDs) for motorcyclists.

Motorcyclists are poor at judging their own speed, and most accidents are related to speeding. The problem is that as a motorcyclist you have to look down at your bike to actually see your speed.

By providing a display inside the helmet that shows a speedometer (or just the km/h number, really) in the rider's peripheral vision, the rider can keep their eyes on the road.

The research was focused on finding out the best way to actually go about this. Which information should you provide? What is the ideal positioning of the “widgets”? A brief glance at previous papers will show you the line of thinking here.

A recurring theme (and the theme of a workshop on the last conference day) was explainable smart systems. The gist is this: if a computer wants to decide something for you, how can you evaluate how it arrived at that decision?

One of the use cases was once again in the medical field. In one practical scenario, a doctor would get recommendations for documents that could help them answer a medical question. But as a medical professional, one might wonder how the system arrived at these recommendations (see: paper).

A researcher told me about current VR/AR research at Osaka University, and how they were experimenting with AR and VR to provide better healthcare. One part of the research involved a sensor embedded in VR goggles (e.g. inside an HTC Vive) to measure a patient's pupil dilation. Pupil size can be used to measure several things, including pain levels: when you experience physical pain, your pupils widen immediately. This in turn could be used in a medical context.
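To make the idea concrete, here is a minimal sketch of how a sudden pupil dilation could be flagged as a possible pain signal. The function, the threshold, and the sample data are all invented for illustration; the actual Osaka University setup was not described in this detail.

```python
# Hypothetical sketch: flag sudden pupil dilation as a possible pain signal.
# The 0.5 mm threshold and baseline window are made-up illustration values.

def detect_dilation_events(diameters_mm, baseline_window=5, threshold=0.5):
    """Return sample indices where pupil diameter jumps more than
    `threshold` mm above a rolling baseline (the mean of the previous
    `baseline_window` samples)."""
    events = []
    for i in range(baseline_window, len(diameters_mm)):
        baseline = sum(diameters_mm[i - baseline_window:i]) / baseline_window
        if diameters_mm[i] - baseline > threshold:
            events.append(i)
    return events

# A steady ~3 mm pupil that suddenly widens to ~4 mm at sample 6:
samples = [3.0, 3.0, 3.1, 3.0, 3.0, 3.0, 4.0, 4.1]
print(detect_dilation_events(samples))  # → [6, 7]
```

A real system would of course need calibration per patient and would have to separate pain responses from dilation caused by light changes or emotional arousal.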

It reminds me a lot of a session I saw in Belgium about using VR-induced hypnosis to reduce pain, especially chronic pain after treatment. I am a bit skeptical about these things, but I guess when you are in pain you will try anything — and it might even work?

One of the most fun parts of the conference was the demo sessions. For these, the conference room was turned into a setup with about 24 booths showing off various demos.

These guys combined 3D printing logic with a cotton candy machine to enable prototyping with soft materials.

And this weird contraption below was part of a demo called “cocktail maze”. You would put on the VR goggles and go through a maze, but the tall device would emit some fruity smells for that extra “fourth dimension”.

So, conclusion time.

It was interesting to see a different world with a whole different process. This was all about deep research and possibly invention.

My day-to-day work is entirely practical. After a few days I got into the habit of just introducing myself as being from "industry", to avoid the question of which university I was from.

In any case, it was interesting to see a conference coming from a different perspective. This is also why I chose to go there: to be honest, with most of the talks at web design conferences I just have the feeling that I've already seen that talk.
