If you asked Max Weber what he likes to do in his spare time, he’d probably say it’s sculpting sound and warping signals. Max never expected his passion for digital signal processing (DSP) would lead him to be one of the top haptic researchers in the world.
During his master’s studies in audio and communication technology, Max started working with Lofelt, the Berlin-based haptics company developing audio-to-haptic technologies just a few blocks away from his home. While learning more about haptics, Max realized how much it overlaps with the ways we process and think about sound. However, unlike sound, there is no easy way to design and play back haptic effects across different devices.
So, Max set out to write a thesis that would provide an overarching framework for making haptics ubiquitous — just like audio and video.
“Similar to how you can seamlessly display images and listen to music across a wide range of devices, we want to easily deliver high-fidelity tactile effects across those same devices,” Weber said. “And, we also want to provide easy workflows for people familiar with audio and graphic design to create remarkable haptic effects without knowing how to code.”
Naturally, some companies have the resources and proprietary technologies to create haptic experiences, but those experiences don’t carry over to other products or platforms. This is why a common framework for haptics built on a standardized system is so important.
“We started by exploring audio-haptic conversion where, for example, we would process the audio coming out of a video game and generate the corresponding haptic effect in real time. We got this process to work very well, but it has limitations,” says Max. “Mainly, content creators have no control over the experience because they can’t curate which sounds are translated into haptics, and which ones are not.”
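To make the audio-to-haptic conversion idea concrete, here is a minimal sketch of one common approach: extracting a smoothed amplitude envelope from the audio and using it as a vibration-strength control signal. This is an illustration only; it is not Lofelt’s actual algorithm, and the function and parameter names are invented for the example.

```python
import numpy as np

def audio_to_haptic_envelope(audio, frame_size=256, smoothing=0.2):
    """Turn an audio signal into a haptic amplitude envelope.

    A simplified real-time-style pipeline: compute the RMS energy of
    short audio frames and smooth it, yielding a low-rate control
    signal that could drive an actuator's vibration strength.
    """
    n_frames = len(audio) // frame_size
    envelope = np.empty(n_frames)
    prev = 0.0
    for i in range(n_frames):
        frame = audio[i * frame_size:(i + 1) * frame_size]
        rms = np.sqrt(np.mean(frame ** 2))
        # One-pole smoothing avoids abrupt jumps in the actuator drive.
        prev = smoothing * rms + (1.0 - smoothing) * prev
        envelope[i] = prev
    return envelope

# Example: silence, then a burst of "noise" (say, an explosion sound
# effect in a game), then silence again.
rng = np.random.default_rng(0)
audio = np.concatenate([np.zeros(1024),
                        rng.standard_normal(1024),
                        np.zeros(1024)])
env = audio_to_haptic_envelope(audio)
```

The limitation Weber describes is visible here: every loud sound produces a haptic response, whether or not the content creator wanted one.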
A technical solution would be blind source separation, but even this complex approach leaves open the question of the designer’s intent and control over the overall experience.
“We don’t want an algorithm or AI to decide what and when haptics should play based on the audio stream. We want to move this process further up the design chain to allow the designer to make creative decisions early on in the process — the same way they do it for graphics and sound,” says Weber.
In his publication, Weber proposes a new framework for haptics, as seen in the chart below.
This new framework starts with a designer on the top of the stack. The designer uses tools to create and edit a tactile signal, which is encoded into a tactile data model. That tactile data is then integrated into an application or transmitted to a device, which is, in turn, able to translate the intended experience of the designer to an end user, regardless of the device being used.
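The stack described above can be sketched in code: a designer authors events, those events are encoded into a device-agnostic data model, and each device translates that model into what its actuator can actually reproduce. All names here (`TactileEvent`, `TactileClip`, `render_for_device`) are hypothetical illustrations; the paper proposes the framework conceptually, not this API.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TactileEvent:
    """One designed haptic event: when it plays, how strong, how sharp."""
    time: float       # seconds from the start of the effect
    intensity: float  # 0.0 - 1.0, perceptual strength
    sharpness: float  # 0.0 - 1.0, dull rumble vs. crisp tick

@dataclass
class TactileClip:
    """Device-agnostic data model encoding the designer's intent."""
    name: str
    events: List[TactileEvent]

def render_for_device(clip: TactileClip,
                      max_amplitude: float) -> List[Tuple[float, float]]:
    """Device-side translation step: map abstract intensities onto the
    amplitude range this particular actuator can reproduce."""
    return [(e.time, e.intensity * max_amplitude) for e in clip.events]

# The same clip is designed once and rendered on different hardware
# without the designer redoing any work.
clip = TactileClip("heartbeat", [TactileEvent(0.0, 1.0, 0.3),
                                 TactileEvent(0.4, 0.6, 0.3)])
phone_schedule = render_for_device(clip, max_amplitude=0.8)
wearable_schedule = render_for_device(clip, max_amplitude=0.5)
```

The key property is that the designer edits only the `TactileClip`; the device-specific mapping happens at the bottom of the stack.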
According to Weber, the question people should be asking is: How do we make sure the intent of the designer is accurately reflected and delivered regardless of the technology and device being used?
“We realized haptics should be integrated earlier in the design process,” he explains. “You don’t want to rely on something at the end of the pipeline to make decisions about the tactile experience. Rather you want to be able to make them while you’re designing the entire experience.”
Making the entire design process agnostic to the playback systems is the biggest challenge, and creating a platform-agnostic data model and file format is a viable solution.
Complicating matters further, even talking about haptics is a challenge. “We rarely talk about haptics in our daily lives — so there’s no established vocabulary around it,” Weber explains.
To address this issue, the Haptics Industry Forum (HIF) — which consists of leaders from academic, medical, accessibility, health, gaming, AR/VR, telepresence, teleoperation, and robotics fields — is currently working together to establish a vocabulary and define standards so that when people talk about the quality of haptics, it’s clear to everyone what is being discussed. For instance, it’s important to define what “high definition” means for haptics and how it can be measured.
“Another important aspect is that we need conventions on how we deliver tactile patterns, so that anyone can design an experience, which can be transported and reproduced on the other side the way it was intended,” Weber says. “The goal is that the format or data model being used should be able to interface with different types of APIs across various platforms.”
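One way to picture such a convention is a shared interchange format with thin adapters per platform. The sketch below uses a made-up JSON layout and two invented platform APIs purely for illustration; the real data model is exactly what the HIF and Lofelt are still working to define.

```python
import json

# Hypothetical interchange format for a haptic clip.
clip_json = json.dumps({
    "version": 1,
    "events": [
        {"time": 0.0, "intensity": 1.0, "sharpness": 0.3},
        {"time": 0.4, "intensity": 0.6, "sharpness": 0.3},
    ],
})

def to_platform_a(clip: dict) -> list:
    """Adapter for a platform whose (fictional) API takes
    (delay_ms, strength_percent) pairs."""
    return [(int(e["time"] * 1000), round(e["intensity"] * 100))
            for e in clip["events"]]

def to_platform_b(clip: dict) -> list:
    """Adapter for a platform that expects normalized event dicts."""
    return [{"offset": e["time"], "amplitude": e["intensity"]}
            for e in clip["events"]]

clip = json.loads(clip_json)
calls_a = to_platform_a(clip)   # e.g. [(0, 100), (400, 60)]
calls_b = to_platform_b(clip)
```

The clip itself never changes; only the adapter at the boundary does, which is what lets one designed experience interface with different APIs across platforms.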
More concretely, Lofelt is taking the conceptual framework proposed by Weber and using it to build the software stack needed to consume the data model and translate it ad hoc to the available platform or API.
“Ultimately, we want to get to the point where you don’t have to worry about the problems this framework solves,” Weber said. “Rather, you can sit on top where the designer is and improve their workflow to make designing and experiencing haptics easy and enjoyable.”
— — — —
In July of this year, Max — along with Charalampos Saitis from Queen Mary University of London — published a paper, “Towards a framework for ubiquitous audio-tactile design,” which provides an overview of his thesis research. In August, Max presented the paper at the 2020 International Workshop on Haptic and Audio Interaction Design, and in September, he successfully defended his thesis, entitled “A Framework for Audio-Tactile Signal Translation” at the Audio Communication Department of the Technical University Berlin. You can read Max’s paper here or watch a short presentation on a related topic at the HAID 2020 conference here (begins at 41:21).