UX PoC in Spatial Computing #7: Transform Animation
The 7th episode of a series sharing Opuscope’s work on building the best possible User eXperience (UX) in Spatial Computing through Proofs of Concept (PoCs). Maybe they will help other professionals on their Spatial Computing journey.
To get more context about how we made PoCs, you can check out the first episode:
This episode focuses on a very complex feature we developed: Transform Animation. With Transform Animation, the creator was able to record an animation on an object and then edit it on a timeline.
Goal
With this PoC, we wanted to find an intuitive way to create a simple animation by manipulating an object in a Spatial Computing headset. This was a big challenge, especially because we were using the Magic Leap One, whose controller offered very few input options.
This was a huge feature and we divided it into several steps to iterate separately on them:
- Record the manipulation to create an animation of the object.
- Edit the animation on the animation curve.
- Edit the animation on a timeline.
Timelines are common in animation and video editing tools. We wanted users of those tools to feel at home in our feature, while letting novice users easily create nice animations.
Animation recording is possible in some VR tools, but being able to edit it afterward is quite rare. Tvori offers this possibility, but with far more complex animations than Minsar (our creation app) aims for; we had to find the right balance.
Manipulation recording
Designing a hand-driven object motion animation feature in spatial computing presents a unique set of challenges. The feature lets the user grab a virtual element and move it around, with this movement being recorded to generate a 3D curve with animation keys. Various complexities arose while designing it.
Start recording
One of the main challenges lay in intuitively initiating the recording of the animation. One might think the user could simply press a button, but it wasn’t that easy. Pressing a button alone would have meant a delay between the button press and the manipulation, quite annoying! 🙈
Another way could have been to put a countdown, like for screen recording on iOS. We thought we could do better in our case.
We chose to make the user press a button to arm the recording; it only commenced once the element was grabbed, to ensure accurate capture, and stopped once the element was released.
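To make the flow concrete, here’s a minimal sketch of that arm-then-record logic (Python for illustration only; names and structure are hypothetical, not our actual code): pressing the button only arms the recorder, grabbing starts the capture, and releasing stops it.

```python
from enum import Enum

class RecordState(Enum):
    IDLE = 0       # record button not pressed
    ARMED = 1      # button pressed, waiting for the user to grab the object
    RECORDING = 2  # object grabbed, samples being captured

class AnimationRecorder:
    """Sketch of the arm-on-button, start-on-grab, stop-on-release flow."""

    def __init__(self):
        self.state = RecordState.IDLE
        self.samples = []

    def press_record_button(self):
        if self.state == RecordState.IDLE:
            # No capture yet: this is what removes the button-to-grab delay.
            self.state = RecordState.ARMED

    def on_grab(self):
        if self.state == RecordState.ARMED:
            self.samples = []
            self.state = RecordState.RECORDING

    def on_release(self):
        if self.state == RecordState.RECORDING:
            # Recording ends exactly when the object is let go.
            self.state = RecordState.IDLE

    def on_frame(self, position):
        if self.state == RecordState.RECORDING:
            self.samples.append(position)
```

Frames rendered before the grab or after the release are simply ignored, so the recorded path contains only the manipulation itself.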
Create the animation curve
Another challenge was the presentation of the 3D curve. The curve was drawn on the fly during the recording so the user could track it.
Once the recording was done, we had to display it in a way that users could easily understand and interact with. This included applying smoothing algorithms to the recorded movement to make the animation appear more natural and less jerky, which was a complex task given the raw data’s unpredictable nature.
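As an illustration, here’s one simple smoothing approach in the spirit of what this requires: a centered moving average over the recorded positions, keeping the endpoints so the curve still starts and ends exactly where the user did. Our actual algorithm and parameters were different; this is just a sketch.

```python
def smooth_path(points, window=5):
    """Centered moving average over a recorded 3D path.

    `points` is a list of (x, y, z) samples. The first and last samples
    are kept as-is so the smoothed curve matches the user's start/end.
    """
    if len(points) < 3:
        return list(points)
    half = window // 2
    out = [points[0]]
    for i in range(1, len(points) - 1):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        n = hi - lo
        # Average each coordinate over the window around sample i.
        out.append(tuple(sum(p[k] for p in points[lo:hi]) / n for k in range(3)))
    out.append(points[-1])
    return out
```

A wider window gives a calmer curve but drifts further from the raw gesture, which is exactly the trade-off that made this task tricky with unpredictable hand motion.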
We also tried several styles and chose a white dotted line to be visible, but not oppressive in this immersive environment.
Generate animation keys
Generating animation keys based on meaningful variations in the movement was quite difficult. We needed to strike the right balance for capturing changes in position, speed, and rotation that would result in meaningful animation. Too many keys could have led to an overly complicated animation, while too few could have lost the detail of the motion.
A lot of trial and error.
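To give an idea of what that trial and error converged toward, here’s a hypothetical key-reduction sketch for position only: a raw sample becomes a key when it has moved far enough from the last key, or when the motion direction changes sharply. The thresholds are illustrative, and the real feature also considered speed and rotation.

```python
import math

def generate_keys(samples, min_distance=0.05, angle_threshold_deg=25.0):
    """Reduce raw samples (t, x, y, z) to animation keys.

    Keeps a sample when it is far enough from the previous key or the
    motion direction turns sharply; the first and last samples are
    always kept. Thresholds here are illustrative placeholders.
    """
    def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
    def norm(v): return math.sqrt(sum(c * c for c in v))
    def unit(v):
        n = norm(v)
        return None if n < 1e-9 else tuple(c / n for c in v)

    if len(samples) <= 2:
        return list(samples)
    keys = [samples[0]]
    prev_dir = None
    for prev, cur in zip(samples, samples[1:-1]):
        step_dir = unit(sub(cur[1:], prev[1:]))
        dist_from_key = norm(sub(cur[1:], keys[-1][1:]))
        turned = False
        if step_dir and prev_dir:
            cos = max(-1.0, min(1.0, sum(a * b for a, b in zip(step_dir, prev_dir))))
            turned = math.degrees(math.acos(cos)) > angle_threshold_deg
        if dist_from_key >= min_distance or turned:
            keys.append(cur)  # meaningful change: worth a key
        if step_dir:
            prev_dir = step_dir
    keys.append(samples[-1])
    return keys
```

Raising `min_distance` gives the “too few keys” failure mode (lost detail); lowering it gives the “too many keys” one (an overly complicated animation).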
Display animation keys
Finally, finding the ideal format for the keys posed an additional issue. Keys and curves had to be displayed so they were legible from any user position. This involved considerations of size relative to distance so they were easy to interact with, ensuring they always faced the user, maintaining contrast, and so on. We chose a white diamond shape to match the app design.
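Those two display rules, constant apparent size and always facing the user, can be sketched like this (a hypothetical helper with placeholder constants, not our shipped math):

```python
import math

def key_display_transform(key_pos, head_pos, base_size=0.02, ref_distance=1.0):
    """Size and yaw for a key marker so it stays legible.

    Scales the diamond with distance (roughly constant apparent size)
    and yaws it around the vertical axis toward the user's head.
    All numeric values are illustrative placeholders.
    """
    dx = head_pos[0] - key_pos[0]
    dy = head_pos[1] - key_pos[1]
    dz = head_pos[2] - key_pos[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Clamp the distance so very close keys never shrink to nothing.
    scale = base_size * max(distance, 0.1) / ref_distance
    yaw = math.degrees(math.atan2(dx, dz))  # rotate around Y to face the head
    return scale, yaw
```

A full billboard would also pitch toward the head; yaw-only billboarding is often preferred so markers don’t tilt oddly when viewed from above.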
Since the curve could be any shape, we decided to number the keys so the creator wouldn’t get lost and could easily identify them on the animation timeline (we’ll get back to it later 👀).
Manipulation could also be anything: move, scale, rotate, all three at the same time… For each key, the user needed to see what the animated element looked like. A true challenge was deciding whether to display the element as it appeared on a particular key or to standardize the format for all keys, allowing the element to move along the keys so users could visualize what it looked like. We chose the second solution as the first one would have been very confusing and overwhelming with multiple versions of the same asset displayed at once.
This solution brought new issues: visual conflicts between the element displayed on the keys and the keys themselves. We thought of many solutions, like displaying a transparent ghost of the asset, but it wasn’t satisfying. We simply displayed the element over the key and linked it to its twin in the animation timeline (see the dedicated part about the timeline).
As you can see, these display challenges required thoughtful design solutions to keep the user experience seamless and intuitive.
Edit the curve and keys
The next level of challenges arose when considering the manual editing of the recorded animation keys. This required enabling the user to manually adjust the element’s format at the level of an individual animation key.
Ways to edit a key
One primary challenge involved finding effective ways for the user to manipulate the element when it was on a key. Given the immersive spatial computing environment, the user interface had to be designed to support seamless, intuitive interactions with the 3D object.
Once the object was on a key, the user could:
- Move it by dragging and dropping it.
- Scale/rotate it with the dedicated gestures.
The key was automatically edited with the transform modifications.
If the object wasn’t on a key and the user moved, scaled, or rotated it, it automatically created a new key. This way the user had full control of the animation!
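The edit-or-create rule above boils down to a small piece of logic. Here’s a sketch (the `transform` value stands in for the full position/rotation/scale state; names are hypothetical):

```python
def apply_edit(keys, time, transform, tolerance=1e-6):
    """Update the key at `time` if one exists, otherwise insert a new one.

    `keys` is a time-sorted list of (time, transform) pairs.
    """
    for i, (t, _) in enumerate(keys):
        if abs(t - time) <= tolerance:
            keys[i] = (t, transform)   # object was on a key: edit it in place
            return keys
    keys.append((time, transform))     # no key here: the edit creates one
    keys.sort(key=lambda k: k[0])
    return keys
```

The same entry point serves both gestures, which is what gave the user full control without any explicit “edit mode”.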
Note that to meet advanced designers’ needs, we offered a way to multi-select keys on the animation curve.
Adapt the curve after the edit
It’s great to be able to edit and create keys with transform modifications, but to be fully functional, the curve needed to adapt accurately in real time. The significant challenge was smoothing the transition between two keys after manual modifications. As users edited the element at an individual key, it was critical that these changes didn’t result in abrupt or unnatural shifts in the animation. This involved creating algorithms that could accommodate these changes, allowing a fluid transition between keys despite the modifications. It was a complex task that required careful thought and a deep understanding of both the user experience and the technical aspects of spatial computing and animation. It was a delicate balance to strike, but crucial in designing a truly immersive and intuitive spatial computing application.
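One classic way to keep a curve fluid through edited keys is a Catmull-Rom spline, which passes exactly through every key; I’m using it here as an illustrative assumption, not a claim about the exact spline we shipped.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """One Catmull-Rom segment: interpolates smoothly between p1 and p2.

    p0 and p3 are the neighboring keys that shape the tangents, so an
    edit to any key bends the curve locally without sharp corners.
    t runs from 0 (at p1) to 1 (at p2); points are coordinate tuples.
    """
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t * t
               + (-a + 3 * b - 3 * c + d) * t * t * t)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Because each segment only depends on four keys, moving one key re-smooths just its neighborhood, which is what makes real-time adaptation after an edit feasible.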
Note that on this gif you see a spoiler of the animation timeline 😉
Create a new key
The ability to create new keys from the animation curve, specifically by manipulating the element between two existing keys, introduced another layer of complexity. Implementing this required precise understanding and control over the 3D environment, ensuring new keys were created seamlessly while maintaining the overall integrity of the animation. We used the same smoothing as for key editing.
Animation timeline
The next layer of challenges concerned more advanced animation editing capabilities, allowing more detailed control over the animation. This involved developing a 2D timeline interface that mirrored the animation over time, presenting a new set of design hurdles.
Timeline
We designed an interface familiar to people used to animation or video editing tools, but simple enough to be used by anyone. It included:
- Record button: to start the recording.
- Navigation buttons: back to beginning, previous key, play/pause, next key.
- Delete key button.
- Undo/Redo button.
- Create key button.
- The timeline itself, with the playhead and keys.
The user could scroll and zoom in/out of the timeline, but we had to define the default size to offer at first, depending on the animation they recorded. Depending on the zoom level, we also had to handle conflicts between keys when they were too close together. The diamond shape helped distinguish really close keys thanks to its small top (compared to a square, for example). Finally, we gave a different style to the first and last keys so the user knew the animation didn’t extend further.
Did you notice the zone we defined next to a key? 👀
If the user releases a key next to the playhead, the key is snapped on it.
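That snap zone is a one-liner, but it’s the kind of detail that makes the feature feel polished. A sketch, with an illustrative threshold (the real zone size is not documented here):

```python
def snap_to_playhead(key_time, playhead_time, snap_zone=0.1):
    """Snap a released key onto the playhead when dropped inside the zone.

    `snap_zone` is in timeline seconds; its value here is a placeholder.
    """
    if abs(key_time - playhead_time) <= snap_zone:
        return playhead_time
    return key_time
```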
This PoC illustrates very well how a great user experience relies on details.
Select a key
An important point lay in managing the selection of keys both on the timeline and on the animation curve, and establishing a link between the two. Users needed to interact intuitively with the keys in both interfaces, which required careful design. We had to find the right size and behavior for the timeline interface so the keys were easily selectable, while keeping the whole interface small enough to fit in the user’s very limited FoV. We also chose to highlight the selected key and raise it above the others on the timeline, linking it to the one on the curve where the object was. Finally, we designed a distinct style for hovered keys, so that each key state had its own visual cue, discernible both on the timeline and on the 3D animation curve.
Navigate between keys
Another key challenge was navigating the timeline with the Magic Leap controller. Indeed, it doesn’t have many buttons compared to the Quest controller, for example. Next/previous buttons, zooming, and scrolling needed to be incorporated intuitively, so users could smoothly navigate the timeline while maintaining the immersive experience of the application.
To make the navigation natural, the user could drag a key outside the timeline viewport to automatically scroll. The user could also drag the timeline directly by grabbing it, grab the scrollbar to navigate through the keyframes, and press the touchpad (up or down) to zoom in/out.
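The drag-past-the-edge auto-scroll can be sketched as a per-frame offset update whose speed grows with how far past the viewport edge the drag is. The gain and timestep here are illustrative, not tuned values from the PoC:

```python
def auto_scroll_offset(drag_x, viewport_left, viewport_right, offset,
                       gain=5.0, dt=1.0 / 60.0):
    """Advance the timeline scroll offset when a drag leaves the viewport.

    Called once per frame; scroll speed is proportional to how far the
    dragged key sits beyond the edge, so a small overshoot scrolls
    gently and a large one scrolls fast.
    """
    if drag_x < viewport_left:
        return offset - gain * (viewport_left - drag_x) * dt
    if drag_x > viewport_right:
        return offset + gain * (drag_x - viewport_right) * dt
    return offset  # drag still inside the viewport: no scrolling
```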
Edit keys
When it comes to the timeline, enabling users to move keys to adjust animation speed, delete keys, and manually create new ones presented its own set of design complexities.
Multi-selecting keys on the timeline, as on the animation curve, also had to be implemented.
Create a new key
To edit the animation, we also provided the user with a way to quickly create a new key in the timeline: they simply had to put the playhead at a spot without any key, then hit the “+” button. Nothing innovative here, I know, but since I’m detailing the PoC… 😉
Re-record a part of the animation
In terms of more advanced editing functionality, we thought of allowing re-recording of only parts of the animation. This involved designing triggers for new recording instances and smoothing transitions between existing and newly recorded animations, among other considerations. Concretely, the creator was able to select a spot on the timeline with the playhead, then hit the record button and grab the animated element to start recording from there.
Isolate keys
A great animation tool requires the ability for users to lock keys to prevent editing, thereby ensuring the stability and consistency of the animation.
To do so, we added an “isolate” button to isolate the selection, ignore the other keys, and make them less visible. The user could neither hover nor grab them; only isolated keys could be manipulated. Hitting the isolate button again went back to normal.
Timeline display
We faced the same display-rule issues for the timeline as for the Minsar menu. See more details in the dedicated article:
We chose to use the same solution: allowing the manual displacement of the timeline. It provided users with greater control over their viewing experience.
We also worked on automatic adjustments to keep the timeline within the user’s view as they moved through the spatial computing environment. Unfortunately, as with the menu, we didn’t manage to find a great solution.
Challenge, challenge, challenge…
I repeated “challenge” a lot during this article 😅
Offering a smooth, immersive experience fully adapted to the context was, and in my opinion still is, a huge challenge! We can see it through this PoC.
Each of the challenges I shared represents a unique intersection of user experience, technical design, and the specific constraints and opportunities offered by spatial computing. Meeting them effectively greatly enhanced the immersive experience and usability of the feature. Even if we didn’t manage to do everything we wanted 😉
This is the first article I’ve written with ChatGPT’s help. If you’ve read some of my previous articles, feel free to tell me whether you felt the difference, in a good or a bad way!
If you didn’t, here are a few links to some of them ⬇️