Designing Gestures for Spatial Computing
Gestures, challenges, and consequences of the intelligence age.
Spatial Computing — What is it?
Spatial computing is a complex concept with a range of technologies falling under its umbrella. The way I see it, it’s about intertwining the digital and physical worlds, with an emphasis on making interactions between humans and computers more natural and immersive.
This concept has been written about for a while now. In 2003, Simon Greenwold wrote about spatial computing as human interaction with machines that manipulate experiences in real life. More recently, in 2017, Walt Mossberg wrote about the disappearing computer: our computers are getting smaller and fading into the background while computational performance increases exponentially.
A range of companies have been working in this space, and a few devices we might have seen are Meta’s smart glasses, Apple’s Vision Pro, Humane’s Ai Pin, Microsoft’s HoloLens headset, Microsoft Mesh, Google Glass, Google’s Soli, Meta Quest headsets, and many more tools and platforms on the way!
In what Humane calls the ‘Intelligence Age’, they too talk about the disappearing computer, in agreement with Walt Mossberg’s writing. They imagine a world where the power of compute increases while the size of computers decreases. (2) If this were to come true, what would our day-to-day look like? What would our priorities be?
Gestures are an interesting space to think about, now more than ever. With all the new devices making their way into our lives, we might be moving toward a more gesture-based society.
Recognizing Gestures
If we think about it, we spend so much of our time performing micro gestures like pinch-to-zoom, slide-to-unlock, swipe, and scroll. Personally, I have become so used to them that I caught myself pinching to zoom on a piece of paper, a humorous yet interesting gestural consequence. These gestures are now ingrained in us and associated with certain functions and scenarios. Beyond today’s gestures, I wonder how this would change, or encourage new thinking, if we enter the intelligence age.
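To make that concrete, here is a minimal sketch of how a device might recognize a pinch, assuming a hand-tracking framework that reports 3D fingertip positions every frame. The function names and the 2 cm threshold are my own illustrative assumptions, not any real device’s API:

```python
# Minimal sketch: detecting a pinch from tracked fingertip positions.
# Assumes some hand-tracking framework supplies (x, y, z) coordinates
# in metres each frame; names and threshold are illustrative assumptions.
import math

PINCH_THRESHOLD_M = 0.02  # fingertips closer than ~2 cm count as a pinch


def distance(a, b):
    """Euclidean distance between two (x, y, z) points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))


def is_pinching(thumb_tip, index_tip):
    """True when the thumb and index fingertips are close enough to pinch."""
    return distance(thumb_tip, index_tip) < PINCH_THRESHOLD_M


# Example frame: fingertips 1 cm apart, so a pinch is detected.
print(is_pinching((0.00, 0.00, 0.30), (0.01, 0.00, 0.30)))  # True
```

Real systems would layer smoothing and hysteresis on top so the pinch doesn’t flicker on and off right at the threshold, but the core idea is that simple.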
Speculating on new gestures
If we were to exist in a new intelligence age, how could ambient computing aid in forming new behaviors? What would our priorities be then?
There’s an exciting space here to think about systems, gestures, and their consequences. When designing for this space, a crucial question is whether we want to rely on older gestures or create new ones. A classic design principle is to make interfaces familiar to users to ease the transition, but perhaps we can revisit this ideology to make room for new kinds of gestures. A slow introduction, like the finger double tap on the Apple Watch (3), is a great gateway into the world of gestures that the Vision Pro could bring.
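As a thought experiment, the timing logic behind such a double tap could be as small as the sketch below; the 400 ms window and event names are assumptions for illustration, not Apple’s actual implementation:

```python
# Sketch: turning single pinch/tap events into a double-tap gesture.
# The 400 ms window is an assumed value, not Apple's implementation.
DOUBLE_TAP_WINDOW_S = 0.4


class DoubleTapDetector:
    def __init__(self):
        self.last_tap_time = None

    def on_tap(self, timestamp):
        """Feed tap timestamps (in seconds); returns True on a double tap."""
        if (
            self.last_tap_time is not None
            and timestamp - self.last_tap_time <= DOUBLE_TAP_WINDOW_S
        ):
            self.last_tap_time = None  # reset so a triple tap doesn't fire twice
            return True
        self.last_tap_time = timestamp
        return False


detector = DoubleTapDetector()
print(detector.on_tap(0.00))  # False: first tap
print(detector.on_tap(0.30))  # True: second tap inside the window
```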
The way I see it, there is a strong emphasis on having powerful compute that allows us to be more present in our everyday lives. The concept came through strongly in Humane’s product teaser and in Imran Chaudhri’s TED talk on the disappearing computer. The Vision Pro has EyeSight, a feature implemented to preserve a sense of physical reality and presence, bringing the digital into your space, spatially.
With more devices entering the spatial computing era, I can’t help but think about how I’m going to use them in my everyday life, even for mundane activities. For example, I enjoy reading physical books, writing notes in them, and the tactile experience of turning a page. However, I wish I didn’t need to pick up my phone to check the meaning of a word. That’s what I loved about my e-books.
I imagine a device that can sense the physical and seamlessly integrate the digital, whether it’s the Ai Pin, the Vision Pro, or even Google Lens: using hand gestures, similar to the ones we already use in digital space, to look up the meaning of a word or highlight a favorite line in the physical world, and inventing new gestures for bookmarking a page.
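To play this out, here is a small sketch of how such a gesture vocabulary might be wired to actions; every gesture name and action in it is invented for illustration:

```python
# Playful sketch: routing recognized gestures to "physical book" actions.
# All gesture names and actions here are invented for illustration.

def look_up_definition(word):
    print(f"Showing definition for '{word}'")

def highlight_passage(text):
    print(f"Highlighting: {text!r}")

def bookmark_page(page):
    print(f"Bookmarked page {page}")

# A dispatch table keeps the gesture vocabulary easy to extend later.
GESTURE_ACTIONS = {
    "tap_word": look_up_definition,         # borrowed from the digital world
    "swipe_along_line": highlight_passage,  # borrowed from the digital world
    "corner_fold": bookmark_page,           # a newly invented gesture
}

def on_gesture(name, payload):
    action = GESTURE_ACTIONS.get(name)
    if action:
        action(payload)

on_gesture("tap_word", "spatial")
on_gesture("corner_fold", 42)
```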
Although devices like the Ai Pin and the Vision Pro are not similar, they exist within the same theme of spatial computing, whether they are meant to be standalone devices or ones that complement the smartphone. There are questions about how we would use social media or take photos and videos. For example, the camera in our smartphones provides a pre-set frame for capturing images. Would we use gestures or eye movements to dictate functions? This raises questions about how taking photos and videos in public would work. Would there be an indication that recording is taking place? Would there be an alert when people are in the frame? Or would there be a blinking red dot, reminiscent of camcorders from the 1990s, bringing back some familiarity?
Another thing to note is how gestures and devices have consequences. I mentioned gestural consequences previously, but our behavior has shifted too. Some of the gestures and behaviors that have emerged in the past 5–10 years include becoming more comfortable talking to non-living devices and objects, modifying our back and neck posture due to prolonged smartphone use (5), and developing muscle memory for texting and typing.
Humane talks about how our devices have enslaved us and how it shouldn’t be that way. How could we move into a future where we use them intentionally? What if we could charge our small devices through alternate methods? Imagine charging your device simply by being outside during the day (solar energy) or by walking (kinetic energy). Implementing these techniques could subconsciously alter our relationship with our environments, values, and overall health, reshaping our relationship with objects, provided we consider how these objects can instill values and principles in humans.
We are in just the first generation, or first round of “iterations”, of spatial computing devices. If and when we enter the age of intelligence plus spatial computing, more designers will focus on gestures and eye movements, giving a whole new meaning to “body language”. The future sounds exciting: more intentional questions, ethical challenges, and innovative input mechanisms. As a designer and an experiencer, I can’t wait!
References
1 — https://twitter.com/i/status/1706409567620596127
2 — Chaudhri, Imran. 2023. “The Disappearing Computer and a World Where You Can Take AI Everywhere.” TED Talks, August 10, 2023. Video, 12:30. https://www.ted.com/talks/imran_chaudhri_the_disappearing_computer_and_a_world_where_you_can_take_ai_everywhere/transcript.
3 — Velasquez, Sergio. “Learn Hand Gestures on Apple Watch.” iDropNews, October 26, 2021. https://www.idropnews.com/how-to/did-you-know-you-can-control-an-apple-watch-with-hand-gestures-how-to/171743/.
4 — Clover, Juli. MacRumors, June 8, 2023. https://www.macrumors.com/2023/06/08/apple-vision-pro-gestures/.
5 — Janwantanakul, P., E. Sitthipornvorakul, and A. Paksaichol. “Risk Factors for the Onset of Nonspecific Low Back Pain in Office Workers: A Systematic Review of Prospective Cohort Studies.” Journal of Manipulative and Physiological Therapeutics 35 (2012): 568–577.
Other Resources
Carter, Rebekah. “What Is Spatial Computing in Simple Terms?” XR Today, June 13, 2023. https://www.xrtoday.com/mixed-reality/what-is-spatial-computing/.
Carter, Rebekah. “What Is Spatial Computing? The Complete Guide.” UC Today, June 14, 2023. https://www.uctoday.com/unified-communications/what-is-spatial-computing-the-basics/.
Sean-Kerawala. “Instinctual Interactions — Mixed Reality.” Microsoft Learn, n.d. https://learn.microsoft.com/en-us/windows/mixed-reality/design/interaction-fundamentals.
“XR Design Guidelines.” Ultraleap Documentation, n.d. https://docs.ultraleap.com/xr-guidelines/.
“Hand Interfaces: Using Hands to Imitate Objects in AR/VR for Expressive Interactions.” HI Overview, n.d. http://handinterfaces.com/.