We’re Thinking About Spatial Computing All Wrong — It’s Dimensional
Spatial computing has been around for a while. Simon Greenwold published an original research paper on it back in the early 2000s, and a Google Scholar search is littered with earlier references (albeit not in the same context), so it’s hard to pin down who first coined the term for realsies.
But it doesn’t really matter, because it’s all a bit Khan Noonien Singh.
He is intelligent, but not experienced. His pattern indicates two-dimensional thinking.
“Spatial computing is human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces. It is an essential component for making our machines fuller partners in our work and play,” Greenwold wrote back in the day. More recently, Apple defines it for their new device as “an infinite canvas for apps that scales beyond the boundaries of a traditional display and introduces a fully three-dimensional user interface controlled by the most natural and intuitive inputs possible — a user’s eyes, hands, and voice.”
But this is still limited thinking about a spatial computing paradigm.
The trigger for this thinking is how you perceive these spatial realities, for they are layers within layers, a bit like the dream states in the movie Inception. As explained in an earlier post, someone built a functioning copy of Minecraft within Minecraft. Depending on the flexibility of new platforms, and if they hold true to the web3 ideologies of being open, composable and decentralized, then building fully functioning virtual worlds within an existing virtual world, rather than as a new world merely linked to one, could create some very interesting results. Will virtual worlds built inside an existing one inherit the properties of the parent, for example, as in most parent-child relationships in software? Or are there other fourth-dimensional properties that need to be taken into account?
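To make the parent-child question concrete, here is a minimal sketch of how a nested world might inherit properties from its enclosing world, in the style of a prototype chain. Every name here (World, gravity, tick_rate) is a hypothetical illustration, not any real platform’s API.

```python
class World:
    def __init__(self, name, parent=None, **props):
        self.name = name
        self.parent = parent   # the enclosing world, if any
        self.props = props     # properties set locally on this world

    def get(self, key):
        """Look up a property locally, then walk up the parent chain."""
        world = self
        while world is not None:
            if key in world.props:
                return world.props[key]
            world = world.parent
        raise KeyError(key)

# An outer AR layer with some baseline physics.
outer = World("ar-layer", gravity=9.81, tick_rate=60)

# A Minecraft-in-Minecraft world built inside it: it overrides
# gravity locally but silently inherits the parent's tick rate.
inner = World("minecraft-in-minecraft", parent=outer, gravity=2.0)

print(inner.get("gravity"))    # 2.0, overridden locally
print(inner.get("tick_rate"))  # 60, inherited from the parent
```

The interesting design question is exactly the one the paragraph raises: whether a child world should inherit by default, as it does here, or start from a clean slate.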
Long story short: building apps and user experiences in augmented reality should impact user environments in any reality, in real time, and in sync with everyone else in that environment at the same time. While spatial computing covers creating 3D applications, mostly centred on augmented/extended reality for manipulation within a real-world context, it doesn’t go far enough.
Which is why we should be thinking dimensionally as well as spatially.
Humans are dimensional thinkers as well as spatial ones. Being able to flip between different layers of reality is what will let spatial computing become what it is meant to be, and that will take dimensionality to succeed.
Dimensional computing also needs different levels of application development and testing. What works as an interface in augmented reality may not work in virtual reality, in a metaverse, or in a flat 2D web context; the user experiences are different, as are the tools. If I execute something in one layer and another user, experiencing it through a different interface and environment, wants to manipulate it in real time, they may require a different UI and input devices. But what they experience, and reciprocally what the originator receives, has to work between layers, with the results being experienced at the same time.
Apple is trying to push us wholly towards their spatial vision, but it’s lacking; it’s Wrath of Khan territory again.
This isn’t spatial, it’s dimensional.
Similarly, to return to the Minecraft Inception example, nothing stops me from building a virtual world within an existing augmented reality, beyond the need to adopt and adapt a different spatial interface to work between them. This is interspatial: the veil between layers that must exist to allow this level of manipulation, interoperability, control and creation.
Spatial computing, as currently dictated to us by the likes of Zuckerberg and Cook, is still as flat as the internet, because it is just one layer thick.
For humans to create a new computing paradigm, we not only have to think spatially within different realities, but dimensionally across those realities and between them.
And judging from Apple’s announcement and Meta’s rebuttal, I don’t think we’re anywhere near that yet.