Beyond Telepresence
A future where we connect beyond the computer screen.
Ironically, “staying connected” today means disconnecting from real life. When we enter the placeless-ness of cyberspace and the rigid schedules of Zoom calls, we leave behind our 3D environment: our bodies, our homes, neighborhoods, and our impromptu interactions. Our 3D world flattened to a 2D screen.
Many emerging smart-home products (Amazon Echo speakers, Nest thermostats, door locks with “smart” cameras) promise to connect the Web with our physical environment. But getting an Echo notification when an Amazon package arrives, or having my Nest thermostat auto-adjust, hardly fulfills the potential of real connection.
Even products like Facebook Portal, built specifically for telepresence, are still just another screen. They don’t compare to running into someone in real life (IRL). Wearables and mixed reality headsets like HoloLens are completely individual: others in the room have no context on your reality. With the Web’s filter bubbles already channeling each of us into an individualized reality, creating real spaces where information is shared becomes even more important. As we descend deeper into the void of the screen (while admitting the modern Web is a miraculous technological accomplishment in its own right), we have to ask ourselves: where do we go from here?
Measuring the Great Indoors Class
Social Experiences > User Experiences
Spatial Interfaces > User Interfaces
In our class “Measuring the Great Indoors,” architecture and urban design students investigated the spatial, tangible, and impromptu potential of web connectivity. Our aim was to heighten our connection to the physical world and to each other by designing digital interactions beyond the screen: with social experiences and spatial interfaces, students designed interactions that engage all five senses and all three dimensions.
An air bag that forces a break when you’ve been at your computer for too long.
Left: what I see;
Right: what my computer sees
Students hacked together smart home products, cameras, and projectors with microservices like IFTTT, which allows custom “routines” to connect devices, and programmed further logic between these devices with flexible programming environments like Processing and P5.js. While mixed reality glasses offer sophisticated, high-tech solo experiences, projectors, a seemingly commonplace hundred-year-old technology, afford much more. Projectors create a social experience: projections can be seen and shared by anyone in a room, generating communal experiences. They also create spatial interfaces: they overlay visuals on the environment, grounding information in place and allowing the body to interact with information.
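The glue logic behind a project like the air-bag break-timer above can be tiny. Here is a minimal sketch, assuming a camera (or an IFTTT trigger) reports once a minute whether you are at your desk; all names are invented for illustration, and a real build would wire the trigger through IFTTT or a serial call to the device:

```javascript
// Hypothetical break-timer logic: track continuous time at the computer
// and decide when to fire a "forced break" actuator (like the air bag).
function makeBreakTimer(limitMinutes) {
  let minutesAtComputer = 0;
  return {
    // Called once per minute with whether the camera saw you at the desk.
    // Returns true when it's time to trigger the actuator.
    tick(atDesk) {
      minutesAtComputer = atDesk ? minutesAtComputer + 1 : 0;
      return minutesAtComputer >= limitMinutes;
    },
  };
}

const timer = makeBreakTimer(50);
let fire = false;
for (let i = 0; i < 50; i++) fire = timer.tick(true); // 50 minutes at the desk
// fire is now true: time to force a break
```

Stepping away from the desk resets the counter, so only uninterrupted screen time triggers the break.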
Architecture Meets Digital Architecture
All of the hardware students used was off-the-shelf. It’s impressive how much can be prototyped in a few weeks with a few lines of code and by taping together products and services that don’t usually talk to each other.
While this course involved technical aspects of building science and digital technologies, our primary aim was to encourage students to consider dynamic spatial and environmental qualities in their design work and to design the phenomenological aspects of “the great indoors.” Our “home base” for student projects was our homes, allowing students to intimately live in their own experiments.
Architects Think In 3D
Architects and urban designers have a finely tuned perception of how the built environment shapes people’s lived experiences: how it influences behavior or how it makes us feel. By looking into the phenomenology of space, architects tune into the blind spots ignored by modern technology. Take, for example, targeted news feeds: their tunnel vision is optimized for a single function, getting users to spend more time on a platform, and it is ultimately fanning the rise of political extremism, a very real-life consequence. When people look at their phones, they do so in a room, in a neighborhood, in a city, next to a family member, or among strangers. UIs shouldn’t always be oblivious to this context, oblivious to our real life and our immediate spatial and social realities.
“Smart home” and IoT devices typically seek to optimize our behavior and hijack our attention. But what if we could use these devices differently: to resist the attention economy and the commodification of our everyday lives? In many projects, students were prompted to create “recipes” using IFTTT with the goal to “do nothing,” inspired by Jenny Odell’s book How to Do Nothing: Resisting the Attention Economy. Students used these recipes to focus their attention back on their homes, the environments around them, the lighting, the people sharing those spaces, or to create and tune into communal experiences shared by students across the globe.
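Conceptually, an IFTTT recipe is just an if-this-then-that pair. Here is a minimal sketch of a few “do nothing”-style recipes expressed as data; the trigger and action names are invented for illustration, not real IFTTT service identifiers:

```javascript
// Hypothetical "do nothing" recipes: each pairs a trigger with an action
// that redirects attention back to the room rather than the screen.
const recipes = [
  { if: "laptop_opened_after_9pm", then: "dim_lights_and_silence_notifications" },
  { if: "sun_sets",                then: "project_sunset_on_wall" },
  { if: "roommate_arrives_home",   then: "pause_music_in_shared_room" },
];

// Run every action whose trigger matches the event that just fired.
function runRecipes(event, recipes) {
  return recipes.filter((r) => r.if === event).map((r) => r.then);
}

runRecipes("sun_sets", recipes); // → ["project_sunset_on_wall"]
```

The point of the data-first shape is that recipes stay legible to everyone in the home, not just the person who programmed them.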
Our realities are right under our noses (or maybe behind us if we’re staring at our screens). Tech that acknowledges our bodily, social, and physical reality is critical if we want a future where we are awake and tuned in to our world.
“We experience the externalities of the attention economy in little drips, so we tend to describe them with words of mild bemusement like “annoying” or “distracting.” But this is a grave misreading of their nature. In the short term, distractions can keep us from doing the things we want to do. In the longer term, however, they can accumulate and keep us from living the lives we want to live, or, even worse, undermine our capacities for reflection and self-regulation, making it harder, in the words of Harry Frankfurt, to “want what we want to want.” Thus there are deep ethical implications lurking here for freedom, wellbeing, and even the integrity of the self.” ― Jenny Odell, How to Do Nothing: Resisting the Attention Economy
Authorizations of the Physical
When you’re with a group of people, how do you decide whether the light gets turned on or off? How do you decide where to set the temperature?
When we began comparing our interior environments to web technology, we started to see the similarities and differences in the affordances of these systems. Smart home products are typically controlled by a single user profile. This places governance of the smart home system under one person, even when the system affects roommates or family sharing the home. An “old fashioned” light switch has no permissions: whoever can reach it can turn it off or on. A former student, Wenya, pointed out that there are many other ways of structuring the governance of our physical spaces. Lighting could adapt based on the number of people in a space, or based on a vote on whether the lights should be off or on. In a crowd, a light could be dimmed if a single person’s mood was bad. There are many permissioning possibilities for our indoor environments once you dive into it, and many ethical questions in smart homes have yet to be investigated.
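The vote-based governance Wenya described is simple to sketch. A minimal, purely illustrative version; a real build would collect votes from phones or read an occupancy sensor, and the tie-breaking rule here is an arbitrary house rule, not anything prescribed:

```javascript
// Govern a shared light by vote rather than by one owner's profile.
// votes: array of booleans, one per person in the room (true = lights on).
function lightsOn(votes) {
  const yes = votes.filter(Boolean).length;
  return yes * 2 >= votes.length; // majority wins; ties keep lights on
}

// An occupancy-based variant: lights on only while the room is occupied.
function lightsOnByOccupancy(peopleInRoom) {
  return peopleInRoom > 0;
}

lightsOn([true, true, false]); // → true
lightsOn([false, false, true]); // → false
```

Swapping between these policies is a one-line change, which is exactly why the governance question is a design decision rather than a technical constraint.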
These experiments show that spatial/social tech is not a distant dream. Within a few weeks, a handful of architecture students created a myriad of provocative prototypes with tech that already exists. A future that fundamentally connects us to each other and tunes us back into our physical realities is still possible.
Architecture, Human Computer Interaction, Tangible Computing, Service Design, Interaction Design, and User Experience Design: these all have wisdom to contribute to spatial/social tech. We need a cross-disciplinary approach where we recognize the whole environment as interconnected — its space, its digital layers, its culture, etc.
If we were to invest further in technologies like projection, smart home products, and connected cameras and speakers, just imagine how different or perhaps more natural our experiences could be.
In 10 years’ time, I hope we’re not all still buried in our phones or locked to Zoom screens. I can’t imagine that’s our fate with this suite of technologies burgeoning with spatial potential; it’s only a matter of time and the desire to move beyond the screen.