Making Connections: How I Got to Where I Am Now

Bibiana Bauer
Published in Project Notusia
6 min read · Mar 12, 2019

I’ll be the first to admit it’s a shame I didn’t start writing regular blog posts about project Notusia from the beginning. That being said, going forward it makes sense for what I write to be current and relevant to the ongoing work on Notusia, rather than a constant backlog of things that were done weeks, if not months, ago.

However, there is a certain amount of contextual background that’s inherently necessary to understanding what I’m doing now, and so I shall endeavor to give a brief overview of where I’ve been leading up to this point.

Embodied and Spatial Cognition

I first learned about the psychological theories of embodied cognition and spatial cognition (also related to spatial memory) in my sophomore-level Cognitive Science class. I won’t get into the details, but in essence, we think using not only our brains but also our bodies and the environments around us. I was fascinated by these concepts and the benefits they might offer if applied to digital products and experiences. Surrounded by people staring at their phones and laptop screens all day, I had quickly become disenchanted with the idea of applying my Interaction Design skills toward designing yet another app or screen-based digital product. Instead, I was intrigued by the thought of enabling people to take advantage of their own bodies and the spaces around them as a way of supporting cognition while simultaneously challenging the paradigm of screen-based design experiences.

3D Information Architecture

Information Architecture, Organizational Systems, Data Structures, etc. I love them all. I couldn’t tell you why exactly — but I think it has something to do with the comfort and ease of knowing where things are meant to go. As anyone who knows me well will tell you, one of my favorite sayings is:

“A place for everything, and everything in its place”

In fact, one of my favorite “relaxing” activities (even as a child) was organizing my room. There’s just something so satisfying about spreading everything out, sorting through it all, separating things into piles based on type, splitting piles into sub-collections, designating places for certain collections, and then neatly putting everything away. A good organizational system is both logical and intuitive. It’s logical in that you can follow a hierarchy to locate an existing item or to find a place for a new item. It’s intuitive in that you can learn the organizational structure or model easily enough that you no longer have to think about it.

But I digress…

While many computer systems are built on relational databases that conceptually function in multiple dimensions, almost every user-facing system for organization is two-dimensional — maps, spreadsheets, hierarchy trees, flow diagrams, etc. Yet the world we live in is three-dimensional, and we humans, evolutionarily speaking, are adapted to work remarkably well in this physically tangible way.

Early sketch of what 3D Information Architecture would look like

All this, combined with the aforementioned theories of spatial and embodied cognition, led me to think about opportunities that might lie within the overlapping capabilities of digital relational database systems and our human intuition for working with physical things in space.

Augmented/Mixed Reality

I realized the most obvious way of creating a digital, spatial, interactive experience — one that would also allow for integration with a back-end database system — would be augmented/mixed reality. At the time I was taking a Design for Emerging Tech class with Apurva Shah, where we were learning the basics of how to create augmented reality apps for iOS using Unity. Even that was a challenge, however, and I quickly came to the realization that I simply wasn’t going to be able to build up the necessary skillset in time to create an augmented reality prototype in Unity — let alone build and connect a back-end database system.

One of my classmates, sparsh sharma, with my paper prototype “3D Infotecture”

On the one hand, this was a huge setback: I had a concept I had been so eager to build and test, but no way to do it. On the other hand, it created a critical turning point in my thesis work. I ended up testing the concept of 3D information architecture with a simple paper prototype instead, and realized not only that it was a difficult concept for people to grasp, but that what I was really interested in was actually less about the output of the system (the display of information, 2D vs. 3D) and more about the input of the system (how people control and interact with information).

Body Language & Dance

In conjunction with my inspiration from the psychological research on embodied and spatial cognition, I found myself taking a step back to my own experiences growing up. From ages six through eighteen, I took weekly dance classes in classical ballet, contemporary, jazz, and tap. I also happened to grow up doing a lot of music — thank you Mum and Dad for supporting 10 years of violin lessons and singing around the house — and I could easily write a whole separate blog post on my theories about how music, memory, systems, and “flow” are all interconnected, but what really caught my attention for project Notusia was the embodied version of music: dance. At its most fundamental level, dance is a form of innate human expression — even babies can dance. At the same time, though, dance has been built up in various forms and practices into a skillset of mastery — a system of movements with names that can be taught, learned, reproduced, and eventually mastered. In essence, dance can (and already has) become a set of learned behaviors, serving as a form of communication both independently and as part of a larger system (e.g. with music, or as two or more people dancing with each other). Sound like anything else we do?

Typing on keyboards…

Tapping buttons…

Scrolling and swiping…

These are all learned behaviors. But most of them don’t take much advantage of our bodies beyond our fingertips and eyeballs.

So, with all this in mind, I dove into research on dance and how it could parallel the work I was trying to do with Notusia. I started taking dance classes again and even discovered the hidden practice of “dance notation,” where mapping out the human body and writing a dance down on paper ends up looking more like a series of ancient hieroglyphs than anything else.

Kahnotation symbols for denoting different types of tap dancing steps

I became particularly interested in the opportunities within the practice of tap dancing, given its distinct movements, sounds, and rhythms, and the fact that it is conveniently isolated to just one part of the body: the feet. I used this as a jumping-off point for a series of prototypes/experiments exploring how people felt about using their feet as a method of input, and thinking through all the possible combinations of movements and patterns that can come out of feet alone (toe vs. heel vs. flat foot, one foot vs. both feet, a single movement vs. a pattern of multiple movements, etc.).
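To give a sense of how quickly that input space grows, here’s a minimal sketch (purely illustrative, not part of any Notusia prototype) that enumerates hypothetical foot gestures along the dimensions above:

```python
from itertools import product

# Hypothetical vocabulary of foot inputs, built from the dimensions
# mentioned above: which part of the foot strikes, and which foot.
contacts = ["toe", "heel", "flat"]
feet = ["left", "right", "both"]

# Each single movement is one (contact, foot) pair.
single_moves = list(product(contacts, feet))
print(len(single_moves))  # 9 distinct single movements

# Patterns of two movements in sequence multiply the space again.
two_move_patterns = list(product(single_moves, repeat=2))
print(len(two_move_patterns))  # 81 two-movement patterns
```

Even with this tiny vocabulary, the combinatorics leave plenty of room for a rich gesture language.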

Motion Control

I’m still continuing to explore and prototype with feet, but as the final showcase for project Notusia draws closer, I’ve been looking into expanding my prototypes into a more full-bodied experience. This started out as a series of expert interviews with professionals who have worked in motion capture and gesture control, and it’s now taking form in my own process as I experiment with existing technologies like Microsoft’s Xbox Kinect and Rebecca Fiebrink’s Wekinator, an open-source tool for interactive machine learning whose inputs and outputs are communicated via OSC (Open Sound Control) messages.

Screen capture of the Wekinator data training UI
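For anyone curious about the plumbing: by default, Wekinator listens for input features on port 6448 at the OSC address /wek/inputs, and sends its trained outputs on port 12000 at /wek/outputs. Here’s a minimal sketch of streaming placeholder feature values to it, using Python and the python-osc library (my choice for illustration, not something Wekinator requires):

```python
# Minimal sketch: streaming two input features to Wekinator over OSC.
# Assumes `pip install python-osc` and Wekinator's default input
# settings (listening on port 6448 at the address /wek/inputs).
import math
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)

for t in range(200):
    # Placeholder features standing in for real sensor data,
    # e.g. joint coordinates captured from the Kinect.
    x = math.sin(t * 0.1)
    y = math.cos(t * 0.1)
    client.send_message("/wek/inputs", [x, y])
    time.sleep(0.05)  # roughly 20 messages per second
```

In the Wekinator UI you’d set the number of inputs to match (two here), record examples while moving, train, and then have whatever app should respond to the motion listen for the model’s outputs on /wek/outputs.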
