Assimilating Hyperphysical Affordances

Researching Intuition and Embodiment at the Frontiers of Phenomena

Gray Crawford
Design Studies in Practice
Dec 14, 2018

--

Introduction

Spatial computing (VR, etc.) reveals an expansive and underexplored possibility-space of interactions wherein the physics subtending affordances and phenomena can itself be designed, rewarding novel approaches to interaction design.

Through reviewing literature and prototyping spatial interactions, I am exploring how previously unencountered physical dynamics affect the development of intuition with systems, and identifying significant representations of external objects and of the body itself, with an eye toward the larger goal of transformative tools for thought.

While identifying promising avenues from the convergence of sources in the literature, I researched and surveyed applications of interactional dynamics by designing prototypes of spatial interactions under different materialities and computed physics, gaining insight through direct engagement with novel spatial phenomena. These VR prototypes illustrate design considerations for now-accessible interactional and material unorthodoxies, recognizing consequences and applications for embodiment and sensory coordination.

Literary Research ・ Prior Theory

The literature I surveyed runs the gamut from modern research into spatial computing (Leithinger et al.’s Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration, Won et al.’s Homuncular Flexibility in Virtual Reality) to foundational texts on the nature of perception (Gibson’s The Ecological Approach to Visual Perception, Piaget’s The Origins of Intelligence in Children). Generally, the areas of interest covered literature relevant to VR, affordances, embodiment, perception, learning, neuroplasticity, schemata, and UI design.

The scope of my research into unorthodox effects of spatial, computed systems upon interaction design ended up converging around a trajectory through the relevant areas:

Expectations from Prior Physics

Sutherland, The Ultimate Display;
Blom, Virtual Affordances: Pliable User expectations;
Golonka & Wilson, Ecological Representations;
Gibson, The Ecological Approach to Visual Perception

People build familiarity with ordinary materials and objects, the “interactional grammar” of physical affordances. This becomes a challenge when computed environments can diverge from that familiarity: users expect certain behaviors from the start, which confines the designer’s hand (and mind) to providing only what aligns with expectation. On the other hand, leveraging these expectations while selectively breaking them with confined novel behaviors provides opportunities to slowly wean users away from their ossified habits.

Coherence and Coordination of Phenomena

Gibson, The Ecological Approach to Visual Perception;
Piaget, The Origins of Intelligence in Children;
Chemero, Radical Embodied Cognition;
Sutherland, The Ultimate Display

This familiarity is built up via repeated exposure to consistent observed physical behavior, where covariance of stimuli unifies the parallel streams of input into singular percepts. Relevantly, this incentivizes designers to provide multiple sensory responses for a given phenomenon or user action, fleshing out the validity of the subjective experience. A difficulty, however, is that without coordination between designers across experiences, the preponderance of divergent interactional grammars and hypermaterial depictions might inhibit users from developing overarching familiarities.

Engagement with New Physics

Sutherland, The Ultimate Display;
Piaget, The Origins of Intelligence in Children;
Disessa, Knowledge in Pieces

The usefulness of an environment is a function of its physical capacities, and thus the expanded set of hyperphysics within simulated systems supports, in principle, a proportionally-expanded usefulness. Direct bodily engagement is possible not only with simulations of micro- and macroscopic phenomena, but even more esoteric and unorthodox phenomena not directly realizable within our universe’s laws. This vastly expands the space of interaction design, and rewards open and explorative mindsets and design approaches. Our neuroplasticity enables us to attune ourselves to the nuances of whatever our senses happen to provide, and this expanded space of computer-mediated experience supports untold applications of that plasticity.

Affordances

Gibson, The Ecological Approach to Visual Perception;
Dourish, Where the Action Is: The Foundations of Embodied Interaction

Hyperphysics supports novel behaviors that have no necessary analogue in ordinary physics. Thus the entire structural, visual, and dynamic “language” of ordinary affordances is inadequate to fully cover all possible transformations and behaviors that hyperphysics supports. Even fundamental material behaviors are not in principle guaranteed. However, the greater space of possible physical behaviors offers opportunities to create new affordances with new interactional grammars that can take advantage of the specificity of computing power and the precise motion tracking of the body.

Embodiment; Homuncular Flexibility

Heersmink, The Varieties of Situated Cognitive Systems: Embodied Agents, Cognitive Artifacts, and Scientific Practice;
Won et al, Homuncular Flexibility in Virtual Reality;
Leithinger et al, Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration

The body’s relationship to tools is often quite fluid: prolonged use allows tools to be mentally fused with the body, and engagement with the world is perceived at the tool’s interface with the world rather than the body’s interface with the tool. The ability to depict the body in novel and hyperphysical ways, while still mapping the depicted body’s movement to the base movements of the user, enables startlingly compelling computer interfaces, such as increasing the number of limbs or changing the physical form of the hands to better interface with a task.

Tools for Thought

Sutherland, The Ultimate Display;
Gooding, Experiment as an Instrument of Innovation: Experience and Embodied Thought

Ideally, the increased adoption and bodily engagement with hyperphysics will provide us with new tools to understand and represent the world around, inside, and ahead of us. As the scale and complexity of problems experienced by humanity grows, it is critical to augment our problem-solving ability, a large part of which involves the creation of new forms of representation, ideally giving us a better grasp on the most fundamental questions.

Prototypes

Working within the Unity 3D authoring tool, I designed a series of VR prototypes to evoke, explore, and manifest areas revealed in my review of the literature. I was already familiar with certain physics libraries and rendering methods affording different materialities and behaviors that I identified as likely possessing provocative elements. I surveyed interactional dynamics arising from novel computed physics, implementing hyperphysical environments and directly experiencing the resultant dynamics. The goal was to research these areas further by designing and prototyping spatial structures myself, out of which might emerge conclusions relevant to the areas identified in my literature review.

Raymarched Hands ・ Visually Merging with Objects

Raymarching signed distance fields (SDFs) is a method of rendering 3D shapes without using polygons. Each object is defined by its geometric primitive, each contributing to a shared surrounding “distance field”, and an isosurface is rendered at a given radius away, visually fusing any objects that come within 2r of each other. This property produces very organic forms, where any collision smoothly joins the objects into a melted, singular mass.
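A minimal sketch of the math behind that fusing, assuming simple sphere primitives and the standard polynomial smooth-minimum (the helper names are my own, not the raymarching toolkit’s API):

```csharp
using UnityEngine;

// Sketch of how two sphere SDFs blend into one "melted" surface.
// Standard SDF formulas, shown for illustration only.
public static class SdfBlend
{
    // Signed distance from point p to a sphere of radius r centered at c.
    public static float Sphere(Vector3 p, Vector3 c, float r)
    {
        return Vector3.Distance(p, c) - r;
    }

    // Polynomial smooth minimum: blends two distances over a region of
    // width k, so nearby surfaces fuse organically instead of intersecting.
    public static float SmoothMin(float a, float b, float k)
    {
        float h = Mathf.Clamp01(0.5f + 0.5f * (b - a) / k);
        return Mathf.Lerp(b, a, h) - k * h * (1f - h);
    }

    // The rendered isosurface is the set of points where this returns 0;
    // raising k makes separate fingertips read as a single doughy mass.
    public static float Scene(Vector3 p, Vector3 thumbTip, Vector3 indexTip, float r, float k)
    {
        float dThumb = Sphere(p, thumbTip, r);
        float dIndex = Sphere(p, indexTip, r);
        return SmoothMin(dThumb, dIndex, k);
    }
}
```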

I had seen this technique used for external objects, but never for the rendering of the hands themselves, and I suspected it might be quite compelling. Initially I added SDF spheres to the tips of my thumb and index finger within the Attachment Hands section of the hierarchy, childed to my thumb- and index-tip, but found that the raymarching script needed them to be at the same level in the hierarchy. Instead I wrote a simple script to have each SDF sphere inherit the global transform coordinates of a specific body part, allowing me to place them anywhere in the object hierarchy.
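Something like the following, assuming a target Transform for each tracked fingertip (names here are illustrative, not the project’s actual code):

```csharp
using UnityEngine;

// Minimal sketch of the "follow a body part" script described above:
// the SDF sphere lives anywhere in the hierarchy but copies the
// world-space transform of a target (e.g. a tracked fingertip) each frame.
public class FollowBodyPart : MonoBehaviour
{
    public Transform target;   // e.g. the tracked index fingertip

    void LateUpdate()
    {
        if (target == null) return;
        transform.position = target.position;   // inherit global position
        transform.rotation = target.rotation;   // inherit global orientation
    }
}
```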

After the two fingertips were created, I went ahead and added the other eight, populating the world with a sphere and a couple of cylinders so I could move up close and directly see the SDF behaviors. This was immediately mesmerizing: changing the effective isosurface radius transformed my hands from separate spheres, overlapping only within close proximity, into a singular doughy mass where the underlying proprioceptive motion remained intact, if slightly masked.

I added spheres for the rest of my finger joints and knuckles, and found that it felt slightly more dynamic to only include the joints that I could move separately. My knuckles weren’t adding to the prehensility and only added mass to the lumpiness, so I removed them.

Before starting, I envisioned that this rendering technique might allow hands where the spatial UI (SUI) was fused with the body somehow, or was emitted out of the body directly, or where the body could fuse with the external world. I imagined some UI element being stored in my palm, and only exiting when my hand enters a certain state.

I initially added a disk-aspect-ratioed cylinder as my palm, and situated a sphere embedded at its center, to be drawn out when my palm rotates to face me. However, the blending between the solid disk and the sphere was too great, bulging too much at the center. I instead tried a torus as my palm, as it leaves a circular hole that the sphere could fit in. Secondarily, when the sphere floats above the palm, the torus offers negative space behind the sphere which provides extra visual contrast, heightening the appearance of the floating UI. By rising above the palm, the sphere delineates itself from its previously-fused state, spatially and kinetically demonstrating its activeness and availability. I expect this UI (and thus the overall form holding it) to change from the placeholder sphere to something with more direct utility. However, this materiality-prototype serves as a chance to engage with the dynamics of these species of meldings without immediate application. The sphere is pokable and pinchable, perhaps the type of object that could be pulled away from its anchor and placed somewhere in space (expanding into a larger set of UI elements).

On my right hand, instead of a prehendable object, I wished to see how something closer to a flat UI panel might behave amidst the hand. To remain consistent, I again chose the torus as the palm, and embedded a thin disk in its center that, when the palm faces me, rises above the palm a few centimeters. While docked, the restrained real-estate of the torus again provides the panel breathing-room such that the pair do not, in their fusing, expand to occupy a disproportional volume. In its current implementation, the panel remains the same size through its spatial translation. I’d like to experiment with changing its size during translation such that in its active state it is much larger and might perhaps be removable and exist apart from the hand as a separate panel.
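The palm-facing activation shared by both hands can be sketched as a simple dot-product test, assuming palm, head, and element transforms (thresholds and names are my own placeholders):

```csharp
using UnityEngine;

// Sketch of the palm-activated UI described above: when the palm rotates
// to face the head, an element embedded in the palm rises a few
// centimeters above it; when the palm turns away it sinks back in.
public class PalmDockedElement : MonoBehaviour
{
    public Transform palm;       // palm anchor (e.g. center of the torus)
    public Transform head;       // the user's head / camera
    public Transform element;    // the docked sphere or panel
    public float raisedHeight = 0.05f;   // ~5 cm above the palm when active
    public float speed = 8f;

    float current;               // 0 = docked, 1 = raised

    void Update()
    {
        // How directly does the palm normal point at the head?
        Vector3 toHead = (head.position - palm.position).normalized;
        float facing = Vector3.Dot(palm.up, toHead);

        // Treat "mostly facing me" as the active state.
        float target = facing > 0.6f ? 1f : 0f;
        current = Mathf.MoveTowards(current, target, speed * Time.deltaTime);

        element.position = palm.position + palm.up * (raisedHeight * current);
    }
}
```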

These experiments begin to touch on this novel materiality, and point at ways that UI might be stored within the body, perhaps reinforcing an eventual bodily identification with the UI itself. Further, the ways that grabbed objects fuse with the hand mirrors how the brain assimilates tools into its body schema, and begins to more directly blur the line between user and tool, body and environment, internal and external.

What are the implications of such phenomena? Could a future SUI system be based around body-embeddedness? What would distinguish its set of activities from surrounding objects? What body parts are most available to embeddedness? The arms are arguably the most prehensile part of the body, and most often within our visual fields, so their unique anchorability is easy to establish.

In future explorations of this rendering technique, I aim to expand on the direct mapping of user motion to behavior of objects in the visual field. How might the sphere behave as an icon of a tool that adheres itself to the fingertip directly, becoming the tool itself (rather than merely a button to enter that tool mode)? How might scaling of objects afford svelter embeddedness before scaling to useful external sizes?

I’m keen to explore more direct mappings of hand motions to the movement of rendered structures. Might elements of the SUI be mapped to the fingers in a way that the prehensility allows a novel menu-maneuvering? Is proprioception loose enough that one would feel identified with the hand-driven motion of non-hand-like structures?

NVIDIA Flex ・ Parameter-Space Search via Playful Engagement

Though the raymarching toolkit produced many visual behaviors, the physical dynamics were unchanged from normal Leap Motion hand grabbing, with no collision or kinetic behaviors. In the search for a more physically reactive system, I discovered NVIDIA’s Flex particle simulation library for Unity. Flex allows for many thousands of colliding particles, and as long as they are all the same radius, they can be meshed into flexible fabrics, enclosed volumes, rigid and soft bodies, or remain free-flowing fluids.

Without a strict plan of attack, I began placing particle emitters on my head and hands, anywhere that offered me manual control over the placement of the particles in realtime. Immediately I found the physical dynamics captivating, spending multiple hours tuning the available parameters. As the particles were emitted from my palm and came to collide with a flat surface I erected, I changed the parameters of the simulation to greatly increase friction, causing the particles to bunch up immediately upon colliding with each other or the surface, behaving more like a highly viscous goo.

Turning the friction back to zero and the damping up high produced an effect comparable to the surrounding “medium” of the air being highly viscous itself, rapidly slowing down any fast-moving particles. In combination with turning the gravity to zero, the intense drag caused the particles to bunch up perhaps half a meter away from my palm, just floating in mid-air. However, bringing the palm closer caused the new, slightly-less-damped particles with velocity remaining to impact the already-stationary pileup, scattering the static chunk at the point of impact. Applying this effect to different portions of the chunk enabled me to sculpt its form, as I gained finer control and sensitivity to the nuances of its physical behavior.
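The materials described so far differ only in a handful of scalar parameters. A sketch of how such presets might be organized (my own field names and values, not Flex’s actual API):

```csharp
// Illustrative parameter presets for the particle behaviors described
// above; the point is how a few scalars span very different materials.
[System.Serializable]
public struct ParticlePreset
{
    public float friction;   // particle-particle / particle-surface friction
    public float damping;    // drag from the surrounding "medium"
    public float gravity;    // vertical acceleration (negative pulls upward)
}

public static class ParticlePresets
{
    // High friction: particles bunch up on contact like viscous goo.
    public static readonly ParticlePreset Goo =
        new ParticlePreset { friction = 1.0f, damping = 0.1f, gravity = -9.8f };

    // Zero gravity plus heavy damping: emitted particles slow and hover
    // about half a meter from the palm, sculptable by later impacts.
    public static readonly ParticlePreset FloatingClay =
        new ParticlePreset { friction = 0.0f, damping = 5.0f, gravity = 0.0f };
}
```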

Wanting to experience the possible fabric dynamics, and at the suggestion of Golan Levin, I attached ropes to the tips of my fingers, initially hoping that I could connect one fingertip to another. Being unable to do that, I found that, even with one end of each rope loose, the dynamics were nevertheless immediately captivating. With zero gravity, turning the friction up quite high caused the ropes to inescapably tangle with each other, and with such long loose ends floating freely, their pendulous, tentacular behavior was so captivating to play with as to be a distraction from its own development, which is likely indicative of some innate positive quality.

Turning the gravity negative pulled the ropes vertically away from my fingers, and with damping turned up they immediately looked like kelp floating in the slow currents of the sea. By “submerging” my hands under the surface/ground, only the “kelp” was visible, making its behavior and its reactions to my movement the only visible objects in my field of view. I was able to puppet the “kelp” around, imagining myself the “currents” and driving the sway and twist of the vertically-pulled ropes. Thus the physical reactivity revealed an application in the maneuvering and puppeteering of objects when the driving hands are hidden from view.

It was curious to notice how direct engagement with the phenomena, combined with progressive altering of the parameters driving the physics simulation, was enough to elicit possible applications. In implementing large-scale novel physics simulations, is the only way to identify possible applications to directly interact with the dynamics and observe the emergent reactions? If so, it points to the usefulness of play as a research method, especially when mapping out undocumented parameter-space, and to how building intimate familiarity with materials helps one better conceive of their possible applications.

Unity Visual Effect Graph ・ Plasticity with Non-local Mappings

In the middle of my progress in the other materialities, Unity introduced a prerelease version of a new tool, their Visual Effect Graph, a node-based editor of compute shaders capable of performantly simulating the behavior of millions of particles on the graphics processing unit. The early demos of this simulation method were engrossing, and I knew it would provide a very flexible testbed for visual and physically-reactive phenomena.

After gaining facility with the tool by exploring its signal flow outside of VR and motion control, and being initially unable to bring in my Leap-Motion-tracked hands, I used my Oculus Touch controllers as input devices. I explored the consequences of over-mapping hand input to environmental output, intentionally choosing esoteric and arbitrary mappings as a test of how flexible my brain and body were in engaging with this highly sensitive system and becoming attuned to its peculiarities and dynamics.

I controlled the scale, intensity, and drag of a turbulent vector field affecting one million particles via the three rotational axes of my hands. Initially quite difficult to control, I quickly found pockets of orientation in parameter space that produced engaging particle behavior, and learned to return precisely to those pockets via muscle memory. Once there, exploring the adjacent parameter space was easy, and I could shepherd the particles with high precision.
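A sketch of that mapping, assuming a tracked controller transform and hypothetical exposed VFX Graph properties (“TurbulenceScale”, “TurbulenceIntensity”, “Drag”):

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Sketch of the deliberately arbitrary mapping described above: the three
// rotational axes of a tracked hand drive the scale, intensity, and drag
// of a turbulence field in a Visual Effect Graph. Property names are
// assumptions, not the project's actual graph.
public class HandToTurbulence : MonoBehaviour
{
    public Transform hand;       // tracked controller transform
    public VisualEffect vfx;     // the million-particle effect

    public Vector2 scaleRange = new Vector2(0.1f, 5f);
    public Vector2 intensityRange = new Vector2(0f, 10f);
    public Vector2 dragRange = new Vector2(0f, 2f);

    void Update()
    {
        // Euler angles come back in [0, 360); normalize each axis to [0, 1].
        Vector3 e = hand.rotation.eulerAngles;
        float pitch = e.x / 360f;
        float yaw   = e.y / 360f;
        float roll  = e.z / 360f;

        vfx.SetFloat("TurbulenceScale", Mathf.Lerp(scaleRange.x, scaleRange.y, pitch));
        vfx.SetFloat("TurbulenceIntensity", Mathf.Lerp(intensityRange.x, intensityRange.y, yaw));
        vfx.SetFloat("Drag", Mathf.Lerp(dragRange.x, dragRange.y, roll));
    }
}
```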

I’m interested in the rapidity of neuroplasticity even when faced with a relatively arbitrary mapping. Maximizing the number of simultaneously tracked+mapped dimensions of tracked input allows for high expressiveness, at the cost of a learning curve.

I wonder how designers might come to trust users more to devote time to building facility with novel spatial phenomena+tools. I fear that the UI trend toward immediate intuitiveness may hamper the development of virtuosity, and I’m interested in exploring the space of novel spatial interactions that afford skill development.

As I became accustomed to my agency over these particles, I began thinking about the subjective experience of having a body, and what the body means in the space of hyperphysics. In our ordinary physics, with our genetically-determined physiologies, our connection is continuous with the objects (the body) over which we have agency. This connection can be augmented, in the case of embodiment described earlier, with prehended external objects that we incorporate into our body schema. However, in this particle system there is no analogous, humanoid representation of the body, yet there is a direct correspondence between proprioception and certain visual structures that behave coherently and whose behavior can be correlated with proprioceptive feedback. This reactivity is explicitly nonlocal, and lacks even the rudiments of a 1:1 mapping of hand position to the position of coherent structures in the visual field. Yet a sense of identification with these phenomena develops, as if they are me. I suspect that one key missing factor is that the mapping is almost one-way: I send output motions, and I receive by default only my proprioception and the visually rendered behaviors in the scene. If I were able to receive sensory feedback from the phenomena happening where I am enacting my agency, perhaps then those visual structures would be cemented as subjectively my body, and not mere objects being puppeted.

GANbreeder ・ Nonphysical Parameter Space Navigation

Having previously only considered the interactional+intuitional consequences of unorthodox computed laws of physics, my traversal of BigGAN’s high-dimensional space on ganbreeder.app revealed ways that my spatial interaction thinking was limited by being focused on physics simulations.

BigGAN’s latent space is massive, and yet through hours of stepwise exploration and interpolation I noticed that I built intuitions about its structure and tendencies, learning to avoid being “trapped” in attractors of increasing visual artifacting.

This space has interactional dynamics that don’t involve computing a physics system but are nevertheless available for intuition to develop around. This in some ways reminds me of mathematical notation, which I have written about previously.

Future Work ・ Areas Unexplored So Far

I am inclined to explore how hyperphysical systems might provide parallel sensory inputs back to the body beyond what is shown visually. This might be accomplished with sound, but I also wish to experiment with sensory remapping using the haptic engine on an Apple Watch, perhaps mapping the moment and velocity of a fingertip collision to the intensity of a haptic tap on my wrist, and seeing whether, through repetition, that remapping can come to be subjectively felt on the fingertip.
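The Unity side of that remapping could be as simple as converting collision speed into a normalized intensity that is then forwarded to the wrist-worn device; the collision handling below is standard Unity, while the forwarding call is a placeholder for whatever bridge to the watch ends up being used:

```csharp
using UnityEngine;

// Sketch of the proposed sensory remapping: when a tracked fingertip
// collides with something, convert the impact speed into a normalized
// haptic intensity for a wrist-worn device.
public class FingertipHapticRemap : MonoBehaviour
{
    public float maxSpeed = 2f;   // m/s mapped to a full-strength tap

    void OnCollisionEnter(Collision collision)
    {
        // relativeVelocity encodes how hard the fingertip struck the surface.
        float speed = collision.relativeVelocity.magnitude;
        float intensity = Mathf.Clamp01(speed / maxSpeed);

        SendWristTap(intensity);  // hypothetical bridge to the haptic device
    }

    void SendWristTap(float intensity)
    {
        // Placeholder: forward the intensity over a network or plugin
        // to the wrist-worn haptic hardware.
        Debug.Log($"Wrist tap intensity: {intensity:F2}");
    }
}
```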

A fellow graduate student, Cyrus, has working hardware that provides arbitrary haptic feedback forces, up to 50 N, via an electromagnetically-levitated and -actuated system. His enthusiasm is infectious, and he is eager to collaborate. Currently his demos are confined to 2D screens, and the illusion is compelling even with those constrained visual depictions. I’m interested in furthering our collaboration by incorporating a VR headset such that arbitrary visuals may be produced to synchronize with the felt haptics.

Further, there is an opportunity for collaboration with Aman Tiwari, a senior with expertise in neural-network-corralling, on a hand-tracking-based interface for navigating the parameter space of the visual output of a Generative Adversarial Network’s image generation (such as BigGAN and GANbreeder). This is especially challenging because the dimensional mapping-down is on the order of seven orders of magnitude. I’m especially curious about this application of embodied fluency because this is an explicitly unphysical system, yet it is equally “traversable” and may allow the demonstration of key insights regarding the fluent manipulation of very-high-dimensional datasets.

Relevant Literature

Blom, K. J. (2007). On Affordances and Agency as Explanatory Factors of Presence. Extended Abstract Proceedings of the 2007 Peach Summer School. Peach.

Blom, K. J. (2010). Virtual Affordances: Pliable User expectations. PIVE 2010, 19.

Chemero, A. (2009). Radical Embodied Cognitive Science. MIT Press.

Clark, A. (2015). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford University Press.

diSessa, A. A. (1988). Knowledge in Pieces.

Dourish, P. (2004). Where the Action Is: The Foundations of Embodied Interaction. MIT press.

Engelbart, D. (1962). Augmenting Human Intellect: A Conceptual Framework. Stanford Research Inst. Menlo Park CA.

Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Psychology Press.

Golonka, S., & Wilson, A. D. (2018). Ecological Representations. bioRxiv, 058925.

Gooding, D. C. (2001). Experiment as an Instrument of Innovation: Experience and Embodied Thought. In Cognitive Technology: Instruments of Mind (pp. 130–140). Springer, Berlin, Heidelberg.

Heersmink, J. R. (2014). The Varieties of Situated Cognitive Systems: Embodied Agents, Cognitive Artifacts, and Scientific Practice.

Leithinger, D., Follmer, S., Olwal, A., & Ishii, H. (2014, October). Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration. In Proceedings of the 27th Annual ACM Symposium on User interface Software and Technology (pp. 461–470). ACM.

Maravita, A., & Iriki, A. (2004). Tools for the body (schema). Trends in cognitive sciences, 8(2), 79–86.

Piaget, J., & Cook, M. (1952). The Origins of Intelligence in Children (Vol. 8, No. 5, p. 18). New York: International Universities Press.

Rybarczyk, Y., Hoppenot, P., Colle, E., & Mestre, D. R. (2012). Sensori-motor appropriation of an artefact: a neuroscientific approach. In Human Machine Interaction — Getting Closer. InTech.

Smith, R. B. (1986). Experiences with the alternate reality kit: an example of the tension between literalism and magic. ACM SIGCHI Bulletin, 17(SI), 61–67.

Sutherland, I. E. (1965). The Ultimate Display. Proceedings of IFIP Congress, 506–508.

Won, A. S., Bailenson, J., Lee, J., & Lanier, J. (2015). Homuncular Flexibility in Virtual Reality. Journal of Computer-Mediated Communication, 20(3), 241–259.
