Invisible Fields

Rishab Jain
Published in The Mechanical Eye
Oct 21, 2020 · 8 min read

The third tutorial in this series is aimed at introducing new sources of knowledge into the site of significance.

AgRP Improv Project by Dana Karwas | Image Credit: Dana Karwas

Building on the previous tutorials, the digital site is now investigated through a simulation context outside of its current (or known) human space and time. The aim of this tutorial is to further illuminate and amplify the context of the site into a performative or behavioral one. What does it feel like to be in your site at a certain moment in time? What is this time? Where? What are the parameters of the scene? What new sources of knowledge can be introduced in this context? What is visible and what is invisible to the viewer?

In Meeting the Universe Halfway, Karen Barad coins the term ‘intra-action’ as a counterpoint to interaction, “with intra-action not as an inherent property of an individual or human to be exercised, but as a dynamism of forces in which all designated ‘things’ are constantly exchanging and diffracting, influencing and working inseparably” (Whitney Stark, 2016).

GOALS

The goal of this tutorial is to further illuminate and amplify the context of the digitized site into a performative or behavioral context. This means building out a context that you want to be situated within. Using simulation and analysis tools, change the experience of your site by adding at least one Invisible Field.

CONTENT

  • Invisible Fields Introduction
  • Part 1: Computer Vision / Intra-Action using Max/MSP
  • Part 2: Intelligence / Decision Making — Analysis of images using RunwayML
  • Part 3: Simulation (atmosphere, wind, ecosystem, particles)
  • Other Invisible Vision Options

TOOLS

  1. Unity 2019+
  2. Max/MSP
  3. Runway ML
  4. Houdini FX

RESOURCES

  • My Apps CCAM Suite is a virtual desktop service provided by Yale University with custom software specific to CCAM (includes ReCap and Maya).
  • CCAM Remote Workstations are extensions of our computer lab and media lab physical workstations. There are 4 powerful workstations set up that can be booked in 4-hour windows (these have Unity and Maya).
  • Unity (Student and Education plan) can be installed at Studio in YSOA.
  • Unity Developer Docs
  • Oculus Rift S / Quest 2 / HTC Vive, provided by CCAM.
  • Oculus Developer Docs
  • Max: download a 30-day free trial for Mac or PC

Part 1: Max/MSP

Max (formerly Max/MSP/Jitter) is an object-oriented programming interface for creating your own interactive software. It is a way to connect inputs and outputs using a series of objects, patch cords, and controls. Essentially, when building in Max you are creating little machines that do stuff for you with various inputs and outputs. It is flexible, easy-to-customize software and has historically been a favorite of architecture students looking to introduce interactions and behaviors into their projects.

For the Mechanical Eye course, this software will be used in the context of computer vision (via the cv.jit and XRAY packages) as a way to see further into your site.

A few things to know:

  1. Max files are called patches — you will build a Max patch of your own. There can be one primary patch, and sub-patches can be stored within the main project patch.
  2. Max objects are connected with patch cords. Objects can be fed various inputs from cameras, microphones, sensors, videos, images, etc. Objects are very powerful and can run analysis, control time, and even create new media.
  3. The help files are incredible, and you can copy entire patches and paste them straight into your own patch, so it is very friendly software for trying out new ideas.

For this tutorial we will use a package called cv.jit, which needs to be installed through the built-in package manager. You should also install the XRAY package.

For this tutorial you will need a computer with a webcam or built-in camera. If you don’t have one, you can use a .mov file instead (just put it in a folder on your desktop that you can find). I recommend saving the movie file in the same folder as your Max patch.

  1. Open up Max
  2. Install the cv.jit package and the XRAY package (File > Show Package Manager): search for cv.jit in remote packages and install it. Do the same for XRAY.

The cv.jit package is a series of computer vision modules for seeing and analyzing from the computer. The XRAY package is a powerful tool for analysis and OpenGL geometry.

3. Hit the install button, then launch cv.jit from the package manager window.

4. Scroll down and check out some of the descriptions. Choose a few to experiment with.

5. Re-save the cv.jit patch that you like and name it something else. Put it in a new folder on your machine and make sure it is saved as the Max patch file type. The patch needs to be unlocked to save and edit (toggle between locked and unlocked by pressing Command + E). Enjoy the copy-and-paste function within the help files.

6. Inputs and outputs. You will be tasked with controlling one system with another. For instance, this can be information from the camera controlling an audio file, a dataset manipulating a particle system, or learned positions controlling reflectivity levels and frame rate. Max is a powerful tool for translating data into live analysis and control.

7. Direction to control X… try to extract some numbers from the optical flow patch.

8. Find your own means of control and build a little Max patch of your own (one way to pipe those numbers into Unity is sketched below).
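One concrete way to close the loop is to send the numbers from Max into Unity over the network. The sketch below is a minimal, hypothetical receiver: it assumes your Max patch ends in a [udpsend 127.0.0.1 7400] object fed a message like /flow $1, so the port and OSC address here are placeholders to match whatever you build.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Hypothetical sketch: receive a single float from a Max patch ending in
// [udpsend 127.0.0.1 7400] fed a message like "/flow $1". The port and the
// OSC address are placeholders — match them to your own patch.
public class MaxOscReceiver : MonoBehaviour
{
    UdpClient client;
    public float latestValue; // last number received from Max

    void Start()
    {
        client = new UdpClient(7400); // listen on the port Max sends to
    }

    void Update()
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (client.Available > 0)
        {
            byte[] packet = client.Receive(ref remote);
            latestValue = ParseSingleFloat(packet);
        }
        // Example control: let the Max data drive this object's rotation.
        transform.Rotate(0f, latestValue * 90f * Time.deltaTime, 0f);
    }

    // Minimal OSC parse: skip the address and the ",f" type tag (each padded
    // to a 4-byte boundary), then read one big-endian float32 argument.
    static float ParseSingleFloat(byte[] d)
    {
        int i = 0;
        while (d[i] != 0) i++;      // end of address string
        i = (i / 4 + 1) * 4;        // pad to 4-byte boundary
        while (d[i] != 0) i++;      // end of type-tag string
        i = (i / 4 + 1) * 4;        // pad to 4-byte boundary
        byte[] f = { d[i + 3], d[i + 2], d[i + 1], d[i] }; // big- to little-endian
        return BitConverter.ToSingle(f, 0);
    }

    void OnDestroy()
    {
        if (client != null) client.Close();
    }
}
```

Attach the script to any GameObject. The same latestValue could just as easily drive a particle system, reflectivity, or frame rate, as described in step 6.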

Output: Play around with the software and create a patch. Take a screenshot for your Medium post.

Part 2: Runway ML

RunwayML is an out-of-the-box machine learning toolkit. With an easy-to-use interface and both remote and local computation, the software is a portal for loading and exploring various trained artificial-intelligence models. Most of the popular generative models, like style transfer and image GANs, are easily accessible.

We can use some of the models within RunwayML for the study of invisible fields. It gives us the ability to use machine vision to pick up features and analyze an image in ways we weren’t able to before, revealing unseen forces.

In the following, we will test out Machine Learning models to extract data out of our image datasets.

  1. Style Transfer
  2. Depth Analysis
  3. Motion Capture

Install:

Follow the steps on the website to sign up and download. It works on both Mac and Windows, and a cloud-based web version is available as well: https://app.runwayml.com/signup

Download and install the desktop application and open it. Certain models require Docker to be installed, so download Docker alongside it on your system.

You can explore existing models in the browser tab. We will begin by searching for adaptive-style-transfer.

Style Transfer

Add the Adaptive Style Transfer model, native to RunwayML, to your workspace, and download the model to run locally. Style transfer is the process of training a model on a certain style, like the paintings of Van Gogh, and then transferring that learned style onto an input image or video.

We can run style transfer on your original textures to create new ones, which can later be remapped onto the scene.

Still from DreamscapeAI
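One way to get a style-transferred result back into the scene is to export it from RunwayML as an image and load it onto a material at runtime. A minimal sketch, assuming a PNG exported to a path of your choosing (the path below is a placeholder):

```csharp
using System.IO;
using UnityEngine;

// Minimal sketch: load a style-transferred PNG exported from RunwayML
// and apply it as the main texture of this object's material.
public class StyledTexture : MonoBehaviour
{
    public string imagePath = "Assets/Exports/styled_texture.png"; // placeholder path

    void Start()
    {
        byte[] bytes = File.ReadAllBytes(imagePath);
        var tex = new Texture2D(2, 2); // size is replaced by LoadImage
        tex.LoadImage(bytes);          // decodes the PNG/JPG into the texture
        GetComponent<Renderer>().material.mainTexture = tex;
    }
}
```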

Depth Analysis

Using the DenseDepth model, we can use the trained network to predict depth in the image and create a depth map.

Motion Capture

PoseNet, developed by Google AI, helps detect and track motion data of human movement.
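While a model such as PoseNet is running locally, RunwayML also exposes it over the network (check the model’s Network tab in the app). The sketch below is a hedged example of querying it from Unity over HTTP; the port, route, and JSON field names are assumptions — replace them with whatever your Network tab shows.

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Hedged sketch: query a locally running RunwayML model over HTTP.
// The port, route, and JSON field names below are assumptions — confirm
// them in the model's Network tab inside the RunwayML app.
public class RunwayQuery : MonoBehaviour
{
    public Texture2D inputImage; // assign a read/write-enabled texture

    IEnumerator Start()
    {
        string b64 = System.Convert.ToBase64String(inputImage.EncodeToJPG());
        string json = "{\"image\": \"data:image/jpeg;base64," + b64 + "\"}";

        var req = new UnityWebRequest("http://localhost:8000/query", "POST");
        req.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
        req.downloadHandler = new DownloadHandlerBuffer();
        req.SetRequestHeader("Content-Type", "application/json");
        yield return req.SendWebRequest();

        // For PoseNet the response is JSON keypoints; log it for inspection.
        Debug.Log(req.downloadHandler.text);
    }
}
```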

Export:

Each model has different export options (images or video), and some also offer the option of exporting a JSON file.

Output: Play around with the software and create images or video. Try to link it into your Unity scene. Take a screenshot for your Medium post.

RunwayML + Unity

RunwayML has recently released a plugin for Unity. The documentation and setup instructions can be found on their GitHub page.

https://github.com/runwayml/RunwayML-for-Unity

Part 3: Simulation Tools

Simulation tools help create dynamic environments that enhance our VR scene. Conditions like wind, smoke, trees, etc. can be created within our scene using particle and other generative systems. For this exercise, try adding at least one particle system to your scene.

Unity Particle System:

The Particle System in Unity, known as Shuriken, is a robust particle-effect system where you can simulate moving liquids, smoke, clouds, flames, magic spells, and a whole slew of other effects. The system makes it possible to represent effects that are normally difficult to portray using meshes or sprites, since those effects are often fluid and intangible in nature.

https://learn.unity.com/tutorial/introduction-to-particle-systems#5cf7ca71edbc2a09d0290dc8
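The tutorial above works through the editor UI, but the same system can also be configured from a script. A minimal sketch for a soft, smoke-like emitter (all values are arbitrary starting points):

```csharp
using UnityEngine;

// Minimal sketch: configure a smoke-like Shuriken particle system from code.
// Attach to a GameObject that has a ParticleSystem component.
[RequireComponent(typeof(ParticleSystem))]
public class SmokePuff : MonoBehaviour
{
    void Start()
    {
        var ps = GetComponent<ParticleSystem>();

        var main = ps.main;
        main.startLifetime = 4f;   // seconds each particle lives
        main.startSpeed = 0.5f;    // slow drift upward
        main.startSize = 2f;
        main.startColor = new Color(0.5f, 0.5f, 0.5f, 0.4f); // translucent grey

        var emission = ps.emission;
        emission.rateOverTime = 12f; // particles per second

        var shape = ps.shape;
        shape.shapeType = ParticleSystemShapeType.Cone; // emit in a loose plume
        shape.angle = 15f;
    }
}
```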

Smoke And Fire

Using a VFX Graph asset downloaded from the Asset Store, we can create particle simulations.

Environmental Fog

Fog and mist can be created using a volume workflow, as shown below.
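The volume workflow applies to the Scriptable Render Pipelines (HDRP/URP). If your project uses the built-in render pipeline instead, a quick scripted alternative is Unity’s global fog via RenderSettings — a minimal sketch:

```csharp
using UnityEngine;

// Minimal sketch: enable simple distance fog through the built-in render
// pipeline's RenderSettings. Color and density are arbitrary starting values.
public class SimpleFog : MonoBehaviour
{
    void Start()
    {
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.ExponentialSquared;
        RenderSettings.fogColor = new Color(0.75f, 0.77f, 0.80f); // pale mist
        RenderSettings.fogDensity = 0.02f; // raise for thicker fog
    }
}
```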

Add Terrain, Trees and Wind Zone

We can make assets in the scene like terrain, trees, and particle systems. These can then be made dynamic using a Wind Zone to simulate wind effects.

Terrain:

Creating landscapes and ground for your scene is easy using the Terrain tool in Unity. Check out the video below to understand what this tool can do.
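If you prefer to work in code, a terrain can also be generated procedurally. The sketch below builds a rolling landscape from Perlin noise; the resolution, size, and noise scale are arbitrary starting values:

```csharp
using UnityEngine;

// Minimal sketch: generate a rolling terrain from Perlin noise in code,
// as an alternative to sculpting with the Terrain tool.
public class NoiseTerrain : MonoBehaviour
{
    void Start()
    {
        int res = 129; // heightmap resolution (must be 2^n + 1)
        var data = new TerrainData();
        data.heightmapResolution = res;
        data.size = new Vector3(200f, 25f, 200f); // width, max height, length

        float[,] heights = new float[res, res];
        for (int y = 0; y < res; y++)
            for (int x = 0; x < res; x++)
                heights[y, x] = Mathf.PerlinNoise(x * 0.03f, y * 0.03f);

        data.SetHeights(0, 0, heights);
        Terrain.CreateTerrainGameObject(data); // adds a Terrain to the scene
    }
}
```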

Trees:

Using GameObject > 3D Object > Tree, we can quickly create a tree asset. The thickness of the trunk and branches can be edited, while leaves are populated and textured using images. Using wind, we can make them sway gently.

Wind Zones:

To create a Wind Zone GameObject directly, go to Unity’s top menu: GameObject > 3D Object > Wind Zone. You can also add the Wind Zone component to any suitable GameObject already in the Scene (menu: Component > Miscellaneous > Wind Zone). The Inspector for the Wind Zone has a number of settings to control its behavior.
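The same Wind Zone can be created and tuned from a script, which is handy if you want to vary the wind over time. A minimal sketch with placeholder values:

```csharp
using UnityEngine;

// Minimal sketch: create a directional Wind Zone from code so terrain trees
// and particle systems respond to wind. All values are starting points.
public class MakeWind : MonoBehaviour
{
    void Start()
    {
        var go = new GameObject("Wind Zone");
        var wind = go.AddComponent<WindZone>();
        wind.mode = WindZoneMode.Directional; // affects the whole scene
        wind.windMain = 0.6f;                 // base wind strength
        wind.windTurbulence = 0.3f;           // randomness
        wind.windPulseMagnitude = 0.5f;       // gust strength
        wind.windPulseFrequency = 0.25f;      // gust frequency
        go.transform.rotation = Quaternion.Euler(0f, 45f, 0f); // wind direction
    }
}
```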

Output: Take a screenshot of your scene with an environmental component for your Medium post.

Other Resources

Houdini 3D for Simulations

Houdini offers a procedural, node-based workflow for creating animations and simulations. It has a powerful engine that helps create fluid and particle simulations.

Water simulation — https://www.sidefx.com/tutorials/4-ways-to-cache-water-flip-fluid-simulation-in-houdini/

ArcGIS

GIS helps explore real-world data and visualize it at a territorial scale. For more info: https://www.esri.com/en-us/arcgis/products/arcgis-pro/overview#image5

Source: ArcGIS Online ESRI

ArcGIS is available for Yale students through https://guides.library.yale.edu/onlinemapping/AGOL


Architect + Computational Designer | Yale University ’21 + SCI-Arc ’18