Virtual Atrium, pt. 2 (weeks 6–7)

Milo Bonacci
Published in The Mechanical Eye
4 min read · Oct 21, 2020

The project picks up with the six separate hundred-picture Recap Photo models described in the previous post. Because the educational license limits each model to a maximum of one hundred photos, the initial challenge was combining the fragments into a whole. Since architectural accuracy had never been my goal with this capture, I decided to let the software make sense of the different models and put them together however it interpreted the information. The software’s interpretation of these models is unlike anything I would have done if I had in fact been scaling them, rotating them, and positioning them to be in agreement with one another. Light, shadow, and two-dimensional features become spatial in unexpected ways.

Recap Pro importing the six separate .rcm files as a ‘point cloud’

I decided to combine the models as a point cloud because there is something far more intriguing about an implied surface rather than an actual surface. In my head I was imagining the points as snow, or smoke, suspended in the air, awaiting a breeze. The steps taken were in pursuit of this vision, but also informed by my limitations as a novice with the software(s).

The resulting point cloud of the combined models, exported as a .pts file

The resulting composite from Recap is quite surprising — it seems to have six separate scales, overlapping and blurred together. The conflicting scales would be a problem if I were trying to accurately describe the physical dimensions of the space, but there is something about it existing at all scales that gets my imagination running. The resulting point cloud implies a complex surface, curving and folding in on itself like a drop of dye in water. Almost fractal in nature, it holds echoes of repeating spaces and features, reoriented and re-scaled. There are moments where the particles appear as noise with no sense of defined space, and then suddenly, from within that chaos, come moments of spatial clarity — a recognizable feature or familiar configuration. Move forward or turn your head and it’s gone. I was intrigued by these crystallized moments where things snap into focus, and by how they lose their shape as soon as you proceed. I began to imagine what it would be like to inhabit this space. Is this a room or a canyon?

There was a lot of trial and error in figuring out how to get the .pts file from Recap into a format that Unity would recognize. I eventually came across the open-source software CloudCompare, which enabled this conversion. Rhino came close, but I think the complexity of the model and the file size caused some hiccups.
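For anyone curious what that conversion boils down to, here is a minimal Python sketch of a .pts-to-.ply step. Everything here is my assumption rather than part of the actual CloudCompare workflow: the function name is made up, and I'm assuming the common ASCII .pts layout (a point count on the first line, then x y z followed by optional intensity and color on each line).

```python
# Minimal sketch: convert an ASCII .pts point cloud into an ASCII .ply
# that most 3D tools (and Unity importers) can read. Assumes the common
# .pts layout: first line = point count, then "x y z [intensity r g b]".
def pts_to_ply(pts_path, ply_path):
    with open(pts_path) as f:
        count = int(f.readline())
        # Keep only the x y z columns; drop intensity/color for simplicity.
        points = [f.readline().split()[:3] for _ in range(count)]
    with open(ply_path, "w") as out:
        out.write("ply\nformat ascii 1.0\n")
        out.write(f"element vertex {len(points)}\n")
        out.write("property float x\nproperty float y\nproperty float z\n")
        out.write("end_header\n")
        for x, y, z in points:
            out.write(f"{x} {y} {z}\n")
```

A real converter would also carry the color channels across (and a binary .ply would be far smaller for a model this size), but the basic reshaping of the data is this simple.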

The goal for this week had been to get the particles to react to, or be affected by, the presence of a viewer; however, there have been a few road bumps, so it’s not quite there yet. As of now the cloud is static, and it’s only the viewer’s movement and perspective that bring the space to life. The particles allow for some pretty spectacular views — transparency and scale shifts that are disorienting but hypnotic.

Location of the four audio sources. As of now it’s a pretty basic scene with very few elements: point cloud, directional light, plane, and (most importantly) the FPS (‘first person shooter’ — not my terminology) controller and character. The FPS controller lets you navigate the space with the arrow keys and ‘look around’ as you do. Note the Audio Source settings on the right-hand side (3D Spatial Blend, logarithmic rolloff, min/max distance…)

I began experimenting with audio objects in the model, currently in a rough demo state. For this iteration I placed four audio sources in the corners of the model, each with its own track (a deep sub-bass beat, some radio-noise pulses, and various other ambient static), and scaled the audible range relative to the model. As you explore the model visually, your position within the soundscape produces a different combination of sounds. The 3D sound shifts as you ‘turn your head’, and the balance between the sources depends on your proximity to them. These are pretty random sounds for now — the next iteration will be more intentional; perhaps micro-scale field recordings of the actual atrium space? I wonder, how many audio objects can there be? 10? 100?
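As a rough illustration of how that proximity balancing works, here is a Python sketch of logarithmic rolloff, assuming the curve commonly described for a Unity Audio Source: full volume inside the min distance, gain falling off as min/distance, and attenuation stopping at the max distance. The source names and positions below are made up for the example, not the actual scene values.

```python
import math

# Sketch of a logarithmic rolloff curve (an assumption based on the
# Audio Source settings in the screenshot): gain is 1.0 inside min_dist,
# falls as min_dist / d, and stops attenuating beyond max_dist.
def log_rolloff_gain(d, min_dist=1.0, max_dist=50.0):
    d = max(min_dist, min(d, max_dist))  # clamp to the rolloff range
    return min_dist / d

def mix_levels(listener, sources, min_dist=1.0, max_dist=50.0):
    """Per-source gains for a listener at (x, z); sources map name -> (x, z)."""
    gains = {}
    for name, (sx, sz) in sources.items():
        d = math.hypot(listener[0] - sx, listener[1] - sz)
        gains[name] = log_rolloff_gain(d, min_dist, max_dist)
    return gains

# Four corner sources, loosely mirroring the demo scene (positions invented).
corners = {
    "sub_bass": (0, 0),
    "radio":    (0, 40),
    "static_a": (40, 0),
    "static_b": (40, 40),
}
```

Standing at one corner, that source dominates while the far corners fade toward their floor level, which is exactly the "different combination of sounds at every position" effect; it also suggests the answer to the scaling question is mostly a budget concern, since each added source is just one more distance-and-gain term in the mix.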

Game view of a wispy moment. Other adjustments to the FPS included speed calibration (slowed down), gravity, and deselecting the ‘head bob’ and footstep sounds.

Here is an exploratory float-through with the placeholder sound:

Next steps: presence and time to affect the cloud… coming soon!
