Digital Look into Past and Future Events

Serge Saab · The Mechanical Eye · Dec 22, 2020

The previous posts established that the camera has a limited ‘attention.’ Cameras ascending and descending the stairs do not ‘see’ the same space; they create different photogrammetries, each a direct manifestation of the camera’s perception.

“I fed the set to the photogrammetry software and ended up with a broken model of the stairs. The software seems to struggle to create a conclusive space from the pictures and cannot do so in axonometric view. The juxtaposed models of accurate and broken space embody the lens’s ability to understand its surroundings. There is a noticeable difference between seeing the model from a first-person perspective and in axonometric.

Ascending (Left) and Descending (Right) Photogrammetries

The exercise also made me notice holes in the camera’s knowledge, literally opening round windows in the created mesh. I wonder if there is a way to trick the camera by using pictures from the photogrammetry model to create a new space that is imprisoned by the lens’s initial misunderstanding of space.”

Since the ascending camera created the most accurate tectonics of the “Sursock Stairs,” it was used as a base (the mother model). An animated person recreates the movement of a descending individual. Within the software, a camera is attached to the character’s foot: while the person looks forward through its eyes, the camera looks to the right side and back from its right leg. A video is created from the limb’s position. We make sure the digital camera perceives different areas than the ‘real’ camera (the one that created the initial photogrammetry) to investigate the ‘child’ space that will follow.

I exported the animation as a series of JPEG pictures, then fed them to the photogrammetry software.
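In Maya, the rig amounts to parent-constraining a camera to the foot joint, then playblasting the frames. Here is a minimal sketch, assuming the rig’s right foot joint is named rightFoot_jnt and the walk spans frames 1 to 240 (both hypothetical):

```python
# Minimal sketch of the leg-mounted camera. The joint name
# 'rightFoot_jnt' and the frame range are assumptions.
import maya.cmds as cmds

cam, cam_shape = cmds.camera()
cam = cmds.rename(cam, 'leg_cam')

# Aim the lens sideways/back before constraining; maintainOffset keeps
# that orientation while the camera inherits the foot's motion.
cmds.rotate(0, -90, 0, cam)
cmds.parentConstraint('rightFoot_jnt', cam, maintainOffset=True)

# With the leg camera in the active viewport, export every frame of
# the walk as a JPEG sequence to feed the photogrammetry software.
cmds.playblast(format='image', compression='jpg',
               filename='frames/leg_cam', viewer=False,
               widthHeight=(1920, 1080), percent=100,
               startTime=1, endTime=240)
```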

“Child Model” — Leg Attached Digital Camera

I anticipated that the resulting model would be a culled version of the ‘mother’ model, a sort of ‘mapped attention’ of the camera. However, I did not anticipate the registered presence of the animated person in the model: the process ‘fossilized’ a footprint on one of the stairs.

“Child Model” — Footprint

Two things arise from this experiment. The first is a confirmation of the ability to create space as a memory of attention: in this case, the camera’s memory, in the form of a video, is physically materialized in a 3D model, an alternative to video archiving. The second is the embedded presence of the digital person on the stair; the model bends inwards for the footprint. Each child model registers the story it was born out of. It is an interesting window into visual data mapping, even encryption: the resulting model holds clues but not the entire event.

— — — — — — — —

Photogrammetry of the real space — This is the small courtyard

Clues to past events exist in the mother model: tags left by trespassers were overlaid by the owners with notices stating that cameras had been installed to monitor the small courtyard. The tags on the walls resemble the fossilized footprint. The next exercise is a forensic reconstruction of the moving body that initially tagged the wall, another way for the mechanical eye to see beyond what it captures from space.

Movement Data from CensorPlay (App) recreates the tagging movements — Every cube is a position for 1/100 of a second

Using CensorPlay on an iPhone, while in VR in the space, I recreate the movement over the tags and feed the Excel data to Maya. The accelerometer data in the Excel screenshot is a portion of the 7,360 (×3) values registered over 73 seconds. For a given row of the sheet, the resultant of the three vectors x, y, z gives the next position in space.

Maya speaks Python: the algorithm creates a cube at the beginning of the tag, marking the first position. It then duplicates the cube once and moves it on the x-, y-, and z-axes as the datasheet dictates, giving the second position. Repeating that for every row reveals the trajectory; a sketch of the loop follows.
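A minimal sketch of that loop, assuming the sensor data was exported to a CSV file with x, y, z columns (the path and column names are hypothetical):

```python
# Minimal sketch of the trajectory loop. The CSV path and the column
# names ('x', 'y', 'z') are assumptions about the exported sheet.
import csv
import maya.cmds as cmds

# The first cube marks the start of the tag.
cube = cmds.polyCube(name='trace_0')[0]

x = y = z = 0.0
with open('/path/to/tag_movement.csv') as f:
    for i, row in enumerate(csv.DictReader(f), start=1):
        # Accumulate each 1/100 s displacement on the three axes.
        x += float(row['x'])
        y += float(row['y'])
        z += float(row['z'])
        # Duplicate the previous cube and move the copy to the
        # resultant position in space.
        cube = cmds.duplicate(cube, name='trace_%d' % i)[0]
        cmds.move(x, y, z, cube, absolute=True)
```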

Revealing Past Movements
Top View of the Small Courtyard — The trespasser’s trajectory is the white free curve; he walks up the stairs, pauses for a second to look at the wall, and starts tagging. The whole process takes 90 seconds, 73 of them tagging, with 7,360 positions registered by the gyroscope.

— — — — — — — —

The latest experiment asks how a camera focused on the animated character registers information. We saw previously that whatever the camera looks at is registered in the ‘child’ model. Yet is there a consequence to attention span? That is, does the number of seconds spent looking at a space affect the model it will create?

I rig a camera in Maya that fixates on the moving character and travels from the courtyard to the bottom of the stairs (a sketch of the rig follows the figure); this is the resultant space:

‘Child’ model of external camera fixating the moving character
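The rig itself is an aim constraint on a travelling camera. A minimal sketch, assuming the character’s root joint is named character_root and a 240-frame shot (both hypothetical):

```python
# Minimal sketch of the fixating camera. The joint name
# 'character_root' and the travel keyframes are assumptions.
import maya.cmds as cmds

cam, cam_shape = cmds.camera()
cam = cmds.rename(cam, 'fixating_cam')

# Keyframe the travel from the courtyard to the bottom of the stairs
# (placeholder positions and times).
cmds.setKeyframe(cam, attribute='translateX', value=0, time=1)
cmds.setKeyframe(cam, attribute='translateX', value=30, time=240)

# The aim constraint keeps the lens locked on the character no matter
# where the camera travels; a Maya camera looks down its -Z axis.
cmds.aimConstraint('character_root', cam,
                   aimVector=(0, 0, -1), worldUpType='scene')
```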

Similar to the previous ‘Child’ model, at first glance it resembles the initial space. Here are the details we can extract from it: since the camera fixates on the moving character, the only clearly represented stairs are the ones he walked on, and the model loses its resolution around them. Notice that the character himself is not in the photogrammetry; the process removes moving objects from the scene. The event and the location of the movement are mapped in the model’s resolution.

The grain is lost on the tagged walls. A closer look at the small courtyard reveals only fragments.

Elevation of the tagged wall in the small courtyard — Fragments of the Mechanical Memory

A temporary conclusion: digital eyesight is not constrained to the model it is seeing. The experiments showcase an effective method of mapping events or memories through spatial embodiment. Resolution is the unit to watch. In the leg-camera example, information is added to the scene: the fossilized footprint witnesses the passage of the character but does not represent him. In the fixating-camera example, the courtyard is fragmented: resolution is removed, but information about the camera’s path and focus is embedded. How much of the wall can we see? 20%? That is how much attention the camera paid to it. Yet it is not uniformly culled; the ‘Child’ model of the wall looks like burning paper. The camera’s attention seems to rely on the wall’s texture to gather information.

Going further with this is a matter of representation. The process remains unwielded: it is not yet an instrument. Taking it further would make attention quantifiable and give the eye the ability to cull what it sees in advance.
