Prosthetic Memory: Week 9/10— Mechanical Eye

Joey Reich
Published in The Mechanical Eye
Dec 22, 2020 · 3 min read

The concept is pushing beyond its anthropologic foundations within the cognitive known, mainly through engagement with non-human entities as facets of the toolbelt through which this prosthetic memory is further manifesting. These tools, as they are utilized within the project, explore primarily what lies beyond the frame of the mechanism through which these memories are constructed via facilitated realities of the space. In doing so, the ungraspable nature of that which is desired is further entangled through the extraction of overlooked phenomena lying dormant within the gaze as it engages with a deepened scope of the lenses through which the gaze is augmented. As it stands, the products of this re-exploration of what is beyond the gaze have been realized through the application of artificial intelligence, along with an immersion into what frames the gaze: a soundscape that bleeds from space to space and reorients the user to the prosthetic memory through which they explore this amalgam of the cognitive gaze.

My Technik for this step was grounded largely in RunwayML for the exploration of AI applications within the project. In RunwayML, I worked with a series of images taken from each window within Rudolph Hall, each creating a sense of meaning through engagement with its associated view. This included focusing on moments of specific interest in terms of lighting, the skyline, the landscape, interesting oddities, facets of the building, the re-viewing of many of the windows of significance in the project, and ultimately an indiscriminate scanning of the view. In this way, I hoped to give the Network a more general understanding of the totality of the views along with specific moments that would orient one's gaze. Along with this, I spent time recording the spaces within which these photos were taken in order to further ground the viewer within the memory of the place while engaging with imagery that would ultimately be something Other than what they would be presented with had they engaged the same windows in the physical settings they would typically find themselves within.
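RunwayML performs its own preprocessing when you upload a training set, but GAN training of this kind expects square inputs, so each window photograph effectively passes through a centered square crop and resize. A minimal sketch of that step (Pillow; the dimensions and the `to_square` helper are illustrative assumptions, not RunwayML's pipeline):

```python
from PIL import Image

def to_square(img, size=1024):
    """Center-crop an image to a square, then resize for GAN training.

    The crop keeps the middle of the frame, which for window photos
    preserves the view while discarding the asymmetric edges.
    """
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    square = img.crop((left, top, left + side, top + side))
    return square.resize((size, size), Image.LANCZOS)

# Stand-in for one of the window photographs (dimensions hypothetical).
photo = Image.new("RGB", (1280, 960), "gray")
print(to_square(photo).size)  # (1024, 1024)
```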

Fig. 1: High Speed Latent Space Walk — Curated Yalification
Fig. 2: GIF of the Resulting Scene — Layers of the Unity environment

The most challenging part of this process was exploring alternatives to RunwayML that would generate the intended results, but inevitably I opted for the RunwayML approach due to my familiarity with its outputs. Unfortunately, RunwayML is no longer in Beta, so training the model cost about forty dollars of credits for three quick StyleGAN trainings, the five-minute video, the depth map conversion, a style transfer test, and a couple of other explorations, which seemed like a fair deal for the amount of material I got out of it. What was most interesting about this process was the role that the simultaneously reflective and transparent surface of the window glass played in confusing the goal of the Network's training. Additionally, the tectonic qualities of the building captured in the close-ups, along with the instances of framing created by the situation of the windows, played a significant role in what the Network was able to establish as important with regard to the source imagery of the churches in the final model, as opposed to the landscape or illustration models. In this way, the windows seen from space to space are often supplanted in the model, as are the proportions, modules, tectonic qualities, and overall modern aspects of Rudolph Hall. Interestingly, the more repetitious and rugged qualities of the built environment are often translated into the place of natural ones, wherein the hammered concrete becomes synonymous with the sky, ultimately becoming the thing with which the images are framed even further.
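The high-speed latent space walk in Fig. 1 is generated inside RunwayML, but the underlying move is interpolation between points in the trained StyleGAN's latent space. A minimal sketch, assuming 512-dimensional latents and spherical interpolation for smoother transitions (the `slerp` and `latent_walk` helpers are illustrative, not RunwayML's API):

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    StyleGAN latents are drawn from a high-dimensional Gaussian and lie
    near a hypersphere, so spherical interpolation tends to give smoother
    transitions than a straight line.
    """
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0  # vectors are (nearly) parallel
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def latent_walk(keyframes, steps_per_leg=30):
    """Chain slerps through a list of latent keyframes to build a walk."""
    path = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_leg, endpoint=False):
            path.append(slerp(a, b, t))
    path.append(keyframes[-1])
    return np.stack(path)

rng = np.random.default_rng(0)
keys = [rng.standard_normal(512) for _ in range(4)]  # 512-D latents, as in StyleGAN
walk = latent_walk(keys, steps_per_leg=30)
print(walk.shape)  # (91, 512): 3 legs of 30 frames each, plus the final keyframe
```

Each row of `walk` would then be fed to the generator to render one frame of the video; the "high speed" of Fig. 1 simply corresponds to fewer steps per leg.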

Fig. 3: Snapshot of the Addition of Field Information — Exploration of the Beyond

Further Reading:

Chaillou, Stanislas. “AI & Architecture.” Medium, 9 July 2019, www.medium.com/built-horizons/ai-architecture-4c1ec34a42b8.

DuBois, R. Luke. “Insightful Human Portraits Made from Data.” TED, www.ted.com/talks/r_luke_dubois_insightful_human_portraits_made_from_data/up-next?language=en.
