Intervention in the Invisible

Eleni Xynogala


9 January 2018

Deeper

“Intervention in the Invisible” is the third part of the project in the unit Theories & Technologies of Interaction Design. In this final part, I had to develop a system that intervenes in the invisible system I investigated and visualized in the previous parts of the brief: artificial neural networks and the way they create art.

After presenting my artwork from the previous part of the project to my tutor Tobias Revell and my classmates, I received valuable feedback to carry into the final part. This part of the brief was the hardest, as I had to choose one aspect of my subject to focus on and help people learn new information by interacting with it.

So, I turned back to the literature to find out more and set my research questions. I read various blog posts and papers that discuss the contested side of this subject. One article that Eva Verhoeven suggested I read is entitled “Art by algorithm”, in which the writer claims that “in order to survive, but more importantly to thrive, in the age of algorithms, we need to cultivate a deep respect for algorithmic literacy and the capacity to ‘read’ the impact of computational influences on our work — not necessarily to resist those influences, but to understand them and use them to become better humans.” (Finn, 2017).

Influenced by this essay, which discusses how computation is changing aesthetics and debates the role of artists and art appreciators in this shift, I continued my research and defined the following questions: “Is it real art?”, “Can a machine outdo a human?”, “Who is the artist? Is it the creator of the algorithm? The person who inputs the image into the system? The computer itself? Or a combination of all three?” and “Is the machine biased against black people?”

Inspiration

How Not to be Seen: A Fucking Didactic Educational .MOV File — Hito Steyerl (left) | Material Speculation: ISIS — Morehshin Allahyari (center) | The Kitty AI — Pinar Yoldas (right)

Ideas in Sketches

Rejected (left and center) and Approved (right) ideas

Installation — Rejected

This installation was based on the idea of taking real-time photos of people and immediately editing them with the Deep Dream algorithm. It was rejected because it is not a real intervention and would have little impact on people’s perception of the system.

Processing Representation — Rejected

To better understand the concept of Deep Dream, I started using the algorithm and got in touch with two people close to the field in technological terms. One of them is Will Gallia, who currently works in the prototyping lab of the Interaction Design Communication course and assists students with their projects. Alongside the technical help he provided me, Will also shared his valuable opinion on the Deep Dream algorithm, as he had previously worked on neural network visualization. Together we experimented with a Processing representation, trying to achieve a result similar to Deep Dream’s. The result was not what a real network would produce, but a fake representation of the way it works; at the same time, it read as a visualization of the network’s operation rather than an intervention. So this idea was rejected.

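For reference, a minimal sketch in the spirit of what Will and I tried could look like the one below. It is only an illustrative reconstruction, not our actual code: it fakes the look by re-drawing the canvas slightly zoomed and semi-transparent every frame, so small patterns get amplified over time, loosely echoing Deep Dream’s iterative amplification without any neural network involved. The filename input.jpg is a placeholder for any photo in the sketch’s data folder.

```
// Hypothetical Processing sketch: a non-neural "fake" Deep Dream feel
// built from a simple zoom-and-feedback loop.
PImage img;

void setup() {
  size(640, 480);
  img = loadImage("input.jpg");   // placeholder filename
  image(img, 0, 0, width, height);
}

void draw() {
  // Grab the current canvas and redraw it slightly enlarged and
  // semi-transparent, so whatever patterns are there get amplified
  // frame after frame, a crude echo of iterative feature amplification.
  PImage frame = get();
  imageMode(CENTER);
  tint(255, 230);
  image(frame, width / 2.0, height / 2.0, width * 1.01, height * 1.01);
  noTint();
  imageMode(CORNER);
}
```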

Video — Approved

Combining the coding knowledge with the information I gained through research, I decided that what I wanted to create was a video that debates and critiques the system.

I was greatly inspired by Dr Georgina Voss’s feedback on my project; she suggested I look at the work of Memo Akten, a London-based artist who uses Deep Dream to criticize technology. Through Georgina’s workshop I also came across the project “Material Speculation: ISIS” by Morehshin Allahyari, in which the artist embeds a memory card inside the body of each artwork, creating time capsules for future civilizations. I decided to save my video on a memory stick placed inside the artwork that demonstrates the visualization of the neural network system from the previous part of the project. This addition also symbolizes an intervention in the system, and its purpose is to raise the audience’s interest in discovering what this artwork is by watching the video.

Other projects that influenced me were “The Kitty AI” by Pinar Yoldas and “How Not to be Seen: A Fucking Didactic Educational .MOV File” by Hito Steyerl, both proposed by Tobias. I was impressed by the representation of an AI as a cat and the meanings that this metaphor can produce, and I really admired and enjoyed Hito Steyerl’s artistic intervention in the technology of computer vision and rendering. So I decided to include techniques like these in my video. Last but not least, I was inspired by Wesley Goatley’s seminar, which referred to the prejudice against black people produced by these machines. Speaking with Wesley about Deep Dream, he told me that at the beginning he was also impressed by these trippy images, but after a while it is always the same images showing puppies, and it depends on the artist’s creativity to use this tool to create something interesting.

Final Outcome

Watch the video:

Deeper — video

I created a video which I called Deeper. The name came to mind because of all the names given to algorithms and systems related to neural networks, like Deep Dream, Deep Blue and DeepMind, but also because it implies that we explore the system more deeply. The goal of this video is to inform people about some useful applications of neural networks, like face and voice recognition, and at the same time to intervene in the system by showing the negative impact they have on people’s lives, such as creating prejudice against black people or creating fake porn.

Video on a memory stick placed in the artwork (left) | The intervention (right)

The video was created in Adobe Premiere Pro and saved on a memory stick placed inside the artwork that demonstrates the visualization of the neural network system from the previous part of the project. As mentioned above, this also symbolizes an intervention in the system, and its purpose is to spark the audience’s interest in discovering what this artwork is by watching the video.

Each chapter has two sides. On the one hand, there are videos and photos, found online, which show what neural networks can do. On the other hand, there is an intervention, visualized as rapidly changing images of the kind the system is trained on. It uses a voice from a text-to-speech service, so as to sound less natural and to create a dialogue between the machine and its implications.

Chapter titles — Processing sketch (left) | Old TV channel-change noise effect (After Effects) (right)

The video has the following structure:

Introduction

In the introduction, I show photos of the machine that I visualized in the second part of the brief and pose the following questions to the audience: “Does the machine know things?” and “Does it create art or racism?”.

First chapter — The machine can think

All chapter titles are created with a Processing sketch that looks like a network.
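A minimal Processing sketch in that spirit might look like the following. The node count, colours and the example title are placeholder values for illustration, not the exact settings of my sketch: random points are scattered across the frame, nearby points are joined with faint lines so the composition reads as a small network, and the chapter title is typeset on top.

```
// Hypothetical sketch for a network-style chapter title card.
int numNodes = 40;
float[] x = new float[numNodes];
float[] y = new float[numNodes];

void setup() {
  size(1280, 720);
  background(0);
  // scatter the nodes
  for (int i = 0; i < numNodes; i++) {
    x[i] = random(width);
    y[i] = random(height);
  }
  // join nearby nodes with faint lines so the frame reads as a network
  stroke(255, 60);
  for (int i = 0; i < numNodes; i++) {
    for (int j = i + 1; j < numNodes; j++) {
      if (dist(x[i], y[i], x[j], y[j]) < 250) {
        line(x[i], y[i], x[j], y[j]);
      }
    }
  }
  // draw the nodes themselves
  noStroke();
  fill(255);
  for (int i = 0; i < numNodes; i++) {
    ellipse(x[i], y[i], 8, 8);
  }
  // the chapter title sits on top of the network
  textAlign(CENTER, CENTER);
  textSize(48);
  text("The machine can think", width / 2.0, height / 2.0);
}
```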

The first clip combines a video of the robot Sophia talking with the voiceover from another video predicting that computers will outsmart humans by 2029.

Then there is a video about Deep Blue, a chess-playing computer developed by IBM, which defeated world champion Garry Kasparov in 1997, followed by another video showing Google’s AI AlphaGo beating humanity at its own games.

The transitions between these videos, which are from news broadcasts, use an old TV channel-change noise effect created in Adobe After Effects.

The first chapter asks whether the machine can think. The selected videos acknowledge the loss of an essentially human territory, such as strategy games, to the onslaught of thinking machines. The machine’s voice claims that “Algorithms are simple mathematical formulas that nobody understands”. In this way, I try to start a conversation with the audience and make them consider that these machines are not intelligent at all, but algorithms that have simply become very good at avoiding mistakes.

Second chapter — The machine can create

In this chapter, the machine starts by saying “I see animals. I dream animals. What do you dream?” Then I show Mike Tyka’s video of zooming into a noise image generated by the Deep Dream algorithm, along with part of his TEDx talk about the art that neural networks create.

I debate whether the machine can create by showing Mike Tyka’s video of art created by neural networks. These networks are trained on a dataset of animal pictures, so they interpret art through the motifs they were trained on. This approach echoes a famous quote from Richard Feynman: “What I cannot create, I do not understand.” The important thing here is that these algorithms are watching us and learning from us, just as we learn from them. The combination of people and algorithms is the key to new creative work.

Third chapter — The machine can intervene

Here, the audience is asked whether the machine or the human can intervene so as to make a statement about a circumstance. This chapter shows videos edited with the Deep Dream algorithm, which gives them a psychedelic effect. In this way, the videos’ creators intervene in a film like Fear and Loathing in Las Vegas, which is about LSD, to make a statement. Similarly, another video shows a Donald Trump speech in which the creator has transformed him to look like a monster.

In this case, the machine asks “Who is the artist? Is it the engineer? Is it the person who inputs the image? Is it me?” Here we realize that it is a human creator who bends computational tools to achieve a breakthrough.

Fourth chapter — The machine can NOT

The title implies that the machine cannot do everything, casting doubt on what was said before. In this chapter, there are videos and pictures showing that these machines are biased against black people in criminal risk assessments, or racist in face recognition, as they seem to have problems recognizing black people or tag them as gorillas. Finally, I show a clip of Gal Gadot’s face on a porn star’s body, created with a machine learning algorithm.

By the end of the video, my goal is to raise awareness of what the machine can do, but also what it cannot do and what it gets wrong. The audience for this video could be anyone with an interest in technology. A viewer doesn’t necessarily need to understand what a neural network is to take away the message of machine learning’s power and dangers. I believe I didn’t succeed in making the video as artistic as I had imagined at the beginning, because I tried to make it more comprehensible for the audience; as a result, it ended up with a stricter structure and flow. But by combining valuable information, I believe that I made a statement and contributed to the field.

End

The eyes symbolize that the machine observes us, as we train it with the pictures it sees.

The ending clip was created in Adobe After Effects, and the two classical music pieces used in the video were generated by recurrent neural networks.

To be honest, I didn’t enjoy this process as much as the previous part of the project, which was more tangible. But I learned a lot using these video-making tools. What I really appreciated throughout this journey was the help from my tutors, my classmates, the university’s technicians and friends who are practitioners in their fields.

To conclude, what resonates with me most in this subject comes again from the “Art by algorithm” article, in which Ed Finn says “This is terrifying and breathtaking all at once, and it’s artists that we need most of all to make sense of a future in which our collaborators are strange mirror machines of ourselves. Computation is a parallel project, grounded in the impossible beauty of abstract mathematics and symbolic systems. As they come together, we need to remain the creators, and not the creations, of our beautiful machines.” (Finn, 2017).



Eleni Xynogala

I am an artist, designer and developer who experiments with creative media and technologies, holding an MA in Interaction Design Communication from UAL.