Tweaking AI with Terence Broad

Beth Jochim
Published in The AI Art Corner
10 min read · Oct 6, 2020

Explorations of AI Art — Episode 24

[This interview has been previously published on Cueva Gallery’s blog on August 10, 2020]

“Just because machines now automate some of the aspects that were traditionally recognised as the parts where creativity was expressed (such as style) doesn’t mean that they are removing people’s creativity; it’s just that people are now becoming creative in new ways.” — Terence Broad.

In science and engineering, a black box is a device or system whose inputs and outputs can be observed but whose internal operation is unknown. When Google engineer Alexander Mordvintsev designed DeepDream (2015) to analyze visual imagery and understand how neural networks process images, the dream-like, hallucinogenic outputs began to be appreciated for their artistic qualities as well. Meanwhile, the Generative Adversarial Networks (GANs) introduced by Ian Goodfellow and his colleagues in 2014 were driving rapid progress in generative modelling.

An artist looking into these AI black boxes is Terence Broad, who is currently completing a PhD at Goldsmiths in London. He is also a visiting researcher at the UAL Creative Computing Institute, where he develops methods and tools to manipulate deep generative models. As he explains, in his work generative machine learning models and algorithms become artistic materials to experiment with, a way of exploring the latent possibilities of these black-box systems.

Important venues, such as The Whitney Museum of American Art, Garage Museum of Contemporary Art, Ars Electronica, The Barbican, The Whitechapel Gallery and The Photographers’ Gallery, have exhibited and screened his work. In 2019 he won the Grand Prize at the ICCV Computer Vision Art Gallery in Seoul with (un)stable equilibrium 1:1, while an edition of Blade Runner — Autoencoded was acquired by the City of Geneva’s Contemporary Art Collection.

From reversing and amplifying Masahiro Mori’s uncanny valley to searching for novelty by training GANs without data, Broad’s approach is experimental, research-driven and focused on exposing unseen aspects of the machine’s gaze. Not very interested in works made autonomously by machines, or in the application of cutting-edge Creative AI models per se, he concentrates on making art with machine learning and opening new possibilities. He develops tools for artists and designers to use in their practice, and investigates the ways in which machine learning helps artists become differently creative.

Though perhaps unorthodox in method, Broad’s works create an almost paradoxical situation in which a machine is taught without the fundamental element that underlies machine learning: data. Lately, his focus has been on manipulating images while they are forming, creating a space for interaction with a GAN that offers reasonable control over it and opens the black box.

[Fig.1] A still from (un)stable equilibrium 1:1, which won the 2019 ICCV Computer Vision Art Gallery prize in Seoul. Credit: Terence Broad.

Beth: Can you tell us about your background and how you got interested in AI Art?

T.B.: I originally started out doing a sculpture degree, but I dropped out to do a computing degree, as I had become increasingly interested in making art with computers, something I wasn’t supported in pursuing at art school. During my degree I became interested in research in computational photography and neural networks, and during my Masters in 2015 I started to do research on deep learning. My Masters dissertation, which started as a technical project, ended up becoming one of my most successful art projects: Blade Runner — Autoencoded. I was fortunate enough to be researching this area just as big advances in generative models were being made, and my film, which was made by training a neural network on the frames of the film and having it reproduce them, went on to be widely exhibited around the world. I then spent two years working at a startup applying AI to smart cities before going back to academia, where I am now doing a PhD at Goldsmiths.
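To make the underlying idea concrete, here is a minimal sketch of frame-by-frame autoencoding in PyTorch. Everything in it is illustrative: the toy architecture and loss below are placeholders, and Broad’s actual model was considerably more sophisticated.

```python
import torch
import torch.nn as nn

# Toy convolutional autoencoder: compress each film frame, reconstruct it,
# then play the reconstructions back in order to get the "autoencoded" film.
autoencoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),            # encode
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # decode
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

def train_step(frames):
    """frames: (batch, 3, H, W) tensor of film frames scaled to [0, 1]."""
    recon = autoencoder(frames)
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```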

[Fig.2] Video Blade Runner — Autoencoded. Credit: Terence Broad.

Beth: What are the main obstacles to accessing the field of AI Art?

T.B.: I think access to computational power, especially the kind you need to train and run big generative models, is probably the biggest obstacle to people getting started. There are tools now like RunwayML and Google Colab that allow people to experiment with these models without having their own expensive machines, but to create the works I make, and in general if you want to be doing more advanced and experimental work, you unfortunately need a powerful, expensive machine, which places a large barrier on who can be part of the AI Art community. This is something that makes me uncomfortable, especially when only a handful of artists have access to the vast resources of big tech companies or can afford to pay technical consultants to do the work for them.

Beth: In the age of AI, where machines seem to show some creative aspects, what does it mean to be creative as a human being?

T.B.: People are constantly coming up with creative ways of training, framing and using these tools. Just because machines now automate some of the aspects that were traditionally recognised as the parts where creativity was expressed (such as style) doesn’t mean that they are removing people’s creativity; it’s just that people are now becoming creative in new ways. The artists I find most inspiring are early and experimental photographers and filmmakers such as Harold Edgerton, Hiroshi Sugimoto and Oskar Fischinger, who were often people with quite a sophisticated technical understanding of an emerging medium and were exploring new ways of working with it. AI techniques are giving us very powerful tools for generation and representation, but there is a huge amount of scope for people to take these tools and do new and creative things with them, which is the approach I am taking with my artistic practice and my research.

Beth: Can we appreciate art produced by a machine?

T.B.: For me, I am much more interested in the ways people use these machines or find new ways of framing what they are doing. I am not particularly interested in artworks autonomously made by machines, and artworks generated simply by using the most recent state-of-the-art AI model quickly get boring for me. So I think there is still a long way to go until we are truly appreciating artworks made exclusively by machines, but it is true that this is forcing people to become creative in new and interesting ways, which I think can only be a good thing.

Beth: In your work you use neural networks as a raw material, looking for new ways to use and manipulate them and realize something completely new. Can you tell us more about (un)stable equilibrium?

T.B.: My series of works (un)stable equilibrium, where I trained GANs without data, came out of a desire to find a way of training a neural network such that everything it generated was completely new and did not resemble any training data. It was something I had been trying to do for a while and had been having trouble with, as it kind of goes against the whole orthodoxy of machine learning, which is learning from data. In the end it occurred to me that if I found a way of training a GAN without any data, then the result could not, by definition, be like any training data.

I spent some time experimenting with this concept, rapidly testing out different training regimes and network ensembles while closely observing the visual output during training. I ended up developing quite an intimate understanding of what was going on, and the way I went about deciding how the networks should be arranged was based almost purely on how it changed the aesthetic output of the results, rather than on any mathematical principles. So the work, and a paper I presented at the NeurIPS creativity workshop, ended up being a story about this quite intimate, tacit understanding of the models and algorithms that I had developed over time. So far I have made six artworks in the series, which I have presented as video works, and I am getting aluminium prints of them as well. I see this as an ongoing series and plan to revisit the idea of training without data in the future.
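For readers wondering what “training without data” can even mean in an adversarial setup, here is one hypothetical configuration as a minimal PyTorch sketch: the batch of “real” examples is replaced by samples from a second, frozen, randomly initialised generator, so no dataset ever enters the loop. This illustrates the general idea only; it is not a reproduction of the specific regimes and ensembles Broad describes.

```python
import torch
import torch.nn as nn

Z_DIM, IMG_DIM, BATCH = 64, 32 * 32, 16

# Toy networks; the architectures in Broad's experiments are not reproduced here.
g_frozen = nn.Sequential(nn.Linear(Z_DIM, 256), nn.ReLU(),
                         nn.Linear(256, IMG_DIM), nn.Tanh())
g = nn.Sequential(nn.Linear(Z_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())
d = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

g_frozen.requires_grad_(False)  # stands in for the missing dataset
opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    z = torch.randn(BATCH, Z_DIM)
    # "Real" batch: samples from a frozen, randomly initialised generator,
    # so no training data is involved anywhere in the loop.
    real = g_frozen(torch.randn(BATCH, Z_DIM))
    fake = g(z)

    loss_d = bce(d(real), torch.ones(BATCH, 1)) + \
             bce(d(fake.detach()), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    loss_g = bce(d(g(z)), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```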

[Fig.3] Video (un)stable equilibrium. Credit: Terence Broad.

Beth: In Being Foiled you approach the uncanny valley in reverse, generating images that are less and less realistic until you reach almost total abstraction. What does this mean, and why is it interesting to study the uncanny valley phenomenon?

T.B.: Again, this work came out of the same desire to find ways of training models such that the output was completely new and unlike any training data. I had been experimenting for some time with fine-tuning already-trained GANs with different kinds of networks that had been frozen, forcing their output to change and converge into a new space. One of my experiments was to fine-tune one of these GANs using the discriminator from training, but to optimise away from what the discriminator predicted as being real and towards what it predicted as being fake. This was done with a GAN that generates faces, and what was so visually arresting about the results was how disturbing and uncanny they were at the midpoint of this process. I had actually shown the images to a couple of people, and they were so disturbed by them that I didn’t show or discuss them with anyone else for several months, before I finally revisited them and realised there was something very interesting going on.

[Fig.4] Being Foiled — images where the uncanniness was most pronounced. Credit: Terence Broad.

I later wrote a paper entitled Amplifying The Uncanny, which I presented at xCoAx 2020, where I discussed this training process and how the fine-tuning procedure, starting from realism and ending at complete abstraction, is a process of crossing the uncanny valley in reverse. Where the uncanny valley is normally a phenomenon encountered when trying to create ever more realistic representations of people, this procedure encounters it from the other direction, by deliberately producing representations that diverge further and further from realism. The artworks in Being Foiled are produced by taking the model snapshot where the uncanniness and unusual artifacts are most pronounced.
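The core move described here, optimising the generator towards what the discriminator labels fake, amounts to flipping the target of the usual fine-tuning objective. A minimal sketch follows, with toy stand-ins for what would really be a pretrained face GAN’s generator and discriminator:

```python
import torch
import torch.nn as nn

Z_DIM, IMG_DIM, BATCH = 128, 64 * 64, 8

# Toy stand-ins for a pretrained face GAN; in practice G and D would be
# loaded from the checkpoints produced during the original training run.
G = nn.Sequential(nn.Linear(Z_DIM, 512), nn.ReLU(),
                  nn.Linear(512, IMG_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1))

opt = torch.optim.Adam(G.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    z = torch.randn(BATCH, Z_DIM)
    imgs = G(z)
    # Standard fine-tuning would push D(imgs) towards the "real" label (1).
    # Flipping the target to "fake" (0) optimises the generator *away*
    # from realism: outputs drift through uncanny faces towards abstraction.
    loss = bce(D(imgs), torch.zeros(BATCH, 1))
    opt.zero_grad(); loss.backward(); opt.step()

    if step % 100 == 0:
        # Keep snapshots so the most uncanny midpoint model can be revisited.
        torch.save(G.state_dict(), f"snapshot_{step:05d}.pt")
```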

[Fig.5] Being Foiled — images at the end of the process, where the results are almost total abstraction. Credit: Terence Broad.

Beth: In Network Bending you introduce deep generative models manipulated for creative expression. Can you tell us more about this idea, and what is the trade-off between randomness and control?

T.B.: Network Bending is the name of the new approach that I have developed in my research and will continue to build on for the rest of my PhD. There are two main parts to what I have been doing: the first is developing methods for analysing GANs, and the second is using tools I have developed to insert new layers inside a GAN that apply filters to its internal components, manipulating them from inside the model. The idea is to utilise the weights of an existing model but manipulate the formation of images as they are being generated, which massively opens up the possibility space of what you can generate using these models and gives you a way of interacting with a GAN that you can reliably control.
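As a rough illustration of the mechanics (a sketch under assumptions, not Broad’s actual implementation), a “bending” layer can be spliced between two existing blocks of a generator so that a chosen transform filters the intermediate activations while the image is still forming:

```python
import torch
import torch.nn as nn

class BendingLayer(nn.Module):
    """Applies a parameter-free transform to a subset of feature maps.

    A simplified stand-in for the insertable filter layers described in
    Network Bending; the channel indices and transform are chosen by hand.
    """
    def __init__(self, transform, channels):
        super().__init__()
        self.transform = transform
        self.channels = channels

    def forward(self, x):
        x = x.clone()
        x[:, self.channels] = self.transform(x[:, self.channels])
        return x

# Example: scale channels 0-15 of a hypothetical generator block's output.
bend = BendingLayer(lambda t: 2.0 * t, channels=list(range(16)))

# Splicing it into a toy generator trunk; in a real StyleGAN-like model it
# would sit between two of the convolutional blocks.
generator_trunk = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    bend,  # <- manipulates the image while it is still forming
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
)

z = torch.randn(1, 64, 8, 8)
img = generator_trunk(z)  # 1 x 3 x 32 x 32
```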

[Fig.6] Network Bending Demo. Credit: Terence Broad.

One of the artworks I have made using these techniques is the video piece Disembodied gaze. As part of this research I developed a method for analysing a GAN to understand what the components inside are doing and to discover how they work together. I applied this to a GAN that generates faces, and one of the things that came out of the analysis was the discovery of the parts of the model that generate the eyes. If you turn them all off, the eyes disappear; but if you turn everything else off apart from those components, you generate only the eyes, and the rest of the image has to be inferred by the lower layers of the model, which produces these images of eyes removed from the body and floating in an ethereal, skin-like texture. This approach, although the outcome was unexpected, is definitely using these techniques in a deliberate way where you have total control over the results.
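The switching on and off described here can be pictured as masking feature maps at an intermediate layer. In this hypothetical sketch a forward hook zeroes every channel except a hand-picked set; the channel indices are invented for illustration, and identifying which ones actually draw the eyes is what the analysis method is for:

```python
import torch
import torch.nn as nn

def keep_only(channels):
    """Forward hook that zeroes every feature map except `channels`."""
    def hook(module, inputs, output):
        mask = torch.zeros(output.shape[1], dtype=torch.bool)
        mask[channels] = True
        return output * mask.view(1, -1, 1, 1)
    return hook

# Toy layer standing in for an intermediate block of a face GAN.
layer = nn.Conv2d(64, 64, 3, padding=1)

# Hypothetical: suppose analysis found channels 5, 19 and 42 generate eyes.
handle = layer.register_forward_hook(keep_only([5, 19, 42]))
features = layer(torch.randn(1, 64, 16, 16))  # everything but "eyes" silenced
handle.remove()
```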

[Fig.7] Video Network Bending: Disembodied gaze. Credit: Terence Broad.

My series of works Teratome was made during some of my earliest experiments in the development of these methods. In the early stages of this approach, I experimented with applying simple filters, either to entire layers or randomly to different parts of the layers. You weren’t really in control of what was happening, but certain configurations of transforms on layers would sometimes produce very arresting results, and I ended up developing a workflow where I would generate a few samples using these random layers and let that guide which filters I applied next, eventually cherry-picking the most interesting ones.

I also developed a similar workflow for the EP artworks I was commissioned by the band 0171 to produce, where I projected images of the band members into the StyleGAN2 latent space and worked in a similar way, embracing the randomness afforded by these results and using it to work in a much more iterative, exploratory fashion.
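Projecting an image into a GAN’s latent space is typically done by optimisation: search for the latent vector whose generated image best matches the target. Below is a bare-bones sketch of that idea with a toy generator and a pixel-wise loss; real projectors, such as the one shipped with StyleGAN2, add perceptual losses and other regularisation:

```python
import torch

# Toy placeholders: `G` maps a latent vector to a flattened image, and
# `target` stands in for the photo being projected.
G = torch.nn.Sequential(torch.nn.Linear(512, 3 * 64 * 64), torch.nn.Tanh())
target = torch.rand(1, 3 * 64 * 64) * 2 - 1

w = torch.randn(1, 512, requires_grad=True)  # latent vector to optimise
opt = torch.optim.Adam([w], lr=0.01)

for step in range(500):
    loss = torch.nn.functional.mse_loss(G(w), target)
    opt.zero_grad(); loss.backward(); opt.step()

# G(w) now approximates the target; w can then be bent, interpolated or
# perturbed to generate variations of the projected image.
```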

[Fig.8] One of the artworks from the series Teratome. Credit: Terence Broad.

Beth: Do you have a new project or idea you would like to share?

T.B.: I’m currently working on applying the same network bending techniques to other GANs and hopefully in the future to generative models for other domains such as audio. The artist and educator Derrick Schultz has started to teach courses on how to use my Network Bending techniques so if you want to start using them yourself you can refer to his teaching material: https://www.youtube.com/watch?v=pSo-aLWTn14.

To follow Terence Broad:

Website: https://terencebroad.com
Twitter: @Terrybroad
Instagram: @terence.broad

About the author: Beth Jochim is the Creative AI Lead at Libre AI, and Director and Co-Founder at Cueva Gallery. She works at the intersection of technology and the arts. She is actively involved in different activities that aim to democratize the field of Artificial Intelligence and Machine Learning, bringing the benefits of AI/ML to a larger audience. Connect with Beth on LinkedIn or Twitter.

Except where otherwise noted, this work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
