Jake Elwes: “The key question here is about consent”

Jess
Aug 1, 2021

Yes, AI is biased, but what do we do with it? I spoke to Jake Elwes, a media artist and creator of the Zizi Project, which explores the potential of AI bias as an artistic medium. We spoke about the dialogic rather than didactic nature of his work, which reveals the feedback loops and glitches in ourselves. Elwes lives and works in London and is an alumnus of The Slade School of Fine Art, UCL (2013–17).

Jake’s recent works foray into machine learning (ML) and artificial intelligence (AI) to explore the successes and failures of these systems and the codes and ethics behind them. His demystification and subversion of AI systems queer datasets and unveil the hidden biases at the roots of ML.

This interview has been edited for brevity and clarity.

Screencap from Zizi and Me (2020). Credit: Jake Elwes

JP: Where does your experimentation with generative networks come from?

JE: My time at the Slade played a huge role in this. It was one of the first art schools to have widespread computer access, and those early roots aptly inform my work; they have been an invaluable conceptual source for my practice, providing a direct link and a deeper grounding for making art with generative networks and machine learning. My practice was developing at a time when ML was advancing exponentially, so thinking about dataset politics and standardised datasets through creative means, and using art to point out their biases and limitations, felt like a natural progression.

JP: As part of that conceptual practice, are you envisioning a transformation in society? Does your use of deepfake technologies consciously make a wider political point, beyond the medium’s novelty?

JE: Yes, my latest project works through a lot of these themes specifically. Zizi and Me started with broader thinking about representation with regard to things like training, classification and facial recognition. AI is really bad at recognising trans people and people of colour. And so I thought I’d mobilise the drag community to bring to life these ideas which challenge society and gender.

I thought about ideas of obfuscation and assimilation: feeding these identities into a network to disrupt it. I wanted to know how I could use this as a performance tool, and moreover I wanted to satirise the anthropomorphic narrative within AI by building a humanoid form out of networks. While I’m personally against the anthropomorphic narrative running through AI, this satire of what an AI actually is raises important questions about fearmongering around big data. We tend to defer to things which are easier to picture, but the idea of AI as killer robots can be misleading. I wanted to bring an audience into the discussion about what big data is actually based on, and redirect misguided ideas about policymaking.

My work in this project evolved out of conditional generative adversarial networks, which bring up this notion of control: whose bodies are these, and what are the ethics of swapping different body parts with others? The key question here is about consent. I wanted to subvert these algorithms and empower performers to create something fun and camp.
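
For readers outside machine learning, a conditional GAN is one whose generator receives a label alongside the random noise vector, so its output can be steered towards a chosen class. A minimal sketch in PyTorch, with hypothetical dimensions and labels (purely illustrative, not the actual Zizi architecture):

```python
import torch
import torch.nn as nn

# Minimal conditional-GAN generator sketch (hypothetical sizes; not the
# actual Zizi model). The class label is embedded and concatenated with
# the noise vector, so the same network can be steered towards a chosen
# identity, which is where questions of control and consent over whose
# body is generated come in.
class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=100, n_classes=10, img_dim=64 * 64):
        super().__init__()
        self.label_embed = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, labels):
        # Condition the noise on the label by concatenation.
        x = torch.cat([z, self.label_embed(labels)], dim=1)
        return self.net(x)

g = ConditionalGenerator()
z = torch.randn(4, 100)                # four random latent codes
labels = torch.tensor([0, 1, 2, 3])    # four different conditions
fake = g(z, labels)                    # output shape: (4, 4096)
```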

But I think my views have changed a lot in the five years since I started using ML. Agency, and pushing the boundaries of agency, were exciting to think about at the time, but I quickly came to understand that this was a sort of mystification of what AI could do unsupervised. And so my understanding evolved to commit more to the limitations. I was going to a lot of conferences at the time, and there were some interesting conversations around AI, albeit fairly inaccessible and reserved for a certain tier of understanding. I wanted to get back to the building blocks of AI and so started thinking about the technology more as a tool, one heavily led by innovation in datasets.

JP: You refer to deepfakes as a dichotomous tool for a range of uses, from something that can be easily corrupted to a transformational medium for performance. Do you see deepfake technology as a sustainable tool for making resistance technologies?

JE: Possibly. I hope my work can stand the test of time; there are so many political questions we’re trying to work out, and I’m just interested in the technology as a medium. At the very least, I think it’s important to get people talking about a topic which is so imminent, if not ubiquitous, and yet so technologically inaccessible; those conversations help bring these ideas to the mainstream so that people have a stake in how these things develop.

JP: In the aftermath of the revelations of 2016 we now find ourselves in an age where authenticity and being “real” is often hailed as a necessary component of trust. Is this project in some ways a pushback against authenticity?

JE: I think this project tries to delve more deeply into revealing otherness, and pushes back by feeding the network with that content and by dirtying datasets to get to a more authentic sense of selfhood. These algorithms are easily confounded by simple measures and too often take superficial indicators at face value, quite literally. This is something mentioned by interdisciplinary groups of artists and technologists, from Coded Bias and Hito Steyerl to Zach Blas.

All our systems rely on supervised learning and classification, which are in turn reliant on human labels. But I think there’s an inherent queerness to unsupervised ML which opens up a realm of possibilities. In unsupervised ML, characteristics are reduced to a vector space, and suddenly there’s no difference between race and gender; identity becomes a liminal space.
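
That reduction to vector space is easy to picture in code. A minimal sketch with NumPy, assuming a StyleGAN-like 512-dimensional latent space (the dimensions and seed are illustrative, not taken from the Zizi pipeline): two identities are just two points, and every point on the line between them is an equally valid input to the generator.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
z_a = rng.standard_normal(512)  # latent code standing in for identity A
z_b = rng.standard_normal(512)  # latent code standing in for identity B

def interpolate(z_a, z_b, steps=8):
    """Walk the straight line between two latent codes. Discrete labels
    (gender, race, etc.) have no privileged axis here; identity is a
    continuum of points in the same vector space."""
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

for i, z in enumerate(interpolate(z_a, z_b)):
    # In a real pipeline each z would be decoded by a trained generator,
    # e.g. image = generator(z); here we only inspect the codes themselves.
    print(f"step {i}: |z| = {np.linalg.norm(z):.2f}")
```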

For example, my project with ML porn sees the inherent misogyny built into porn corrupted by unsupervised algorithms, which move into these latent spaces and make something entirely new. The algorithms are not inherently biased themselves, but the data we feed them gives them a greater sense of agency than they ought to have.

JP: The work you make aims to be dialogic and plays on the idea that this feedback loop is not just a tool, but also reflects human glitches in the machine. Do you think these glitches apply only to GANs, or does this hold true across a wider scope of technologies?

JE: I don’t want to offer too many caveats on what art can or can’t do, but art has a certain political distance which breaks down into poetics, and poetics is less effective at unpacking notions of real-world bias and the narratives picked up by journalists, which often trade in deliberate mystification. This is particularly the case with overselling artificial consciousness.

Human intervention is still needed in a lot of AI, so when stories like the sale of the first “AI-generated” piece emerge, we shouldn’t be so quick to assign potential to it. While there is magic in the black box of AI, one needs to understand where that magic lies rather than conveying that the whole scope of the technology is magical. Some artists attribute a higher level of comprehension to AI without fully understanding it, and artists should be especially responsible about how this is misconstrued. We’re overloaded with so many easy ways of shifting our identities through cheap fakes, art in science and vice versa, that it’s not difficult to become inundated.

JP: To return to that educational narrative, you referred to people educating themselves. Are you trying to bring the reality of these technologies and their political implications to the fore by educating an audience through an increasingly mainstream medium?

JE: I think we hoped it would be educational, but it’s difficult to be hardline about this when something errs towards the playful rather than the didactic. We wanted to demystify AI and deconstruct the body to create a world which was more open-minded and accessible, to spark some conversations. Communication is the goal, more than education per se.

Bringing this away from elite, inaccessible conversations into a more playful use case recalls the very early days of the internet, a space previously dominated by esoteric knowledge, for instance how to use protocols. Now we’re inundated with ways of creating content with nearly no technical friction whatsoever: filters, visual platforms and so on. So perhaps awareness is enough, and giving people the knowledge of what not to trust at face value is the most important thing for now, as we’re having to think about all this at a much faster pace.

JP: Politicians often see deepfakes as an inherently bad technology and one to police, whilst artists see them as a utopian medium opening up all sorts of possibilities. In this conversation, academia should be the mediator, but more often than not it defers to the policymaking side. How do you feel about this?

JE: Disinformation is something I avoid in my art; I try to put the sense of play and fun in my work upfront. The nature of deepfakes as a medium lends itself to both, but I see it more as a tool to augment ourselves in a collaborative way. I hope people will come to see these technologies as less of an insidious format by which we can control people’s bodies, something deeply ingrained in eugenics and fascism, which sees people’s bodies studied, scrutinised and taken apart. Technology can be better than merely an oppressive tool, so it’s important to remember that we can always return to the more joyful parts of AI.

For more on Jake Elwes’s work visit https://www.jakeelwes.com/

Written by Jess

Jess is a security researcher and film-maker who writes on the intersection between privacy, security and emerging technologies.
