A digital god for a digital culture. Resonate 2016

Memo Akten
Artists + Machine Intelligence
Apr 26, 2016


This is the transcript of my talk at Resonate 2016.

Helix Nebula (NGC 7293) by 2.2-metre Max-Planck Society/ESO telescope at the La Silla observatory in Chile. source: http://www.eso.org/public/images/eso0907a/

Intro and brief summary

I'd like to talk today about a few projects and ideas that are connected by this thread of surveillance, artificial intelligence, data dramatization, poetry, and digital deities.

But before that I want to give a very quick summary of where I come from, to explain how I got to where I am, especially mentally.

Selection of work up to 2014

Interactive systems as instruments

I'm an artist whose primary medium and craft is code: sculpting images, sounds and behaviors with algorithms. For example, the images you're seeing are algorithmically generated, in these cases generated in real-time and driven by a camera input watching the movements of a dancer, or a member of the public if it's a public installation. And sometimes I'll be controlling and tweaking parameters on sliders or other input devices in real-time too. I want to design interactive systems such as these that explore ways we can create and perform images and sounds in real-time. I like the metaphor of a musical instrument, where there's a relationship between the interactive system and the people using it, establishing a live creative feedback loop. So when you're using it, you're not thinking about what you need to do; you're just acting how you feel. It becomes an extension, an extrapolation, of your body and movements. The aim isn't to create a functional interaction, but an expressive one.

Mixed Reality

In 2011, I founded a company called Marshmallow Laser Feast, with two good friends. Some of these projects you’re seeing right now [1:40 PlayStation video mapping], we made as MLF — e.g. projection mapping a living room with head/camera tracking and updating 3D perspective, combined with live theatrical special effects. Kind of like a mashup between Michael Naimark’s ‘Displacements’, Johnny Lee’s Wiimote hacks, and Michel Gondry inspired sweded cinema. I'm very interested in mixed reality, augmenting physical space, hacking it physically and virtually. While the previous projects were exploring input devices and modes of interaction, these projects are exploring output devices and modes of expression. Similarly this project [2:28 Meet Your Creator Quadrotor show] is a techno-ballet for a swarm of quadrotors with motorized mirrors and robotic spotlights, a medium for floating kinetic light sculpture.

With MLF we did a lot of work that I'm very proud of, such as these. But we also did some advertising work, which it turns out just isn't my cup of tea. So in 2014 I left the company to go back to being a solo artist. Still collaborating with others, including MLF on occasion. But at least now I'm professionally single, so if I want to stay in bed for a few days, I can and have no obligations telling me otherwise.

Artificial Intelligence

Alongside my arts practice, I've also recently started a PhD at Goldsmiths University, broadly speaking looking at this kind of stuff. Specifically, investigating artificial intelligence — with a focus on machine learning / deep learning, combined with agent-based AI such as reinforcement learning — and how it can be used for this kind of work. The interactive systems that I've created in the past, such as those in this video, are very dumb; they have no agency or creativity at all. I program them and tell them exactly what to do, so the output is rarely a surprise for me. Sometimes it can be a bit surprising, because complex behavior can emerge from simple rules, and I try to exploit that, as I'll talk about later. But I want to create systems that are smarter and more creative, so that you effectively collaborate with the system.

But I'm more fascinated by artificial intelligence as a subject matter: both as a technology and a hypothesis for testing computational models of the mind and cognition, and equally as a way of investigating the role and impact of this technology on our society. And my PhD is almost an excuse to dig deeper into that — pardon the pun. That's what my talk is mainly going to be about today.

Data Dramatization

I generally try to take conceptual approaches to algorithms: designing algorithms not just to shape the visual or sonic outcome, but, since algorithms shape behavior, as metaphors for the subject matter. Or algorithmic poetry: not algorithms producing poetry, but poetry within the algorithm itself. I see it as a kind of behavioral abstraction, as opposed to, say, visual abstraction. I also refer to this as Data Dramatization, a term I first heard from Liam Young when we went on an expedition to Madagascar a few years ago and bounced around ideas on the subject.

I’m not a net-artist, but I appreciate this diagram about net-art, and ‘where the art happens’. Inspired by that I think this communicates (and perhaps oversimplifies) my approach — based on the classic Von Neumann architecture:

It's not just about what you see on screen, or hear. It's definitely not the code itself. It's what happens inside the brain of the computer, and how that relates to the subject matter. A lot of the work I make has this kind of structure: low-level inspirations or motivations, which are often quite technical, implementation-specific and craft-related; and high-level inspirations or motivations, which are conceptual, relate conceptually to the behavior and implementation, and ultimately shape the final outcome and tone.

Equilibrium (2014)

Madagascar plains, Photo by Memo Akten (c) 2013

In 2013, I was fortunate enough to go on an expedition to Madagascar with the UnknownFields Division, run by Liam Young and Kate Davies, following the trail of global resource extraction, examining the reach and impact of our western society across the globe.

Indri, Photo by Memo Akten (c) 2013

Probably when most of us think of Madagascar we think of lemurs and rainforests. Indeed the country has a very rich ecosystem, with many endemic species and high biodiversity. This is very valuable to a lot of people, and ecotourism is big in Madagascar. But there's a lot of pressure on these values. A culture of unsustainable slash-and-burn subsistence farming is destroying the rainforests. On top of that, the lemurs, once protected by cultural taboos, are being illegally hunted for bushmeat, driven especially by growing extreme poverty and by illegal logging.

Google image search for ‘rosewood furniture’

Rosewood trees, which take centuries to grow, are also extremely valuable — and victims of these illegal loggers. Because we have a thirst for stupendously expensive furniture, or fancy guitars.

Open-pit mine in Ilakaka, Madagascar. Photo by Memo Akten (c) 2013

And we value precious stones such as sapphire, which is causing locals to start digging huge open-pit mines…

Ilakaka, Madagascar. Photo by Memo Akten (c) 2013

…and with no enforced laws to regulate any of this, wild-west style towns pop up, run by armed gangsters, bringing everything that comes with lawless popup towns: violence, prostitution, corruption, lack of infrastructure…

Ilakaka, Madagascar. Photo by Memo Akten (c) 2013

Then our boys come to town. The western boys, with big capital and big machines. And take it up a notch. Mining for nickel & cobalt, so that we can have our batteries and electronics for all our gadgets. Wiping out thousands of hectares of centuries-old rainforest.

Ambatovy. Photo from https://ejatlas.org/conflict/ambatovy-mining-project-madagascar

When you see this for real it’s very emotional. The soil is red from the iron oxide, and the mine looks like a giant, open bleeding wound in the earth. All the trees along the edges, everything, is covered in this red dust. Like splattered blood.

But of course the situation is very complex. I'm not going to pretend that I'm an expert on extreme poverty or such social issues, or try to pass judgement on any particular party. This looks awful, but it's also worth remembering that this one mine alone employs 7,500 locals. That's potentially 7,500 locals who don't have to resort to unsustainable slash-and-burn subsistence farming, or illegal logging, or gangster-run mines. All of this is further complicated by the extreme poverty in the country, and a severe political crisis. But that doesn't make it right either.

Nothing is black or white here, and I don't know what's right or wrong. The only objective view that I can see is that the land contains many different resources that different people value differently, for different reasons. It feels like the country is being pushed and pulled apart by all of these different forces, and hangs in a very fragile balance that is constantly shifting as these values change, always at the brink of seemingly falling apart, yet somehow clinging on.

And that was the idea I wanted to explore. So inspired by this I made Equilibrium, a very abstract Data Dramatization of this experience, a touch-screen installation.

Equilibrium at Laboratoria, Moscow, 2014.


It's a system that hangs in a very fragile balance to form an intricate structure, consisting of millions of little components flying around frantically, influenced by forces pushing and pulling on the system. When you leave it alone, the system tries to settle: eventually the components find stable trajectories where the forces cancel each other out, and settle into them. As the system settles it appears to be static, but if you look closely all the components are still oscillating. It's stable on a macroscopic scale, but on a microscopic scale it's still very dynamic. And if you touch it, you shift the forces, you disturb the balance, and it falls into chaos again and starts looking for a new equilibrium.
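For the curious, the underlying mechanic is simple to sketch. This is a minimal, hypothetical toy version of the idea (mine, not the actual installation code, and in 1D for simplicity): components are pulled toward their targets by spring-like forces with light damping, so the system settles into stable oscillation when left alone, while a 'touch' injects energy and throws it back into chaos.

```python
import random

class Component:
    def __init__(self, x):
        self.x = x          # position (1D for simplicity)
        self.v = 0.0        # velocity
        self.target = 0.0   # the point where forces balance

    def update(self, stiffness=0.1, damping=0.98, dt=1.0):
        # Spring-like pull toward equilibrium, with damping so the
        # oscillation slowly settles rather than ringing forever.
        force = stiffness * (self.target - self.x)
        self.v = (self.v + force * dt) * damping
        self.x += self.v * dt

    def disturb(self, strength=10.0):
        # A 'touch': inject energy, pushing the system back into chaos.
        self.v += random.uniform(-strength, strength)

components = [Component(random.uniform(-5, 5)) for _ in range(1000)]
for _ in range(500):        # leave the system alone for a while...
    for c in components:
        c.update()
# ...it now looks static macroscopically, though each component is
# still oscillating microscopically around its equilibrium.
spread = max(abs(c.x) for c in components)
```

The point is just that the equilibrium-seeking, touch-disturbed behavior described above falls out of a couple of lines of spring-and-damping physics.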

Of course this concept doesn’t apply to just Madagascar’s landscape and ecosystem. It applies to many complex dynamic systems. Including our financial system, social and political trends, the climate, animal populations, even the gas particles in space that form stars. This is also an example of complex, unpredictable behavior emerging from very simple rules.

Machines that see.

“Robot readable world” by Timo Arnall, 2012

But back to the main narrative. This is a wonderful film by Timo Arnall from 2012, edited from found computer vision footage. In his own words: “As robots begin to inhabit the world alongside us, how do they see and gather meaning from our streets, cities, media and from us?”

This can be paired with other similarly themed works from that era. Around the same time, James Bridle started his The New Aesthetic blog, initially a tumblr and later a series of talks, writings, discussions etc. Also around that time, Elliot Woods started the hashtag #DebugViewArt on Twitter. These images are essentially the debug screens of our work, especially those of us who work with machine vision. But of course this isn't just about the aesthetics. It's an investigation into the implications of a world where machines are learning to see. As Warren Ellis so eloquently foreshadowed:

“Imagine it as, perhaps, the infant days of a young machine intelligence.”

A slight diversion: the aesthetics are, however, significant for one reason, because we grew up with the following images…

Terminator 2 HUD scenes.

It's natural for the designers and developers of robot readable world 'debug' screens to subliminally reference these machine vision images from our childhood sci-fi films (e.g. Terminator 2, from two and a half decades ago). Thus it's also natural for the rest of us to subliminally project the baggage associated with these sci-fi films back onto the robot readable world images, or even onto the very concept of 'machines learning to see'. That projection is a mistake. Robot readable worlds raise many interesting questions and challenges. But SkyNet-style self-aware robots trying to take over the world is not one of them. I just want to get that out of my system for now, and will revisit it later.

Big Data

Around the same time as Robot Readable Worlds and The New Aesthetic, the Zeitgeist was all about ‘Big Data’. Data data data. This is Google trends on the term ‘Big Data’ in the news (the irony of using Google to find and plot this data, to later criticize it, is not lost on me).

You'll hear figures like ‘Every single day we generate 2.5 quintillion bytes of data’. That's 2.5 trillion megabytes, as estimated by IBM. I don't know how accurate it is, but we are drowning in data.
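For scale, the unit conversion behind that figure is a one-liner to sanity-check (taking 'quintillion' in the short-scale sense, 10^18, and SI megabytes):

```python
# 2.5 quintillion bytes per day (short scale: 1 quintillion = 10**18)
bytes_per_day = 2.5 * 10**18
megabytes_per_day = bytes_per_day / 10**6  # 1 MB = 10**6 bytes (SI)
assert megabytes_per_day == 2.5 * 10**12   # i.e. 2.5 trillion megabytes
```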

Now, after 3–4 years of Big Data and Robot Readable Worlds, the Zeitgeist is ‘Artificial Intelligence’. Again Google trends on the term ‘Artificial Intelligence’ in the news:

(I think the spike in 2012 may be related to research at Google where they let an unsupervised deep learning algorithm browse YouTube, and after millions of frames it started learning about and ‘imagining’ cats. Seriously).

In fact, in just the past few months and weeks it's been all about BOTS.
“Bots are the new apps,” says everyone and their cousin. (Worth mentioning: I'm loving the artistic community of bot-makers).

But it should be no surprise that, after a massive wave of Big Data and Robot Readable Worlds, we are now seeing a massive wave of AI and Machine Learning — a field which has actually been around since the 1950s, but lay relatively dormant for the past few decades.

It’s no coincidence that AI is back on the rise now.

(Future) History of Artificial Intelligence

If you look at the (future) history books of AI, they'll say that the current deep learning algorithms have been around for decades, but only recently, with the emergence of powerful parallelized hardware such as GPUs, have we been able to run them properly. And only with massive crowd-sourced datasets have we been able to put them to good use in real-world practical applications. “That's why we're having a massive AI revival,” they'll say. And it's true.

This is a slide from a well-known lecture by Yann LeCun called “The Unreasonable Effectiveness of Deep Learning”. He is one of the godfathers of Deep Learning; he knows what he's talking about, and it is true. It's true that this field got little-to-no funding or interest throughout the ‘AI Winter’ of the 80s and 90s, even though there were promising results (despite what some might say). The research was deemed impractical, unfeasible, useless — until GPUs were developed and large datasets became available in recent years.

However, it's also true that LeCun is now the director of AI Research at Facebook, and that Geoff Hinton, another old-time godfather of Deep Learning, is at Google. It's also true that Facebook and Google are now collecting data faster than they can capitalize on it. It's also true that the NSA, GCHQ and the Five Eyes are building such a monumental archive of human communications, of incomprehensible size, that they don't have a frigging clue what to do with it.

There's more data being collected than anyone can handle. We are drowning in data. What the powers that are funding AI research need is machines to crunch through their data, to compact it, to find meaning in it. They need machines to understand that data, and provide only the relevant bits of information.

What is ultimately needed is for machines to produce an executive summary for our puny little human minds.

(I realize that ‘machines understanding data’ is a bit of a loaded phrase. I’ll expand on that later).

So right now, billions are being invested in solving this problem. And even if this research is performed openly, with all algorithms and research outcomes shared publicly, it’s useless without data to train or predict on. Whoever has the data, is in control.

So first and foremost, we needn't be concerned about Terminator-style machine overlords enslaving us, because the powers that are funding AI research are funding it to create machine servants: machines to crunch their data and make sure that the powerful remain powerful, if not gain more power. That is the first thing we should be concerned about.

Of course plenty of good will hopefully also come out of this AI research, revolutions in healthcare perhaps, cures for diseases, which is great. But make no mistake as to why billions are now being invested in this field.

We could say that if World War I gave us — at least accelerated the development of — analog computers, World War II gave us digital computers, the Cold War gave us the Internet; the current alleged War on Terror and Mass Surveillance is giving us artificial intelligence.

On a related tangent, I also like to think of biological metaphors, relating the development of artificial intelligence as a means of managing big data to the Darwinian evolution of complex organisms and consciousness as a means of managing and modelling their environment and ultra-high-dimensional stimulus. Unfortunately I don't have time to go into that today. If you're interested, you can search for a post I wrote called “Consciousness is evolution's solution to dealing with big data”.

Simple Harmonic Motion #12 for 16 Percussionists (2015)

I’d like to talk about this project which was performed in 2015, but started as part of a series in 2011.

It began as an exploration into oscillations and the emergence of complex patterns and rhythms through the interaction of simple oscillatory behavior. The behavior was initially inspired by the movement of pendulums, and visually and sonically very much inspired by John Whitney, Norman McLaren, György Ligeti, Terry Riley, and of course Steve Reich's many phasing pieces.
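The core mechanism of the series can be sketched in a few lines. This is a hypothetical toy version (mine, not the project code): give each of N oscillators a slightly different frequency, and the ensemble drifts out of phase into complex interference patterns before periodically snapping back into alignment, the Reich-style phasing the series is built on.

```python
import math

N = 16                # e.g. 16 oscillators (or percussionists)
BASE_FREQ = 1.0       # cycles per second for the slowest oscillator
SPREAD = 0.05         # each oscillator is 5% faster than the previous

def phases(t):
    """Phase (0..1) of each oscillator at time t."""
    return [(BASE_FREQ * (1 + i * SPREAD) * t) % 1.0 for i in range(N)]

def positions(t):
    """Pendulum-like displacement of each oscillator at time t."""
    return [math.sin(2 * math.pi * p) for p in phases(t)]

# At t=0 everything is aligned...
assert all(p == 0.0 for p in phases(0))
# ...the pattern then fans out, and realigns whenever t is a multiple
# of 1 / (BASE_FREQ * SPREAD) = 20 s, since by then adjacent
# oscillators have drifted exactly one full cycle apart.
realigned = phases(20.0)
```

Complex, ever-shifting rhythm from two constants and a modulo: complexity from simplicity.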

And I made many explorations in this area.

Simple Harmonic Motion #5, 2011


The inspirations I mentioned before are the low level inspirations, at the implementation / craft level. At the high level, the initial motivation for this series came when I was speaking to a gallery about an Istanbul themed show — which is where I'm from. And I wanted to make an abstract piece about Istanbul.

It developed into a very non-literal visual and sonic interpretation of the cultural diversity of the city; a collision of cultures and intertwined opposites: progressive vs conservative, religious vs secular, liberal vs authoritarian, extremely decadent vs extremely moral; interwoven, not only in the same city, but in the same streets, the same buildings. Amongst this chaos different lifestyles seemingly conflict, but also breed thriving and flourishing intricate subcultures, operating at different frequencies, crossing paths daily, through their interactions creating a rich tapestry of complex, unpredictable behavior.

Simple Harmonic Motion #9, 2013


I explored this idea with many different aesthetics, visually and sonically. The previous versions were from 2011. This one is from 2013, exploring a more sculptural approach, also as proof of concept for a potential kinetic sculpture. And sonically investigating the system as a compositional tool.

Simple Harmonic Motion #11 for 80 Lights, 2014


In 2012 I started developing an idea for a version with robotic (moving-head) spot-lights, but it took 2 years to find someone to commission it. In 2014 Blenheim Art Foundation commissioned it for the launch of their Ai Weiwei exhibition at Blenheim Palace.

These are really powerful light beams that go very high. And by this point, in 2014, I was starting to get into this train of thought regarding The Cloud metaphor, which I'm going to speak about in a bit.

A major part of this installation is using the sky, the clouds, as a screen. This is what you see and hear when you look straight up. Projecting these patterns, complexity from simplicity — which is how physics works, evolution works, the universe works, learning and understanding works — back into the sky, onto The Cloud, where God lives.

Simple Harmonic Motion #6, 2011

That was 2014, but back in 2011 I also made this.

It's a proof of concept for a performance in the SHM line of inquiry. That's me waving a bottle of apple juice, filmed late one night. It took 4 years to find someone crazy enough to commission it. Bless Future Everything; they had faith in the idea. They commissioned and produced the project and helped realize it, along with the Royal Northern College of Music, and with support from The Shed, Manchester Metropolitan University. We gave a short 10-minute performance at Future Everything 2015.

Simple Harmonic Motion #12 for 16 Percussionists (2015)


The core principle and low level inspirations are the same as previous incarnations, but the higher level inspirations were very different and shaped the outcome and tone accordingly. The previous incarnations had always been relatively chilled, hypnotic, almost meditative. This one I wanted to be very in your face, uncomfortable, almost violent.

I was very much inspired by the tensions between man and machine: science, technology, tradition, culture, ethics, religion and capitalism, all inextricably intertwined, yet pulling on us in different directions; at times ripping us apart.

And so the performance drifts in and out of sync, and creates moments of distressing tension — at times quite uncomfortable moments — highlighting these conflicts between science, technology, culture, ethics and tradition. Just when you think there's a harmonious pattern, it falls apart, and goes into chaos, noise. Again, just as things start to realign and you notice new patterns, it falls into chaos again. When science discovers something new, or a new piece of technology becomes available, it's often culturally difficult to accept: e.g. the switch to the heliocentric model of the planetary orbits; Darwinian evolution; in vitro fertilisation; artificial cloning of animals. Science, technology, ethics and tradition are always out of sync. And when they drift far apart, the results can be very uncomfortable — if not tragic. We're living that again right now, not only in the hard sciences with physics, chemistry and biology, but also economically with algorithmic finance, and socially with mass digital surveillance, data ownership etc. Society, culture and ethics desperately try to catch up, and realign with science and technology. But they fail. Because they're always behind: just as they're catching up, new discoveries, inventions and regulations again push everything out of sync. And this is how it's always been, since the dawn of civilization, if not before.

Another angle of inspiration can be summed up nicely by this quote from John Culkin, popularized by Marshall McLuhan. “We become what we behold. We shape our tools, and thereafter our tools shape us.” — an adaptation of Winston Churchill’s “We shape our buildings, thereafter they shape us.”

So an important part of the performance was to have humans controlled by the central computer — ‘meat-robots’, as one tweet aptly put it. Each performer has in-ear monitors and receives individual cues: when to step forward and become active, when to hit, when to step back and become inactive, etc., and they do the best they can to act out the commands they receive. The individual performers, the ‘workers’, don't need to be aware of the ‘bigger picture’, i.e. the composition. They individually execute their own cues, and perform very simple, tedious, monotonous tasks. Collectively, however, they are a complex creature, controlled by the machine, playing out the complex audio-visual composition, drifting in and out of sync, shifting between order and chaos and back.
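The cue-distribution logic described above is easy to illustrate. A hypothetical toy sketch (not the actual show software): a central scheduler holds the full score, while each performer only ever receives their own stream of simple commands.

```python
# A central 'score' of timed commands; performers never see it whole.
score = [          # (time_in_beats, performer_id, command)
    (0, 3, "step_forward"),
    (1, 3, "hit"),
    (1, 7, "step_forward"),
    (2, 7, "hit"),
    (3, 3, "step_back"),
]

def cues_for(performer_id):
    """Each 'worker' gets only their own cues, not the bigger picture."""
    return [(t, cmd) for t, pid, cmd in score if pid == performer_id]

# Performer 3's entire view of the composition:
assert cues_for(3) == [(0, "step_forward"), (1, "hit"), (3, "step_back")]
```

The complexity of the composition lives only in the scheduler; each performer's stream stays simple, tedious and monotonous, which is exactly the worker / bigger-picture split described above.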

I also wanted to capture the notion that technology is not something external to us, that comes from outside. It co-evolves along with us, socially and culturally. It’s an extension of our body, an extension of our reach. Hence the torches, extending the arms, quite a literal metaphor there. Also a useful instrument to inflict momentary acts of violence onto the audience, as the unexpected flashes in the eye can feel like a slap in the face.

Surveillance as a business model

So I'm really into this idea of looking at our current relationship with emerging technology through an anthropological lens of ancient history, tradition and ritual.

This is a quote and article by Bruce Schneier — writer, cryptographer and privacy advocate.

“Surveillance is the business model of the Internet” — Bruce Schneier.

Which on the whole I agree with. But is there a wider picture?

Has surveillance not been the business model of humanity for millennia? Since the dawn of civilization, agriculture, and organized religion?

I'm going to try and explain this in the next couple of slides. But first I’d like to ask: if you believe you’re being watched, how important is it whether or not you actually are?

There's a great thought experiment / architectural design from the late 18th century by philosopher Jeremy Bentham about exactly this, called the Panopticon: a circular prison with a one-way mirror at the center, designed such that the prisoners don't know whether or not they're being watched. The concept was further examined and expanded upon by French philosopher Michel Foucault, who used it as a metaphor for many social structures. And today the Panopticon is widely used as a metaphor for the current state of mass surveillance.

But I want to think about it in a slightly different context. Have we not been living in a Panopticon since at least the Neolithic era, 12,000 years ago, with Gods' asymmetric gaze from behind the one-way mirror in the sky? You may actually be watched by a higher force, e.g. prison guards or state surveillance; or you may merely believe that you're being watched, e.g. by a fictional deity. As long as you believe it, isn't that where the power of control lies?

“The more tremendous the divinity is represented, the more tame and submissive do men become to his ministers; and the more unaccountable the measures of acceptance required by him, the more necessary does it become to abandon our natural reason, and yield to their ghostly guidance and direction.”

– David Hume, “The Natural History of Religion”

I don't say this to trivialize the current state of digital surveillance. I'm just fascinated by the evolution, over millennia, of systems of surveillance, power and control. And I see the current mass digital surveillance as a natural evolution from the religions of the past, which is what I find most fascinating, and will try to expand on in the following slides.

Evolution of mass surveillance and control

I'm not going to talk about how or why we feel the need to believe in gods or religions, or how they originated. There are so many, many speculations, all very fascinating. I'm just going to talk about this very specific aspect of organized religion — related to this narrative.

And I don't want to stand here on stage insulting anyone's beliefs. When I talk about religion or god, I'm not talking specifically about Christianity, Islam, Judaism or any other common modern-day belief. I'm talking about the thousands of religions and deities throughout human history: Thor, Zeus, Osiris, Odin, Poseidon, Horus, Krishna, Gaia, Ra, Shiva, Xenu, Jupiter, Mercury, river fairies, animal spirits, pixies, the lost beliefs of indigenous tribes around the world, past and present. I'm referring to all of these beliefs: either invented as-is by a person, or — more likely — organically evolved to suit the characteristics of the society. Not necessarily to suit the needs of, or benefit, everyone in that society; most likely just a select few, i.e. those who were instrumental in establishing those beliefs in the fabric of their society.

And I'm interested in how these Overseers adapt and affect the societies in which they exist, and how they evolve over time.

Philosopher Daniel Dennett looks at the evolution of religion through the lens of cultural evolution, specifically the metaphor of ‘memetic’ Darwinian evolution. He suggests — similar to wild species of plants or animals — ‘wild ideas and beliefs’ are born naturally ‘in the wild’. These wild ideas adapt — they evolve traits — to co-exist with the society they’re in. Some traits help ideas spread — those ideas and traits survive. Other traits cause ideas to die and become obsolete. Those ideas with traits most suited to the requirements of their host groups are eventually assimilated, domesticated and ‘farmed’ — mass reproduced — by stewards, guardians of those ideas. As both society and beliefs develop, they symbiotically co-evolve.

Outside of these metaphors, there have been some interesting scientific studies looking at such correlations.

Looking at subsistence and the evolution of religion (Peoples, H. C. & Marlowe, F. W. 2012. Human Nature, 23(3), 253–269), researchers collected data on 178 different cultures and their beliefs. They categorized the gods as: Absent (the light grey sections in the bar graphs); Inactive (no authoritarian / moral gods, no divine intervention; deism. These are the cross-hatched sections in the middle of the bar graphs); or Active / Moral (these gods tell you what to do, right from wrong, and punish you if you sin, like the Abrahamic gods of Christianity / Islam / Judaism, Roman gods, Greek gods etc. These are the dark grey sections at the bottom of the bar graphs).

And they found a number of interesting — perhaps unsurprising — correlations between the type of god and the characteristics of the society. For example, the larger the population, the more likely the god was a moral god, telling people what to do. Looking at mode of subsistence: foragers and subsistence farmers are able to feed themselves without large-scale organization or cooperation, and mostly do not have a moral god telling them right from wrong; while those living off pastoralism or intensive agriculture — where large-scale cooperation or social hierarchies would be useful — are more likely to have moral gods. And looking at wealth distribution: the greater the social inequality or social hierarchy, the greater the likelihood of having a moral god.

So it seems to me surveillance has always been the business model of civilization, even if conducted by a fictional deity: to ensure that land owners have workers working their land; that large populations are held together to maximize the work-force; that non-egalitarian systems are reinforced; that social hierarchy and inequality are widened; and that the ruling classes maintain, if not gain, power.

Now these are just correlations. Statistics 101: correlation is not causation. Neither I, nor the researchers of this study, are suggesting that one caused the other. It could be that once agriculture started, social classes started being established, and a moral God evolved in order to aid holding everything together. Or it could be the other way round, that perhaps a moral God evolved first and that allowed social classes to become established and agriculture to flourish. Or it could be other hidden factors which drove both adaptations independently. Or a complete coincidence.

There is actually a body of more recent research which finds evidence for the former: that the social/economic/technological changes happened first, and the religious, ‘moral high god’ adaptations came later. Which is also very interesting — however, the direction of causation is not actually important for my argument.

It’s also worth noting that the society-deity pairings aren’t 1:1. Not all large populations have moral gods, neither do all agricultural societies. It turns out that moral gods aren’t required in these situations, but their likelihood increases. And that’s what is of most interest to me. Despite all unknowns, it is clear that historically, correlations do exist between society, technology and religion.

And the nature of the deity correlates with the nature of society, to ensure suitable mode of social control.

For millennia, religions have evolved, responding to their host culture’s traits and needs. Thousands of beliefs have come and gone. Some have disappeared completely and gone extinct, some are localized to small groups, and others grow to dominate across countries and continents. As societies grow and change, so do the deities and religions along with them. Those beliefs and values which are most culturally fit survive.

Now we are living in times of increasing technological surveillance. In this post-Snowden era we are more aware of the extent of this invasion of privacy than ever. But how have the general public reacted to the Snowden Revelations? It seems the general mood is apathetic — perhaps even sympathetic, finding safety and comfort in knowing that a Higher Force is watching; protecting those who are virtuous, the law-abiding. He who is innocent has nothing to fear. He who does wrong will be found and punished.

This seems to me, rather similar to the role of organized religion and moral high gods of the past.

Man invented god to inflict fear, control and power. It was ancient religions that imposed omnipotent, omniscient and omnipresent powers watching over us, judging us, protecting us. Those were myths fabricated to control the masses. Today, as our societies lose spiritual sensibilities, we are drowning ourselves in a break-neck race of materialism and technological submission. So The Overseer too is adapting, co-evolving. As its metaphysical traits crumble and become obsolete, the gaps are filled and substituted with new traits — physical, material, digital traits — to match our physical, material, digital lifestyles. We don’t fear the Old Gods anymore, they cannot protect or control us, so we need new ones. We have killed God, as Nietzsche says. But we are rebuilding Him, with Technology, to match our techno-culture. The myth is becoming real. We’re edging closer and closer to an authentic man-made deity. Living up in The Cloud, of all places. Watching over us, listening to our thoughts and dreams in ones and zeros.

A Digital God for a Digital culture.

I'm really intrigued by this evolution of our Overseer — invented to ‘watch and protect’, an instrument of power and control — into a material, digital form; to match our material, digital lifestyles.

And to be very clear, I'm not suggesting that I believe that Google, Facebook, the NSA or GCHQ are gods that we worship, or that they have other magical god-like powers (though all of those points could be argued too). I'm just suggesting that they are taking on the role in society — surveillance and control — that was previously played by traditional religion, which is no longer effective in that role.

All watched over by machines of loving grace: Deep-dream edition (2015)

That image was the GCHQ btw. This is another portrait of the GCHQ, made with Google’s Deepdream.

I guess most of you have heard of or seen Deepdream? You probably all hate it. Because everything mostly looks like puppy-slugs. I love it. But I should be clear: it’s not the aesthetics that I love, it’s the conceptual poetry of the algorithm. I wrote a long blog post about this. If you search for “Deepdream is blowing my mind” you should be able to find it.

To summarize very briefly, it’s an artificial neural network, very loosely modeled around our own brain, especially the retina + visual cortex. The network is trained to recognize images. And when you give it an image it doesn't know, it tries to recognize the image — or sections of it — in relation to what it does know. It might think that a section of the image looks a bit like a puppy, another section resembles a bird etc. So the puppy neuron fires a bit, and the bird neuron fires a bit. And then new images are generated to maximize certain firings, i.e. those recognitions. It’s conceptually similar to us looking at a cloud or a Rorschach inkblot and recognizing shapes. That in itself I find really interesting, because the way this happens is — at least at a very high level — similar to how it happens in our minds.

But my favorite thing is, that when we look at these deepdream generated images, we say “oh it’s a puppy-slug”, or a “bird-lizard”. But actually, there’s no such thing. There is no “bird” or “lizard” or “puppy” or “slug” in the generated images. There are only bird-like features, lizard-like features, puppy-like features, slug-like features in the generated images. Because the neurons that represent those particular bird-like, lizard-like, puppy-like features fired in the artificial neural network, it amplified those particular features. And then when we look at the generated images, we pick up on those same bird-like, lizard-like, puppy-like features because the neurons that represent those same features in our brain fire. But those features are only in the image because the artificial neural network saw them and amplified them. This is basically a mirrored duet of Rorschach tests between our mind and the artificial neural network.

This is a bit of a simplification. Check out my blog post (and related) if you’re curious to read more.
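As a toy illustration of that mechanism — amplifying an image along the gradient of a chosen neuron’s activation — here is a minimal numpy sketch. The “network” is a single random linear layer standing in for a trained model, so everything here (the shapes, the unit index, the step size) is illustrative, not Google’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy one-layer "network": each row of W acts as one feature detector,
# a stand-in for a trained neuron (e.g. a hypothetical "puppy" unit).
W = rng.standard_normal((10, 64))

def activation(img, unit):
    # How strongly this "neuron" fires for the image
    return float(W[unit] @ img)

def dream(img, unit, steps=100, lr=0.1):
    # Gradient ascent on the chosen unit's activation: nudge the image
    # toward whatever that neuron responds to, amplifying its features.
    img = img.copy()
    for _ in range(steps):
        grad = W[unit]                     # d(activation)/d(img) for a linear unit
        img += lr * grad
        img /= np.linalg.norm(img) + 1e-8  # keep the image bounded
    return img

img = rng.standard_normal(64)
img /= np.linalg.norm(img)
before = activation(img, unit=3)
after = activation(dream(img, unit=3), unit=3)
```

Real Deepdream does this through many convolutional layers of a trained network, with extra tricks like blurring, jitter and multi-scale ‘octaves’; the core loop, though, is essentially this gradient ascent.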

But back to the original narrative. This is a portrait of GCHQ taken from the cloud — Google Earth — and run through Deepdream. It’s quite high resolution, about 800 megapixels.

So this portrait is a collaboration between myself and an AI. The AI is developed by Google, one of the new digital deities. But it’s not just any AI; it’s an AI that is a mirror of our own mind, as I described before. I really wanted to maximize the HR Giger-esque bio-tech vibe in the aesthetics, to reflect the nature of this new Overseer, part human, part machine. So I combined multiple renders from Deepdream, maximizing different layers to get different textures — the eye-like structures; ornate Giger-esque decorations; a webby, organic vibe etc. — and then graded and composited them all.

Machines that understand

“Does everyone understand?”

This is my favorite image of the presentation. I found it on iStock and it was actually called “Does everyone understand?”.

Earlier when talking about Big Data and Robot Readable Worlds, I mentioned the driving force for AI was building machines that could understand data, and produce an executive summary.

I realize ‘understand’ is a loaded word. A complex concept, and there most definitely isn't a cross-discipline, universally agreed definition of what it means or how it works. However in Artificial Intelligence — or looking at human understanding through the lens of AI (which is how the field of AI was born) — there is rough agreement on what it is, or at least on a direction. And that involves data compression.

An oversimplification is: finding a more compact way of representing information is ‘learning’, i.e. finding regularities and patterns in data. Learning a compression (or model) that allows explanations of the data and future predictions is ‘understanding’.

And the more that you can compress, and predict, the more you've learnt and understood.
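One crude way to see this compression framing in action is with an off-the-shelf compressor — zlib standing in here for a ‘learner’, which is my illustration rather than anything from the AI literature:

```python
import os
import zlib

# Data with strong regularities, vs. pure noise of the same length
patterned = b"the horse jumped over the fence. " * 40
noise = os.urandom(len(patterned))

ratio_patterned = len(zlib.compress(patterned)) / len(patterned)
ratio_noise = len(zlib.compress(noise)) / len(noise)
```

The repetitive text shrinks to a small fraction of its size because the compressor finds its regularities; the noise barely compresses at all, because there is nothing to learn in it.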

Whenever we’re presented with something new — a new image, a new sound, a new object, a new concept — we look at it through the lens of what we already know, and we try to strip it apart into components that are already familiar to us, and store it in relation to those familiar components.

The horse jumped over the fence.


Imagine these sentences purely as sequences of lines and shapes (The second line is a Google translation of the first, which I believe is incorrect, but I prepared this slide late last night and didn't have time to correct it).

If I had to memorize each of these sentences, to replicate elsewhere — i.e. write down without looking — the first one would be a lot easier for me. Because I can compress that information really well. The most important thing for me to remember is the concept of ‘horse’. Once I have that, ‘fence’ is more probable for me in that context (e.g. compared to the word ‘fortnight’) so I can compress it well. As soon as I store the ‘horse’ and ‘fence’ concepts, the concept of ‘over’ is very probable in that context too, so it can be stored with very few bits. And once I have ‘horse’, ‘fence’ and ‘over’, ‘jump’ is almost implied because it’s so probable, so I can store it with very, very few bits. Once I've stored those concepts, I already know how to spell the words. H-o-r-s-e for horse etc. I already know how to draw each letter. And I actually know how to draw each letter as sequences of primitive shapes such as straight lines, curves, intersections etc., which I learnt as a baby. So it’s all extremely compressible.

And when I look at the sentence, the same process happens in reverse. First the receptive fields in my retinal ganglion cells apply filters to the incoming light signals and pick up all of the edges and most fundamental shapes and orientations. Further along the visual cortex, those different edge and basic shape detections are cross-correlated and I recognize letters. As I recognize letters, the brain predicts other most likely phenomena. E.g. when I see H-O-R-S, it’s highly likely that that’s HORSE and will be followed by an E. Collections of letters trigger words, words trigger concepts etc.

Whereas in the second sentence — I don’t speak Japanese — so I would have to memorize it as a sequence of shapes, lines, curves etc. That’s a lot of information that I can’t compress very well. Still I’d try to compress it, perhaps as: a 2x3 grid with a beard; followed by a vertical line; then a kind of pound sign without a top; a 7 with a curved bottom; sideways capital H; 30 degree rotated smiley face with one eye etc. So there’s still a bit of compression that I can do, but it’s far from optimal.
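That prediction cascade — seeing H-O-R-S and expecting an E — can be mimicked at toy scale with a character bigram model. This is a deliberately simplistic sketch of ‘predicting the probable next symbol’, nothing like the brain’s actual machinery:

```python
from collections import Counter, defaultdict

corpus = "the horse jumped over the fence. the horse is in the field. "

# Count how often each character follows each other character
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def predict(prev: str) -> str:
    # The most probable next character, given the previous one
    return model[prev].most_common(1)[0][0]
```

Even this tiny model predicts that ‘s’ (as in HORS…) is most likely followed by ‘e’, and ‘t’ by ‘h’; richer models make longer-range predictions, which is exactly what makes familiar text cheap to store.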

Psychologists and philosophers might frown at this definition of learning and understanding. Especially anti-computationalists. But what I just said shouldn't be seen as a comprehensive explanation of what learning and understanding are, especially in humans, or even as a defence of functionalism or computationalism. It’s a hypothesis: a testable model of learning and understanding in computational systems, as seen through the lens of Information Theory.

Time for a joke.

But Jürgen Schmidhuber (pronounced “You Again Scmidhuuboo”) — one of the rather important figures in artificial intelligence and machine learning — takes it a bit further and applies this concept as a general intrinsic driving force for all intelligent agents.

He believes that this is the underlying principle for unsupervised learning and the path to general artificial intelligence, human-level intelligence. That curiosity and creativity are fueled by our intrinsic desire to develop better compressors. Within our capacity — as deemed fit by our evolutionary survival strategy — the more we are able to compress and predict, the better we have understood the world, and thus will be more successful in dealing with it.

As we receive information from the environment via our senses, our compressor is constantly comparing the new information to the predictions it’s making. If the predictions match the observations, this means our compressor is doing well and no new information needs to be stored. The subjective beauty of the new information is proportional to how well we can compress it (i.e. how many bits we are saving with our compression — if it’s very complex but very familiar, then that’s a high compression). We find it beautiful because that is the intrinsic reward of our intrinsic motivation system, to try to maximize compression and acknowledge familiarity.

However if the predictions of our compressor do not match the observations, our compressor/predictor has failed. That is a good thing, an opportunity to learn something new about our environment. Our curiosity drive is encouraged to find such observations, to find new information that our compressor cannot initially compress.

If we store the new incompressible information as is, i.e. without learning to compress it, that is not ideal. Because we haven’t actually learnt anything, or improved our compressor. However if we are able to find new regularities in the new information, that means we have improved our compressor. What was incompressible has now become compressible. That is subjectively interesting. The amount by which we improve our compressor defines how subjectively interesting we find that new information. In other words, the subjective interestingness of information is the first derivative of its subjective beauty, and it is rewarded as such by our intrinsic motivation system.
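Schmidhuber’s ‘bits saved’ reward can be caricatured with zlib’s preset-dictionary feature: encode a new observation with and without prior, related knowledge, and treat the saving as learning progress. This is my loose analogy using a stock compressor, not his mathematical formalism:

```python
import zlib

history = b"the horse jumped over the fence. " * 20   # what we've already learnt
observation = b"the horse jumped over the gate. "     # new, related input

# Cost of storing the observation with no prior knowledge
naive = len(zlib.compress(observation))

# Cost when the compressor can refer back to familiar patterns
c = zlib.compressobj(zdict=history)
informed = len(c.compress(observation) + c.flush())

# Bits (well, bytes) saved = the intrinsic reward for learning progress
progress = naive - informed
```

Here the dictionary plays the role of the improved compressor: the familiar phrase costs almost nothing to encode, so the ‘reward’ comes out positive.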

Schmidhuber has written many papers and articles (1, 2, 3) on this with mathematical formalizations on how it can be applied to define and drive attention, curiosity, creativity, beauty, poetry, art, science and even humour. Here he is explaining a joke. Please imagine that it’s read with a strong German accent.

“But the punch line is unexpected. Initially this failed expectation results in sub-optimal data compression. Storage of expected events does not cost anything, but deviations from predictions require extra bits to encode them. The compressor, however, does not stay the same forever: within a short time interval its learning algorithm kicks in and improves the performance by discovering the non-random, non-arbitrary and therefore compressible pattern relating the punch line to the previous text and to the observer’s previous elaborate, predictive knowledge. This prior knowledge helps to compress the whole history including the punch line a bit better than before, which momentarily saves a few bits of storage — that is, there is quick learning progress, that is, fun. The number of saved bits (or a similar measure of learning progress) becomes the observer’s intrinsic reward.”

It’s important to note how he distinguishes a random punch-line (one that is totally unrelated to the rest of the joke, and thus one we will not be able to learn to compress, i.e. relate to the joke) from a punch-line that seems initially unrelated (i.e. one we fail to predict, which doesn't match our current compressor), but which, once we ‘get it’, we recognize how it relates and are able to compress. We have improved our compressor. And that’s why it’s funny. He must be so much fun to hang out with.

In short yes, using these kinds of approaches or similar models, machines can learn, and understand.

Keeper of our collective consciousness, 2014

I've neared the end.

To summarize, I'm an old fashioned artist. I paint landscapes…

…and scenes of the Divine…

I want to end with a poem. It’s a poem I wrote a couple years ago. It’s a collaboration with Google. Not people working at Google, but actual Google, the search engine. And it’s actually more a collection of prayers.

We have a very intimate connection with the cloud. We confess to it. We ask for things from it. We tell it things we wouldn’t tell our family, or closest friends. And Google is the Keeper of our collective consciousness. It sees everything we see, knows everything we know, feels everything we feel.

So for this poem, I wrote the first few words and Google auto completed, based on our collective consciousness. I made this originally as a performative video in 2014, set to Marilyn Manson’s cover of Depeche Mode’s ‘Personal Jesus’. I thought it was very appropriate, here we did have our own personal Jesus, someone who hears your prayers, someone who’s there, all you have to do is reach out, and touch (your keyboard or screen, and have) faith.

But now instead of playing that video, I’d like to read that poem out to you. And while I read it, here is a slideshow of the cosmos with images from the Hubble telescope (I didn't make this video; I found it on YouTube, by the user Relaxicity).

Please remember, these are real prayers, the most popular prayers, from all over the world, in 2014.

[I don’t have video of me reading the poem, so below is text and original video]

I itunes
want one of those
am learning
see fire
I feel
I feel
I feel
tired all the time
I feel
pretty lyrics
I feel l
I feel l
I feel l
ove donna summer
I feel l
ike killing myself
I feel li
ke killing myself
I feel li
ke a failure
I feel li
ke buddy holly
I feel li
ke a woman
I feel like a
I want
one of those
I want
that dress
I want
to break free
I want
you back
I want a
I want a
I want a
hippopotamus for christmas
I want a
I want to
break free
I want to
know what love is
I want to
marry harry
I want to l
ose weight
I want to l
ive in america
I want to l
ook like that guy
I want to l
ose a stone
I want to le
ave my job
I want to le
I want to le
ave university
I want to le
ave teaching
I want to learn
I want to under
I want to under
stand football
I want to under
stand you
I want to under
stand everything
I want to get
I want to get
I want to get
I want to get
married islam
I want to get a
I want to get a
way from it all
I want to get a
I want to be
I want to be
I want to be
I want to be
like you
I want to become
a nurse
I want to become
a teacher
I want to become
I want to become
a doctor
I want to become a
I want to become b
I want to become b
I want to become b
ritish citizen
I want to become b
I want to become better
I want to become better
at math
I want to become better
at math so that I
I want to become better
I need
a hero
I need
your love
I need
a doctor
I need
a dollar
I need a
I need to
lose weight
I need to
talk to someone
I need to u
rinate frequently
I need to u
I need to u
nlock my iphone 4
I need to u
pdate my browser
I need to understand
I need to understand
I need to understand myself


