AI, Music, Style: Can there be too much culture?

Sunil Manghani
Published in Electronic Life
Mar 27, 2024

At the heart of this article is a question about the artificial replication of ‘style’ and its significance for an advanced mode of cultural production. It is worth noting that style is not something readily subject to copyright law. As AI models become more sophisticated, a new generative approach to style opens the potential for a massive (over-)production of culture. At the time of writing, for example, a new text-to-video model, Sora, has been hitting the headlines. It offers a glimpse into a potentially ‘memory-less’ future as foretold by postmodern theorists.

Warhol’s Legacy

I’ve got a Brillo box and I say it’s art
It’s the same one you can buy at any supermarket
’Cause I’ve got the style it takes

Lou Reed and John Cale, Style It Takes, 1990

In the mid-twentieth century Andy Warhol forever shattered our understanding of the ‘artisticness’ of style. Following on from Duchamp’s earlier interventions, which gave attention to the wider context of an artwork, Warhol co-opted an ‘industrious’ mode, famously epitomised by his exhibiting of supermarket Brillo boxes in 1964. By turning the mass-produced object into a work of art, he challenged preconceived notions of the boundaries between art and the everyday. Had it been a problem confined to the artworld it might not have been so significant.

Yet, Warhol’s legacy cuts more widely across culture. He came to epitomise the ability to affect style, i.e. to engender a new capability to influence, change, or alter an existing style. This moves towards a postmodern reading. After Warhol, it was no longer necessary to create new utterances; instead, it was possible (or only possible) to reframe, reposition and repackage what already existed. Taken to its extreme, the argument is that there is no longer anything new, only new ways to contextualise and situate. Today, across the flood of Pinterest boards, Instagram feeds and TikTok and YouTube videos, we see the mass repetition of poses, filters and memes. Each is indubitably an individual act of creativity, yet arguably there is a paucity of originality. The ‘art’ is less in the generation of new meaning, and more in the curation, i.e. the repurposing of images, texts and sound. We stylise and curate as a means to create.

Rap music, for example, as it emerged in the early 1970s within the African American communities of the Bronx, New York City, is often considered the embodiment of remix culture. The genre thrives on the reinterpretation and recontextualization of existing sounds, beats, and lyrics, making the act of remixing not just a method of creation but a statement of cultural and social significance. The skill of the rapper as a curator of sound becomes paramount, selecting and combining samples in ways that pay homage to their origins while simultaneously pushing the boundaries of musical expression. This practice reflects a postmodern appreciation for bricolage and intertextuality, where the meaning and value of the work are derived from its relationship to other texts and sounds. It is also worth noting, as rapper and professor Lupe Fiasco remarks, that rap is intricately bound up with technological advancement. It is no real surprise that this relationship thrives further on recent developments in AI. As part of his collaboration with Google in developing the tool TextFx, Fiasco describes himself as a ‘data gathering machine’.

Style and its Repetition

A scholarly reading of rap music typically focuses less on its entertainment value and more on its capacity as a rich, complex form of cultural and political expression. As part of wider remix culture, rap serves as a means of innovation and cultural critique, reflecting broader themes of resilience, identity and resistance. Tricia Rose, for example, a pioneering scholar of hip-hop culture, argues in her book Black Noise (1994) that the practice of sampling is not just an artistic technique but a method of cultural preservation and commentary. Similarly, in Yo’ Mama’s DisFunktional! (1998), historian Robin D. G. Kelley positions rap as a form of counter-narrative, pursued through the reassembly of existing musical fragments to forge new expressions of identity and resistance. In Prophets of the Hood: Politics and Poetics in Hip Hop (2004), Imani Perry charts the complexity of hip-hop as a cultural movement, taking rap to be emblematic of the foundational practice of ‘citationality,’ again suggestive of a practice of ‘style’ whereby references to existing cultural elements create layered meanings and commentaries on society, politics, and culture. Importantly, as Mark Anthony Neal argues in What the Music Said (1999), Black popular music styles play a key role in shaping Black public culture.

Dick Hebdige famously argued in his seminal work, Subculture: The Meaning of Style, published back in 1979, that the ‘symbolic resistance’ of fashion and music is never just a matter of clothing or music preferences but a significant form of communication, reflecting and challenging broader societal tensions and conflicts. Nonetheless, as Hebdige argues, there is a fundamental problem. While subcultural styles are inherently political — initially setting out to subvert mainstream norms and values — dominant cultural forms eventually co-opt these styles, diluting their subversive power and incorporating them into mainstream commercial culture. The portrayal of punk as a treasured part of the national culture at the 2012 London Olympics opening ceremony, for example, is an overt case of such incorporation (in its heyday punk was seen as highly subversive and distinctly anti-establishment!).

In the post-war period, the Existentialist philosopher Jean-Paul Sartre urged the practice of the ‘committed’ writer — whereby, he argued, authors (and creators more generally) can make choices within their use of pre-existing forms, established conventions, genres and codes. Artistic production arises out of a struggle with what has gone before, which in turn provides the ‘site’ upon which to make choices, as a means to seek change and freedom. In this sense, the assertion of ‘style’ provides a means to subvert or outwit the constraints of a dominant language. Sartre, for example, positioned the revered, bourgeois nineteenth-century writer Flaubert as embattled with his own (privileged) class identity, seeking to use literary style as a means of existential escape or critique. Flaubert’s critical portrayal of bourgeois society in his novel Madame Bovary is a case in point.

Roland Barthes, a pioneering semiotician and literary theorist who explored the realms of culture, text, and meaning beyond authorial intent, was sympathetic to Sartre’s position, yet offered a more pessimistic account; one that went on to influence writers such as Dick Hebdige. Importantly (albeit difficult to grasp), Barthes sought to distinguish between language and style with a third term, ‘writing’. His point was that both language and style are forms we inherit, i.e. we cannot choose them. We are born into a language, which is defining of a subject position. Style, while typically considered a means of individual expression, was similarly considered by Barthes as a means of defining one’s subject status. Style comes in many forms and does not readily possess a ‘grammar’, yet he viewed it as an involuntary, embodied set of actions and affectations — some of which are indeed culturally shared habits — which crucially come prior to writing.

Barthes’ reading of style marks it out as ‘calculable’ (as coded), which he sets against writing as a potential means of the incalculable (that which has yet to be defined or codified). In a broader frame, the theorist Achille Mbembe engages with concepts that contrast the calculable and the incalculable, or more specifically the politics of visibility and invisibility. His work prompts critical reflection on what can be quantified and what escapes quantification in social life and human experience. In the context of cultural styles and expressions, one could extrapolate from Mbembe’s perspective to argue that while certain aspects of culture might be measured or analyzed (the calculable), there are dimensions of human creativity, affect, and meaning that resist easy quantification or computational analysis (the incalculable).

In the context of AI, and indeed digital culture more broadly, the calculable takes on ever more seriousness. The packaging of styles and genres within easy-to-use filters, apps and software appears to promote creativity, to provide very direct and intuitive means of cultural production, yet nonetheless encourages a general repetition of styles, leading to a reduction in acts of choice or freedom (in Sartre’s sense). For example, the effortlessness of selecting a drum sound or post-production effect in Pro Tools — the industry standard for recording studios — is liberating in the immediate circumstance of making a track, yet betrays something of Barthes’ view of style as involuntary (as already coded).

The Grain of the Voice

In his essay, ‘The Grain of the Voice’, Barthes provides one way of understanding the incalculable. Writing about the reception of music (specifically the vocalist), he distinguishes between the ‘pheno-song’, which refers to aspects of performance that are culturally coded and communicable, and the ‘geno-song’, which represents the bodily, physical presence of the singer’s voice itself — or what he refers to as the ‘grain’. The grain is what Barthes identifies as the unique, irreducible quality of the voice, tied to the materiality of the body producing it.

The grain of the voice is the phenomenon, for example, that allows us to differentiate between a live and a recorded version of a song. It is what we tend to associate with the ‘signature’ of a specific artist; their unique character, which in turn provides unquantifiable nuances to how a song is sung. Arguably, there is a contradiction between Barthes’ earlier account of style as coded and the grain of the voice as a bodily, idiosyncratic, uncoded form. However, for the present purposes, the point is that there remains a ‘presence’ that cannot be quantified or copied. While this might seem a romantic view, Barthes maintains a structuralist account. The ‘geno-song’ is part of an overall form, working in conjunction with the ‘pheno-song’. Crucially, it is a component that is not part of the communication of meaning (it does not draw upon the structure of language, for example; it is not the coded means of representing feelings or expression). In fact, the ‘geno-song’ or the ‘grain of the voice’ is just pure sound. It is the timbre of the individual vocalist — a form of friction or ‘noise’ — that we hear in addition to the meaning in a song.

It is a quality that we would usually designate as unique and ‘analogue’ — i.e. not reducible to that which can be calculated, repeated or digitised (coded as 1s and 0s). It is possible to sample the grain of the voice, but essentially all you are doing is making a specific recording. By contrast, the words and melody of a song can be easily transmitted, transposed and ‘cited’. To use the analogy of handwriting, which we tend to think cannot be reduced to a shareable font, the grain of the voice is the preserve of the individual.

Yet, in the context of AI, pattern recognition, and high-performance computing, our general understanding of the in/calculable is greatly challenged. New audio technologies are advancing in ways that lead to a view of the grain of the voice less as a romantic account of unique presence, and more as an actual ‘unit’ of meaning and measurement. Descript’s Overdub software, for example, allows users to clone their voice using AI, whereby users can then type text to be generated as audio in their own voice. It is a technology used for correcting podcasts or video narration without the need for re-recording. Taken in a different direction, Google’s Project Euphonia is aimed at helping people with speech impairments, using AI to understand and synthesize impaired speech patterns (albeit not directly modelling the person’s voice).

A more prominent example is Vocaloid, software which can ‘model’ a singer’s voice and then synthesize singing from an inputted melody and lyrics. It uses voicebanks recorded by voice actors or singers and can create singing in various voices, including those of fictional characters as well as well-known singers. In principle it means any voice can be used to sing any song. The technology relates to broader ‘deepfake’ deployments, whereby real voices (typically of celebrities) can be cloned via AI training on hours of real audio.

The use of AI in modelling a singer’s voice opens up innovative possibilities for music production, accessibility, and entertainment, but also raises important questions about consent, copyright, and ethical use. We can begin to pose strange possibilities. Copying a singer such as Michael Jackson, for example, and recording every song ever written but in Jackson’s unique style — i.e. sung using the grain of his voice — becomes technically feasible. It raises numerous philosophical questions about authenticity, creativity, and the essence of human expression in art, but nonetheless emerges as a real possibility, one in which the previous importance of ‘presence’ is no longer so critical. Beyond complex discussions about the nature of cultural production, concerns arise about saturation in the digital age. New technological capabilities pose the question of whether there can be ‘too much culture’, i.e. an over-saturation of cultural production which we could never humanly consume.

Too Much Culture?

In his structural analysis of narrative, Roland Barthes remarks upon the phenomenon of stories as ‘human material, a class of thing which humans produce’. At first, it appears we cannot impose any sort of order upon narratives: ‘There are millions and millions of narratives, developed over an indefinite period of time, the origins of which are unknown. … Narrative is everywhere’. Thus, to the human faculty, narratives and melodies might as well be infinite: the maths is too big to contain. Yet, this is not the same as saying the narratives of the world are actually infinite. In Words and Rules, Steven Pinker provides us with some of the numbers. He is interested in the combinatorial and recursive nature of language, and in the reach of its combinatorial rules. He writes:

Say everyday English has four determiners (a, any, one, and the) and ten thousand nouns. Then the rule for a noun phrase allows four choices for the determiner, followed by ten thousand choices for the head noun, yielding 4 x 10,000 = 40,000 ways to utter a noun phrase. The rule for a sentence allows these forty thousand subjects to be followed by any of four thousand verbs, providing 40,000 x 4,000 = 160,000,000 ways to utter the first three words of a sentence. Then there are four choices for the determiner of the object (640 million four-word beginnings) followed by ten thousand choices for the head noun of the object, or 640,000,000 × 10,000 = 6,400,000,000,000 (6.4 trillion) five-word sentences. Suppose it takes five seconds to produce one of these sentences. To crank them all out, from The abandonment abased the abbey and The abandonment abased the abbot, through The abandonment abased the zoologist, all the way to The zoologist zoned the zoo, would take a million years. (Pinker, Words and Rules, 2015 [1999])
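Pinker’s arithmetic is easy to check. The short calculation below simply reproduces his illustrative figures (four determiners, ten thousand nouns, four thousand verbs, five seconds per utterance); it is a toy tally rather than a claim about real English.

```python
# Reproducing Pinker's illustrative combinatorics for five-word sentences
determiners, nouns, verbs = 4, 10_000, 4_000

five_word_sentences = determiners * nouns * verbs * determiners * nouns
seconds_to_utter = five_word_sentences * 5               # five seconds per sentence
years = seconds_to_utter / (60 * 60 * 24 * 365)

print(f"{five_word_sentences:,} sentences")              # 6,400,000,000,000 (6.4 trillion)
print(f"roughly {years:,.0f} years to utter them all")   # on the order of a million years
```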

Pinker evokes Jorge Luis Borges’s story ‘The Library of Babel’ (as ‘[p]erhaps the most vivid description of the staggering power of a combinatorial system’). As the story goes, ‘somewhere in the library is a book that contains the true history of the future (including the story of your death), a book of prophecy that vindicates the acts of every man in the universe, and a book containing the clarification of the mysteries of humanity’. Of course, even after the human species is made extinct, the library (and its combinatorial possibilities) remains. Yet, technically, Pinker explains, ‘Borges needn’t have described the library as “infinite.” At eighty characters a line, forty lines a page, and 410 pages a book, the number of books is around 10 [to the power of] 1,800,000, or 1 followed by 1.8 million zeroes. That is, to be sure, a very large number — there are only 10 [to the power of] 70 particles in the visible universe — but it is a finite number’ (Pinker, 2015).
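The Library of Babel figure can be recovered the same way. The sketch below takes Borges’s twenty-five orthographic symbols as the alphabet, counts the characters in one book, and expresses the total number of possible books as a power of ten:

```python
import math

# Borges's format: 80 characters a line, 40 lines a page, 410 pages a book,
# drawn from an alphabet of 25 orthographic symbols (25 ** 1,312,000 possible books).
chars_per_book = 80 * 40 * 410                  # 1,312,000 characters per book
exponent = chars_per_book * math.log10(25)      # express 25 ** chars_per_book as a power of ten

print(f"{chars_per_book:,} characters per book")
print(f"around 10 to the power of {exponent:,.0f} books")   # ~10^1,834,097: Pinker's '1.8 million zeroes'
```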

While seemingly infinite, it is the (albeit massively) finite nature of language that draws us back to an understanding of the statistical turn in AI development. While at a human level we can have no sense of the magnitude of words, sentences and narratives, for high-performance computing (and potentially more so with quantum computing still to come) the sums are within range. And not just words, but the computation of the massive array of style in words, sounds and gestures. We can begin to imagine a ‘culture machine’ capable of making-consuming-remaking all of the finite combinatory possibilities of all songs, novels and films ever to be made.

We might be tempted to revise the infinite monkey theorem, which states that a monkey typing randomly at a keyboard for an infinite amount of time will — at some point — produce the complete works of Shakespeare. A supercomputer typing probabilistically for a finite amount of time will be able to produce the complete works of Shakespeare. Importantly, however, the experience of time in this case is very different to that of human experience. If a person could read 60 books in a year (and assuming the person began reading at 6 and lives to 81), we might estimate they read around 4,500 books. Human reading, of course, accrues only over time. Yet consider: following a relatively short period, it is technically possible that a computer could have already read all humanly available books ever published. It is as if — like a combination safe — all of culture is unlocked with a click, and on this basis it becomes possible to re-render all culture, all at once!

The Last Beatles Song

Keeping in mind this revision of the infinite as finite, let us return to the modelling of the ‘grain of the voice’, and a specific real-world application with the release of The Beatles’ song, ‘Now And Then’, in October 2023 — widely reported as the band’s ‘last’ ever single. Originally written and sung by John Lennon in the late 1970s, it survived only as a ‘sketch’ recorded on a portable tape recorder while Lennon sat at a piano (with various ambient sounds in the room, including a TV set). Long after Lennon’s death, in the mid-90s, the song was worked on by the remaining Beatles, Paul McCartney, George Harrison and Ringo Starr, but was abandoned as they could not satisfactorily isolate Lennon’s vocal and piano. Four decades after its initial composition the song was finally finished by McCartney and Starr. Upon release, the much touted ‘last ever’ Beatles’ song received a broadly positive reception, described, for example, as ‘an affecting tribute to the band’s bond’ (Guardian), and as ‘haunting’.

So, what was it that enabled the song to be completed? In short, it was advancements in artificial intelligence. Specifically, it was the work of director Peter Jackson, whose team employed a revolutionary approach to process audio recordings from the Beatles’ original 1969 Let It Be sessions. The technique, referred to as machine-assisted learning (MAL) in homage to Mal Evans (a beloved Beatles roadie and assistant), was developed to address the specific challenge of isolating dialogue from background noise, including music, chatter, and other sounds present during the recording sessions. The result was the acclaimed extended documentary, The Beatles: Get Back, which, among other things, helped bring to light startling footage of the real-time act of songwriting, with the viewer able to witness Paul McCartney writing ‘Get Back’ literally from scratch. (See: Johnny P, ‘Why All Creatives Must Watch The Beatles Get Back Documentary’).

While the technical specifications of MAL are proprietary and not fully disclosed, there are numerous similar techniques deployed in art and sound restoration. The core of the technology involves machine learning models trained on vast amounts of data to recognise and differentiate various types of audio signals, so helping to identify the characteristics of the human voice, distinguishing it from other sounds.

In Get Back, the MAL technology was applied to the raw session recordings, which were notorious for their poor audio quality and overlapping sounds. Jackson’s team was able to enhance the clarity of the dialogue significantly, making it possible to hear the band members’ interactions in unprecedented detail. After the initial separation, the isolated audio tracks were further cleaned up, balanced, and enhanced to improve clarity and quality. This can involve noise reduction, equalisation, and other audio mastering techniques.
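The details of MAL remain proprietary, but the general principle behind this family of techniques can be sketched. In the toy example below (Python with NumPy and SciPy, and emphatically not Jackson’s actual system), a mixed recording is moved into the time-frequency domain, a mask is applied to keep voice-like regions, and the result is converted back into a waveform. In a real system the mask would be predicted by a neural network trained on many hours of labelled audio, rather than the crude frequency-band rule used here.

```python
# A toy sketch of voice/background separation by spectral masking.
# Illustrative only: real systems learn the mask with a trained model.
import numpy as np
from scipy.signal import stft, istft

def separate_voice(mixture: np.ndarray, sr: int = 44100) -> np.ndarray:
    # Move the recording into the time-frequency domain
    freqs, _, Z = stft(mixture, fs=sr, nperseg=2048)

    # Crude stand-in for a learned mask: keep energy in a rough speech band (80 Hz to 4 kHz).
    # A trained model would instead predict a soft mask for every time-frequency bin.
    mask = ((freqs >= 80) & (freqs <= 4000)).astype(float)[:, None]

    # Apply the mask and resynthesise a waveform from the masked spectrogram
    _, voice_estimate = istft(Z * mask, fs=sr, nperseg=2048)
    return voice_estimate

# Example: a 440 Hz 'voice' tone mixed with low-frequency rumble
sr = 44100
t = np.arange(sr) / sr
mixture = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
cleaned = separate_voice(mixture, sr)
```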

The overall process raises complex questions about the status of the original sound and its final production. We ask: Are we given the restoration of the original audio or does the process involve the synthetic production of audio based on probabilistic learning? The distinction touches upon broader discussions in audio engineering. For example, the moment a singer sings into a microphone, are we only capturing an approximation of their voice? In essence the answer is ‘yes’, but nonetheless, over time, culturally, we become accustomed to the sound of the recorded voice to the point we hear it as an original sound.

In the case of Get Back, the MAL technology was primarily aimed at separating sounds. Yet, the process may be said to involve aspects of synthetic production if the AI technology is required to ‘predict’ or ‘fill in’ audio information that could not be cleanly extracted from the background noise. The extent to which this occurred has not been fully detailed, leading to ambiguity about how much the final audio is a direct restoration versus a synthetic enhancement or reconstruction.

Expressions of Artistic Style

A similar case is the Netflix documentary series, The Andy Warhol Diaries. The six-part series used AI technology to recreate the voice of Andy Warhol. As Flora Roumpani explains:

The voice of the American pop-art pioneer was created using a text-to-speech algorithm that incorporated his native Pittsburgh accent. Actor Bill Irwin then recorded the sentences, with his performance and the digital AI voice combined to create the voice heard in the documentary series. […] The AI system used in the docuseries is based on a neural network architecture called WaveNet, which was developed by researchers at Google’s DeepMind. WaveNet is a deep learning model that can generate high-quality audio waveforms by modeling the raw audio signal directly. In the case of the Warhol project, the system was allegedly trained on over 1,000 hours of audio data to create a realistic and authentic sounding voice for the artist. (Flora Roumpani, ‘Lights, Camera, AI’).
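WaveNet’s published architecture rests on a central ingredient, the causal dilated convolution, which can be sketched briefly. The fragment below is a minimal PyTorch sketch with illustrative (made-up) channel and layer counts, not DeepMind’s implementation: it stacks convolutions whose dilation doubles at each layer, so that every output sample depends only on an exponentially growing window of past samples of the raw waveform.

```python
# A minimal sketch of WaveNet's core building block: stacked causal convolutions
# with exponentially increasing dilation. Channel and layer counts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedStack(nn.Module):
    def __init__(self, channels: int = 32, layers: int = 8, kernel_size: int = 2):
        super().__init__()
        self.kernel_size = kernel_size
        self.convs = nn.ModuleList(
            [nn.Conv1d(channels, channels, kernel_size, dilation=2 ** i) for i in range(layers)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, samples), e.g. an embedded raw audio signal
        for conv in self.convs:
            pad = (self.kernel_size - 1) * conv.dilation[0]
            # Pad on the left only, so the convolution never 'sees' future samples
            x = torch.tanh(conv(F.pad(x, (pad, 0))))
        return x

stack = CausalDilatedStack()
audio = torch.randn(1, 32, 16000)   # one second of (embedded) 16 kHz audio
output = stack(audio)               # same length as the input: (1, 32, 16000)
```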

As noted, the technique involved a layering of AI-generated audio with live acting, representing a novel form of production — an example of how AI is starting to be used in wider, established media production processes. The director, Andrew Rossi, was keen to adopt this more complex process in order to ‘fully appreciate the extreme sensitivity Andy reveals in his diaries’, arguing that we ‘needed to hear the words in Andy’s own voice’ (cited in Flora Roumpani’s ‘Lights, Camera, AI’).

These examples of AI restoration, in Get Back, the single ‘Now and Then’, and The Andy Warhol Diaries, represent considered and crafted uses of AI techniques. But, nonetheless, they are part of a wider domain, a new vista in which we can model and replicate style.

Until now, what is significant is that we have tended not to consider style as something that can either be replicated, or be viewed as the preserve of copyright. Copyright law primarily protects original works of authorship fixed in a tangible medium of expression, including literary, musical, dramatic, and artistic works, among others. The concept of ‘style’ is more abstract and generally refers to the distinct manner or technique an artist or creator uses in their work. Because style itself is a broad and often intangible set of characteristics, it is not directly protectable under copyright law as such.

Specific expressions of an artist’s style in tangible forms — such as a particular painting, written work, or piece of music — are protectable. This means that while you cannot copyright a style (e.g., impressionism, cubism, a specific genre of music, or manner of writing), the individual works produced in that style are protectable. Similarly, distinctive elements that are original and fixed in a tangible form within a work (such as a character’s unique appearance, a specific arrangement of music, or a particular graphic design) can be protected by copyright. Crucially, then, while copyright law protects the expression of ideas and tangible assets, it does not prevent others from creating works in the same style, provided they do not copy specific, copyrighted expressions of that style.

Here the notion of generative AI is important. The new generative AI models for text, images, music and also video, while trained on copious amounts of prior tangible works (including copyrighted materials), do not reproduce existing works. Instead, they produce new works based on ‘knowledge’ of prior forms.

The key distinction is that generative AI functions through ‘form’ (and ‘style’), rather than specific content. So, just as a tribute act is allowed to copy the style of a band or artist (as long as they secure permission to play the original songs), equally generative AI is technically able to reproduce styles ad infinitum. The tribute act, for example, is not purporting to be the original band — there is no sense in which they are passing themselves off as the original. By the same logic, other artists are allowed to record their own versions of songs. They are allowed to adopt their own style, as long as they have permission to use the ‘idea’ of the original song. Yet, what emerges with generative AI (and deep learning calculation more broadly) is a whole new dimension to the question of style.

Coda: We have never been postmodern

When style becomes calculable, it potentially no longer makes sense to consider it the preserve of an individual artist or group; as something that cannot be mapped or territorialised. Consider, for example, whether it remains feasible to refer to the ‘last’ Beatles song. With the proviso that current models are largely flawed, but working on the principle that we are currently glimpsing a brand new future (where it is indeed possible to model the grain of the voice and indeed the whole compositional ‘sound’ and ‘style’ of a band or artist), does it hold to say there will be no more Beatles’ songs? Never say never…

Researchers at Sony have used the company’s Flow Machines software to analyze a database of 13,000 musical scores from different genres around the world. Based on this training the software writes its own melodies, which are then developed by a human composer into a fully produced track. Through this process they produced ‘Daddy’s Car’, a pop song in the style of The Beatles.

They also produced ‘The Ballad of Mr Shadow’, in the style of American songwriters such as Irving Berlin, Duke Ellington, George Gershwin and Cole Porter.
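Flow Machines itself is proprietary, but the underlying intuition, learning statistical regularities from a corpus of scores and then sampling new material from them, can be illustrated with a deliberately simple sketch. The melodies and note names below are invented for illustration; a real system models far richer structure (harmony, rhythm, style conditioning) than this first-order chain.

```python
# A toy illustration of statistical melody generation (not Sony's Flow Machines):
# learn note-to-note transitions from a tiny invented 'corpus', then sample a new melody.
import random
from collections import defaultdict

corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["G", "E", "C", "D", "E", "F", "G", "C"],
    ["E", "G", "A", "G", "E", "D", "C", "C"],
]

# Count which notes follow which across the corpus
transitions = defaultdict(list)
for melody in corpus:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

def generate(start="C", length=8):
    """Random walk over the learned transitions."""
    melody = [start]
    while len(melody) < length and transitions[melody[-1]]:
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

print(generate())   # e.g. ['C', 'E', 'G', 'E', 'D', 'C', 'D', 'E']
```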

As James Vincent writes:

The lingering question … is what the hell is ‘Daddy’s Car’ actually about? Although the lyrics were written by [the human composer] Carré rather than the software, it’s impossible not to detect the menacing threats of machine intelligence in the wording, presumably working through Carré like a sinister puppet master. (The Verge)

Vincent’s response is somewhat dramatic, but nonetheless reflects widespread anxiety and suspicion about the new developments in AI — not least the prospect of generative AI taking away the role of artists and designers. Of course, we might argue similar anxieties were expressed during the early development of photography, yet that medium soon became a mainstay of artistic practice. OpenAI’s generative video platform, Sora, is not yet publicly available, but — at the time of writing — the company has given access to ‘visual artists, designers, creative directors, and filmmakers’ to gauge first impressions.

Air Head — AI-generated video image by shy kids using OpenAI Sora (Image credit: OpenAI / shy kids)

Shy kids’ ‘Air Head’ is particularly noteworthy: a short film about a man whose head is a hot-air-filled yellow balloon. Technically, it is impressive, convincingly and consistently rendering the human protagonist with a balloon head — as the narration explains, ‘I am literally made of hot air’. But what is more impressive is the poetry of the film. It is a witty and provocative piece — an example of what it is that artists do best regardless of the medium. As shy kids’ Walter Woodman notes, ‘as great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal’. In other words, the artists are focused less on the technology and more on the work they want to make.

We are perhaps on the precipice of an explosion of a new generation of artworks and culture. And as with any epistemic ‘break’, big questions abound. If we don’t take hold of the new technologies in creative ways, then we may well be flooded with a mass of auto-generative ‘culture’ — too much for us to ever even experience. Yet, this is an unlikely scenario, if only because there is no point to such a mass of output. Nonetheless, it is important to recognise the coming of a new cultural condition — way beyond anything that Warhol might have envisaged.

In his book, We Have Never Been Modern (1991), Bruno Latour challenged the idea of the separation between nature and culture, which had long been taken as a central narrative of modernity (e.g. the progress of science and technology in overcoming the natural world). This separation is a myth, he argued, with modernity actually representing the simultaneous purification of objects (through science and technology) and the proliferation of hybrid entities (which he calls ‘quasi-objects’ or ‘networks’; whereby the science is never without the social and cultural). Instead of seeing ourselves as modern by making an artificial separation, we should recognise the interconnectedness of humans, non-human entities, and technological artifacts in what Latour termed the ‘Parliament of Things’. Importantly, his account was a critique of postmodern discourse, charging postmodern philosophers (such as Lyotard and Baudrillard, but also thinkers such as Barthes, Lacan, and Derrida) with the view that their thinking simply revolved around artificial ‘sign-worlds’; hence he challenged them with the provocative statement that ‘we have never been modern’.

Yet, what if we have never been postmodern? In other words, what if the ability to grasp the world of signs at the end of the twentieth century (at the time the postmodern philosophers produced their iconic works) was vastly insufficient to truly comprehend and respond to both the idea of Latour’s hybrid entities and postmodernist views on relativism, fragmentation, simulation and the ‘death of the author’? What if it is only now that we are starting to gain insights into the massively calculable ‘world’ of things, signs and styles that has always persisted, if only outside of our scope of visibility? We might consider the emerging ‘techniques’ of generative AI, along with the internet, big data, and other digital technologies, to usher in a new era that transcends (and makes newly malleable) postmodernism’s skepticism and relativism. This era could be characterized by a new form of realism, or another philosophical stance, that acknowledges the profound impact of technology on our understanding of truth, reality, and human identity. It might suggest that the distinctions postmodernism challenged between high and low culture, or between the author and the reader, were in fact only minor considerations in an increasingly irreverent world in which algorithms produce art, literature, and music, which in turn challenges not only our notions of creativity and originality, but the very space and time of making art.

A bewildering thought. For now, however, we might hold onto style in all its idiosyncrasies. Console yourself, perhaps, with a few lines from David Bowie’s ‘Space Oddity’ (1969):

And I’m floating in a most peculiar way
And the stars look very different today

Not only is this an iconic song, surely never replicable through mere calculability, but when Bowie syncopates the word ‘peculiar’, he enacts on our behalf a wholly pe…cu…li…ar human nature; a reminder of our most indefatigable style.

Sunil Manghani is Professor of Theory, Practice & Critique at the University of Southampton, a Fellow of the Alan Turing Institute for AI, and managing editor of Theory, Culture & Society.