ChatGPT is dead: Poststructuralist thoughts on language and AI

Peter Kahlert
STS@ENS
6 min read · Dec 16, 2022

written by Peter Kahlert and Suzette Kahlert

There is a lot of chatting going on: mostly humans chatting with machines, and even more humans chatting with humans about machines. What is all that chat about? It is about OpenAI’s latest technological stunt: a stochastic-parrot chatbot named ChatGPT.

From all that chatting with ChatGPT, we gathered sentiments of both concern and excitement. Depending on your perspective, they are all right and sound. For all we know, this generative pretrained transformer can sound very human as a chatbot and can provide complex and meaningful information. With its broad semantic capabilities, including writing basic programs and boilerplate code, GPT may have the potential to become a general-purpose tool.

However, it can be pushed to its boundaries to the same extent. This instantly calls for urgent ethical debates about what the technology means for public speech, authorial authenticity, and free discourse. It raises concerns about the automation and mass-spreading of misinformation, agitation, hate speech, harassment, spam, and scams; it creates new issues for amendments to intellectual property jurisprudence; and it provides access to all kinds of harmful and perilous knowledge.

There are also discussions about the software’s capability to imitate humans as a precursor to general AI, or at least as a stage of AI development that can replace a whole new bracket of human practice and work.

Thus, it is harvesting season for discourse analysis enthusiasts and professionalists (sic!), as we are witnessing more and less new takes and claims on language, creativity, and cognition. One enthralled blogger has been amazed by the chatbot’s capacity for correctly applying basic rules of human language, as it has been capable of participating in the invention and application of a freshly made-up language! Reading the blogpost, translation and language remain rather weak concepts, as they are empirical, physical representations of their metaphysical counterparts in platonic ideas or generative grammar (cf. Chomsky 2017), if there are such archetypes that are not mere imaginations or other intellectual artifacts. So the blogpost shows that, if inventing a language means creating a miniature copy of a language (e.g. English), then the software can really keep pace with its creation and learn to use it, with some corrections even flawlessly.

Talking about text: how do you like this text so far? For example, “professionalists” is no proper word; we should have gone for “professionals” instead. Maybe you struggle with guessing exactly (and explicitly) what we mean by “implicating and explicating” in the paragraph above. If you managed to read this far anyway, how was this possible? It has been possible because communication by language does not produce an error exit; it does not require correction functions or exception maps; it does, in fact, incorporate all kinds of exceptions and inconsistencies. As a system of quotations, iterations, and remixes (cf. Derrida 1988), it does not work within the boundaries of “not failing” but is constituted by performative decisions of failure and success. An invented language that is to resemble a real language must have this feature and be open to more.

Now, can a machine, a proficient CPU with an intricate model built from huge data, like GPT-3, participate in doing so? The very method of language models trained on huge amounts of data resembles the deconstructivist idea of text as exclusive, unbound text-only references without center or boundary (cf. Derrida 1980). On the other hand, as software it ultimately is a deterministic device whose vagueness is a mere artifact of cognitive and observational limitations. It is nothing less than a strict set of rules, procedures, and, most of all, currents running and not running through semiconductors in a cascade of logic gates.
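This tension between statistical vagueness and strict determinism can be made tangible with a toy sketch. The following is not how GPT-3 actually works (it uses transformer networks over learned token embeddings); it is a deliberately minimal bigram “stochastic parrot” over a made-up corpus of our own choosing, which only ever recombines fragments of its training text, and whose apparent randomness is fully determined once a seed is fixed:

```python
import random
from collections import defaultdict

# Toy training corpus (our own invention, purely illustrative).
corpus = ("the author is dead the text is open "
          "the text is a tissue of quotations").split()

# Learn which words follow which: the whole "model" is these counts.
followers = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    followers[a].append(b)

def generate(start, length, seed=0):
    """Sample a word chain. Given the same seed, the output is
    fully determined: the 'vagueness' is only in the sampling."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every output is a remix of quotations from the corpus, and two runs with the same seed are bit-for-bit identical: the parrot is stochastic to the observer, deterministic in the silicon.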

Let’s not go too deep into the human-vs.-machine debate, as it is doomed to fail, but offer some discursive remarks, as one can already anticipate some upcoming, or rather prevailing, misconceptions. Let’s not get blinded by ideological needs of human distinction which we cannot express in concrete instances. As far as discourse and communication are concerned, we, alongside advanced and simple texting machines, are all participating in the reproduction and proliferation of text. There is no essential aura attached to texts through which authors may survive, for authors are always already dead (cf. Barthes 1977); they are just one possible source for signatures (cf. Derrida 1988) that may or may not be accepted as performative acts of writing (cf. Austin 1975).

Of course, human agents may extend beyond text: they may have a soul, or more probably a body in a bodily world that extends beyond textuality. Regarding whatever a text can render explicit, however, we cannot describe any such thing (with text); insofar as an extra-textual implicity is ‘real’, someone might recognize, or fantasize, that it is ‘true’.

There is no reason to mix up concern and critique with the mere tautology of distinguishing human existence, for it is silently extinguishing the same. It is because we act in relation to the world we exist in that we can experience and create things. We can use ChatGPT to experiment with images and texts; we can worry about or celebrate the results. But do not judge a machine for getting language wrong, or you miss the whole point about language and what it does for the human condition. We rightfully reject a world that is dominated by one data model and one set of algorithms, just as we shall reject the image of a world where human existence can be but one thing, one way of living, thinking, acting, and feeling. Making mistakes is also part of this so-called human condition, as it helps us (and AI) to learn. This poststructuralist perspective on AI seems worthy of exploration, as it may be helpful in understanding how AI deals with all the different and complex aspects we must navigate in our everyday life.

If there is motion in semantic spaces, if there is an evolution within the cosmos as Whitehead (1978) and Mead (1932) have described it, it lies inside différance, inside the changes that are concluded before they could be eliminated by correction.

Beyond all the real and immediate threats and disruptions that may take place, cutting-edge technology still mirrors human imaginations of the human self itself. Automation is no inherent threat to wealth but an inherent threat to inequality, unless capital manages to frame it as threatening the common wealth. Machines doing language are neither strong, general AI nor a meaningless triviality, but another phenomenon, medium, and agent through which text is unfolding and exploring (itself). And we are rather quick in tying machines into human structures of distinction, exclusion, and normalization, so quick indeed that they are no mere tools and extensions of such estrangement but reflect the unworthy and unjust measures of governmentality (cf. Lemke 2002).

Like Orwellian Newspeak, these misconceptions pose no real threats; they are symbols of concern. We can try to normalize machines, ourselves, and more as much as we like; the constitutive capacity of failure will always prevail. But we might miss a chance. As creativity is not just seeing the bigger picture, or understanding something in its wholeness, but stepping away from interpretations, patterns, and remembrances, parts are not more or less than their whole but something else: and we do not know what sense is being made inside the Chinese Room that is not intelligible to its outside.

Sources:

Austin, John L. (1975): How to Do Things with Words. Cambridge, MA: Harvard University Press.

Barthes, Roland (1977): The Death of the Author; in: ibid: Image Music Text. London: Fontana Press, pp. 142–148.

Chomsky, Noam (2017): The language capacity: architecture and evolution; in: Psychonomic Bulletin & Review 24, pp. 200–203. https://link.springer.com/article/10.3758/s13423-016-1078-6 (accessed 16.12.2022)

Derrida, Jacques (1980): Structure, Sign, and Play in the Discourse of the Human Sciences; in: ibid.: Writing and Difference. Chicago: University of Chicago Press, pp.278–294.

Derrida, Jacques (1988): Signature — Event — Context; in: Gerald Graff (ed.): Limited Inc. Evanston, Illinois: Northwestern University Press.

Lemke, Thomas (2002): Foucault, Governmentality, and Critique; in: Rethinking Marxism 14(3), pp. 49–64. http://www.thomaslemkeweb.de/publikationen/Foucault,%20Governmentality,%20and%20Critique%20IV-2.pdf (accessed 16.12.2022)

Mead, George H. (1932): Philosophy of the Present. LaSalle: Open Court.

Whitehead, Alfred N. (1978): Process and Reality. An Essay in Cosmology. New York: Free Press.


Sociologist/Researcher @ European Newschool of Digital Studies (European University Viadrina). Currently working on DataSkop, funded by BMBF.