The anthropomorphic AI — A Scaffold
We are living in posthuman times, when technological development produces an emerging set of new agents (inorganic entities able to act purposefully) that enter the organizational structure of everyday life. Whether they remain imperceptible to general understanding, like filtering algorithms, or command immediate attention and purchasing power, like unmanned drones, they are increasingly taking over domains of coordination and control of many kinds. From early cybernetics to today’s dominant model of representation, the internet, these agents embody an ever-growing decentralized network fed by the enormous datasets generated by information-aggregating and -processing software. If cybernetic thought laid the conceptual foundation for equating machine with man, both being “nodes in networks acting and reacting in a flow of information”, a general tendency towards anthropomorphizing machines is still enforced as an urgency today, in order to make them perform flawlessly in human systems (society).
Whether they are scripted interactions running on web-based platforms (chatbots) or hybrid assemblages of hardware and code (robotics), the aspiration to make synthetic intelligence act (or at least appear) like humans is a discernible bias of scientific thought, and as such a residual trait of modernity and humanism. One use case is the range of voice-control applications, from Apple’s Siri to Amazon’s Alexa, which without exception are working experiments in impersonation as an interface mechanism. As Benjamin Bratton pointed out, the interface layer exhibits a tendency “to perform as if it were itself an other User”.
It is not difficult to see the risk of expecting counter-intuitive outputs from designs based on modes of intelligence radically different from any we have conceptions of. The problem is not only one of reverse-engineering how algorithms really work (Google Deep Dream); the challenge is also to premediate every possible condition they are or will be drawn into once out in the wild. This is the vanishing point of the different timelines involved in the conception of such artifices: while understanding the end results of complex mathematical calculi requires constant retrospection, projecting existing or past models into the future to preempt undesirable outcomes is another force at work.
If we wish to transfer a value system, a sense of ethics, deductive logic or even consciousness on a human scale to (our desiring) machines, “we are confronted with the necessity of radically interrogating who we are, our ways of thinking, our ways of acting”, as Geoffroy de Lagasnerie puts it. But what do we humans value collectively, and does the majority principle necessarily serve the public good? Amidst the global crisis of political democracies, the perceptible retreat from consensus even on highly threatening issues like climate change or human rights indicates the difficulty of working out a universally applicable framework of action.
Whom should a self-driving car run over: the cyclist wearing a helmet, or the one ignoring traffic rules? Whose life is more valuable, a child’s or an elderly person’s? How should we assign gender to bots that display no division of productive and reproductive functions? Which law has higher priority when it comes to taking control of drones: protecting national security, or the right to private property? Should every piece of information on public affairs be disclosed for the sake of transparency, or should information remain the privilege of certain groups of people? Should a human get a job even if a robot performs the task better?
These are some of the dilemmas of machine-human interaction around which the pilot project Training 2038 — Human Data Collection for Artificial Intelligence, created by the innovation lab Kitchen Budapest, chose to revolve. Training 2038 rehashes the old question of what constitutes being human through the application of state-of-the-art technology, converging the user experience of VR with conversational AI. The project envisions a set of nuanced scenarios within six territories of control where AI is expected to disrupt critical industries: (1) warfare, (2) perception management (media), (3) the corporeality of artificial life, (4) love, sex and relationships, (5) governance, (6) autonomous transport.
In contrast to the perceived obfuscation in much of today’s computational protocols, Training 2038 stages a scenario in which people are given a voice to feed back on systems still taking shape. The project aims to counteract the black-box effect promoted by the majority of Silicon Valley platforms, which tend to hide the underlying principles of a system’s architecture and never overtly articulate its hierarchy of motives. In the safe space of a private VR experience, an extensive survey unfolds in the form of a dialogue between an embodied conversational bot and a human user. The opinion poll is formatted to empower non-professionals outside the techno-intelligentsia to engage with subjects such as automation, ethical algorithms and risk management, in order to distill the value preferences central to making decisions and differentiating between right and wrong. How certain are we of that?
Even after clearing the ground for such thought experiments, it remains highly uncertain whether the mass deployment of artificial intelligence will bring us to the bright side of fully automated luxury communism or to a more sinister alternative in which people simply become superfluous.
Training 2038 is featured in the curatorial program of Ars Electronica Festival 2017, Linz, Austria.