Controlling Machines — AI, intentions, and games in the IoT
by Simon Höher
The ThingsCon report The State of Responsible IoT is a collection of essays by experts from the interdisciplinary ThingsCon community of #IoT practitioners. It explores the challenges, opportunities and questions surrounding the creation of a responsible & human-centric Internet of Things (IoT). For your convenience you can read it on Medium or download a PDF.
There is a funny video on the internet. It shows Amazon’s Echo and Google’s Home, two voice-activated and “smart” assistant systems, being tricked into talking to each other in an infinite loop of “Hi — how can I help you?” It’s a creepy and somewhat pitiable display of two of the most advanced artificial intelligence (AI) systems of our time being stuck in a bizarre trap of polite senselessness. And while it raises justified questions about the state and power of the smartness of the things in our hands today, there is more to it.
While certainly prominent in the realm of technology, this kind of hacking — that is, the unintended use of things — can be found in pretty much any context of our world, building of course on the very human trait of ingenuity. But as prominent as these counter-intentional use cases are, there are of course many others where we follow the intended usage of the machines around us (more or less) happily. Examples range from the de-humanizing industrialization of the early 1900s, to the TV zombies of the 1990s, to the Facebook slaves of today’s Generation Y. Is this “intended” use of technology something that we should strive for? Or should we all be hackers of tech, in order to cherish our own free intentions? Put differently: Whose intention are we following exactly when we react to a thing, say, a phone’s notification? The phone’s or our own? And why would that matter?
Intentions and quasi-intentions
Daniel Dennett’s account of the “intentional stance” provides a very good starting point on this matter. He claims that, while similar in their outcome, there are different levels of intentionality to be found in different types of organisms: from a tree’s intention to grow and prosper, to a chess computer’s intention to win, to the manifold and complex intentions of us human beings. What all these systems have in common, however, is that they seem to strive to achieve a certain goal or preferred state through their actions. This thought is in part reminiscent of Aristotle’s notion of a telos, the inherent “end” goal of an object, which can be applied to virtually anything: from a flower that strives to grow and bloom, to a book that calls for being read, to a machine that wants to be used. Now, here’s the catch: According to Dennett, regardless of how a system is made up (mechanical, electrical, biological, or psychological), the best way to understand and anticipate it is to ascribe a general intent to it. Just like the chess computer, which is best beaten by ascribing to it the intention to win by playing according to the rules, rather than by exploring the code and circuits within it.
What Technology Wants
However blurry the lines between genuine and ascribed intentionality get along the way, we are dealing with a de facto intentionality when it comes to the smart and proactive things around us — especially technology. An interesting take on this is introduced by Kevin Kelly: In his book “What Technology Wants” he ascribes a very specific intentionality to technology itself, or the “technium”, as he calls it: a “system of tools and machines and ideas … so dense in feedback loops and complex interactions that it spawned a bit of independence.” He claims that throughout the history of humankind we have been employing and developing technology of greater and greater complexity: from basic tools, to the elaborate mechanisms of antiquity, to industrial-scale production systems — all the way to digital assistants, computers, and AI systems. We can translate Kelly’s account of what technology might want into three goals:
- activity and engagement — the state of being switched on and running
- connectivity — the state of being connected to other technological objects, employing the notion of a network of things
- ubiquity — a state of ubiquitous access and accessibility, catering to connectivity and in turn engagement
We see that something interesting is taking place: While the chess computer — just like any technology before it — in fact follows a very deterministic process of execution, today’s AI systems climb up that ladder substantially. We might even call them adaptable and self-reshaping, reflecting upon their strategies, goals, and techniques in order to achieve the best possible outcome. The reason for this is that the “smart” systems employed today, like Google’s deep-learning algorithms, manage to perform fundamental tasks that range from learning the best solutions to creating them: A phone that “learns” when and how it should notify us claims to save and value our attention through predictive analysis and pattern recognition, nudging us only when “it makes sense” for us to be notified of an event. But in fact, it is maximizing attention and the likelihood of us reacting (in the future). Boosted with AI, apparently for the first time in the history of technology, things have gained a new quality of means to pursue their intentions.
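To make this concrete, here is a toy sketch — not the code of any real product — of how a notification scheduler could “learn” when a user is most likely to react. It frames the problem as a simple multi-armed bandit: each time slot is an option, and each observed engagement is a reward the system then optimizes for. All names and probabilities here are invented for illustration.

```python
import random

class NotificationBandit:
    """Epsilon-greedy scheduler: mostly exploit the slot with the best
    observed engagement rate, occasionally explore the other slots."""

    def __init__(self, slots, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in slots}     # notifications sent per slot
        self.rewards = {s: 0.0 for s in slots}  # engagements observed per slot

    def choose_slot(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))  # explore
        # exploit: slot with the highest observed engagement rate
        return max(self.counts, key=lambda s:
                   self.rewards[s] / self.counts[s] if self.counts[s] else 0.0)

    def record(self, slot, engaged):
        self.counts[slot] += 1
        self.rewards[slot] += 1.0 if engaged else 0.0

# Simulated user who reacts most often to evening notifications
# (hypothetical engagement probabilities).
true_engagement = {"morning": 0.2, "noon": 0.4, "evening": 0.8}

random.seed(42)
bandit = NotificationBandit(list(true_engagement))
for _ in range(2000):
    slot = bandit.choose_slot()
    bandit.record(slot, random.random() < true_engagement[slot])

best = max(bandit.counts, key=bandit.counts.get)
print(best)
```

The point of the sketch is the framing, not the algorithm: the system’s objective is the user’s measured reaction, so “learning when it makes sense to notify us” and “maximizing the likelihood that we react” are one and the same optimization.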
So… should we be worried?
If AI is just getting better and better at tricking us into behaving as it intends us to, does that mean my smart thermostat is challenging my free will, manipulating me by nudging me? While the term manipulation might seem drastic, its actual manifestation is not quite as dramatic or obvious as the word implies. We can describe the impact that smart technologies have on our lives as subtler, more unspectacular and more common. After all, this is what manipulation is all about: the victim does not notice it is being manipulated.
In fact, there are many phenomena today that we could describe as manipulative and “intentional” actions by contemporary technologies in the digital attention economy — from “smart” notifications to Facebook’s newsfeed algorithm, which learns what interests us just so that we will spend even more time on the platform and become more likely to engage with it. Of course, these developments are merely the flip side of what we call technological development, innovation, marketing, or simply modern life. Still, they have very real consequences for how we act and live our lives — often in a way that is so unconscious and inattentive that we do not notice it. Making sure we do not fall victim to the manipulative intentions of technology once we bring it into the physical world around us would thus seem advisable. And it is nothing too new, either.
Grown up tech, grown up users
Defending our own free will against the interests of others through critical thinking is what we call being an adult, rational human person. An agent capable of free will is called to use it and to defend against the manipulative effects of other agents, which implies ownership and responsibility: The price of freedom is eternal vigilance. Dennett points to John von Neumann and Oskar Morgenstern here, stressing that the strategy of anticipating another agent’s actions and proactively adjusting one’s own constitutes agency and allows us to act freely. It is this very capability that is decisive for remaining autonomous and free when it comes to how we live our lives and what decisions we make.
As a society, we are very used to doing this when it comes to political manipulation through media, politics, and arguments — by employing critical and transparent public discourse and scrutiny. We are also used to (more or less effectively) blocking out the manipulative effects of advertisers, driven by business and corporate interests and intentions, and reserving the right to choose when and where to make our buying decisions. And since Foucault, we even tend to critically reflect on the inherent agendas set in the social and cultural systems among us, ascribing an inherent intention to basically every aspect of social life. The trick here, as well, is to identify, mark, and reflect upon these intentions in order to remain free and autonomous in a world of other autonomous actors with their own beliefs and goals.
We might thus want to learn to extend this competence to technology itself, since we are starting to deal with an intentional and incredibly adaptive opponent when it comes to AI. But what exactly could that look like?
A Game of Things
Following game theory, we might want to behave just as we learned to behave when playing strategic games — only now, our opponent might happen to be a thing. Strategies like attentiveness (regularly asking ourselves What did I just do?), reflection (asking Why did I just do that?), and smart tactics for engaging in a playful strategic game with our opponents, in order to get the other party to do what we want from them, are cultural traits we are already learning when dealing with our computers and phones. In a connected future, we might just want to extend this to virtually everything around us. A constant game of things.
This implies anticipating our own and our counterparts’ actions and reactions — be they human or non-human — as well as a certain level of unpredictability, that is, the competency to conceal one’s true intentions in order to gain bargaining and strategic leverage. It is a virtue that might be threatened by the ubiquitous mapping and tracking of virtually all data in order to predict and anticipate our very own decisions — sometimes even before we know them ourselves.
To cherish and value our free will, we might thus want to be able to become “hackers” whenever we please, using the tools and tech around us counter-intentionally, boldly, and confidently — with, without, or against the nudges of technology. In this light, claims for privacy, proper technological education, as well as emerging trends like “digital detox” attain another, more fundamental dimension that exceeds plain lifestyle considerations: they concern our human capability to engage in the world as free and deliberate actors in a technologized future to come.
Simon Höher explores human-centric concepts of technology, culture, and society in a global context. He co-founded FORRREST, an exploratory strategy firm that works with organizations to learn about ways to co-create their future, fosters connections between people, ideas and products — and applies his insights as a speaker and mentor.
Beyond that, his activities include chairing ThingsCon, one of Europe’s leading conferences on the future of hardware and a responsible Internet of Things, and curating for .process, a multi-disciplinary festival about innovation and collaboration. He is a fellow at the European Policy Council’s Futures of Europe program and an active member of the MIT-based International Development Design Summit (IDDS). In his work and studies he explores concepts of collaboration, open / critical design, innovation management, and systems thinking. Simon regularly mentors at Seedcamp and has worked with various organizations in the field of technology and development throughout Africa and Europe. He is currently based in Cologne and blogs at simonhoeher.com.
ThingsCon is a global community of IoT practitioners dedicated to fostering the creation of a human-centric & responsible Internet of Things. Learn more on ThingsCon.com, join an event near you, and follow us on Twitter.