In Michael Ende’s fantasy novel “The Neverending Story”, a troubled boy starts reading a dark, scary and yet exciting book about a beautiful place threatened by the so-called “Nothing”. Not knowing whether he is merely an observer or an actual part of the adventure, the boy decides to go on, dive into the adventurous undertaking and (spoiler alert!) in the end saves the world.
Does this somewhat resemble our current situation when it comes to artificial intelligence? Nobody really knows their part in the story. The major difference is that this time we cannot choose whether we want to dive into the “adventure” or not. We cannot just close the book and walk away. Almost everybody plays a part in this story: there are the cynics dismissing it all as absolute, overrated nonsense, influencers proclaiming the apocalypse by AI robots, and the excited optimists who feel strangely and inexplicably attracted to the topic. However, it may not be as absolute as that. You are not a victim of technology who must unconsciously obey the dictate of growth and development. You still have a clear choice at the very individual level. Of course you are not forced to use any AI if you do not want to: you can go AI-free, just as a mother or father in the industrialized world can choose to parent screen-free in their bubble at home. In rural, very remote areas the impact at the individual level is, logically, not as massive. On a societal level, however, denial of the impact of AI will not work in the long run.
Looking back in history, over a hundred years ago there was probably a similar mega-trend around human intelligence, when researchers first began to measure it and boil it down to numbers in order to compare human abilities with each other. It was likewise exciting, yet scary and very dark when you consider how Adolf Hitler used so-called “intelligence tests” as a legitimization to eradicate certain groups of people.
It took about one hundred years for at least some researchers to agree on a definition of (human) intelligence, and you might think intelligence has lost its appeal. But with the remarkable advances in big data, the Internet of Things, deep learning/machine learning and the like, the many-faceted discussion about intelligence, human and machine, has been reignited, and attention in academia and the press has reached an all-time high.
Some Major AI Fails among Many
* In 2016, Microsoft’s chatbot Tay was shut down after less than 24 hours of operation because Twitter users had turned it into an insulting Nazi sympathizer.
* In 2017, German police had to break into a house because Amazon’s Alexa was throwing a party on its own.
* In many reported cases, AI systems have been accused of being racist, sexist or otherwise biased: in an AI system built to predict future crime, black offenders were rated at higher risk of committing a future crime; in an AI-judged beauty contest, mostly white women were awarded; and Pokémon Go stops were predominantly located in white neighborhoods.
Intentionally or not, artificial intelligence tends to be wrapped in inventive mysteries embedded in spectacular stories, which complicates public discourse but also keeps it alive.
Rise of AI
The interest in artificial intelligence seems to be interminable. In 2016, over fifteen thousand papers across disciplines were published in academia alone. In addition, an innumerable corpus of articles online and in print adds to the almost epidemic spread of information about AI, from rock-solid science to trashy urban legends and fake news.
AI strikes humanity where it hurts most
What are the reasons for the continuous, passionate interest in artificial intelligence? If you look at the stories above, only a few out of a myriad of AI tales, they all have one thing in common: they lead us to the darker places of the human mind. In these stories, the AI exhibits maleficent, immoral, dubious and questionable behavior. And we all somehow know that it is we humans who have laid the foundations for this behavior; it reminds us of weaknesses, darkest fears or false attitudes that we have but are unable to admit or handle. The AI mirrors human behavior, and so it cruelly reminds us of how imperfect human nature and human behavior are. We simply must accept that it taps not only into the brighter but also into the darkest parts of the human mind and behavior. Furthermore, artificial intelligence strikes humanity where it hurts most: our fear of being vulnerable, imperfect and replaceable.
AI: curse or blessing
Humanity constantly strives to evolve, to grow and develop, to accommodate and adapt to current circumstances. And again, it is out of the same motivation as named above: the fear of being replaced by another species. There is a constant striving to get to, or stay at, the top of the food chain, because this is where humans can protect one of the most basic and necessary needs: the need for security. This evolutionary ideology, the survival of the fittest, can be transferred to many areas of human life, especially when you consider current economies. There is no panacea for the harm and problems that have arisen and will continue to arise in our fast and ever-changing world, influenced among other things by a massive amount of information that a human mind is no longer able to cope with. Yet if you want to make “good” decisions in order to survive, for example in business, the current imperative is that you must have accurate information, reliable facts, realistic figures and empirical evidence to make a rational decision (for now, not considering the imperative of creativity and innovativeness as one of the most important competitive edges). So we talk ourselves into believing that we can make rational decisions, although unsurprisingly we all know that this is impossible.
The homo economicus is dead
(It is not even certain that this concept of humankind has ever lived!) Humankind must cope with the constant burden of its bounded rationality and has to admit that no one will ever be able to make a purely rational choice. More than ever, we face and must deal with unreliable information, our limited mental capacity to process it, and less time and fewer resources to make a decision. Among other things, this may constitute the underlying rationale for why we invest in technology: it patches our ever-hurting wound of being imperfect. Artificial intelligence represents a logical consequence of the information-overload challenge, a way to cope with complex problems produced by the massive amount of information. It is built as an extension of humans’ mental capacities, an assistant for doing unpleasant work, and it functions as additional manpower, in this case machine power, to complete multiple tasks at once and make faster decisions.
Artificial intelligence is by no means a topic solely for computer scientists and movie makers.
AI has become a product for everyone
A discourse across scientific disciplines is absolutely essential to examine artificial intelligence not only as a matter of programming language, but as a concept with all its intricacies and its major impact on society as a whole. There is no artificial intelligence without humanity. Hence, psychology as a scientific discipline is one of the major, essential lenses to use, among others such as philosophy and ethics, sociology and political science, and the health and neurosciences.
Psychology of AI as a good start
Psychology is especially well suited to start a discourse and work on interdisciplinary concepts, as it starts at the human level (versus sociology or political science, which start at the systems level). However, ethical and biological views are very much integrated into psychology, which is why a sharp distinction between the disciplines may be neither possible nor useful anyway. Psychology investigates the mind, life and behavior of humans. As an academic discipline it has an immense scope, from cognitive psychology to social psychology, from clinical psychology to organizational psychology and many others. When it comes to artificial intelligence, one thing is clear: you cannot take the human out of artificial intelligence. Whether it is human-agent interaction, perception, language, cognitive processes or soft skills such as empathy, emotions or communication skills, there is always a human side involved, whether in writing the program, editing data or interacting with the system. As of now there is no AI psychology or artificial psychology (a term coined by Dan Curtis in 1963) that would imply that the artificial intelligence has a mind or even consciousness of its own to make decisions without any human interaction or input. Yet the progress in imitating human behavior and various mental processes is quite amazing, and much more can be expected, since this road is not a dead end and many more are coming along to join the journey.
Complexity is the enemy
The more people join the discourse, the more complexity (and competition!) will be added; and complexity is the enemy: complexity separates, disconnects and isolates. If artificial intelligence is to be an entity that creates a “better” life, it is essential to reduce complexity; we need to find a common ground. We need a common base and a common language, and we need the possibility to put up stop signs if the path leads us into dangerous territory, or to search for signposts before we get lost.
SCIP research department: Psychology of AI
This is the first paper in this series, and it makes the case that psychology and artificial intelligence are, so to speak, inseparable. Over the course of the coming year, the series will touch upon the basic and profound concepts and theories of psychology as a science, integrated into the AI environment. Relevant psychological constructs will be explained, along with how they connect to artificial intelligence. For example, what are human perception and attention: reality, ambiguity and deception, and how does AI resemble these processes? How much perception bias is in AI? What are the implications? For example, if you use an AI system in a personnel recruiting process (Human Resources), how can you make sure that applicants of a specific race or sex are not discriminated against?
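One common first step toward answering the recruiting question is a statistical audit of the system's decisions. As a minimal, purely illustrative sketch (the data, function names and the 0.8 threshold are hypothetical; real fairness audits involve far more than a single metric), a disparate-impact check could look like this:

```python
# Toy fairness audit: compare hiring rates across two applicant groups
# (demographic parity / the "four-fifths rule"). All data is made up.

def selection_rate(decisions):
    """Fraction of applicants the system marked as hired (True)."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_a, decisions_b):
    """Ratio of the lower selection rate to the higher one.
    Ratios below 0.8 are commonly flagged for review."""
    low, high = sorted([selection_rate(decisions_a),
                        selection_rate(decisions_b)])
    return low / high

# Hypothetical model decisions for two demographic groups
group_a = [True, True, False, True, False, True, True, False]      # 5/8 hired
group_b = [True, False, False, False, True, False, False, False]   # 2/8 hired

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")   # 0.25 / 0.625 = 0.40
print("flag for review" if ratio < 0.8 else "within threshold")
```

A check like this says nothing about *why* the disparity exists, which is exactly where the psychological and ethical questions raised in this series begin.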
The topics have a wide range, yet they are chosen based on relevance and use in day-to-day practice. The goal is to create general understanding, find a common ground and thereby reduce complexity. Key questions and answers about the understanding, measurement and comparison of human and artificial intelligence skills will be the focus of this series, which incorporates both invisible processes (the so-called black box of the human brain) and visible behaviors.
- Brain & Cognition
- Soul & Sensation
- Behavior & Environment
- Society & Ethics
- AI in Practice
The challenge is to find the level of depth and complexity adequate to this context, so that we gain further insights and yet keep everybody, across disciplines, research and practice, in the same boat. Because after all, it is a topic that has an impact on almost all of us. Inclusiveness and an adequate degree of comprehensiveness are critical success factors to avoid failures or, even worse, disasters.
AI fails or human fails?
The “fail” of Microsoft’s Tay mentioned in the beginning truly is an interesting story, as you can look at it from many different angles. It provides perfect evidence that an objective, interdisciplinary approach to the development and implementation of artificial intelligence is absolutely necessary. It is so straightforward and yet so complex, apparently impossible for a single person to handle. The Tay case raises a multitude of questions, of which the following are only a few examples:
* Why did Microsoft create an AI to imitate youth behavior at all?
* Why does it have to be a young female when the majority of developers are still male?
* Why do users enjoy attacking the AI?
* Why do users invest time and energy teaching hatred?
* What is wrong if it is solely mirroring the current cultural ethos of hate speech in social media?
* Why should an AI behave ethically while the rest of the users engage in hate speech?
* What are the pitfalls and downsides, and what is the motivation, fun and excitement in “playing god”?
* Why did experienced developers like Microsoft not foresee the ethical dilemma? Why were they not able to moderate the impact of hate speech?
* How can you technically implement ethical values so that other people are not harmed? And who decides what ethical behavior is? Where is the threshold, and who decides upon the rules?
* Or is it better to implement realistic behavior? Why should an AI exhibit perfect human behavior when humans are naturally imperfect?
* Who benefits from the development of an AI of this kind, and who is harmed?
The list of questions goes on and on if you start thinking about the case thoroughly and as neutrally as possible. As is most often the case, when you actually start doing research you do not get answers; you raise more questions, more problems, more dilemmas. It would be far easier to stay at the surface and brush aside the case of Microsoft’s Tay as a major, disastrous AI fail, rejoicing in the suffering of others. But the easy way out is not always the best, and it is not the way to reduce complexity. We shy away from complex topics. We are scared to ask dumb questions. We do not want to lose face, and we sometimes choose to humiliate or focus on others instead of standing up to the challenge and taking responsibility for our flaws and the mistakes we make. The only way to reduce complexity is to shed light where it is dark and to decompose the huge blocks, step by step. We need to take a shared approach to a common understanding, using a common language and a common ground.
More than ever, we need to focus on interdisciplinary, cooperative discourse in the best possible way. We even need to have discourse about discourse per se! If we have so many people from different disciplines, with very heterogeneous backgrounds and knowledge, how do we want to approach these topics after all?
Our research endeavors within this series comprise technical and non-technical considerations to prepare our community in the best possible way. Topics, problems and dilemmas, whatever may come up, are all examined through an interdisciplinary lens to assess their social-psychological impact and ethical implications, and to forecast future developments for the greater purpose, which ultimately is the public good.
Originally posted at https://www.scip.ch/en/?labs.20180215