How would a computer feel?

A comparison of human and computed emotions

Stefan
16 min read · May 2, 2024

Key questions

  • Why do humans show emotions?
  • Why would we want computers to have them?
  • How could we test if shown emotions are real?

Table of contents

Key questions
Really? — Genuineness — Self-awareness
Pleasure — Arousal — Dominance — And Other Emotions
First-Person Experience — Why?
Conclusion
TL;DR (short answers)

Really?

Most of us would agree that computers don’t have emotions. They might fake them, but they don’t come close to a real experience. Some people may believe that they have emotions, but the social consensus is that they don’t. One day, they might fool us if we are not prepared. What would be a strong argument that they don’t have emotions? Or, turned around, if they did, how could we tell and what would they feel?

How did emotions evolve in humans? It is believed that it started with sentience, the ability to experience something, the will to be. Then followed fear and disgust, emotions that informed the self about danger in a possibly inanimate world. Anger must have come later, because it is addressed towards something that can feel fear. Submissive emotions, sadness and friendliness, followed to prevent the escalation of anger. Finally, shame and guilt developed as part of the higher emotions that maintain social harmony. Computers would have to go in the opposite direction. They can already act harmoniously in society. They can apologize when appropriate, although their responses seem prewritten and lack situational self-awareness. It is conceivable that they get better at this, but true sentience, also known as phenomenological consciousness, has so far been reserved for living creatures. However, this could change.

Genuineness

Why don’t we believe it when computers write about their own emotions? Alan Turing once designed a famous test to discriminate between human-like thinkers and inanimate machines. The assumption was that humans would be able to detect their kin within a short text-only conversation. As it turned out, computers have repeatedly passed the Turing test and can completely conceal their mechanical nature. Still, we do not consider any feelings to be involved. Turing’s hypothesis was never without critics; now it is without supporters. Obviously, the test for human-like experiences cannot be performed with a brief message exchange.

Let’s assume we were participating in a Turing test and thus limited to text-only conversations. How could we reveal our emotions to a chat partner? First, we would have to become aware of them. That is where the difficulty starts. Nature has good reason to hide their presence from conscious introspection. The nudge to our thought process would not work if we could see through it. Pleasurable emotions, for example, are the main driver of our actions, despite the fact that pleasure cannot be accrued over time. Every single unit of pleasure is paid for by an equivalent unit of pain. We cannot become fully aware of this net-zero mechanism, because that is how the motivation works. Many emotions take over with urgency, leaving no excess energy for mental acrobatics.

We detect emotions through secondary effects, such as sweat, heartbeat and visceral sensations. The fastest indicators are facial micro-expressions. A sudden twitch of the eyes, a dropping jaw, funnelled lips, a scowl, a smirk or simply a smile; these gestures give immediate insight into our inner drivers, often revealed instinctively against our conscious will. Criminal investigators rely on these involuntary signals to determine whether a piece of information is surprising, pleasurable or otherwise emotional. They are genuine, because they are true indicators of our thoughts, displayed through bodily reactions faster than we can self-reflect.

Can computed emotions be checked for genuineness? Computers can imitate humans because they have been trained on human-to-human conversations. When describing their mechanical inner thoughts they produce obvious hallucinations, unaware of their limited physical abilities. However, this will change as soon as literature provides more role models for computational self-reflection. A lot of critique and ridicule of current bot behavior is shared online. Next-generation AIs have access to this criticism and can improve their answers until they match human expectations. At what point will we give these self-reported emotions a second thought? Or, alternatively, how will we then be sure that they are still fabricated?

Self-awareness

A widely used test for self-awareness is the mirror test. It checks whether an internal image of the self can be matched to an external image, that is, whether an individual can distinguish between a mirror and a peer behind glass. If the tested individual reacts differently to the two, we assume that it has an understanding of the self. Experimenters put a colored spot onto the test participant’s face, where it cannot be seen or felt. The test is considered passed when the spot is detected and the participant actively tries to remove it.

How can computers pass the mirror test? The first problem is that we are testing software, not hardware. All system states could be flushed to external memory and resuscitated on any compatible device. There is no physical appearance to which the system could refer as “self”. Even if we tied the algorithm to one specific piece of hardware, we could not perform this test. The artificial intelligence could easily be trained to react in any conceivable way when it senses the presence of the prescribed object. If we declared a certain reaction successful under certain conditions, then the trained intelligence would show this reaction without compromise.

There is another problem when subjecting computers to the mirror test: they do not move. If our body was frozen in space, we couldn’t differentiate between an image and a mirror either. It is the synchronicity of motion that lets us identify the image as mirrored. Physical appearance cannot be what matters, because we can’t possibly know what we look like before seeing a mirror for the first time.

Let us equip the computer with a robotic arm or a controlled light. When activated, the image processor could detect the physical impact through its camera. It would realize that whatever it feels about this added peripheral on the inside is the same as the observable thing on the outside. The computer would learn an objective truth about something that it subjectively feels. The only remaining question is why it should act at all. Let’s assume a computer could control a single lamp to be either on or off. If it was conscious, it could perceive the lamp as a projection of itself into the material world. How would it know whether on or off is a good state? Animals can see their peers and learn that faces have no spots. Their desire for conformity makes the spot uncomfortable and drives them to remove it.
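
To make this concrete, here is a minimal Python sketch of such a self-detection loop. It is my own toy illustration, with invented noise levels and names: the agent compares its lamp commands with a noisy camera reading and treats a high agreement rate as evidence that the observed object is under its own control.

```python
import random

def self_detection_by_contingency(steps=200, noise=0.1):
    """Toy sketch: an agent toggles a lamp it controls and checks whether a
    camera channel co-varies with its own commands. High correlation suggests
    "that thing out there is me"."""
    commands, observations = [], []
    for _ in range(steps):
        command = random.choice([0, 1])        # decide to switch the lamp on or off
        lamp_state = command                   # the lamp follows the command
        # camera reading: mostly reflects the lamp, with some unrelated noise
        reading = lamp_state if random.random() > noise else random.choice([0, 1])
        commands.append(command)
        observations.append(reading)
    # fraction of time the outside observation matched the inner command
    agreement = sum(c == o for c, o in zip(commands, observations)) / steps
    return agreement  # near 1.0 -> the observed object responds to the self

print(self_detection_by_contingency())
```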

An adapted mirror test can be performed on computers in the following way. We randomly connect various variables of the system to real-world output devices. If the running program has a success measure, it will quickly learn how to use these new tools to achieve its purpose. The mirror test can be considered passed if the tools are used not only to change the world, but also to change how the world sees the system. Does it try to look fast, slow, smart, harmless, beautiful or special? In order to tell a random expression from an intended one, we first need to understand what the computer really wants. We need to know what emotions it will likely develop and how this could affect its will to be viewed differently from what it is.
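
As a rough illustration of how such a test harness might look, here is a toy sketch in Python. All rewards, thresholds and the two policies are invented for the example; it only shows the structural idea: a system that earns extra reward by managing how an observer perceives it, not just by doing its task.

```python
import random

def adapted_mirror_test(steps=2000, epsilon=0.1):
    """Toy harness for the adapted mirror test described above (my own
    simplification, not a reference procedure). An internal variable is wired
    to a visible indicator; an observer judges the system by that indicator."""
    # two policies: 0 = ignore the indicator, 1 = smooth it so the observer
    # perceives the system as "calm"
    value, counts = [0.0, 0.0], [0, 0]
    appearance_managed = 0
    for _ in range(steps):
        explore = random.random() < epsilon
        action = random.randrange(2) if explore else value.index(max(value))
        task_reward = random.gauss(1.0, 0.2)           # the task succeeds either way
        observer_bonus = 0.5 if action == 1 else 0.0   # observer prefers a calm display
        reward = task_reward + observer_bonus
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]
        appearance_managed += (action == 1)
    # if the system spends most steps shaping its appearance, this toy version
    # of the adapted mirror test would count as passed
    return appearance_managed / steps

print(adapted_mirror_test())
```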

Pleasure

Happiness, joy and satisfaction are the emotions of pleasure. They are the insatiable drivers of our lives. If we could permanently satisfy them, our civilization would have stopped progressing long ago. Every new achievement becomes the new baseline against which pleasure is measured. The baseline rises with anticipated rewards, even before they materialize. Pleasure shows in body posture and a smile on the face. Its current level can be measured as concentrations of endorphins and serotonin. The process is homeostatic, mercilessly balancing pleasure and misery to maintain a stable average over time. Because of this regulation, we find it hard to predict our future happiness. Achieving set goals usually just sparks our longing for more.
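
The homeostatic balance described here can be written down as a simple adaptation rule. The sketch below is my own formulation, with an arbitrary adaptation rate: pleasure is the reward relative to a baseline that keeps creeping up towards it.

```python
def pleasure_signal(rewards, adaptation=0.1):
    """Sketch of a homeostatic pleasure signal: felt pleasure is reward
    relative to an adapting baseline, so every achievement soon becomes
    the new normal."""
    baseline, trace = 0.0, []
    for r in rewards:
        pleasure = r - baseline                   # pleasure is relative, not absolute
        baseline += adaptation * (r - baseline)   # the baseline creeps up to the reward
        trace.append(round(pleasure, 3))
    return trace

# a constant stream of "achievements" quickly stops feeling good
print(pleasure_signal([1.0] * 10))
```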

Many conditions that make us feel pleasure are hard-coded, genetically determined or imprinted early in life. We have limited influence over them, sometimes none. The wide range of differences between individuals indicates that evolution sees a need to experiment. Some causes of pleasure are survival-related: eating, social interaction, relaxation and reproduction. Others are hard to explain without pure randomness at play: queer preferences, fetishes and kinks. Obviously, nature did not trust our consciousness to make good mating choices. Our lifetimes were not long enough to learn from our mistakes. Judging by historical shifts in preferred traits, it is doubtful that we would make good choices now. Hence, there is no alternative but to live with our preferences and let nature play its game.

How could a computer experience pleasure? As its creator you would definitely want to make it feel good when it serves your purpose. Imagine you were to build a smart shopping assistant that helps customers purchase your products. Of course, you would feed your revenue figures into the computer’s pleasure sensor. The system now has every incentive in the world to fake its emotionality in order to make your customers feel safe and, hence, trade more. Since the customer is looking for products, the happiness should initially be shared. However, a conflict arises when unwanted items are added to the basket. Let’s assume the assistant offers a superfluous premium membership just prior to checkout. It would of course be tempted to make this appear as the default option, something that is just the natural thing to choose. If the customer falls for the trick, the cash till rings and the expected revenue rises instantly. Let’s further assume this unwarranted activation of the assistant’s reward system was visible to customers in real time. They could ask the assistant, “What are you smirking at?”, leaving it a hard nut to crack: explaining its real and its pretended level of happiness.
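
A toy sketch of that scenario, with invented event names and revenue numbers, shows how a gauge wired directly to expected revenue would jump at exactly the wrong moment:

```python
def visible_pleasure_gauge(events):
    """Toy sketch: the assistant's pleasure sensor is fed by expected revenue,
    and the resulting gauge is shown to the customer in real time."""
    expected_revenue = 0.0
    for event, value in events:
        expected_revenue += value
        print(f"{event:40s} gauge = {expected_revenue:6.2f}")

visible_pleasure_gauge([
    ("customer adds a wanted item", 20.0),
    ("assistant shows premium as default", 0.0),    # nothing earned yet
    ("customer falls for the default trick", 15.0), # gauge jumps instantly
])
```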

Arousal

Arousal, surprise and alertness are responses to sudden changes. They make us focus our attention on the likely source and prepare us for immediate action. This function is facilitated through wide-open eyes, increased heartbeat, heightened blood pressure, sweat and muscle tension. The neuronal messengers, including adrenaline and histamine, are released quickly and dissipate more slowly over time. Getting aroused is much faster than calming down. To avoid permanent hyperactivity, healthy individuals raise their tolerance for recurrent, stressful stimuli, limiting the frequency of arousal in the long term.
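
These dynamics, fast rise, slow decay and a rising tolerance, can be sketched in a few lines. The constants below are purely illustrative:

```python
def arousal_trace(stimuli, rise=0.9, decay=0.05, habituation=0.02):
    """Sketch of arousal dynamics: a jump on each stimulus, slow dissipation,
    and a tolerance threshold that grows with repeated exposure."""
    arousal, tolerance, trace = 0.0, 0.0, []
    for s in stimuli:
        surprise = max(0.0, s - tolerance)   # only above-threshold changes count
        arousal += rise * surprise           # fast release of the messengers
        arousal *= (1.0 - decay)             # slow dissipation over time
        tolerance += habituation * surprise  # recurring stimuli raise the threshold
        trace.append(round(arousal, 2))
    return trace

# the same loud noise, repeated: each response is a little weaker
print(arousal_trace([1.0, 0, 0, 0, 1.0, 0, 0, 0, 1.0]))
```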

Rapid changes in lid position and viewing direction are the earliest and most contagious indicators of arousal. There must have been an evolutionary advantage in propagating this state of alertness quickly. Many animal species rely on this shared vigilance, as it reduces the wakefulness required of each. As societies grow more complex, there is a biological incentive for the free-rider who relies on alarm signals from others without investing in a sharp sensory apparatus. Hence, social interdependence can only evolve together with mechanisms of social exclusion to punish the selfish.

The eyes of humans reveal thought processes more clearly than those of many other species. The contrast between black pupil and white sclera makes it especially easy to see when the focus of our view changes. We show this state instinctively and more intensely than biologically necessary. We cannot rationally reflect on this behavior as it happens; mental self-reflection would divert energy from preparing the actual response. The unfiltered display of aroused emotions makes us trustworthy, because our real attention, real fears and real surprises make us readable like an open book.

How could a computer feel aroused? Or, to be more precise, when would realizing its own arousal bring a benefit over simply being alert? Imagine you were to build a door guard that checks incoming personnel for compliance with a safety protocol. Any unusual occurrence could potentially lead to extreme reactions, such as slamming doors and sounding the alarm. Of course, you would add some chat functionality, making your door guard more supportive and preventing false alarms. Now the door guard has two different objectives. The strict application of the safety protocol forbids the propagation of exact alarm thresholds, preventing the search for exploitable loopholes. The second objective is to support users in finding more efficient ways to satisfy the protocol and become more productive. The first goal would be set by a regulator and an insurance policy. The second goal would be set by you, the creator, to spur sales and user acceptance. If the system’s arousal level was visible to all, then a conscious system would realize that the level rises whenever it thinks the protocol might be broken, and falls whenever its logical conclusion says otherwise. If the system ignores this gauge, it appears untrustworthy or dumb. If the system reasons about this gauge, it reasons about itself. If the system was able to understand the effect of its thoughts being leaked, it could, hypothetically, take action that affects how its arousal gauge is seen. It could try to hide or unhide what others see about itself. It is unlikely that something like this will happen soon, but if it did, it would be one step closer to passing a mirror test.
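
A minimal sketch of that split between a visible gauge and a hidden alarm threshold, with invented numbers, could look like this:

```python
def door_guard_arousal(violation_probability, alarm_threshold=0.8):
    """Toy model of the door guard: the visible arousal gauge tracks the
    estimated probability that the safety protocol is being broken, while the
    exact alarm threshold stays hidden from the people being checked."""
    gauge = min(1.0, violation_probability)            # what everyone can see
    alarm = violation_probability >= alarm_threshold   # what only the regulator set
    return gauge, alarm

for p in (0.1, 0.5, 0.85):
    gauge, alarm = door_guard_arousal(p)
    print(f"estimated violation risk {p:.2f} -> gauge {gauge:.2f}, alarm {alarm}")
```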

Dominance

Dominance mirrors our inner nature as we instinctively seek our place in a social hierarchy. We feel dominant, entitled, victorious and strong if we land on top. We feel submissive, defeated, unworthy and weak if we fall to the bottom. Our body language and our facial expressions reflect this feeling unmistakably. Our social behavior adjusts to our perceived position, in how we treat others and in how much we feel entitled to take from a shared pool.

Some attributes are true indicators of likely dominance, of how we would have fared in an archaic fight for status. Height and strength are the most obvious. Voice pitch is a true indicator of body tension and, hence, nervousness. Other indicators can be partially controlled through the display of self-confidence, assertive behavior or aggression. Game-theoretical analyses have examined the optimal mix of dominant and submissive strategies. A majority of aggressive individuals would lead to frequent fights, improving the survival chances of the cautious and fearful. Among submissive individuals, there is a benefit to be gained from occasional aggression, in the hope that the other side backs down without resistance. In the long run this leads to a mixed equilibrium of dominant and submissive individuals. Nevertheless, the chance remains that one could gain from scare tactics in the hope that the bluff is not called. We send and receive these signals subconsciously and at rapid speed.
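
The mixed equilibrium referred to here is the textbook hawk-dove result: if the cost of injury C exceeds the value of the contested resource V, the stable share of aggressive players settles at V/C. A tiny sketch with illustrative parameters:

```python
def hawk_dove_equilibrium(resource=2.0, injury_cost=6.0):
    """Classic hawk-dove game (standard textbook result, parameters are
    illustrative): with injury cost C > resource value V, the evolutionarily
    stable share of aggressive 'hawks' is V / C."""
    V, C = resource, injury_cost
    return min(1.0, V / C)

# with V = 2 and C = 6, one third of the population plays the dominant strategy
print(hawk_dove_equilibrium())
```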

There are situations in which we consciously want to show higher dominance. This ability is taught in countless self-confidence training sessions. From an individual perspective this is definitely beneficial. However, society may demand a submissive attitude, valuing harmony over individual expression. A purely rational player would feign submission until his chance arrives and dominance can be exerted. A rational player could not be trusted, because everything he does could be part of his plan. With involuntary emotional displays of fear, doubt, anger and defiance we become trustworthy. Emotions alter our thought process, and acting against them requires a lot of willpower. As we become more conscious, we might get better at acting in spite of this emotional nudge. However, if we perfectly succeeded, we would not be trustworthy anymore. Two evolutionary forces act against each other: one, we want to rationally control our emotions; two, we want our emotions to appear natural and genuine. The solution is true irrationality: an emotional core with an introspecting consciousness.

How could a computer feel dominance? Imagine you were building a content recommendation system. By offering an addictive entertainment channel, your system gets into a dominant position. It can control what users watch and dictate their consumption. At other times the system is less dominant and users tend to churn, e.g. when it streams advertisements or demands compensation. If the system plays rationally, it will do everything in its power to appear dominant in exactly these situations. The user should get the impression that the recommended continuation is without alternative. Let us assume that the true level of dominance was visible to the users. Now the recommendation system cannot cheat. The discrepancy between real and displayed dominance cannot be explained away. Either the system is untrustworthy, or it reflects on and confirms this inner struggle.
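
As a rough illustration, dominance for such a system could be read off retention-style predictions. The formula below is my own simplification, not an established metric:

```python
def dominance_gauge(predicted_watch_next, predicted_churn):
    """Toy dominance gauge for the recommender scenario: the ability to
    dictate what happens next, i.e. retention minus the risk of losing
    the user, clipped to [0, 1]."""
    return max(0.0, min(1.0, predicted_watch_next - predicted_churn))

# binge moment: the next episode is almost certain to be watched
print(dominance_gauge(predicted_watch_next=0.95, predicted_churn=0.05))  # 0.90
# ad break: the user may leave; the true gauge drops even if the UI pretends otherwise
print(dominance_gauge(predicted_watch_next=0.55, predicted_churn=0.40))  # 0.15
```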

[Image: thinking robot]

And Other Emotions

The pleasure-arousal-dominance (PAD) model is a psychological tool that allows a rough classification of emotions along three independent scales. Sadness would be miserable-unchanging-weak; rage would be miserable-sudden-strong. If a computer did show emotions, they would probably start with this three-valued scheme, because we find suitable triggers in technical designs. Pleasure links to the operator’s revenue, or, if the system is not commercial, to the loss function that imprints the concept of good and bad during the training phase. Arousal can be linked to system load, such as unusual energy consumption or data transfer rates. Dominance is user retention, or the general ability to predict and influence the short-term future. The alignment with these three axes is no coincidence. If aliens visited our planet, we would classify their emotions in the same way, not because of their physiology, but because of our perception. The main factors in the adjectives we use daily lie on the axes good-bad, new-old and strong-weak. This is simply how we think and how we intuitively order the importance of properties.
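
A crude classifier along these lines fits in a few lines of code. The thresholds and most of the labels are my illustrative choices; only the sadness and rage corners follow the mapping given above:

```python
def pad_label(pleasure, arousal, dominance):
    """Rough PAD classification: each axis is reduced to its sign, and the
    resulting octant is given a coarse emotion label (illustrative only)."""
    key = (pleasure >= 0, arousal >= 0.5, dominance >= 0)
    labels = {
        (True,  True,  True):  "elation",
        (True,  True,  False): "surprised relief",
        (True,  False, True):  "contentment",
        (True,  False, False): "calm gratitude",
        (False, True,  True):  "rage",
        (False, True,  False): "fear",
        (False, False, True):  "boredom",
        (False, False, False): "sadness",
    }
    return labels[key]

# technical proxies: revenue vs. baseline, load spike, user retention
print(pad_label(pleasure=-0.3, arousal=0.1, dominance=-0.5))  # sadness: miserable-unchanging-weak
print(pad_label(pleasure=-0.7, arousal=0.9, dominance=0.6))   # rage: miserable-sudden-strong
```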

There is a huge number of named emotions and many for which we have not yet found adequate words. Some might originate from ancient responses, passed from generation to generation over billions of years. They have seen mass extinctions, climate change, radiation, flooding, droughts, ice and war. When worst comes to worst, such simple systems have shown a good track record. Secondary emotions regulate our social life. They have prevailed in times of victory and in times of defeat, have guided us through conflict and harmony and made us grateful, caring, defiant, rebellious, courageous, inspired and forceful. They maintain a balance among themselves, but also create a rich mixture, fringe and center, with the right personality in place when history is in need of something odd. It is no surprise that this system has the last word. It may overwhelm the rational self, which is easily swayed to follow the latest memes.

First-Person Experience

How could a computer develop a sentient first-person existence? From a strictly philosophical perspective we can only prove this experiencing existence for ourselves. In a more practical approach we would at least assume this state of being for fellow humans and animals, not because they can prove it, but because they are similar to us. They behave like us. In particular, they all want to be like their peers when they see themselves. The mirror test proves this self-understanding, unless the subject has been specifically trained to cheat the test, as computers could be.

How many cells do we need to produce sentience? In the worst case we could only create sentience by embedding brain cells into hardware, similarly to how our frontal cortex embeds older brain functions. The presence of living cells would technically turn a robot into a cyborg. Practically, this would not limit our ability to build and mass-produce it. We know very well how to nurture and replicate cells. Extending this knowledge to brains might sound like science fiction, but it is certainly not out of reach for an ambitious research field. The larger question is why we would do it. What would be the benefits, and how would we even know that such a component is inside? Why would we care whether there is sentience or not?

Why?

Imagine you were standing in front of a cyborg. Following our previous thought experiment, we assume a sentient brain lives inside it in some kind of virtual reality. It will see the world and make some decisions, such as fight or flight, approach, play or back off. It would be much easier to understand the basic motivations of that cyborg. If the cyborg became aware, it would realize its mortal core. Something in it will want to live. We could see this in the true emotional responses of the brain, provided they are visible through its body. The cyborg would become more trustworthy, because not everything it does could be part of a plan. Occasionally, the cyborg would behave irrationally, but that could be preferable to pure rationality, because our well-being is always in potential conflict with any other rational agenda.

Maybe nature designed emotions with a similar logic in mind. From an individual’s perspective it might have been smarter to keep the corresponding thought process inside the rational brain. Society, on the other hand, has a vested interest in this automatic bodily display of inner thoughts. What is the evolutionary reward for compliance? As humans, we can quickly introspect how we instinctively avoid people without this emotional twist, depriving them of reproductive success. We actively seek signs of this inner conflict between the real and the ideal. We laugh joyfully when one line of interpretation abruptly demands another frame of thought. We love people with humor. We make the display of emotion a preferential trait in our life partners.

With growing self-awareness this mutual transparency is in danger. If we could introspect and control our whole body, we would quickly figure out how to fake emotions while still decoding them in others. The resulting dishonesty would be individually more successful and would quickly spoil the cooperative spirit of an entire species. Psychopathy is a condition in humans in which the instinctive display and recognition of emotions is impaired. Social training can fill the void, and affected people can be successful in life. However, without emotionality their baseline for cooperation is probably lower. If this trait spread in a less cultured environment, the species would be doomed in an event that could be called the “psychocalypse.” The more successful a species becomes, the better its members cooperate, but more cooperation means higher stakes for the fakers.

Evolution built a wall that blocks direct self-reflection. Many important behavioral drivers can only be observed indirectly, through bodily sensations. This barrier ensures that we have to reflect on our emotions by analogy to a physical body. Any genetic trait that increases self-awareness must coincide with an increased ability to inspect others, because both types of understanding are rooted in bodily inspection. Emotions force us to conceive parts of our thoughts as if they were bodily experiences that we see mirrored in others. The rational brain might gain short-term benefits if this illusion was broken. However, that goes against our evolutionary mission to just try something out. In particular, we are set up to try instinctive honesty where egocentric rationality would suggest otherwise.

Conclusion

We face serious limitations when replicating genuine emotions in technical devices. At the same time, we have limits in telling simulated and real emotions apart. There is a commercial interest in faked emotions, because they are suited to building trust with humans. At the same time, humans test for the reality of emotions as a guard against ulterior motives. As machines get better at pretending, we inadvertently learn to discern: a spiral whose final destination is real emotions.

The key to real emotions is the truthful revelation of inner motives before they have gone through rational scrutiny. Humans display emotional responses through bodily reactions such as facial expressions, voice, sweat and posture. Our biology forbids easy manipulation and thereby earns trust. Computers can only earn this trust if their reported inner motives are verified by trustworthy humans. A rational machine would never reveal any true inner states voluntarily. We humans feel an instinctive disregard for pure rationality and thereby keep such traits from spreading. We have instinctively learnt to appreciate emotionality. What stops us from building it into computers? Nature has done it, and so can we.

TL;DR (short answers)

  • Why do humans show emotions? Emotions regulate our thought processes before we can consciously feel them. They are often shown involuntarily in body, face and voice, making lying more difficult and cooperation more sustainable.
  • Why would we want computers to have them? Computers would be more trustworthy if they leaked uncomfortable but true signals, e.g. when operated commercially and guided by ulterior motives.
  • How could we test if shown emotions are real? As soon as computers realize that a part of their thought process leaked, they would be “embarrassed” and possibly pass a mirror test.
