Will affective computing influence e-commerce?
Imagine machines that could read human emotions. They would tell you whether you are sad, disgusted, or joyful, and, more importantly, they would give the same kind of feedback about your clients. Their reaction to a product would be instant and measured by machines. But are we talking about the near future, or do interfaces like that already exist?
Paul Ekman’s research shows that we share several emotions that are recognisable globally and depend on our biology rather than on context or culture. Ekman studied facial expressions representing emotions such as anger or fear and tried to find out whether they are universally recognisable. From European cultures all the way to the forests of Papua New Guinea, the study showed that we have six basic emotions that everyone questioned recognised in photos and labelled the same way: joy, anger, sadness, fear, surprise, and disgust. Based on the facial-muscle patterns that express these affects, algorithms and interfaces can read, interpret, and even simulate human emotions.
Affective computing is a term coined by Rosalind Picard, an MIT professor and founder of startups such as Affectiva and Empatica. Generally, algorithms and programmes can work in two ways: they can read emotions from facial expressions, voice, skin temperature, or pulse, or they can simulate empathy and, once they have recognised an emotion, give feedback that acts upon the user. That means chatbots that say “I’m sorry” and appear to actually mean it. Sensors capture data on your behaviour and facial expressions, and the system adapts its response based on that data. That’s the idea behind startups like Affectiva: they measure emotions in human behaviour, which can be used in product testing, evaluating website design, or shaping the e-commerce experience.
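The “simulated empathy” loop described above can be sketched very simply: once an upstream classifier has produced an emotion label, the system maps it to an adapted reply. The labels and replies below are illustrative assumptions, not Affectiva’s actual API.

```python
# A toy sketch of simulated empathy: map a detected emotion label
# (assumed to come from an upstream classifier) to an adapted reply.

EMPATHETIC_REPLIES = {
    "sadness": "I'm sorry to hear that. Is there anything I can help with?",
    "anger": "I understand this is frustrating. Let's sort it out together.",
    "joy": "Great to hear! Anything else I can do for you?",
}

def respond(detected_emotion: str) -> str:
    """Return a reply adapted to the user's detected emotional state."""
    # Fall back to a neutral reply when the emotion is unrecognised.
    return EMPATHETIC_REPLIES.get(
        detected_emotion,
        "Thanks for your message. How can I help?",
    )
```

A real system would of course sit behind a sensing pipeline (camera, microphone, or wearable), but the feedback step itself is just this kind of emotion-to-response mapping.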
Coming to e-commerce: from the very beginning, the offer, the marketing and advertising, or the store’s design can be measured by emotion-tracking algorithms. It’s still cheaper to run tests and questionnaires asking about feelings towards an offer, but we can imagine cameras that wouldn’t need permission to measure expressions, running on each and every computer; mass data like that would be processed by marketing specialists and product development teams. The second way of using affective computing could apply to automated communication tools that ask whether a customer is interested in an offer: a response tuned to the customer’s emotional state would be far more effective than a regular chatbot.
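As a concrete illustration of such an emotion-aware communication tool, the sketch below conditions an automated follow-up message on a detected emotion. The emotion categories, product parameter, and message wording are all hypothetical.

```python
# Illustrative sketch: an automated e-commerce follow-up that adapts
# its message to the customer's detected emotional state.

def follow_up(emotion: str, product: str) -> str:
    """Compose a follow-up message conditioned on a detected emotion."""
    if emotion in ("anger", "sadness"):
        # Negative reaction, e.g. at checkout: try to remove friction.
        return f"We noticed you hesitated on {product}. Here's free shipping, on us."
    if emotion == "joy":
        # Positive reaction: nudge towards completing the purchase.
        return f"Glad you liked {product}! Ready to check out?"
    # No clear signal: send a neutral reminder.
    return f"Still thinking about {product}? We're here if you have questions."
```

The point is not the specific wording but the branching: a regular chatbot sends one canned message, while an affect-aware one picks among several based on the emotional signal.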
Will this kind of technology be applicable worldwide anytime soon? Yes and no: several technological problems still need to be solved. First of all, we have more than six emotions, something even Paul Ekman, the author of the basic-emotions thesis, admits. Algorithms would have to learn far more about the complicated expressions we carry, and what about those that are hidden and don’t show on our face? That’s another problem. Secondly, the cameras used to read affects from the face, or the voice-recognition systems, would have to be available in computers, mobile devices, and laptops; it usually takes about six high-definition cameras for an interface to act upon emotions read from a human face, and we don’t have such systems in everyday use (yet!). There’s also no way to isolate our environment: are we angry because the bus is late, or because we just saw the shipping costs on our iPhone? It’s hard to draw a simple conclusion from our reactions without knowing what caused our mood. Lastly, permissions: even when the technology reaches the point where sensors can be added to our internet-connected devices or home-automation systems, we as users would have to consent to cameras and audio algorithms that read our feelings. It’s a really sensitive and personal matter, and we probably won’t agree to such extensive data collection any time soon.