The age of the face-reading device is coming

Enrique Dans


Our relationship with electronic devices has traditionally been pretty much a one-way street: they provide us with content and information, and we read, watch, or listen to them. Few devices “read” us; okay, my Fitbit can take my pulse, as can my Apple Watch, but only when I ask it to, and it then limits itself to storing the data. It doesn’t react to the reading or make decisions based on it, except in a few limited situations.

But Apple’s acquisition of Emotient, announced on Thursday, could change all that: the company produces software that can read facial expressions. Although Emotient has removed information about its products from its website, we know that its technology has been used to gauge people’s reactions to advertisements, to interpret signs of pain in patients unable to talk or express themselves, and to monitor the expressions of customers window shopping. Pages preserved on Archive.org cover fascinating topics such as attention indicators, sentiment and engagement, A/B tests for campaigns and advertisements, loyalty prediction analysis, along with other neuromarketing concepts.

The price of the deal has not been revealed: all we know is that the company had so far raised some $8 million, the last $6 million of it in March 2014. Apple has given no indication yet as to what it intends to do with the company, but everything indicates that the acquisition is part of a larger drive that saw it buy Perceptio (deep-learning-based image recognition) in October, as well as Faceshift, a Zurich-based startup that creates animated avatars (its software was used in the latest Star Wars), technology that can be used for facial biometrics to unlock devices or authorize payments.

Analyzing companies on the basis of their acquisitions is a complex science, open to all kinds of interpretations. That said, if Apple is buying companies working on facial recognition, we can only be moving toward a world where we are surrounded by devices that read our facial expressions and act on them. This is all closely related to human-computer interaction, or HCI. Can you imagine a Siri that, instead of sitting quietly waiting for our next voice command, analyzes our facial expression in response to the information it has just provided, and then acts accordingly? We don’t know how long it will be before these kinds of products reach the marketplace, but everything suggests this is where we are headed.
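To make the idea concrete, here is a minimal sketch of what such an expression-aware assistant loop might look like. Everything in it is hypothetical: the classify_expression model, the expression labels, and the assistant’s reactions are placeholders for illustration, not any actual Apple or Emotient API.

```python
# Hypothetical sketch of an expression-aware assistant loop.
# classify_expression() stands in for a real facial-expression
# classifier (the kind of model Emotient built); here it is a stub.

import random
import time

EXPRESSIONS = ["neutral", "confused", "pleased", "frustrated"]

def classify_expression(frame) -> str:
    """Placeholder for a real model mapping a camera frame to an
    expression label; returns a random label for illustration."""
    return random.choice(EXPRESSIONS)

def react(expression: str) -> str:
    """Map the detected expression to a (hypothetical) assistant action."""
    if expression == "confused":
        return "Rephrase the last answer more simply."
    if expression == "frustrated":
        return "Offer to hand off to a human."
    if expression == "pleased":
        return "Continue; the answer landed well."
    return "Wait for the next command."

if __name__ == "__main__":
    for _ in range(3):        # a few iterations of the feedback loop
        frame = None          # stands in for a camera capture
        expression = classify_expression(frame)
        print(f"{expression}: {react(expression)}")
        time.sleep(0.5)
```

The point of the loop is the inversion the article describes: the device initiates based on what it reads from us, rather than waiting for an explicit command.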

(In Spanish, here)

--

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)