Look into my eyes, look into my eyes… or turn on your FaceTime Attention Correction
One of the features of Apple’s new iOS 13, due for release this fall, that has attracted a lot of comment is FaceTime Attention Correction. Easily activated, it corrects an unsettling aspect of video calls: your eyes look at the image of the other person on the screen, not at the camera, which makes it look as though you are avoiding eye contact or looking somewhere else. Experienced video callers have learned that looking directly at the camera can help, particularly when they want to drive home a point, but it still feels unnatural to focus on a small, motionless black hole rather than on the part of the screen occupied by the face of the person you’re talking to.
The solution, until manufacturers manage to put the camera in the center of the screen and do away with the notch, seems futuristic and perhaps slightly controversial: digitally manipulate the image of our eyes so that they appear to be looking at the other person. People who have tried it say the result is practically indistinguishable from reality, and that it genuinely feels like eye contact.
What does it mean to manipulate a person’s eyes so they appear to be looking at you? The technology has existed since Microsoft Research experiments in 2004, but it has never been tested at scale through inclusion in an operating system that millions of smartphones will download very soon. As with other such technologies, the effect will be almost imperceptible: the video call will simply seem more “natural”, like a face-to-face conversation. But in practice, we will all be looking at a digital reconstruction of somebody’s gaze, created from the real image of their eyes.
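To make the idea of “reconstructing a gaze” concrete, here is a deliberately minimal sketch in Python. Real systems (Apple’s included) use face tracking and learned image warping to deform the eye region convincingly; this toy version, with a hypothetical `redirect_gaze` function and a hard-coded eye bounding box, only illustrates the core move: locally rewriting the pixels of the eye so the iris ends up where eye contact would put it.

```python
import numpy as np

def redirect_gaze(frame: np.ndarray, eye_box: tuple, dy: int) -> np.ndarray:
    """Toy gaze 'correction': shift the pixels inside an eye bounding box
    (top, left, height, width) upward by dy rows, so an iris that was
    looking down at the screen appears to look at the camera.
    The gap left at the bottom is padded with the eye's edge row.
    This is an illustration, not how any shipping system does it."""
    top, left, h, w = eye_box
    assert 0 < dy < h, "shift must stay inside the eye region"
    out = frame.copy()
    eye = frame[top:top + h, left:left + w]
    shifted = np.roll(eye, -dy, axis=0)   # move the iris up by dy rows
    shifted[-dy:] = eye[-1]               # fill the vacated rows
    out[top:top + h, left:left + w] = shifted
    return out
```

A production pipeline replaces the fixed bounding box with per-frame eye landmarks and the rigid shift with a smooth deformation field, which is why the result users describe is indistinguishable from a real gaze rather than an obvious cut-and-paste.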
We might as well get used to technologies that virtually reconstruct people’s faces. When two people wearing virtual reality headsets talk, for example, neither will be able to see the other’s eyes, which will be hidden behind an opaque visor. One solution is to use sensors to capture an image of our eyes and reconstruct them where the headset sits, allowing a reasonably natural conversational experience. Another, in the meantime, is to use avatars for these kinds of video calls.
What are you looking at? Soon, the question will be meaningless when using FaceTime, because an algorithm will have decided to make it appear we’re all looking into each other’s eyes.
(In Spanish, here)