Wearing Our Hearts on Our Sleeves

Published in ACM TiiS · Nov 19, 2018

By Henry Lieberman
MIT Computer Science and Artificial Intelligence Lab (CSAIL)

Intelligent user interfaces often seek to analyze the behavior of the user. An assistant can’t be helpful unless it knows what it’s supposed to be assisting with. It’s too much to ask users to explicitly communicate every relevant detail of their activities to a software agent, so the agent is often left to infer the user’s goals, preferences, and context from observational clues.

The field of behavioral analysis has exploded in recent years, as the ubiquity of online interaction has led to the proliferation of “big data” sets of user behavior. Machine learning has provided a whole new set of tools for analyzing this data in ways never before thought possible, with astonishing success.

This month’s issue of ACM Transactions on Interactive Intelligent Systems, while not a themed issue, includes a number of articles that present innovative ways of analyzing user behavior in a variety of domains. And there are some pretty sophisticated things that can be learned from what seem like simple measurements.

Who’d have thought that you could tell how expert someone is in a domain just by looking at their handwriting [Oviatt et al. 18]? That Jane Jacobs’ 1960s theories about urban neighborhoods would be confirmed by today’s cell phone mobility records [Park et al. 18]? That senior citizens, a group thought to be reluctant to use technology, can mitigate feelings of isolation by interacting with a software agent [Sidner et al. 18]?

We’ve also got public displays that can “look back at you” to see how much attention you’re paying, and personalize themselves accordingly [Narzt et al. 18]. Wearable sensors track movement, heart rate, and other indicators to analyze your sleep cycles for health applications [Hossain et al. 18]. And motion capture of people performing physical movements can be used to solve the hairy problem of *inverse kinematics*: recovering the joint angles and motions that can then animate onscreen characters or robots [Carreno-Medrano et al. 18].
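To make that last term concrete: inverse kinematics asks, given where you want a limb’s endpoint to end up, which joint angles put it there. Here is a toy sketch of the simplest case, a planar two-link arm with a closed-form solution. It is purely illustrative, not the approach of [Carreno-Medrano et al. 18], which deals with expressive full-body motion.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Return joint angles (theta1, theta2) in radians that place a
    planar two-link arm's end effector at (x, y), or None if the
    target is out of reach."""
    # Law of cosines gives the cosine of the elbow angle.
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        return None  # target lies outside the reachable workspace
    # Pick one of the two mirror-image elbow solutions.
    theta2 = math.atan2(math.sqrt(1.0 - c2 * c2), c2)
    theta1 = math.atan2(y, x) - math.atan2(
        l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Example: reach for the point (1.2, 0.8) with two unit-length links.
print(two_link_ik(1.2, 0.8))
```

Even this tiny case hints at why the problem is hairy: a reachable target generally admits two mirror-image solutions, and a full skeleton multiplies such ambiguities many times over.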

A lot of projects get hung up on the “accuracy” of behavior analysis. But accuracy isn’t all it’s cracked up to be. More important is whether the high-level intent of the user is faithfully captured, and whether the analysis plays a useful role for the end user. That’s the message of [Hammond et al. 18], who have built a system to help teach students the valuable skill of *design sketching*: freehand “back of the napkin” drawing to communicate physical or graphic designs. Sketches communicate in subtle ways; a sweeping, suggestive line might be better than a more accurate but less expressive alternative.

Some may find all this computer monitoring of human behavior “spooky”. They don’t like the feeling of being watched all the time, even if it’s by a machine, and even if the machine will presumably be non-judgmental. If monitoring makes users feel creeped out, then, by definition, it’s not helping them. In that case, maybe we shouldn’t do it (except perhaps in urgent cases like medical monitoring, where life may depend on accurate diagnosis). I think users are right to be concerned.

But I think it all depends on the use to which this stuff is put. It depends on whether the software is helping the user, or has a “hidden agenda” to the user’s detriment. It depends on whether the organizations deploying the software are acting in the user’s interests, or not. And in today’s world, the answers to those questions are decidedly mixed.

Take the case of the display that tracks your attention. I’m perfectly fine with the applications the authors describe: an interactive art piece and a collaborative game. But I’d worry that once advertisers got their hands on it, they’d flood us with increasingly intrusive “interactive” advertising. Advertisers have conflicting motivations: to present you with something you’ll be interested in (good), and to get you to buy as much as possible regardless of whether it helps you (bad). If too many of the uses are bad, then users would be justified in completely rejecting the technology. That wouldn’t be good for the field.

Sharing information, even intimate information, isn’t a problem when the values of the person or organization you share it with are fully aligned with yours. You can’t expect the doctor to help you if you’re not willing to take off your clothes. What behavioral monitoring technology does is increase the urgency of ensuring that our companies, governments, and technologists have values that fully align with those of their users. That’s a tall order, but I believe it is possible in the long run. For the full story, see my book [Fry and Lieberman 18].

More than we think, we are all wearing our hearts on our sleeves. It’s up to software designers, and the organizations they’re part of, to make sure they don’t break our hearts.

ACM Transactions on Interactive Intelligent Systems (TiiS) is a journal dedicated to publishing original research combining AI and human-computer interaction.