The Ethics of Experimentation in the IoT

As part of the ongoing weekly series of lunchtime “Data Bite” talks that the recently launched Data & Society Research Institute has been holding, today we had an opportunity to listen to Jeff Hancock from Cornell talk about his research on emotional contagion on Facebook. In a discussion afterward with fellow fellow (!) Gideon Lichfield, I wondered whether the focus on Facebook and the manipulation of content is a red herring… or more accurately, a signal of far more serious kinds of risks that will manifest themselves as the internet of things is built out.

Here’s what I’m worried about. Facebook’s study manipulated users’ news feeds by selectively withholding either negative or positive posts (based on certain keywords) and seeing whether that affected the subjects’ resulting output (measured the same way). According to Hancock, the study found an almost indiscernible effect, which he had expected. His assessment going into the study (and the reason he didn’t push for a more thorough institutional review) was that there was very little risk involved in the study’s experimental manipulation of the online environment.

But as similarly large corporations like Google and Apple begin building substantial parts of their business on networked devices and services that interact with the physical world, this kind of research will begin to raise far more serious ethical questions, ones that don’t seem likely to have easy answers.

Here’s an example (not a perfect analogue to the Facebook study, but potentially useful to start a discussion): what happens if Google starts manipulating the temperature in people’s homes via their Nest thermostats on a large scale? Maybe a tenth of a degree here or there, but more over time, to try to understand our tolerance for temperature shifts.

Now, just as Facebook’s study was presumably designed to home in not on the average user but on the outliers who might be driven to delete their accounts, or simply stop using the service, if they received too much negative content, we can guess there would also be outliers in a Nest study of this sort. But those outliers could suffer serious physical harm. I’m thinking here of children who get sick in a home rendered unduly cold as part of a Nest experiment, or an elderly person dying of heat exhaustion.

What happens when IoT companies start running experiments on populations?

Now, this is important science to do. If Nest can figure out how to give us only the climate control we actually need to be comfortable, rather than what we think we need, the energy savings could be enormous. There’s no reason to assume that human beings understand what the actual temperature reading on a thermostat means; Nest could simply recalibrate it as more of an index. But would the aggregate environmental benefits to society outweigh the harm caused at the margins to those outliers?

To make things more complicated, what if Nest didn’t actually intervene, but merely nudged its users to voluntarily turn the thermostat to a greener setting?

This is still an incomplete thought, but I’m experimenting with Medium as a platform for sharing these kinds of half-baked ideas that never quite get fleshed out via Twitter.
