AI for Good: Smart Homes and Abuse

How do we stop leaving household members behind in the march towards smarter homes?

A recent New York Times piece took a new look at some unintended effects of the smart home revolution. Entitled “Thermostats, Locks and Lights: Digital Tools of Domestic Abuse”, the article raises unsettling questions about how the Internet of Things can upset the balance of power in a home, and be turned against inhabitants who may not have direct control or full understanding of these devices.

“Abusers — using apps on their smartphones, which are connected to the internet-enabled devices — would remotely control everyday objects in the home, sometimes to watch and listen, other times to scare or show power.”

If you haven’t read the article yet, I encourage you to do so. Regardless of your opinions of the NYT (and mine are complicated), this is an important piece, and one that was shared with me by a number of friends familiar with my own contributions to Internet of Things-enabled products like Alexa and Cortana.

This piece leaves us with two major insights: one, that these devices can be used in unanticipated ways for real harm against household members; and two, that the harm is exacerbated by an imbalance of power.

“Each said the use of internet-connected devices by their abusers was invasive — one called it a form of ‘jungle warfare’ because it was hard to know where the attacks were coming from. They also described it as an asymmetry of power because their partners had control over the technology — and by extension, over them.”

How can we, the designers and engineers implementing the future of smart homes, help prevent this kind of abuse with our products? It’s not enough to say “this isn’t what we intended” — these uses are real, and it’s now our job as an industry to respond. Worse yet, the potential harm will only increase as these devices and their capabilities become more prevalent. Cameras, in particular, make this problem more urgent than it was a few years ago, when we only dealt with light switches and thermostats.

What can we do? Let’s engage in a thought exercise to explore potential avenues for addressing these dangerous emerging use patterns.

What happens when good technology goes bad? Smart home tech is usually deployed as seen here — with the best intentions. But recent research uncovered the role of IoT devices in domestic abuse. (Licensed photo: Adobe Stock)

Access for all

When smart home products first launched, they were very much the territory of early adopters. App designs frequently assumed a single user, since the systems were complicated enough that educating the whole household on them didn’t seem worthwhile.

As the years have gone by, this assumption has manifested itself in the way smart home products are managed. Almost universally, there is one person in a household responsible for updating and maintaining the home’s networked IoT devices. Everyone else is simply a consumer, and in many cases doesn’t have the ability to make changes to the configuration.

And in some ways, this makes sense. We don’t want to make it easy for a child or spouse to accidentally undo configuration decisions. But what about when these individuals need to make a change for their own safety or sanity? By the time a situation has degraded to that point, asking for access is no longer an option.

The solution is to build systems that encourage granting every member of a household access in some capacity. This requires careful design from both perspectives. For the home controller (the person who maintains the system), it must be extremely easy to add household members without making the system less secure. And for the household members who are not making routine configuration decisions, we must add enough friction that accidental changes won’t be made.

But as with most consumer products, we can’t expect that telling a customer “you should add your family members” will result in a change of behavior. Ideally, we’re offering both the home controller and the household members some kind of benefit for the effort. And we must still resolve the app ecosystem problem: in reality, a connected home is usually controlled by multiple apps, not just one.
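To make that design split concrete, here’s a minimal sketch of one shape the policy could take: every household member can see the whole configuration, administrators can change it freely, and everyone else can still make changes, but only after an explicit confirmation step. All of the names here (Role, HouseholdMember, SmartHomeConfig) are invented for illustration, not any vendor’s actual API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Role(Enum):
    ADMIN = auto()    # routinely maintains the system
    MEMBER = auto()   # lives in the home; sees everything, changes with friction


@dataclass
class HouseholdMember:
    name: str
    role: Role


class SmartHomeConfig:
    """Illustrative configuration store: everyone can read, and non-admins
    can still write -- they just pass through an extra confirmation step."""

    def __init__(self):
        self.settings = {}

    def view(self, member: HouseholdMember) -> dict:
        # Every household member can see the full configuration.
        return dict(self.settings)

    def update(self, member: HouseholdMember, key: str, value, confirmed: bool = False):
        if member.role is Role.ADMIN or confirmed:
            self.settings[key] = value
            return "applied"
        # Friction, not a lockout: ask the member to confirm before applying.
        return f"confirm required: change '{key}' to {value!r}?"
```

The specific mechanism matters far less than the shape of the policy: visibility for everyone, and a confirmation step instead of a locked door.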

While I don’t have the right answers here, I’d like to jumpstart the exploration of these issues with some important questions:

  • What are potential reasons that a home controller would want to add their family members at the time of installation?
  • Can household members add themselves, without the home controller’s involvement and without reducing overall security?
  • Do we have to grant household access to EVERY smart home app, or only to aggregator apps like Alexa or Apple Home?
  • What is the right level of access? Is simply seeing the system configuration and any changes enough, or does everyone in the family need the power to change the configuration of the system?

Coping mechanisms

Beyond granting access, we need to consider what our coping mechanisms are when a family situation goes bad.

App-driven transparency

In a perfect world, we could answer some of the questions we just asked and move towards a model where all members of the household have an appropriate level of access and control.

“Graciela Rodriguez, who runs a 30-bed emergency shelter at the Center for Domestic Peace in San Rafael, Calif., said some people had recently come in with tales of “the crazy-making things” like thermostats suddenly kicking up to 100 degrees or smart speakers turning on blasting music.”

For the individuals Ms. Rodriguez works with, having access to an app that provides transparent information about the operation of all their in-home IoT devices could provide some relief. A well-designed system would accomplish three goals (sketched in code after this list):

  1. Prevent these individuals from being blocked or removed without notice,
  2. Provide a real-time log of any commands issued and their source, and
  3. Provide some sort of override mechanism, especially if the person making the commands is outside the home but other household members are still inside.
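As a thought experiment only, those three goals map onto a very small amount of machinery: membership changes that notify everyone, a command log that records who issued what and from where, and an in-home override that suspends remote commands. The names below (CommandEvent, Household, notify_all) are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class CommandEvent:
    timestamp: datetime
    device: str        # e.g. "thermostat"
    action: str        # e.g. "set to 100 degrees"
    issued_by: str     # which account issued the command
    issued_from: str   # "in-home" or "remote"


@dataclass
class Household:
    members: List[str]
    log: List[CommandEvent] = field(default_factory=list)
    in_home_override: bool = False   # goal 3: suspend remote commands

    def remove_member(self, who: str, removed_by: str) -> None:
        # Goal 1: nobody disappears from the household silently.
        self.members.remove(who)
        self.notify_all(f"{removed_by} removed {who} from the household")

    def issue_command(self, event: CommandEvent) -> bool:
        # Goal 2: every command is logged with its source, visible to all members.
        self.log.append(event)
        if self.in_home_override and event.issued_from == "remote":
            return False   # remote commands are suppressed while the override is active
        return True

    def notify_all(self, message: str) -> None:
        for member in self.members:
            print(f"notify {member}: {message}")   # stand-in for a push notification
```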

AI, explain thyself

However, we can’t assume all families will magically grant access to all household members. We must also consider how a family with a single point of IoT contact can cope with abuse of system control.

For homes with digital assistants like Alexa or Google Home, we have more interface options that don’t require an app. How could a voice-only interaction help prevent or defuse some of the harm described in the NYT article?

Our current generation of voice assistants is still not terribly contextually aware; often a device knows only about the commands issued to it moments before. Today’s Alexa devices, for example, can’t tell you why music started playing if the request was issued elsewhere.

A Holy Grail for an IoT-enabled AI household would be a quick, vernacular way to get that information:

“Alexa, why did the lights just go out?” 
“Jeff used the Alexa app to set a timer for these lights yesterday. The timer turns your downstairs lights off at 9PM every day.”

That’s a bit far off, but that level of transparency could really help provide sanity both to customers enduring abuse and to large or busy households. AI systems are often criticized for behaving as “black boxes”, and it’s about time we started asking our digital assistants to explain their behaviors.
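The dialogue above mostly depends on the answer existing somewhere: if every state change is recorded along with its cause, a “why” question becomes a log lookup plus some phrasing. A rough illustration, with a hypothetical event_log and an invented explain() helper (this is not how Alexa works today):

```python
from datetime import datetime, timedelta

# Hypothetical event log: each state change remembers its cause.
event_log = [
    {"time": datetime(2018, 6, 27, 21, 0), "device": "downstairs lights",
     "action": "off", "cause": "daily 9PM timer Jeff created in the Alexa app"},
]


def explain(device: str, now: datetime = datetime(2018, 6, 27, 21, 1),
            within: timedelta = timedelta(minutes=5)) -> str:
    """Answer "why did <device> just change?" by finding the most recent
    logged event for that device and surfacing its recorded cause."""
    recent = [e for e in event_log
              if e["device"] == device and now - e["time"] <= within]
    if not recent:
        return f"I don't have a record of {device} changing recently."
    latest = max(recent, key=lambda e: e["time"])
    return f"The {device} turned {latest['action']} because of the {latest['cause']}."


print(explain("downstairs lights"))
```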

Emergency release valves

But not every household uses Alexa, Cortana, or Google Home to run its smart home. Plenty of homes without voice assistants rely on apps and physical controls.

For families where equitable access to the IoT apps is not possible, can our physical devices provide some sort of emergency override for a customer who feels they’re being abused? The Amazon Echo family, for example, has a hardware Mute control that prevents the microphones and cameras from triggering. Muting the volume is a different matter, but it is also possible as long as you have physical access to the device.

How can this principle be applied to other devices? For simpler objects like physical light switches, have you, as a device manufacturer, provided a local override to disconnect the switch from the service? This could help in security breaches just as much as it would help in situations of abuse.

If you don’t have separate mute or reset buttons, are there ways that you can rig special input on existing buttons (like a long press, or a sequence of rapid presses) as an “emergency release valve” to reset or disconnect the device in question? And how might you make that information available to customers?
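To show how little extra hardware the long-press idea needs, here is a rough firmware-flavored sketch under invented assumptions (the class, the handlers, and the ten-second threshold are all made up): a normal press toggles the light locally, while a long hold severs the device from its cloud service until someone deliberately re-pairs it.

```python
import time

HOLD_TO_DISCONNECT_SECONDS = 10   # long enough that it can't happen by accident


class SwitchFirmware:
    def __init__(self):
        self.cloud_connected = True
        self._pressed_at = None

    def on_button_down(self):
        self._pressed_at = time.monotonic()

    def on_button_up(self):
        if self._pressed_at is None:
            return
        held = time.monotonic() - self._pressed_at
        self._pressed_at = None
        if held >= HOLD_TO_DISCONNECT_SECONDS:
            # Emergency release valve: local control keeps working, but the
            # remote service can no longer drive this device until it's re-paired.
            self.cloud_connected = False
        else:
            self.toggle_locally()   # a normal press still just toggles the light

    def toggle_locally(self):
        pass   # drive the relay directly; unaffected by cloud connectivity
```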

Device capabilities and proportional risk

If we can’t expect customers to roll out general access simply because it’s the right thing to do, when should we enforce general access, transparent access, or ease of reset? The riskier a device is, the more important it becomes for us to ensure transparency and ease of reset.

Cameras in particular are potentially harmful; in a romantic relationship gone wrong, camera access could be used for blackmail or worse. Microphones are not quite as risky, but they still represent a huge potential invasion of privacy. For devices that allow this kind of broad input, the risk of abuse is much, much higher. Consider the potential scale of impact: if your device is powerful, you probably can’t afford to overlook the complexities of domestic abuse.
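One way to keep that judgment consistent across a product line is to write the proportionality down: map each capability to a risk tier and the minimum safeguards it demands, so a camera product can’t ship with less transparency than a light switch. The tiers and safeguard names below are purely illustrative, not drawn from any existing standard.

```python
from typing import List, Set

# Hypothetical mapping from device capability to minimum required safeguards.
SAFEGUARDS_BY_CAPABILITY = {
    "camera":     {"risk": "high",   "require": ["multi-user transparency",
                                                 "hardware shutter or mute",
                                                 "local reset"]},
    "microphone": {"risk": "high",   "require": ["multi-user transparency",
                                                 "hardware mute",
                                                 "local reset"]},
    "lock":       {"risk": "high",   "require": ["multi-user transparency",
                                                 "physical key override"]},
    "thermostat": {"risk": "medium", "require": ["command log", "local override"]},
    "light":      {"risk": "low",    "require": ["local override"]},
}


def required_safeguards(capabilities: List[str]) -> Set[str]:
    """A product inherits the safeguards of its riskiest capabilities."""
    needed: Set[str] = set()
    for capability in capabilities:
        needed.update(SAFEGUARDS_BY_CAPABILITY.get(capability, {}).get("require", []))
    return needed


# A camera-plus-microphone device must satisfy the union of both lists.
print(required_safeguards(["camera", "microphone"]))
```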

Digital families and the balance of power

These issues may extend beyond the smart home to our digital accounts themselves. We’re starting to see household account relationships from Microsoft, Google, Apple, even Amazon. Most aren’t yet tied to any smart home automation, but they could be.

And even if they aren’t, most of these family models still rely on a single point of power: the “family organizer” (or, in technical parlance, the family’s admin). This model, while seemingly simple and easy, can lead to the same imbalance of power the NYT article describes. Why can’t organizational duties be shared between adults?

Let’s run a hypothetical worst-case scenario: a technique any designer for AI should become comfortable with as part of their design process. (Design for AI is very often design for the unknown, and design for emotional impact at scale.)

Say you’re a spouse removing your children from an abusive situation, but you weren’t the ‘family organizer’.

  • Must you delete all accounts and start over?
  • Could your spouse cause damage or distress to you or your children using their powers as the “family organizer”?
  • In lieu of multiple “family organizers”, how might the system arbitrate disputes or allow a sort of democratic removal of a bad actor from the family system? (A rough sketch of one possibility follows this list.)
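Purely as a sketch of one possibility raised by these questions: organizer rights held by more than one adult, with removal from the family requiring agreement from a majority of the remaining organizers rather than the say-so of a single admin. The class and threshold are invented, a vote count obviously doesn’t resolve the hardest case of a household with exactly two adults, and it doesn’t decide who the “bad actor” is; it only removes the single point of power.

```python
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class FamilyAccount:
    organizers: Set[str]   # organizer duties shared by the adults, not held by one person
    members: Set[str]
    removal_votes: Dict[str, Set[str]] = field(default_factory=dict)

    def vote_to_remove(self, target: str, voter: str) -> bool:
        """Removal requires agreement from a majority of the remaining organizers,
        rather than the unilateral decision of one designated 'family organizer'."""
        if voter not in self.organizers or voter == target:
            return False
        votes = self.removal_votes.setdefault(target, set())
        votes.add(voter)
        eligible = self.organizers - {target}
        if len(votes) > len(eligible) / 2:
            self.organizers.discard(target)
            self.members.discard(target)
            self.removal_votes.pop(target, None)
            return True
        return False
```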

Beyond device-based abuse

We’ve raised a number of questions that could be used to start exploring solutions to the problems raised in the New York Times article. But are there other problems just beyond the scope of their investigation?

A harder question lies ahead for devices like Alexa and Google Home, which benefit from our human tendency to anthropomorphize. Many consumers consider them a part of the family, for better or for worse. But what does this mean when the devices are bearing passive witness to domestic abuse?

Certainly, the protections necessary to keep these devices from listening constantly also remove some of that responsibility: if you’re not listening without being asked, you’re unlikely to pick up on ambient signs of abuse. But how could the device’s role as a family member become an asset to those suffering from abuse?

In educational circles, some states in the US have the concept of a mandated or mandatory reporter: someone who is obligated due to their profession to report any suspicion or evidence of abuse to the authorities. I don’t think AI is in a position to make complex judgements like this one without causing potential harm.

But what if a customer in need reaches out to the device? What, then? Could our assistants play the role of what Washington state calls a “permissive reporter”?

“Washington State law encourages persons other than mandatory reporters to make a report when they have reason to believe that abuse, abandonment, neglect, or self-neglect, is, or has, occurred. Persons other than mandatory reporters are called ‘permissive reporters.’”

Many victims of abuse are at a loss for how to reach out. Their abusers often control every means of outreach: they pay for and control cell phones, and thus call logs; they may see comings and goings at the home via door cameras; and they can monitor browsing activity.

What if our digital assistants could serve as a safe means for outreach? In particular, what if a particular intent (not prone to accidental recognition) could initiate the equivalent of an incognito browsing session, where the conversation is not logged in the household’s activity log, but when appropriate a message is sent to local authorities?
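Mechanically, the logging half of that idea is easy to picture, even if everything around it is hard: a designated help intent routes the exchange around the household’s shared activity log and toward a separate, tightly controlled reporting channel. The intent name, handler shape, and send_to_local_resources hook below are all invented; the genuinely difficult parts (legal review, preventing abuse of the feature itself, avoiding accidental triggers) are not represented in code.

```python
HELP_INTENT = "RequestConfidentialHelp"   # hypothetical, deliberately hard to trigger by accident


def handle_intent(intent: str, utterance: str, household_log: list) -> str:
    if intent == HELP_INTENT:
        # Incognito path: nothing about this exchange is written to the
        # household's shared activity log or surfaced in companion apps.
        report = {"type": "permissive_report", "details": utterance}
        send_to_local_resources(report)     # stand-in for a vetted reporting channel
        return "I can help. This conversation won't appear in your home's history."
    # Normal path: every other interaction is logged as usual.
    household_log.append({"intent": intent, "utterance": utterance})
    return route_normally(intent, utterance)


def send_to_local_resources(report: dict) -> None:
    pass   # placeholder: real routing would require legal and policy review


def route_normally(intent: str, utterance: str) -> str:
    return "ok"
```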

It’s a complex problem — and this is not to imply that our AI assistants are people. Enabling functionality like this would most certainly require heavy legal and technical scrutiny, and careful design to ensure this feature itself cannot be abused. But if we’re talking about the role of these devices in domestic abuse, we should also ask whether these devices can provide some aid.

Our role as technologists

The old mantra of “fail fast” fails us when we’re invading the sanctity of customers’ homes. But even when we’re being careful, we likely won’t anticipate every abuse of the system. In these cases, our responsibility is to listen, learn, and adapt.

These abuses of our connected homes weren’t intended use cases, but they are real use cases. Our job now is to figure out how to move forward — both to solve these specific problems, and to use this information to anticipate other similar abuses and head them off before they become widespread.

With great power comes great responsibility. How will you help the customers that need your help most?

Cheryl Platz has worked on a variety of smart home products including the Echo Look, Echo Show, Amazon’s Alexa platform, and Cortana. She is now a Principal Designer on the Customer Care Intelligence team at Microsoft, designing systems enabling others to build the future of AI experiences. As founder of design education company Ideaplatz, Cheryl is also touring worldwide with her acclaimed natural user interface talks and workshops.