The Smart Home interface

Leon Barrett
Smart Home Thoughts
4 min read · May 10, 2017


Amazon Echo Show

I’ve been living with a HomeKit-powered smart home for about a year now, and over that time I’ve built up a range of device types including lighting, audio, power, heating and security. Part of my goal was to make the control and automation of these devices seamless, simple and at least as convenient as the regular devices they replaced.

One of my guiding principles for my smart home is that I shouldn’t have to rely on my phone to control basic things such as lights, power and heating. I’ve carefully chosen each device so that it can be controlled in the same way as its ‘dumb’ equivalent, but with added smarts should I need them. For example, the Tado system allows for manual control of the radiator valve directly on the unit rather than having to use a phone to set the temperature.

Voice control has been hailed as the future of smart home control, but the truth is that context plays a huge part in how we control our homes. I believe there isn’t a single interface for controlling a smart home, but instead multiple methods based on the task to be completed.

There’s a danger of smart homes becoming less convenient if they require too much thought and effort to control.

I currently have various scenes set up to control multiple devices with a single command to make automation easier. The majority of the time, I trigger these scenes via the Home app. Let’s take a typical scenario of relaxing in front of the TV to see how context drives automation.

In my lounge I have the following:

  • 2x ceiling lights controlled via LightwaveRF
  • 2x floor lamps with Philips Hue bulbs
  • 2x table lamps with Philips Hue bulbs
  • Sony Bravia smart TV
  • Sonos PlayBar

I have a scene called Watch TV which turns off the ceiling lights and turns on the Hue bulbs to a specific brightness. It also turns on the TV and subsequently the Sonos. I have this set as one of my favourites, meaning it’s a swipe up on iOS to enable it. So a swipe and a tap, and the scene is set. In this instance, voice actually has a higher latency (at least in the time it takes to invoke Siri, whether on iOS or via my Apple Watch).
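For anyone curious what a scene like this looks like under the hood, here’s a minimal sketch using Apple’s HomeKit framework. It assumes you already have a reference to the home and to the relevant light characteristics; the function and parameter names, and the 40% brightness value, are purely illustrative, and only the lighting half of the scene is covered.

```swift
import HomeKit

// A rough sketch of building a "Watch TV" action set in HomeKit.
// `home`, `ceilingLightPower` and `hueBrightness` are assumed to exist;
// their names and the target values are illustrative only.
func createWatchTVScene(in home: HMHome,
                        ceilingLightPower: HMCharacteristic,
                        hueBrightness: HMCharacteristic) {
    home.addActionSet(withName: "Watch TV") { actionSet, error in
        guard let actionSet = actionSet, error == nil else { return }

        // Turn the ceiling lights off.
        let ceilingOff = HMCharacteristicWriteAction(characteristic: ceilingLightPower,
                                                     targetValue: false as NSNumber)
        // Dim the Hue lamps to a preset brightness.
        let hueDim = HMCharacteristicWriteAction(characteristic: hueBrightness,
                                                 targetValue: 40 as NSNumber)

        actionSet.addAction(ceilingOff) { _ in }
        actionSet.addAction(hueDim) { _ in }
    }
}
```

Once created, the scene can be run with `executeActionSet(_:completionHandler:)` on the home, which is essentially what a tap in the Home app (or a Siri request) does on your behalf.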

After a while, I decide that I want a beer and a snack from the kitchen. As I walk into the kitchen I manually turn the light on. I could use a motion sensor, but I’ve found that in reality the time between it picking up motion and triggering the lights is too slow. Again, voice control is less convenient than just pressing the switch for this.

Now that I’ve got my beer and a snack, I head back to the lounge with my hands full. I could either struggle to turn the light switch off with my elbow or make a return trip to turn the lights off. Instead, I rely on a motion sensor to detect that there’s been no movement for 30 seconds, which turns the lights off. This is ideal, as I haven’t had to think about interacting with anything for this to happen.
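For what it’s worth, here’s a rough sketch of how an equivalent trigger could be wired up in HomeKit. It assumes the motion sensor exposes a motion-detected characteristic and that a ‘lights off’ action set already exists (both names are illustrative); in practice the 30-second no-movement window is usually configured on the sensor or its bridge rather than in the trigger itself.

```swift
import HomeKit

// Illustrative only: `motionDetected` is the sensor's motion-detected
// characteristic and `lightsOffScene` is an existing action set.
func addNoMotionTrigger(in home: HMHome,
                        motionDetected: HMCharacteristic,
                        lightsOffScene: HMActionSet) {
    // Fire when the sensor reports that motion has cleared.
    let motionCleared = HMCharacteristicEvent(characteristic: motionDetected,
                                              triggerValue: false as NSNumber)

    let trigger = HMEventTrigger(name: "Lounge lights off",
                                 events: [motionCleared],
                                 predicate: nil)

    home.addTrigger(trigger) { error in
        guard error == nil else { return }
        // Attach the existing scene and switch the trigger on.
        trigger.addActionSet(lightsOffScene) { _ in
            trigger.enable(true) { _ in }
        }
    }
}
```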

I decide that I’d like more heat and turn up one of the radiators, which also tells the Tado to turn my boiler on (overriding the schedule) until I turn it back down again.

Finally, I’m ready to go to bed. Once in bed, a tap on the Good Night scene in the Home app turns everything off in the lounge, sets the Tado back to its schedule and locks the front door. In this example, voice is actually less convenient, as there’s a risk I could wake others up.

Let’s take another example. This time I’m doing the washing up, and the last thing I want to be doing with wet hands is touching any kind of electronics. It’s starting to get dark outside and I want to increase the brightness of the lights in the kitchen. Using voice in this context makes sense, as it’s more convenient than drying my hands and pressing the switch.

In some instances, using a screen is the only viable way of interacting with something that has a complex input or set of requirements, or where you need to see something. For example, it would be impossible to know who is at the door without seeing the stream from the doorbell camera.

Whilst these examples might seem quite contrived, that’s the reality of life: small, ordinary, repetitive tasks. It may also seem quite lazy and pointless, but the purpose of the smart home is to make things a little easier and more convenient.

The recent release of the Amazon Echo Show shows where we might be heading. An always-on device that houses a screen, camera (for motion) and microphone could be the ultimate control device; there’s no need to unlock a phone or open an app, and it’s there when you need it. For this to really be convenient, it would require a device in each room or zone of the house displaying controls specific to that location, so that managing devices becomes a single tap rather than the multiple steps involved in using the Home app on an iOS device. It doubles as a motion sensor and voice assistant, so it’s able to adapt to the context in which it’s needed.

In the future, an array of more advanced presence and movement sensors, paired with machine learning, will mean that our homes learn to adapt to our movements and interactions and start to control themselves without us having to think about, touch or say anything.


Leon Barrett
Smart Home Thoughts

Product Director working in Birmingham for the award-winning @383project. Writing about tech, product and connected things.