Learning IoT Users’ Habits with craft ai and ARTIK Cloud

craft ai team
Craft AI
Sep 19, 2016


This post was originally published on the Samsung ARTIK Blog

On July 2, the craft ai team woke up way too early for a Saturday to join fellow developers at the Samsung/Legrand booth at VIVA Tech and hack for a day. Our objective: Show that smart homes can offer a better user experience thanks to artificial intelligence — beyond smartphone remotes, complicated dashboards and manual scenarios! This is the tale of how we used craft ai in conjunction with Samsung ARTIK to make a few Legrand devices learn usage patterns and automate themselves.

The Use Case: Automatic Lighting Management

Managing the light in a house is one of our pet use cases at craft ai. It was the subject of one of our earlier demos, "Home Together", and one of our latest, "AutoBlinds". The idea is to control the lighting inside the home by automating the lights' luminosity and color, as well as the opening of the blinds, based on users' habits in context: time of day, outside luminosity, and the states of other devices like TVs or sleep monitors. The goal: reduce the number of interactions users have with their devices while having a house that adapts to their needs and comfort preferences.

Since we had to do a live demonstration, we could not possibly show automation based on time-related habits (i.e., habits that depend on the time of day and/or the day of the week), even though this is something we would obviously do in real life. The goal we set for this demo was to detect causal relationships between the states of several devices. Throughout the day we played with the devices to simulate actual usage: we "watched" the TV for some period of time, and when the TV was turned on, we closed the blinds and turned off the light.

The devices we used were those from the Legrand panel available during the hackathon. The TV was actually a lamp plugged into the panel's connected power switch, but from a storytelling point of view, it made more sense to call it "the TV".

At the end of the day we were able to demonstrate that, thanks to craft ai, our devices were able to learn how to react to context changes. In particular, when we turned on the TV, the light turned off and the blinds closed, autonomously!

Technical Architecture

Overall architecture of the solution we set up during the hackathon

The solution we developed was based on the following components. On the ARTIK Cloud side, we created a "BLT (Blind/Light/TV) agent" device instantiating a custom-made device type. The purpose of this device is to represent the craft ai-based automation of the three devices in ARTIK Cloud. It had the following manifest:

  • Fields: CurrentBlindState / PredictedBlindState / CurrentLightState / PredictedLightState / CurrentTVState / PredictedTVState
  • Actions: BlindStateChanged / LightStateChanged / TVStateChanged
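
For reference, the manifest boils down to something like this, written here as a JavaScript object for readability. The field and action names are the real ones, but the structure is a simplification, not the exact ARTIK Cloud manifest format (the actual manifest was defined in the ARTIK Cloud Developer Dashboard):

```javascript
// Illustrative rendition of the "BLT agent" manifest. Field and action
// names are the real ones; the structure below is a simplification, not
// the exact ARTIK Cloud manifest format.
const bltAgentManifest = {
  fields: {
    CurrentBlindState:   'String', // e.g. 'open' / 'closed'
    PredictedBlindState: 'String',
    CurrentLightState:   'String', // e.g. 'on' / 'off'
    PredictedLightState: 'String',
    CurrentTVState:      'String',
    PredictedTVState:    'String'
  },
  actions: ['BlindStateChanged', 'LightStateChanged', 'TVStateChanged']
};
```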
On the craft ai side, we created agents to model the setting of the Blind, the Light and the TV according to one another (one agent per device).
To make everything work together, we developed a small Node.js server implementing the behavior of the "BLT agent". It connected to ARTIK Cloud using WebSockets and to craft ai using our official JavaScript client.
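
On the craft ai side, creating one of these agents looks roughly like the following. This is a minimal sketch: the method names and configuration shape follow the craft ai JavaScript client as it was at the time, so treat them as assumptions and refer to the current documentation:

```javascript
// Creating the agent that learns the light's state from the TV and blind
// states. Method names and configuration keys follow the craft ai
// JavaScript client as we used it in 2016; treat them as assumptions.
const craft = require('craft-ai');
const client = craft.createClient({
  token: process.env.CRAFT_TOKEN // hypothetical environment variable
});

const lightAgentConfiguration = {
  context: {
    tv:    { type: 'enum' }, // 'on' / 'off'
    blind: { type: 'enum' }, // 'open' / 'closed'
    light: { type: 'enum' }  // 'on' / 'off'
  },
  output: ['light'],  // this agent predicts the light's state
  time_quantum: 60    // context sampling period, in seconds
};

client.createAgent(lightAgentConfiguration, 'blt-light-agent')
  .then(agent => console.log('Created agent', agent.id));
```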
The resulting decision tree for the blind.
The resulting decision tree for the light.

The behavior of the agent was implemented in our little server in three steps, sketched in the code below:

  1. When an Action was received (BlindStateChanged, LightStateChanged or TVStateChanged), it meant the context of the agent had changed: the matching Current[Device]State field was updated, and this information was sent to craft ai as a context operation addition.
  2. New versions of the decision trees (one per device) were computed by craft ai.
  3. For each device, we applied its decision tree to compute a "predicted state"; if craft ai's confidence in the decision was above 80%, this predicted value was then sent to the corresponding Predicted[Device]State field.
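
Here is a condensed sketch of that loop. The WebSocket wiring and error handling are omitted, and the craft ai method names and the shape of the decision payload are assumptions based on the JavaScript client of the time, not the exact hackathon code:

```javascript
// Condensed sketch of the behavior loop in our small Node.js server.
// WebSocket wiring and error handling are omitted; the craft ai calls
// and decision payload shape are paraphrased, not guaranteed exact.
const craft = require('craft-ai');
const client = craft.createClient({ token: process.env.CRAFT_TOKEN });

const DEVICES = ['blind', 'light', 'tv'];
const agentIds = { blind: 'blt-blind-agent', light: 'blt-light-agent', tv: 'blt-tv-agent' };
const currentState = { blind: 'closed', light: 'off', tv: 'off' };

function onActionReceived(device, newState, timestamp) {
  currentState[device] = newState; // step 1: mirror Current[Device]State

  const operation = [{ timestamp, context: { [device]: newState } }];
  return Promise.all(DEVICES.map(d =>
    client.addAgentContextOperations(agentIds[d], operation)
  ))
  // step 2: retrieve a fresh decision tree for each device...
  .then(() => Promise.all(DEVICES.map(d =>
    client.getAgentDecisionTree(agentIds[d], timestamp)
  )))
  .then(trees => trees.forEach((tree, i) => {
    const d = DEVICES[i];
    // step 3: ...apply it to the other devices' states...
    const context = Object.assign({}, currentState);
    delete context[d];
    const decision = craft.decide(tree, context); // interpreter call: assumed API
    // ...and publish the prediction when craft ai is confident enough.
    if (decision.confidence > 0.8) {
      publishPredictedState(d, decision.output[d]); // hypothetical helper sending
    }                                               // Predicted[Device]State to ARTIK
  }));
}
```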

There is a tricky part in step 3: we don't want the BLT agent to contradict what the user (on any external system) just changed on a device; i.e., if I just turned the light on, it's certain I don't want it to be turned off, even if the AI learned it's the "right" thing to do. In this particular example, because time is not a parameter we take into account, we know that if the current device state is different from the previously predicted state (i.e., Current[Device]State != Predicted[Device]State), it means the user changed this state. In such cases the BLT agent "validated" this change.

As explained earlier, a more realistic model should take time into account, which means the decision could change between the morning and the evening, for example. In this case, we should put a time threshold on this filtering: after a defined amount of time has elapsed, the AI could change a setting the user previously changed manually, thus applying a basic form of hysteresis.
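
A minimal sketch of that filtering, reusing currentState from the previous sketch; the variable names and the 15-minute window are illustrative, not the hackathon code:

```javascript
// Sketch of the "don't contradict the user" filter with a time threshold.
const OVERRIDE_WINDOW_MS = 15 * 60 * 1000;
const predictedState = {};   // mirror of the Predicted[Device]State fields
const lastManualChange = {}; // device -> epoch ms of the last user override

function shouldApplyPrediction(device, now) {
  // Current[Device]State != Predicted[Device]State means the user (or an
  // external system) changed the device behind the agent's back.
  if (currentState[device] !== predictedState[device]) {
    lastManualChange[device] = now;
  }
  // Respect the manual change for a while; once the window has elapsed,
  // the AI may take over again (a basic form of hysteresis).
  return now - (lastManualChange[device] || 0) > OVERRIDE_WINDOW_MS;
}
```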

ARTIK Cloud Rules Usage

We used the ARTIK Cloud Rules feature to integrate with the various concrete devices. This allowed our BLT agent to remain independent from the make and type of the devices it integrates with. As a result, our solution could work with virtually any device!

The Rules we implemented were quite simple and came in two kinds: Rules sending device state changes to the BLT agent, and Rules sending BLT agent decisions to the devices.

This Rule notifies the “BLT agent” device whenever the light state has changed
This Rule will close the blind when the predicted state of the “BLT agent” says so
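
To make the two kinds concrete, here is how those Rules could be paraphrased as condition/action pairs. This is an illustrative JavaScript rendition; the real Rules were built in the ARTIK Cloud interface and their actual format differs:

```javascript
// Illustrative paraphrase of the two kinds of Rules; not the real format.
const notifyAgentOnLightChange = {
  if:   { device: 'light switch', when: 'state changes' },
  then: { device: 'BLT agent',
          action: 'LightStateChanged',
          parameters: { state: '<new light state>' } }
};

const closeBlindOnPrediction = {
  if:   { device: 'BLT agent', field: 'PredictedBlindState', equals: 'closed' },
  then: { device: 'blind', action: 'close' }
};
```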

Currently, the ARTIK Cloud Rules feature provides simple arithmetic operations such as ADD, SUBTRACT, and CONCAT, but only a limited ability to convert values from one representation to another, for example when one field represents luminosity as a percentage while the action you want to trigger expects a value from 1 to 10. The system could become even more powerful in the future, but perhaps at the price of simplicity.
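
Until Rules can express such conversions, they have to live in code; the percentage-to-scale mapping mentioned above, for instance, is just a few lines server-side (a hypothetical helper, not part of our demo):

```javascript
// Hypothetical helper: mapping a luminosity percentage (0-100) to the
// 1-10 scale an action might expect; the kind of conversion Rules
// cannot express today.
function percentToScale(percent) {
  const clamped = Math.min(100, Math.max(0, percent));
  return Math.max(1, Math.round(clamped / 10));
}

console.log(percentToScale(73)); // -> 7
```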

Future Improvements

The very next step is something we didn't have time to finish during the hackathon: adding a Netatmo Welcome camera to our scenario to identify users and provide automation based on their individual preferences.

The work we did during the hackathon will be the basis for a more generic solution supporting many more of the devices available through ARTIK Cloud, allowing more complex and more personalized automation. Our BLT agent has nothing specific to the Blind, Light and TV except its I/O and its Manifest; we'd like to explore ways to create a "generic" Manifest that can expose any number of actions and fields.

Hackathon Epilogue

The craft ai team putting the finishing touches on our demo during the VIVA Tech hackathon

What about the hackathon? Well, at the end of the day, we did win! We will be showcasing an evolution of this work at the Legrand booth at CES in Las Vegas next January. Stay tuned for future updates on this demo on our blog, and in the meantime, join the craft ai beta to build your own self-learning system!
