Making Homes Really Smart with OpenCog and SingularityNET: Part 1

Alexey Potapov
Published in SingularityNET
Mar 23, 2020 · 10 min read

What would make homes really smart?

The idea of smart homes is far from new, and tech geeks have been experimenting with it for decades. Nowadays, many companies produce various smart home devices abundantly present in the mass market. Many smart home integrators exist who help users design and install whole-house projects that bring all these smart devices together.

Manufacturing companies do their best to make their solutions work out of the box, developing their own ecosystems of devices managed by proprietary smart home applications. Tech giants like Google, Apple, and Amazon push their virtual assistants to interact with both third-party and their own smart home devices. Still, smart homes rarely meet end-user expectations and usually turn out to be expensive toys with few really useful functions.

Home automation software partially addresses this problem by providing manual scripting and/or a GUI for sending commands to devices and displaying information. Virtual assistants add a voice interface on top of this, which is very useful in many situations and creates a somewhat deceptive impression of intelligence. One may argue that they add much more intelligence than that, but if you don’t want to get annoyed by them, you shouldn’t expect them to be smart. A much better user experience can be achieved if one treats them as a voice-controlled analogue of a GUI over the connected devices and pre-programmed scripts. Virtual assistants do their best to disambiguate vague natural language input in order to figure out which menu item to choose and which values to fill into the slots of a certain skill service.

To see what the skill-oriented approach lacks, let’s consider the following query: “Turn on the light”. It is so simple. What can go wrong?

  • If the light is already on, but the smart home sends a command to the device anyway, it may seem like no big deal as long as the user is unaware of it. Still, it indicates that the smart home is not acting very intelligently. It would also be better if the smart home said “The light is already on” instead of silently doing nothing, because there can be a real reason behind this seemingly useless request.
  • The light is off and the command is sent to the device, but the light doesn’t turn on.

Of course, both cases are not difficult to take into account, but this should not take the form of ad hoc solutions for each device separately. In fact, anticipating the outcomes of one’s own actions is one of the general features of intelligence.

  • Then, which light should be turned on? There can be several lamps in the home, but perhaps only one of them is off. The user could have mentioned a concrete location in the previous phrase (e.g. “Is it dark in the kitchen?”). Without any context, users most typically want to turn on the light in the room where they are. Some of these cases can be handled via dialogue-based slot-filling that takes the dialogue context into account; others can be hand-coded on the skill side. However, it would be difficult to manually cover all situations, especially for new third-party devices with limited interaction between skills. What we really expect from smart homes is some common sense that can be transferred to new devices and situations without being re-implemented from scratch each time.
  • A non-smart lamp can be connected to electricity through a smart socket, but an ordinary user will still prefer to say “Turn the light on” rather than “Switch the socket on”. What a smart home should do is not blindly execute commands but fulfil users’ goals by achieving the desired state of the environment. This can be done via reasoning, in the course of which a sequence of actions leading from the current state to the desired state can be inferred.

The user may want the room to be dark. This may require not only turning off the lamps but also drawing the curtains. These two actions can be united in a hand-coded scenario, but this should not really be necessary if a smart home can reason. Reasoning is compositional; scenarios are not. If the smart home knows that bright light wakes humans up, and that it is undesirable to wake humans up without necessity, then it can postpone the execution of the user’s command to turn the light on if it knows that someone is sleeping (and ask the user for confirmation)… or it can even figure out how to fulfil a request to wake the user up at night by turning the lights on, without being specially programmed for this type of request! This would really look smart, while the ability to do only what the system is programmed for has never been considered intelligent behaviour.

Thus, what smart homes need (and what they are lacking) is knowledge, common sense, and reasoning. More advanced features would include episodic memory and learning new pieces of knowledge.

Traditional symbolic systems focus on knowledge representation and reasoning, but they typically manipulate symbols that lack semantic grounding. At the same time, reasoning for smart homes should be substantially based on the physical reality behind the symbols. Whether a lamp is on or off is a question not of logical truth, but of consistency with reality. Thus, what is needed is grounded reasoning, which we studied in our previous blog posts using the OpenCog cognitive architecture on the example of the Visual Question Answering task. Can it be used here as well? In this series of blog posts, we will try to answer this question.

Integrating OpenCog and Home Assistant

First of all, OpenCog needs a means of communicating with smart home devices. There are different ways to achieve this, but our requirements narrow the choice: we need to send low-level messages to various devices and receive messages from them without excessive interference and restrictions; we want to do knowledge representation and reasoning locally; and we do not want to rely on proprietary home automation solutions, especially cloud-based ones that require sending private information out. The ideal solution for us is therefore to build on top of an open-source, locally deployable home automation platform. We found it convenient to use Home Assistant (hass), although other similar platforms could be equally good (our implementation of the OpenCog and hass integration can be found here).

Installing hass is fairly simple. One way, convenient for Python users, is python3 -m pip install homeassistant or just pip install homeassistant, depending on the environment (this should typically just work, but one may need to create a python or conda environment for hass with an appropriate version of Python and, in some rare cases, manually install missing dependencies like aiohttp_cors). Another way is to use a docker container (see the guide for more details if necessary). After installation, one needs to run hass --open-ui (note that the first run can take a few minutes before the web interface appears) and add the necessary devices to hass.

Installing OpenCog is more involved, but there is always the option to use docker. Let’s assume that both hass and OpenCog are set up and running, either on the same machine or on different ones.

The way OpenCog is integrated with hass can also vary. For example, one can receive and send messages using the hass event bus directly. However, we found the WebSocket API more convenient (it requires the websockets package, which can be installed via python3 -m pip install websockets): it allows subscribing to events, sending messages, requesting the states of connected devices, etc., while possibly running on another computer. The only additional requirement is authentication. The client (used by OpenCog) to communicate with hass is fairly simple. To connect to hass, it requires a Long-Lived Access Token, which can be generated using the web interface of a running Home Assistant (at the bottom of the user’s profile page). Let’s put both the token and the hass WebSocket address (which will be ws://localhost:8123/api/websocket if hass is accessed from the same machine) in the configuration file.
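For illustration, the authentication handshake and a state request over the WebSocket API look roughly as follows (a minimal sketch using the standard hass message types; the actual client in our repository is organized differently):

import asyncio
import json
import websockets

URL = "ws://localhost:8123/api/websocket"
TOKEN = "..."  # a Long-Lived Access Token from the profile page

async def get_states():
    async with websockets.connect(URL) as ws:
        # hass greets the client with an auth_required message
        assert json.loads(await ws.recv())["type"] == "auth_required"
        await ws.send(json.dumps({"type": "auth", "access_token": TOKEN}))
        assert json.loads(await ws.recv())["type"] == "auth_ok"
        # request the states of all entities known to hass
        await ws.send(json.dumps({"id": 1, "type": "get_states"}))
        return json.loads(await ws.recv())["result"]

states = asyncio.run(get_states())
print([entity["entity_id"] for entity in states])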

OpenCog reasons over its knowledge base stored in a hypergraph container, the Atomspace. In order to reason not just over abstract symbols but over the real world in its current state, OpenCog implements grounded Atoms. For example, grounded predicates (typically used in robotics and machine perception) are predicates whose truth values are determined not inside the system, but by the state of the environment or some external process. Although GroundedPredicateNode is a standard Atom type in OpenCog and fits well with logic-style reasoning, the more recent GroundedObjectNode, which embeds foreign objects into the Atomspace and allows executing their methods from Atomese, is technically more convenient for dealing with smart home devices.
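To illustrate the mechanism itself, here is a minimal self-contained sketch (the Greeter class is a toy example, and the exact initialization calls may differ between OpenCog versions):

from opencog.atomspace import AtomSpace
from opencog.utilities import initialize_opencog
from opencog.type_constructors import *
from opencog.bindlink import execute_atom

class Greeter:
    # a toy foreign object to be embedded into the Atomspace;
    # arguments arrive as Atoms, and the result must be an Atom
    def greet(self, name):
        return ConceptNode("hello-" + name.name)

atomspace = AtomSpace()
initialize_opencog(atomspace)

# wrap a live Python object into a GroundedObjectNode
greeter = GroundedObjectNode("greeter", Greeter())

# call its method from Atomese
result = execute_atom(atomspace, ApplyLink(
    MethodOfLink(greeter, ConceptNode("greet")),
    ListLink(ConceptNode("world"))))
print(result)  # (ConceptNode "hello-world")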

In our implementation, the Python class Entity describes the entities known to hass. They can correspond to regular smart home devices or to additional entities like users, weather, the sun, etc. Entity objects are supposed to be created automatically by requesting entity states from hass. In our implementation, HomeState is responsible for this: the get_states message is sent to hass by HassCommunicator just after authentication, and the reply message is passed to the constructor of HomeState. It also wraps the created Entity objects into GroundedObjectNodes stored in the Atomspace, which can then be accessed from Atomese by their IDs (the IDs of known entities are stored in a text file). Entity objects have methods (accessible from Atomese) to get information from the entities and send commands to them. For example, if we have a smart bulb with the ID light.bedroom_lamp, executing (with the execute_atom function) the following Atomese code (embedded in Python) will turn the bulb off:

ApplyLink(
    MethodOfLink(
        GroundedObjectNode("light.bedroom_lamp"),
        ConceptNode("send_simple_command")),
    ListLink(ConceptNode("turn_off")))

A command with parameters can be sent through Atomese to hass by wrapping the parameters into GroundedObjectNodes in the following way (the only services hass supports for lamps are turn_off, turn_on and toggle, so one needs to call the turn_on service with the corresponding parameters to change brightness or colour even if the lamp is already turned on):

GroundedObjectNode(":bright", {"brightness": 100})
GroundedObjectNode(":red", {"hs_color": [0, 100]})
ApplyLink(
MethodOfLink(
GroundedObjectNode("light.bedroom_lamp"),
ConceptNode("send_command")),
ListLink(ConceptNode("turn_on"),
ListLink(GroundedObjectNode(":bright"),
GroundedObjectNode(":red"))))

Similarly, the get_state_cn method returns the current state of an entity (e.g. whether the bulb is on or off) as a ConceptNode, which can be further used in other Atomese expressions.
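For example, a condition checking that the bulb is currently on could look as follows (a sketch; it assumes that get_state_cn takes no arguments here and that hass reports lamp states as “on”/“off”):

EqualLink(
    ApplyLink(
        MethodOfLink(
            GroundedObjectNode("light.bedroom_lamp"),
            ConceptNode("get_state_cn")),
        ListLink()),
    ConceptNode("on"))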

Since we want to react to events, we introduce one more class, Event, and create GroundedObjectNode("current_event"), which refers to the corresponding object. Its methods get_type_cn and get_data_cn allow us to access the event content from Atomese, e.g.

EqualLink(
    ApplyLink(
        MethodOfLink(
            GroundedObjectNode("current_event"),
            ConceptNode("get_data_cn")),
        ListLink(ConceptNode("click_type"))),
    ConceptNode("double"))

will evaluate to true if the event is a double click.

Home Automation in Atomese

Combining conditions and actions in a BindLink yields basic home automation in Atomese

BindLink(
    EqualLink(
        ApplyLink(
            MethodOfLink(
                GroundedObjectNode("current_event"),
                ConceptNode("get_data_cn")),
            ListLink(ConceptNode("click_type"))),
        ConceptNode("double")),
    ApplyLink(
        MethodOfLink(
            GroundedObjectNode("light.bedroom_lamp"),
            ConceptNode("send_simple_command")),
        ListLink(ConceptNode("turn_off"))))

— turn the bulb off if a button is double-clicked. More conditions can be added using AndLink, e.g. to check that a specific button was clicked, as in the sketch below.
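A minimal sketch of such a condition (here the entity_id data key and the button ID binary_sensor.bedroom_button are illustrative assumptions, not code from the repository):

BindLink(
    AndLink(
        # react only to events coming from one specific button
        EqualLink(
            ApplyLink(
                MethodOfLink(
                    GroundedObjectNode("current_event"),
                    ConceptNode("get_data_cn")),
                ListLink(ConceptNode("entity_id"))),
            ConceptNode("binary_sensor.bedroom_button")),
        # ... and only to double clicks
        EqualLink(
            ApplyLink(
                MethodOfLink(
                    GroundedObjectNode("current_event"),
                    ConceptNode("get_data_cn")),
                ListLink(ConceptNode("click_type"))),
            ConceptNode("double"))),
    ApplyLink(
        MethodOfLink(
            GroundedObjectNode("light.bedroom_lamp"),
            ConceptNode("send_simple_command")),
        ListLink(ConceptNode("turn_off"))))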

This still doesn’t differ from the automations and scripts supported by Home Assistant. The latter also supports locations, or areas, which can be assigned to devices and other entities. Knowledge about locations can naturally be represented in Atomese and used in BindLinks. Ordinary predicates can be used for this purpose, e.g.

EvaluationLink(
    PredicateNode("placed-in"),
    ListLink(
        GroundedObjectNode("light.bedroom_lamp"),
        ConceptNode("bedroom")))

BindLinks in Atomese are not mere if-then rules. They can include unknowns (variables) and are executed by the Pattern Matcher, which tries to find whether any subgraph in the knowledge base can be matched against the query. For example, executing the following query will (unconditionally) turn off all light devices.

BindLink(
    EvaluationLink(
        PredicateNode("has_domain"),
        ListLink(VariableNode("$l"), ConceptNode("light"))),
    ApplyLink(
        MethodOfLink(VariableNode("$l"),
            ConceptNode("send_simple_command")),
        ListLink(ConceptNode("turn_off"))))

We can also modify our previous rule, which turned off the lamp with a known ID on any double-click event. For example, the following modification says that all smart lamps in any area should be turned off if any smart home button in the same area is double-clicked (the complete example can be found in the repository).

BindLink(
    AndLink(
        InheritanceLink(VariableNode("$l"), ConceptNode("lamp")),
        InheritanceLink(VariableNode("$b"), ConceptNode("button")),
        EvaluationLink(
            PredicateNode("placed-in"),
            ListLink(VariableNode("$l"), VariableNode("$room"))),
        EvaluationLink(
            PredicateNode("placed-in"),
            ListLink(VariableNode("$b"), VariableNode("$room"))),
        ...),
    ApplyLink(
        MethodOfLink(
            VariableNode("$l"),
            ConceptNode("send_simple_command")),
        ListLink(ConceptNode("turn_off"))))

It should be noted that such rules can be rather general: one can write down a number of rules applicable to any house, so that configuring a particular home reduces to specifying some declarative knowledge about the devices and their connections. For example, such a rule could rely on a predicate “connected” and toggle any device $d when any button $b is pressed, provided the knowledge base contains the fact that $d is connected to $b.
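For illustration, such a rule might look roughly as follows (a sketch rather than code from the repository; it assumes that “connected” facts store the button’s entity ID as a ConceptNode, so that it can be compared with the event data):

BindLink(
    AndLink(
        # the knowledge base asserts that device $d is connected to button $b
        EvaluationLink(
            PredicateNode("connected"),
            ListLink(VariableNode("$d"), VariableNode("$b"))),
        # the current event was produced by that very button
        EqualLink(
            ApplyLink(
                MethodOfLink(
                    GroundedObjectNode("current_event"),
                    ConceptNode("get_data_cn")),
                ListLink(ConceptNode("entity_id"))),
            VariableNode("$b"))),
    ApplyLink(
        MethodOfLink(
            VariableNode("$d"),
            ConceptNode("send_simple_command")),
        ListLink(ConceptNode("toggle"))))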

Chaining such rules in order to find actions leading from the current state to the desired state is the crux of reasoning.

What’s Next

In subsequent posts, we will develop the topic of reasoning for smart homes on the basis of OpenCog, consider the problem of creating a natural language interface to such a cognitive home using contemporary NLP DNNs and probabilistic generative models, and describe how to use SingularityNET services to make a home smarter while remaining the master of one’s own home.

Join Us

SingularityNET plans to reinforce and expand its collaborations to shape the coming AI Singularity into a positive one, for all. To read more about our recent news, click here.

And if you have an idea for AI tools or services you’d like to see appear on the platform, you can request them on the Request for AI Portal.
