Never Feed Your Coffee Machine After Midnight

Antonio Gonzalo
Mar 24, 2017 · 6 min read


Source: Flickr


New connected objects are threatening to surround us. They are backpacks, forks, toothbrushes, pans and egg trays (?) that, by virtue of being connected, have transformed themselves completely. They have become an uglier, futuristic version of their traditional siblings, sprouting new screens, buttons, lights, speakers and aerodynamic shapes (something my coffee mug desperately needed) that ‘improve’ them. “The devices of the 21st century”, their makers claim.

These objects have mutated. And of course, all of them are comfortably controlled through an app on your smartphone, the device that plays the modern-day bouncer at the disco that is the Internet. It seems the engineers did not follow the three rules:

  • Don’t put the circuits near light.
  • Don’t let them get wet with water.
  • Never feed them after midnight.

If things were small, furry and able to talk, I strongly doubt they would choose to evolve this way. They need an Internet connection to communicate, sure, but not to radically change their shape, form and colours while putting yet another screen of data, graphs and buttons in front of us to be deciphered.

Things need to talk with each other in order to liquefy the physical world.

The idea of a liquefied world is to make the physical world as efficient and customized as the digital one. This is the ultimate promise of the Internet of Things: that, for instance, I can suddenly rent a private car that is available right in front of me because, magically, supply and demand have adjusted to my specific need. For that to happen, things have to actively communicate with each other. The question is how.

The underlying problem cannot be solved by this new futuristic appearance. Manufacturers are designing things to communicate in very, very heterogeneous ways because things need to be adapted to very specific problems. As a result, the communication protocols (the ‘languages’) are all different and full of particular use cases.

So far, the question has been answered by engineers alone, and the answer is very predictable: with a platform. A platform centralizes and translates protocols thanks to a universal super-protocol that covers all possible use cases and will, supposedly, become the standard language for things to communicate in the IoT.
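To see what that means in practice, below is a minimal sketch in Python; the vendors, message formats and function names are all invented for illustration. The ‘universal’ protocol is really a pile of per-manufacturer adapters, and the pile only grows.

```python
# Hypothetical sketch of a "universal super-protocol" platform.
# Every vendor speaks its own dialect, so the platform needs one
# adapter per protocol -- and the adapter list never stops growing.

def from_acme_bulb(raw: dict) -> dict:
    # ACME bulbs (invented) report {"lvl": 0-255}; normalize to 0.0-1.0
    return {"device": "bulb", "brightness": raw["lvl"] / 255}

def from_globex_bulb(raw: dict) -> dict:
    # Globex bulbs (invented) report {"brightness_pct": 0-100}
    return {"device": "bulb", "brightness": raw["brightness_pct"] / 100}

ADAPTERS = {
    "acme": from_acme_bulb,
    "globex": from_globex_bulb,
    # ... one entry per manufacturer, forever
}

def to_universal(vendor: str, raw: dict) -> dict:
    """Translate a vendor-specific message into the platform format."""
    return ADAPTERS[vendor](raw)

print(to_universal("acme", {"lvl": 128}))
print(to_universal("globex", {"brightness_pct": 50}))
```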

So every CEO and every Head of Innovation or Development at every company that wants to be in the IoT arena has partnered with, bought or developed a platform. Some even have two! Of course, each one is meant to win big and become the ‘Facebook of things’, overruling all the others. But Facebook already exists, and it is led (quite well) by a man in a hoodie.

The outcome is brilliantly summarized in this comic:

Source: xkcd.com

To illustrate further, there is a similar problem in unifying TV remote controls. How many remotes are lying around your living room? Statistics say around four. It would be great to have one remote for everything, but so far nobody has achieved real commercial traction doing it. And a remote is an extremely simple system: buttons and an IR emitter. There are technologically brilliant solutions on the market without clear success, reaching only early adopters. No chasm crossed. Customers worry about configuration and maintenance. What makes us think that now, with far more complicated devices, things will be different?

But then, if not through a platform, how are machines going to communicate?

In English.

We have seen it so often in Hollywood that we have learned to ignore the idea. But natural language is the most elevated form of communication, and it is a solution that bridges the gap to the user. It offers several advantages:

  • Universal
  • Standard
  • Familiar
  • Transparent

It can express simple instructions as well as abstract concepts. Everybody understands natural language and, if things communicate this way, we can see what they say, what they think, and how everything around us works. Natural language is the most elevated standard.

Source: Flickr

Let's travel to the smart home of 2025. It looks exactly like a house from 2015, but everything in it is invisibly connected. And of course, the things belong to different manufacturers. We are comfortably drinking a Coke and reading a book in our garden. Suddenly the shades start to fold, the pool covers itself, the heating turns on and our parasol closes, leaving us in the sun. We raise an eyebrow. All we have to do is look at the communication registry to understand what our things are up to:

Meteo: it is going to rain heavily in 30 minutes

Shades: I’m folding if it rains

Pool: I’m covering if it rains

Sprinkler: I will not water today if it rains

Heater: will the temperature descend?

Meteo: temperature will descend 5º in one hour

Heater: I’m starting to heat the house to compensate

“Heater, I’m getting into the living room in 10 minutes”

Heater: ok, heating the living room faster

You don’t have to know how to code to understand what is going on and react accordingly. It is transparent. It is extremely fast, and anybody can join the conversation; these are not lines of code hidden inside a platform or a hub controlled by a corporation, understood only by a handful of developers.

This way, we do not allow things to think and act without oversight. If we let them, we will be heading toward the “Internet of Paternalistic Things”, where our refrigerator suddenly decides that somebody at home is pregnant and will not let anybody grab a beer. Really scary.

The good news is that the technology to build all this already exists. Objects, helped by generalist communication modules (such as Particle, Intel Edison or Thinking Things), can send and receive all sorts of messages.
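As a rough sketch of how that exchange could work with off-the-shelf parts, here is the garden scenario over a plain MQTT broker; the broker address, topic name and the use of the paho-mqtt 1.x client library are my assumptions for illustration, not anything those modules mandate.

```python
# Sketch: two devices from the garden scenario talking in natural
# language over MQTT (paho-mqtt 1.x API). Broker and topic are invented.
import time
import paho.mqtt.client as mqtt

BROKER = "broker.local"   # hypothetical broker on the home network
TOPIC = "home/registry"   # one shared, human-readable channel

def on_message(client, userdata, msg):
    # Every device (and every human) reads the same plain-text registry.
    text = msg.payload.decode()
    print(f"registry> {text}")
    if text.startswith("Meteo:") and "rain" in text:
        client.publish(TOPIC, "Shades: I'm folding if it rains")

shades = mqtt.Client("shades")
shades.on_message = on_message
shades.connect(BROKER)
shades.subscribe(TOPIC)
shades.loop_start()           # listen in a background thread

meteo = mqtt.Client("meteo")
meteo.connect(BROKER)
meteo.publish(TOPIC, "Meteo: it is going to rain heavily in 30 minutes")

time.sleep(2)                 # give the broker time to deliver
```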

Source: Flickr

Wait a second: can machines really understand those messages? More good news: understanding natural language in specific, limited contexts reached maturity a while ago. When the weather station called ‘Meteo’ publishes “it is going to rain heavily in 30 minutes”, what every machine receives through NLP technology is something like:

intent = weather forecast

text = rain

recipient = all

date = today

time = present time +30 minutes

Every object captures what it is programmed to understand from these parameters and acts accordingly. We are not talking about building Siri, Cortana, Facebook M, Amazon Echo or Google Now into every device; those are full virtual assistants. We are talking about building small intelligences inside things, with a very limited vocabulary, in a perfectly defined context, adapted to the small actions that things perform.
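A minimal sketch of one of those small intelligences, using the parsed fields from the Meteo example above; the class, field names and handler are hypothetical:

```python
# Sketch: the sprinkler's "small intelligence". It understands only
# the handful of intents it cares about; everything else is ignored.
# Field names follow the parsed Meteo message above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intent:
    intent: str       # e.g. "weather forecast"
    text: str         # e.g. "rain"
    recipient: str    # e.g. "all"
    minutes: int      # e.g. 30 (present time + 30 minutes)

class Sprinkler:
    """Limited vocabulary: rain means don't water. That's all."""

    def handle(self, msg: Intent) -> Optional[str]:
        if msg.recipient not in ("all", "sprinkler"):
            return None  # not addressed to us
        if msg.intent == "weather forecast" and msg.text == "rain":
            return "Sprinkler: I will not water today if it rains"
        return None  # outside our vocabulary: stay quiet

meteo_says = Intent("weather forecast", "rain", "all", 30)
print(Sprinkler().handle(meteo_says))
# -> Sprinkler: I will not water today if it rains
```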

Since the personal computer became mainstream, humans have learned to use computers by adapting to their language. We started with very strange words on the command line (DIR, CHKDSK…). A while later, thanks to a certain maneuver Steve Jobs pulled off at Xerox PARC, graphical interfaces arrived and machines moved closer to our world. With the iPhone we could even touch graphical interfaces, but we still need to learn what every new app puts in front of us.

The conversational interface opens a new world in user experience: it is clean, simple and efficient. In an increasingly complex world, simple solutions seem like the best way to deliver value.

It is high time we stopped learning their interfaces and improved the way we communicate with our devices. It is time for them, the machines, to travel the last mile to us. It is time for them, finally, to talk.

“Never trust anything that can think for itself if you can’t see where it keeps its brain.” — J.K. Rowling

(in Spanish, here)

