How do new things come to be? Creations are neither miracles nor magic, but the consequence of many small, often meandering, steps. Sometimes creators head in one direction only to become lost or reach a dead end, yet — if they continue to hope — they still end up somewhere interesting.
All this is true of my work on the Internet of Things.
The only thing for which I can perhaps claim sole credit is the name: three ungrammatical words that now label computing’s future.
I may even be wrong about that. I think I came up with the name while working on a PowerPoint presentation at Procter & Gamble in the spring of 1999. But I was working with many visionaries at the time, and it may be that one of them said it first, and it later reappeared in my mind, a borrowed thought disguised as an original one. No one has ever claimed as much, and I suppose they would have done so by now, but it is possible nonetheless. I am certainly not some heroic individual contributor. Creation never happens that way. Every movie has a poster highlighting a handful of names — a few stars and co-stars, the director, perhaps a producer or writer — and every movie has end credits, where hundreds or thousands of other names appear. The poster shows creation’s myth: this was made by a few. The end credits show creation’s truth: this was made by many. All creations are like this, but only movies have end credits; for everything else, a few people get all the attention. The Internet of Things is the same: I may be on the poster, but
I was only one person in a large community of creators.
I became interested in computers when I was nine or ten years old, because that is when personal computers first appeared. I stayed interested through my teenage and college years, when the Internet became public for the first time, and when I started my first real job, as an Assistant Brand Manager for the Procter & Gamble company, where, between about 1995 and 1998, I was part of a team launching a new range of cosmetics under the Oil of Olay brand name.
The cosmetics launch went well, but I was frustrated to find that some of our most popular products were out of stock when I visited my local supermarket to buy my weekly groceries. My colleagues in sales and distribution told me this was not a problem; it was probably only my local store that was out of stock. But to me this was not probable at all:
thousands of stores carried my products, so the odds that the only one with a problem was the one I visited were thousands to one against.
When I checked other stores, I found my most popular products were unavailable in about 40 percent of stores at any given time.
But why? We had made enough products, but they were sitting in our warehouses, never making it to the empty shelves. And that was a clue: the stores did not know they were out of stock.
I asked why again. My interest in computers helped me understand the answer: after much investigation, I found the stores’ information systems were approximate; they were driven by bar code scans at checkout counters, did not know if a product was lost or stolen, and sometimes made errors.
Gradually I realized that bar code scanning and, more generally, the human entry of data were the root of the problem. Human-entered data is error-prone, inexact, and expensive: no one can afford to be constantly entering data about all the details of an ever-changing environment.
The real world contains countless trillions of terabytes of information, and twentieth-century computing could capture none of it.
For me, the problem had manifested as a missing lipstick on a grocery store shelf, but it was everywhere; it was everything. Once I saw the problem in those terms, my paradigm changed and the solution became obvious, if only conceptually.
Computers needed to gather their own information by sensing the world for themselves.
Sensors had existed since at least the early days of automation in the early 1800s. During the 1900s sensors became digital, and smaller and cheaper, and by the late 1990s, when I started thinking about them, the next logical step was becoming obvious to at least a few people: connect them to the Internet.
Internet-connected sensors were not a big topic in the 1990s — in fact, they were barely a topic at all. This was the time of the “dot-com boom,” which, as the name suggests, was pretty much all about websites. The idea of creating a vast open network of sensors to gather data about the things in the real world automatically was weird, and the community of people thinking about it was small.
I could not pursue this idea without support from my employer, Procter & Gamble, so I had to find a quick way to sum up my weird vision for the company’s senior executives. To do that, it needed a name that was both familiar and intriguing. Everyone wanted to know more about the Internet in the late 1990s, so using that word was obviously a good idea. I just had to link it to something about the physical world.
A good word for physical stuff was “things,” which was already being used by Neil Gershenfeld, a professor at the Massachusetts Institute of Technology and the leader of a research program called “Things That Think.” Neil’s work at that time was not especially focused on networking, but he had a good early take on the potential value of embedding computers and sensors into everyday devices, as did other researchers working in a field then called “embedded computing,” and now more commonly known as “ubiquitous computing.”
I combined the two concepts using the word “of,” calling my vision — and the accompanying PowerPoint presentation — “the Internet of Things.”
Until then, as far as I know, “Internet” was a standalone noun — a monolith with no variations, seldom, if ever, used with a preposition. Today it’s not uncommon to hear about an Internet “of” something or other.
My executive meetings achieved their objectives: Procter & Gamble’s senior executives gave me money to start a research project at MIT, and so I made the presentation many more times all over the world, until my phrase “the Internet of Things” became famous. Far more importantly, the vision it named became real. That was the result of the cumulative contributions of a vast community of people, from different places and times, all working to make things better. In that sense the Internet of Things is like every other creation. If this were a movie, the end credits would start now, and they would be long indeed.
“Beginning the Internet of Things” was first published as a special introduction to the Japanese edition of “How to Fly a Horse—The Secret History of Creation, Invention, and Discovery.” The English-language edition is available here.