The Device

Goodbye Apps. Hello Services.

mark smillie
3 min read · Oct 5, 2015


What if there was a device that just let you speak and enabled you to get or do whatever you needed?

  • I need a flight from SAN to BOS tomorrow morning. An Uber pickup at the airport. Also, get me a hotel room for around $350 a night near Back Bay. And dinner at a great tapas place tomorrow night for 4 people. (The sketch after this list shows how a request like this might break down.)
  • What time is the next train due to arrive?
  • Pay my cable bill
  • I need a doctor’s appointment this week, any day after 9AM
  • I need my driveway plowed
  • What’s a good wine to have with steak?
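
To make that concrete, here is a rough sketch (in Python; every name and field is illustrative, not a real API) of how the first request above might decompose into structured intents a service layer could act on:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """One actionable piece of a spoken request (structure is illustrative)."""
    service: str               # e.g. "flights", "rideshare", "hotels", "dining"
    action: str                # e.g. "book", "reserve"
    params: dict = field(default_factory=dict)

# The travel request above, decomposed into four orchestratable intents:
request = [
    Intent("flights",   "book",    {"origin": "SAN", "dest": "BOS", "when": "tomorrow morning"}),
    Intent("rideshare", "book",    {"pickup": "BOS airport", "provider": "Uber"}),
    Intent("hotels",    "reserve", {"near": "Back Bay", "max_nightly_usd": 350}),
    Intent("dining",    "reserve", {"cuisine": "tapas", "party_size": 4, "when": "tomorrow night"}),
]
```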

My device knows who I am, my payment credentials, where I am, and what time it is. Over time, it could learn my schedule, infer what I am doing, and anticipate my needs.

All of these examples can be accomplished today via apps, APIs, and web services. Except it’s a pain. I don’t want to have to download an app, open it, create an account, and enter my payment info just to make a reservation, order flowers, or get movie tickets. We need to move to a higher meta-layer of abstraction and attraction, where we employ automation, machine learning, AI, or actual humans to leverage and connect the existing services. Apps such as Operator, Magic, and Luka are starting to do this today, and Apple is certainly moving this way with Siri. We can do better.

What do we need to accomplish this?

There needs to be an open-source API service layer: a kind of “meta” API that can interact and communicate with existing APIs while managing and orchestrating complex, multi-part requests.
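
A minimal sketch of what that orchestration could look like, reusing the Intent structure above; the per-service adapter functions are assumed, not implemented:

```python
import asyncio

class MetaAPI:
    """Illustrative orchestrator: routes each part of a request to the
    adapter registered for that service and runs the parts concurrently."""

    def __init__(self):
        self.adapters = {}  # service name -> async callable

    def register(self, service, adapter):
        self.adapters[service] = adapter

    async def fulfill(self, intents):
        # Fan a multi-part request out to all the relevant services at once.
        return await asyncio.gather(*(self.adapters[i.service](i) for i in intents))
```

With adapters registered for flights, rideshare, hotels, and dining, `asyncio.run(meta.fulfill(request))` could handle the whole travel request in one shot.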

There needs to be a way for companies to easily publish or expose certain elements of their APIs to this meta layer and a way for the meta layer to hook into them.
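
One way to imagine that publishing step: a small manifest a company hands the meta layer, describing just the slice of its API it wants exposed. Every field below is hypothetical, and the endpoint is a placeholder:

```python
# Hypothetical manifest for the "plow my driveway" example above.
plow_manifest = {
    "service": "driveway-plowing",
    "provider": "ACME Snow Co.",          # illustrative name
    "actions": {
        "book": {
            "params": {"address": "string", "by": "datetime"},
            "endpoint": "https://example.com/api/v1/plow",  # placeholder
        }
    },
    "attributes": {"price_usd": 60, "eta_minutes": 90},
}
```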

Services should be selectable (or allowed to compete in the service layer) based on attributes like price and time: cheaper or faster.
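
Given manifests like the one above from several competing providers, selection could be as simple as ranking on the attribute the user cares about (a sketch, assuming that manifest format):

```python
def select_provider(candidates, prefer="cheaper"):
    """Pick among competing service manifests by price or by speed."""
    if prefer == "cheaper":
        return min(candidates, key=lambda m: m["attributes"]["price_usd"])
    return min(candidates, key=lambda m: m["attributes"]["eta_minutes"])
```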

The services should remember my choices and preferences: I like pizza from BestaWan, I usually use Uber, my favorite airline is Virgin Atlantic.
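
A preference store for that could be tiny. Here is a sketch that just persists a favorite provider per service to a local JSON file (the file name and layout are arbitrary):

```python
import json
import pathlib

class Preferences:
    """Minimal persistent preference store, keyed by service."""

    def __init__(self, path="prefs.json"):
        self.path = pathlib.Path(path)
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, service, provider):
        self.data[service] = provider
        self.path.write_text(json.dumps(self.data))

    def favorite(self, service):
        return self.data.get(service)  # e.g. favorite("pizza") -> "BestaWan"
```

The meta layer could consult this before running the provider competition above, so Uber wins ties for rideshare and Virgin Atlantic wins for flights.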

The Build

Phase I is to design and build an open-source metaAPI layer that allows integration with services, plus some kind of text-based “app” to demonstrate functionality and prove the concept.
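
That “app” could be almost nothing: a text loop that hands each typed request to the meta layer. The `parse_intents` function (free text in, list of Intents out) is the hard part and is only assumed here:

```python
import asyncio

def repl(meta, parse_intents):
    """Bare text front end for the proof of concept."""
    while True:
        text = input("> ")
        if text.strip().lower() in ("quit", "exit"):
            break
        # parse_intents is hypothetical: it would turn free text into Intents.
        for result in asyncio.run(meta.fulfill(parse_intents(text))):
            print(result)
```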

The Build II: The Device

Phase II is to build a speech-based device.

It’s just a screen and a microphone. You say what you want and the service does the rest, responding with text and colors.

The basic components: Wi-Fi chip, solar/kinetic battery, microphone, touch screen, LED, magnet, logic chip.

The device is resin with the hardware embedded. The resin “package” can be any shape the user wants. It could be something they wear, as jewelry, a necklace, brooch, or bracelet, or just carry around.

We don’t need apps anymore. We need services that we can interact with.

Who’s in? Let’s build this.

Additional thinking on this from @libovness
