Powering the Internet of Humans

Marc Canter
UX/UI developments, advances and innovation
20 min read · Aug 20, 2014

The Internet of Things. IoT. It’s the latest, hottest buzzword du jour. But if you think about it — why are we so concerned with connecting the Internet to Things?

Isn't it really about connecting to humans?

Social Media has shown that when you connect humans together — almost anything is possible.

Would the situation in Ferguson, MO be different if the social web had NOT exploded with protest, photos and videos? Could the ALS ice bucket challenge even exist without Twitter and Facebook?

We sit on the edge of a distributed, real-time world that embraces and utilizes social media as a tool for communication — which begets change. That change keeps our world moving forward — with each new step of possibilities that technology enables.

Mobile devices have put those possibilities into our pockets, and tablets enable us to squeeze and zoom, touch and drag, poke and peek around the world’s culture, economics and lifestyle.

So now comes the IoT, which up until now has been personified by platforms like SmartThings, Nest, Pebble and Fitbit.

Miniaturized low-power computing has met BigData and machine intelligence, and it all can be purchased at Best Buy or Lowe’s.

The platform battles commence

We’re about to witness some very powerful meta-platform players (Apple, Google, Samsung, Microsoft, etc.) pummel each other over market share in the IoT. One almost has to feel sorry watching Intel and Cisco TRY to keep up!

That’s proof that the IoT has HUGE potential and upside — or else none of these companies would be “bothering” to grab as much market share and installed base as possible. It costs big bucks to wage war in the marketplace today. You can almost HEAR the thunder of the galloping horses approaching!

Samsung recently acquired SmartThings; Apple has launched HealthKit and HomeKit and announced its Apple Watch; Google’s Nest platform and Google Fit have been announced; and Nike Fuel and IFTTT are major players in this reality.

Machine intelligence platforms are circling the wagons and BigData is exploding in every direction.

I think one can safely say that the Internet of Things platform battles have officially commenced.

Retail players like Staples and Best Buy have their OWN platforms, and thousands of startups, VCs, industrial conglomerates and leading technology companies have all thrown their hats “into the ring.”

“Oh goody, here we go all over again!”

First it was the battle for PCs, then the Internet, then mobile and now IoT. With each new era of technology comes another round of lock-in, exemplified by a walled-gardens-and-silos approach to market control and domination.

What’s really going on is that the term “Internet of Things” is just another placeholder label, created to express the next era in which the on-line world meets the off-line world.

Cisco has already tried to change the term, calling it the “Internet of Everything” (this came right after Cisco called the world the “Human Network”).

Mobile, Media, the Internet, Devices, Sensors, Low-power micro-electronics, BigData and machine intelligence are ALL hallmarks of the IoT — and it’s all rolled up into one giant ball of opportunities. Indeed.

My own company MacroMind practiced this kind of “lock-in” strategy when we came in and became THE multimedia tools company in the late ’80s and early ’90s. Netscape, Google, Facebook and Twitter have all played these platform lock-in games.

Or shall I call them wars?

‘Cause this is all very serious business.

Creating ecosystem platforms where developers can live and thrive is big business, and Samsung’s move to purchase SmartThings shows that David Eun and company have figured out where the real differentiation is — and that’s in the software.

Anyone can make TV sets, mobile handsets and refrigerators. But it’s the middleware and OS layers that create the platform lock-in effect.

As I watched the interview that Kara and Walt did with SmartThings at D11 in 2013, I was struck by how clear the SmartThings platform’s purpose and intentions were — to be a system that can plug together ANY vendor’s devices and provide a clean, seamless authoring platform for humans to control those devices via their smartphones and tablets.

SmartThings has achieved the litmus-test capability for all smart hubs and integration systems — allowing end-users to easily write “logical” sequences that trigger simple “actions.”

The SmartThings authoring capability is in fact very similar to IFTTT’s “If This, Then That” metaphor.

IFTTT is an authoring tool that enables users to connect the Internet of Things to web and mobile services, ranging from Evernote and Flickr to email, Instagram and SMS.

Even Yo can plug into IFTTT. And vice versa.

Congrats to SmartThings and IFTTT

As the IoT grows and establishes itself, it will be dominated by the current set of vendors, services and capabilities available — which will then enable a wide range of end-user features, services and benefits.

This burgeoning industry is fighting hard — tooth and nail, benefit by feature, platform by alliance, distribution by experience — to create compelling new solutions that end-users will really care about and utilize.

We can now connect all sorts of devices together and make that available in an App (SmartThings, Wemo, Wink, Electric Imp, Pinoccio) — we can create recipes across a wide range of platforms, Apps and services — and we can then connect these devices and the Internet to easy-to-construct electronics projects.

That’s so awesome — I’m so excited!

My daughters are in the other room right now playing with their littleBits synth. They’re learning analog programming first — just as I did in the mid-’70s. Oscillator through filter and mixer — now create those beats and melodies! We even made a video — which Lucy shot.

This new era of capabilities (whatever you want to call it) is sure to be the groundwork for all we’ll know as technology moving forward. It’s all there lying at our feet — ready to be put together into Apps, kiosks, robots, mechanical structures, hand-held displays, headsets and all sorts of goggles and sockets that will surround your head.

When Kara Swisher and Walt Mossberg interviewed Alex Hawkinson at their “last” D: conference, what they saw was a state-of-the-art demonstration of what the Internet of Things IS.

“If the door opens, then turn on the light.”

It’s this sort of “trigger-action” paradigm that IFTTT personifies, and it is clearly an authoring paradigm that will dominate in these initial days of the IoT.
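
To make that paradigm concrete, here is a minimal sketch (in TypeScript) of what a trigger-action rule reduces to. The names and shapes are hypothetical, not SmartThings’ or IFTTT’s actual APIs; it simply shows the “if this, then that” skeleton.

```typescript
// A minimal sketch of the trigger-action paradigm.
// All names here are hypothetical; real platforms (SmartThings, IFTTT)
// expose their own vocabularies of triggers and actions.

type Trigger = { device: string; event: string };   // "if this..."
type Action  = { device: string; command: string }; // "...then that"

interface Rule {
  name: string;
  trigger: Trigger;
  action: Action;
}

// "If the door opens, then turn on the light."
const doorLight: Rule = {
  name: "Hallway welcome",
  trigger: { device: "front-door-sensor", event: "opened" },
  action:  { device: "hallway-light",     command: "on" },
};

// A toy dispatcher: when an event arrives, fire every matching rule's action.
function handleEvent(rules: Rule[], device: string, event: string): Action[] {
  return rules
    .filter(r => r.trigger.device === device && r.trigger.event === event)
    .map(r => r.action);
}

console.log(handleEvent([doorLight], "front-door-sensor", "opened"));
// -> [ { device: 'hallway-light', command: 'on' } ]
```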

Turns out MOG (which became Beats Music — which became Apple’s streaming music service) ALSO has its OWN “If this, then that” UX interface!

Simple logic like “If it’s raining, close my windows” represents the current state of the art in how humans connect the Internet and mobile devices to these “things.”

Now a true SMART home can go beyond just turning off the lights or turning on the AC remotely. Now the lights can be turned off, AND the locks secured, all from your smartphone.

What they’ve all been missing up until now is “what comes AFTER the Internet of Things?”

Now that I can have the lights turn on when the door opens — so what? Now that my garage door opener knows who I am and opens for me — automatically — again — so what?

It will take more than just simple commands and hard-wired outcomes for the IoT to truly go mainstream.

What’s needed is an authoring paradigm that can express more sophisticated and complex concepts than just “if this, then that…”

Trigger — Action

The Internet of Things — so far — has been the Internet of turning things on and off and connecting things together. Don’t get me wrong — that’s totally awesome!

IFTTT has become the de facto standard for connecting on-line services and content together.

If you’ve ever “played” with IFTTT and set up some recipes to “send me an email if THIS happens” or “put my data into THAT service” then you’ll know what I’m talking about. It’s an incredibly easy system to set up and configure, but what it DOES only goes halfway to where you want it to go.

For all I know the folks at IFTTT are building exactly what I’m writing about here — right now! We’ll just have to wait and see what they do next!

SmartThings (which is an IoT smart-home platform) turns home objects, such as lights, locks, AC or dog food feeders, into IFTTT-like “trigger-actions.”

SmartThings uses the exact same “trigger-action” metaphor that IFTTT utilizes for its IoT middleware authoring platform. SmartThings calls them “Settings” and IFTTT calls them “Recipes.”

SmartThings puts control over the home onto mobile phones and tablets — thus connecting the real world to the on-line world.

IFTTT connects the on-line world together and provides gateways to IoT platforms and services.

Most people love the notion of IFTTT and have a few recipes going, but they really still see it as a toy. I suspect that customers utilizing SmartThings’ “Settings” will feel the same way.

Despite the hype and the hundreds of billions of dollars being POURED into the IoT — right now it’s little more than a gimmick.

Is the M2M revolution really as good as it gets?

The Internet of Things “trigger-action” paradigm just ain’t enough.

And that’s where the Interface story begins.

Authoring Programmed Services

Programming software is the technique utilized to enable humans to define and control how other humans interact with software technology. The exact features and capabilities of that programmed software are determined by the bosses and marketing folks of the company and implemented by the technical team.

In some cases the technical people are the ones inventing and creating new kinds of software experiences, but more often than not the technical people are NOT the bosses — they are being told what to do, in a pre-determined, controlled manner.

Because it is so expensive to develop complex, sophisticated, intelligent IoT products, services and solutions, the features and functionality of the typical IoT product tend to be simple, clear, succinct and to the point. There’s little room for “playing” or experimentation.

Agile and LEAN development techniques have evolved so that software products can be more tightly aligned to the reality of the market and (supposedly) what people want.

Changing customers’ energy or water usage patterns is a good thing, but for the IoT to truly hit the mainstream we’re going to need entirely OTHER kinds of benefits, use cases and communities evolving around IoT usage — ones WE HAVEN’T EVEN FIGURED OUT YET!

How are we going to discover these new kinds of solutions and use cases if each one requires hiring dozens of programmers and spending a year building it?

So what’s needed is a tool environment where Authors directly control the features, capabilities and content of the experience — and have room to imagine, fail, customize, iterate and publish.

Interface Authoring

Interface Authoring of personalized experiences takes a completely different view of this whole process of creating interactive software. The Interface Authoring tool environment enables humans who are their OWN bosses to control the entire process of conceptualizing their product or experience.

Interface Authors will create content, author interactive activities and correlate and synchronize these assets into some sort of an “App experience” that end-users would access — via their smartphones or tablets.

These Apps will be personalized experiences created JUST for the end-user, as the end-user and the author will have plugged into the App all of the contexts (people, places, things, scheduled events, behavior patterns, favs, etc.) that are relevant and important to the end-user.

Interface Authors will record audio or video, take photos, create ‘pages’ of information and build interactive “activities” that can lead their end-users down whatever path to wherever they wish them to go.

This is not your grandmother’s web site or CD-ROM. This is a whole new thing: smart apps authored by individuals.

Points, leaderboards, messaging, notifications, media management and social media integration will all be standard functionality available for any Interface app activity and/or experience. It’s up to the Author to decide what is made available in each “agent.”

Interface activities and experiences will be tied to the end-user’s friends, groups and venues which matter to the end-user. As the context and what matters to the end-user changes — so too will the Interface app — adapting to the changing world around it.

Interface Authors will also choose from a series of built-in intelligent “sentences” which will determine what background monitoring “smarts” occurs while the Interface app is running.

The Interface app will be authored in a tight creator-feedback loop enabling authors to immediately see the results of their authoring.

These results are then published into Apps — explicitly designed for specific end-users (or groups of end-users.) These Interface Apps are then monitored, supported, maintained and upgraded — on an on-going basis.

The Interface Authoring tool environment enables Authors to create Apps which can be distributed to THEIR end-users. So a fitness trainer would create a custom Interface App for her client — and modify it as the client progresses.

A therapist would create personalized intervention and emergency protocols for each of the patients they’re monitoring or tracking.

Public Interface Agents can be distributed and “downloaded” by anyone who has an Interface App running on their smartphone or tablet.

Private Interface Agents can be distributed directly to end-users and won’t have to go through an App store. This is the so-called “one-on-one” experience.

Each App can hold one or more Agents. Each Agent framework associates specific logic with the Agent, which the Agent can execute, and tools which the Agent’s authors can interact with to “author” the Agent.

An Interface Author would take a generic Agent and instance it — configuring and ingesting the Agent with specific end-user content, settings and context (which would include the end-user’s relevant social networks, venues, events and favorites.)

This instanced Agent is then sent to a specific end-user’s (or group of end-users’) App — enabling the Author to stay in sync with the App’s end-users at all times.
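
As a rough illustration of that instancing step, here is a sketch under stated assumptions: the type names, fields and the instanceAgent helper are all hypothetical, meant only to show a generic Agent being copied and filled in with one end-user’s content and context.

```typescript
// Sketch only: hypothetical shapes for a generic Agent and its instanced copy.

interface GenericAgent {
  kind: string;                      // e.g. "Personal Trainer"
  defaultSettings: Record<string, string>;
}

interface EndUserContext {
  people: string[];                  // friends, family, clients
  venues: string[];                  // gyms, parks, home
  events: string[];                  // scheduled events
  favorites: string[];
}

interface InstancedAgent extends GenericAgent {
  endUser: string;
  context: EndUserContext;
}

// The Author personalizes a generic Agent for one end-user; the result is
// what would be "sent" to that end-user's App.
function instanceAgent(
  base: GenericAgent,
  endUser: string,
  context: EndUserContext
): InstancedAgent {
  return { ...base, endUser, context };
}

const trainer = instanceAgent(
  { kind: "Personal Trainer", defaultSettings: { reminderTime: "07:00" } },
  "client-001",
  {
    people: ["coach"],
    venues: ["home gym"],
    events: ["Mon/Wed/Fri workout"],
    favorites: ["rowing"],
  }
);
console.log(trainer.kind, "for", trainer.endUser);
```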

Some Interface Agents might include a Tour Guide Agent for North Carolina or a Game Master Agent designed to augment sports or board games. Interface will have a marketplace for Authors and a peer-to-peer support community built into the tool environment.

We EXPECT this marketplace and the surrounding Interface Author community — to BOMBARD the world with incredible interactive experiences, art, use case solutions, educational curricula, marketing promos and research simulators.

Interface’s generic Agents will cover a wide range of capabilities — from Home and Life Assistance to personalized Trainers, location based Tour Guides to entertaining Game Masters. Interface authors can take any capability, sentences, activity element or theme built into an Agent, and make it their own.

Content and media collections would be uploaded and utilized over and over again, with different iterations of the customized agent going out to different customers or end-users. Libraries of generic “sentences” will be personalized by plugging in specific venues, events, friends lists, real-time feeds, favorites and preferences.

All of these capabilities are accessed for authoring — by simply building activity structures (for foreground interactivity) or making selections and choices in “smart sentences” (for backend/background monitoring and intelligence.)

Here‘s a fun explanation of what the Interface tool will do.

We were blown away by what our Macromedia developers did with Director — back in the late ’80s and early ’90s — long before the web, smartphones and intelligent “things.”

What we’re MOST excited about at Interface is what we DON’T know!

“What our Authors will create with the Interface Authoring tool environment!”

Sentence editors

The problem with the current IFTTT trigger-action paradigm is that it does NOT express the full range of sophisticated expression that language offers.

End-users’ expectations of using natural language go way beyond toy-like “trigger, action” combinations.

If we want “simple-easy-to-use” authoring tools to be the means by which “normal” people (non-programmers) express themselves and define social interaction, then we had BETTER have a richer range of expression than “trigger-action!”

The Interface authoring environment will provide its authors with “sentence editors” which will pick up right where IFTTT leaves off.

More complex sentences, mixing and matching verbs, nouns, intentions and possibilities, will be created and managed as a way of defining back-end “smarts.”

We define “smarts” as something that’s going on — all the time — without the need for human intervention.

Interface enables authors to control what happens “in the background” via a Sentence editor. Authors would choose from pre-built sentences and personalize instances of the sentences with verbs, nouns, intentions and possibilities.

Authors simply click on a highlighted word and a madlib-style popup menu appears, allowing for simple selection. Mad props go out to IFTTT for correlating simple authoring logic to madlib-style sentences!
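
Here is one way such a madlib-style sentence could be modeled; a sketch only, with hypothetical slot names and option lists rather than anything Interface has published.

```typescript
// A rough sketch of a madlib-style "smart sentence": fixed wording with
// author-selectable slots. Slot names and option lists are hypothetical.

interface Slot {
  name: string;          // the highlighted word the author clicks
  options: string[];     // choices shown in the popup menu
  selected?: string;
}

interface SmartSentence {
  template: string;      // e.g. "When {person} arrives at {venue}, {action}."
  slots: Slot[];
}

// Fill each {slot} in the template with the author's current selection.
function render(sentence: SmartSentence): string {
  return sentence.slots.reduce(
    (text, slot) =>
      text.replace(`{${slot.name}}`, slot.selected ?? `[${slot.name}]`),
    sentence.template
  );
}

const arrival: SmartSentence = {
  template: "When {person} arrives at {venue}, {action}.",
  slots: [
    { name: "person", options: ["my client", "my daughter"], selected: "my client" },
    { name: "venue",  options: ["the gym", "home"],          selected: "the gym" },
    { name: "action", options: ["send me a message", "log the visit"], selected: "send me a message" },
  ],
};

console.log(render(arrival));
// -> "When my client arrives at the gym, send me a message."
```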

Here’s another state of the Sentence editor — this time showing a sentence where the target Venue is being modified.

Interface will KNOW which venues are relevant to the author’s end-user, which events matter, and which people their end-user is interacting with. Interface will know this information based upon what the author tells it.

Sentence editors are an obvious authoring paradigm for normal people to “author” smarts — intelligence, back-end processes; call it what you want.

But for Interface to provide a full-range of interactive authoring possibilities it will also have to provide a “structure editor” interface — which will enable authors to script out and orchestrate FOREGROUND activities (which humans would directly interact with — via their Interface App.)

Activity ‘structures’ represent a unique sequence of interactive actions that Interface App end-users DO with the Interface App.

Activity ‘structures’ enable Interface to act as a “state machine” authoring system.

Structure editors

Interface’s “structure editor” gives Authors full control over a sequence of “activity elements” and content that occurs in an Activity Structure.

By selecting an “activity” in the App, Interface App end-users can trigger a wide range of man-machine interaction — ranging from displaying media and text, to recording media and text, to supporting interactive surveys, forms, maps or dialog/wizard interfaces.

Think of it as part Learning Management, part Content Management, part App Creator and part Outliner.

The sequence of the interactive actions, navigational controls and information design are authored and controlled by Interface Authors. This capability gives Interface Authors the ability to “imagine” an interactive sequence (complete with media, dialogs, interfaces, maps, search, location, messaging, etc.) and output an App that offers that sequence to end-users.
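
A minimal sketch of how an Activity Structure could be represented as a simple state machine follows; the element kinds and field names are assumptions for illustration, not a defined Interface format.

```typescript
// Sketch of an Activity Structure as a simple state machine: an ordered set
// of activity elements with explicit "next" transitions. Element kinds and
// field names are hypothetical.

type ElementKind = "showMedia" | "recordMedia" | "survey" | "map" | "dialog";

interface ActivityElement {
  id: string;
  kind: ElementKind;
  content: string;       // caption, question text, media reference, etc.
  next?: string;         // id of the element that follows this one
}

interface ActivityStructure {
  title: string;
  start: string;         // id of the first element
  elements: ActivityElement[];
}

const warmup: ActivityStructure = {
  title: "Morning warm-up",
  start: "intro",
  elements: [
    { id: "intro",   kind: "showMedia",   content: "warmup-video.mp4", next: "checkin" },
    { id: "checkin", kind: "survey",      content: "How do you feel today?", next: "log" },
    { id: "log",     kind: "recordMedia", content: "Record a 30-second form check" },
  ],
};

// Walk the structure in authored order (the run-time player's job).
function playOrder(s: ActivityStructure): string[] {
  const byId = new Map(s.elements.map(e => [e.id, e] as const));
  const order: string[] = [];
  let cur = byId.get(s.start);
  while (cur) {
    order.push(`${cur.kind}: ${cur.content}`);
    cur = cur.next ? byId.get(cur.next) : undefined;
  }
  return order;
}

console.log(playOrder(warmup));
```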

These experiences could be socially based, interacting with other end-users running Interface Apps — or they could be a more intimate, one-on-one kind of experience.

These experiences might be educational in nature, such as entering or capturing data from real-world research and creating reports, or they might take the form of a fun game of augmented Beer Pong.

Interface Apps will be able to provide end-users with personalized experiences taking into account not only their favorite types of music, exercise or food — but also what devices they’ve got configured in their home or are wearing on their wrist.

The idea is to provide Interface Authors a personalized publishing system so that these Authors can encapsulate and present THEIR knowledge bases, products, services and media — in exactly the manner that each App end-user requires.

The example above shows an Author attaching a video to a list of exercise routines. The Video bank (on the right) provides persistent media storage that can be utilized in any App, while the activity elements on the left-hand side enable Authors to embed various kinds of interactive activities into an “Interface Activity structure.”

Agent frameworks

How do you combine a sentence editor paradigm for authoring “smarts” with a “structure editor” which authors interactive activities?

The Interface tool environment puts all these concepts into an “Agent Framework.”

This is an ‘age-old’ concept that we’re adopting: embed intelligence, content and personalized settings into a ‘semi-autonomous’ Agent — and let it “loose” in the on-line world.

Some Agents run in the background and do “smart” kinds of stuff, and other Agents are there at your beck and call — just like Apps work today.

The goal of the Interface tool environment is to put into the hands of non-programmers the ability to create their OWN Agents — and distribute them as smartphone or tablet Apps.
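
One plausible way to picture an Agent bundling both paradigms (background “smart sentences” plus foreground activity structures) is sketched below; every type and field name here is a hypothetical stand-in, not Interface’s actual framework.

```typescript
// Sketch of how an Agent framework might bundle the two authoring paradigms:
// "smart sentences" for background monitoring and activity structures for
// foreground interaction. Shapes are hypothetical and kept minimal.

interface Agent {
  kind: string;                    // e.g. "Tour Guide", "Game Master"
  backgroundSentences: string[];   // rendered "smart sentences" that run all the time
  foregroundActivities: { title: string; steps: string[] }[]; // what the end-user taps through
  settings: Record<string, string>; // per-end-user personalization
}

// An App is the run-time player: it simply hosts one or more Agents.
interface InterfaceApp {
  endUser: string;
  agents: Agent[];
}

const tourGuide: Agent = {
  kind: "Tour Guide",
  backgroundSentences: ["When I am within 200m of a landmark, notify me."],
  foregroundActivities: [
    { title: "Old town walk", steps: ["Show map", "Play audio intro", "Quiz"] },
  ],
  settings: { language: "en" },
};

const app: InterfaceApp = { endUser: "visitor-42", agents: [tourGuide] };
console.log(app.agents.map(a => a.kind));
```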

Interface does this by creating a Context Map display which not only ingests all of an end-user’s social, personal and media world — but also acts as a real-time display of any CHANGES that might happen in the end-user’s life, whether in the real world or the on-line world of technology.

Interface’s Context Map divides up the world into four quadrants: Tech, Real-world, End-user and People.

Everything about the end-user is in the End-user quadrant, and all the other people affected or interacted with go into the People quadrant.

All of the real-world settings, pointers, context state (weather, time, location) and real-world events are all kept track of in the Real-world quadrant, while all aspects of technology and the on-line world are then tracked, monitored, configured and controlled — in the Tech quadrant.
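
A toy sketch of that four-quadrant Context Map as plain data might look like this; the fields inside each quadrant are illustrative guesses, not a defined schema.

```typescript
// Sketch of the four-quadrant Context Map as plain data. The fields inside
// each quadrant are illustrative guesses, not a defined schema.

interface ContextMap {
  tech:      { devices: string[]; services: string[] };            // on-line world
  realWorld: { location: string; weather: string; events: string[] };
  endUser:   { name: string; preferences: string[] };
  people:    { friends: string[]; groups: string[] };
}

const lucysWorld: ContextMap = {
  tech:      { devices: ["hallway-light", "front-door-sensor"], services: ["IFTTT"] },
  realWorld: { location: "home", weather: "raining", events: ["piano lesson 4pm"] },
  endUser:   { name: "Lucy", preferences: ["synth music"] },
  people:    { friends: ["sister"], groups: ["family"] },
};

// A change in any quadrant (say, the weather) is what background "smarts"
// would react to.
lucysWorld.realWorld.weather = "clear";
console.log(lucysWorld.realWorld);
```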

Interface authors will maintain Apps — which can be distributed to individual end-users or groups of users — and embed Agents inside of those Apps. Each Agent will encapsulate the content, logic, services and personalized settings — of a series of generic Agents.

In that sense the Interface App is the run-time player for Interface Agents.

This is how we expect Interface to grow virally.

I get an App from my trainer, and I grow to love Interface experiences. I then go to the Interface marketplace and discover ALL these other kinds of experiences! I then decide:

“I can build one of these — for ME and my FRIENDS!”

or

“I know a WHOLE lot about X, Y or Z. My blog, Facebook page and Twitter feed are great — but how can I provide my customers with smart Apps — which directly tie me to that customer?”

or

“I can make a LIVING by building and maintaining Interface Apps!”

Interface developer community

I love the very notion of an Internet of intelligent “things” talking to other “things,” but as the title of this white paper implies:

“we’d better make sure that humans are an intrinsic part of the equation!”

That’s why we've started a new company that’s dedicated to building authoring tools for “non-programming” Authors — so that the experiences and solutions which propagate the “Internet of Things” will be created by individuals, educators, small business owners, professionals, marketing agencies, consultants, researchers, artists and writers.

In the old-days we used to call these kind of people “multimedia developers” — so maybe now they’re “IoT developers.”

These kinds of folks don’t really exist today. There are no IoT agencies or IoT “shops” that specialize in building smart Apps and provide services to other IoT developers. But there will be.

Interface expects there to be LOTS of different Agent framework authoring platforms appearing over the next five years. It’s inevitable. We just wanna be the first that succeeds and brings the notion of “Agent authoring” to the mainstream.

It’s impossible (right now) to “author” anything more than “if this happens (from this on-line service) or (coming from this device) then……”

The triggers captured by IFTTT represent the open web, and that’s why we love IFTTT. Same goes for Yo.

Interface will provide a compatible IFTTT channel so that our Authors can plug and play into any of the devices, Apps, services or platforms that IFTTT is connecting together.

But Interface will go beyond what simple trigger-action logic can offer — and that will be at the heart of what makes Interface authoring (and the Apps that it begets) so unique and compelling.

Our customers will not be satisfied with just “trigger an action” kind of authoring. They won’t be satisfied with simply connecting devices together and controlling what happens with the devices.

There’s GOT to be much more to do and author — or else…..

The Internet of Humans will be authored, not programmed

I’m a tool-smith and I've spent my career enabling normal people to “get their arms around” technology and do fun, creative, empowering, educational, marketing and simulating things with multimedia.

As I've watched our “personal computer revolution” go on-line and into mobile devices — I've been building social networks — for hire — for folks such as RadioOne, NVidia, the Sac Kings, Bell Canada, Mondadori, the Times of India and the U.S. Army ROTC. Not one of those social networks succeeded — because the people creating these networks were not in direct contact with the members of the network.

The reason why most social media customers today connect to each other via Facebook or Twitter is because the creators of Facebook and Twitter were USERS of their own technology. This gave them direct contact to their users and thus the platform iterated and evolved to exactly what the users wanted.

Facebook and Twitter represent an era in on-line technology where the people and users of the products have become the #1 most important quotient in the equation. If you cannot tightly iterate, listen to and evolve your platform — with your users — then you will fail.

For the IoT to fulfill its destiny, compelling use cases, solutions, platforms and companies must be born which go beyond turning the lights on and off, or monitoring your heart rate.

Human authors will need to be empowered to create these new experiences and they’re going to AUTHOR these experiences, not PROGRAM them.

I've learned a lot along the way, and one thing us old-timers can notice is when the tidal waves start to rise and an entirely new era commences.

This time around it’s like multiple beaches with multiple tidal patterns are colliding. It’s a meta-tidal effect — and it’s called the “Internet of Things.”

Interface is a new kind of authoring tool designed to leverage the concept of Agent Frameworks — in today’s day and age.

Personal Trainers, Tour Guides, Home and Life Assistants and Game Masters will be the first five Interface Agent frameworks. We hope to ship something you can start using within a year of today.

General Magic pioneered this concept of Agent frameworks, though early visionary films such as Apple’s Knowledge Navigator and the BBC’s Hyperland also serve as inspirational “jumping off points” for Interface.

But the true inspiration for Interface’s Agent framework design goes to Andy Hertzfeld. Not only was Andy a co-founder of General Magic, but Andy also co-created the Macintosh, created a Home Entertainment drag-and-drop UI/UX called FROX and has been a major contributor to the open source community with the Chandler PIM (personal information manager) project (paid for by Mitch Kapor) and Eazel (which begat Nautilus, which became GNOME Files.)

Andy has been leading the way in graphical authoring paradigms since their inception and laid the groundwork for Interface to exist.

Thank you, Tom Foremski, for the excellent article on me and ThingFace (our former name) as an authoring platform for the Internet of Things. But perhaps it should be for the Internet of Humans — connecting to things.
