Your Personal Sim: Pt 1 — Your Attention Please

The Brave New World of Smart Agents and their Data

A Multi-Part Series, Posted on Wednesdays

John Smart is a futurist exploring the intersection of technology and culture from universal, acceleration, and evo-devo-based perspectives. These posts are excerpted from his new 15-chapter book on the foresight profession, The Foresight Guide. The Guide will be posted free online, as a permanent, page-commentable blog, on June 30th at ForesightGuide.com. To be reminded when it goes online, leave your email address at ForesightGuide.com.

Siri, iOS 7, June 2013

Part 1 — Your Attention Please: A New World Is Almost Upon Us

Summary (tl;dr)

  • This series will explore the five- to twenty-year future of smart agents and the knowledge bases they use and build. These may be the most socially important forms of AI that will emerge in the coming generation.
  • As we’ve seen in the headlines about deep learning since 2012, the AIs are presently awakening all around us, whether we want them to or not. They are also coming in our image — in their neural form and function — again whether we want them to or not. To paraphrase futurist Stewart Brand, “We are gaining superpowers, so we had better get good at using them.”
  • A new kind of software agent called a personal sim is the most empowering and intimate form of AI on the horizon. We’ll soon be using sims that model our interests, goals, and values in their knowledge bases, and which act as our assistants and digital interfaces to the world.
  • In their early years we’ll likely think of sims as bright but slightly autistic children, much better at many tasks than we are, but still unschooled and unwise in many ways. At the same time, the knowledge bases our sims use will be full of errors, and won’t be sufficiently open at first.
  • The takeaway from this series will be that we will need to build and raise our sims and their knowledge bases well, with love and care, as they will be central to how billions of us live our lives in the 2020s and beyond.

Article

The biggest advance in computing technology that humanity has yet seen is sneaking up all around us, right now. In just the last five years, our leading IT companies have begun building the first truly useful intelligent assistive personal software agents, or “smart agents” for short. The exponential convergence of speech recognition, natural language understanding, deep machine learning, increasingly intricate and valuable knowledge graphs and knowledge bases of big data, and contextual information from our digital devices, including our online habits, email, social networks, purchases, and location data, is allowing smart agents to anticipate our daily actions, wants, and needs.

Conversational agents include Google Now, Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, Facebook’s M, the Siri founders’ startup Viv, Baidu’s Deep Speech (technically, the speech recognition front end to an agent), IBM’s Watson Analytics (technically, an analytics front end to an agent), and a large number of offerings from smaller companies, such as SoundHound’s Hound, call center automation agents like IPsoft’s Amelia and Next IT’s Alme (I am an advisor to Next IT), dedicated tech support bots like Slack’s Slackbot, scheduling bots X.ai and Clara, “smart texting” customer service bots like Go Moment’s Ivy (for hotels), email assistant Crystal, chatbots like Microsoft’s Tay, and a host of others either under the radar or in development. Smart agents are increasingly good at understanding the semantic meanings in our text, voice, GPS, and other data, inferring our likely next actions, and predicting our needs based on our current context. When they don’t understand, they are also learning how to ask questions that clarify our intent.

At the same time, Google, Baidu, Microsoft, Amazon, Facebook, IBM, Nvidia, Factual, and other IT platform leaders are building both proprietary and open knowledge graphs and knowledge bases, vast databases that hold semantic representations of public and private information, and that use machine learning to do reasoning and inference with this data. In early 2015, Google began ranking websites based on their factual accuracy, not just PageRank. Any site with more than a few factual inaccuracies according to the knowledge base, such as the antivaxxer sites, now gets a lower ranking under the new algorithm, renamed RankBrain because it is centered on artificial neural networks.

Today’s clickbait and unhelpful comments will surely be downranked next, via a capability called personalized search. Imagine YouTube’s comment wasteland autosorted by relevance, truthfulness, and truenames. We’d all start reading the comments on YouTube videos again. Just as spam was tamed via a knowledge base of known bad actors and spam-reporting buttons in our cloud-based email, all data on the web will be semantically rated, ranked, and filtered by a combination of AI, agents, and people. Roll RankBrain forward a few years, and we can see how powerfully and irreversibly tomorrow’s open knowledge bases and AI will change the web, making it much more valuable and relevant to each of us. Conversational agents, and the dashboards, infographics, and choices they offer us, will be our primary interface to this continually customized web.
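To make the rate-rank-filter idea concrete, here is a toy sketch in Python. The comment fields, weights, and scoring blend below are all hypothetical illustrations of the concept, not any company’s actual algorithm:

```python
# Toy illustration of semantically rating, ranking, and filtering web content.
# All field names and weights are hypothetical, not any real system's API.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    relevance: float     # 0..1, semantic match to the video or topic
    truthfulness: float  # 0..1, agreement with a fact knowledge base
    reputation: float    # 0..1, "truename" author reputation score

def rank_comments(comments, weights=(0.4, 0.35, 0.25)):
    """Sort comments best-first by a weighted blend of the three signals."""
    w_rel, w_truth, w_rep = weights
    def score(c):
        return w_rel * c.relevance + w_truth * c.truthfulness + w_rep * c.reputation
    return sorted(comments, key=score, reverse=True)

comments = [
    Comment("First!!!", relevance=0.1, truthfulness=0.5, reputation=0.2),
    Comment("The claim at 2:10 matches the cited study.", 0.9, 0.95, 0.8),
    Comment("Off-topic rant", 0.2, 0.3, 0.4),
]
ranked = rank_comments(comments)  # substantive, well-sourced comment rises to the top
```

In a real system the three signals would themselves come from machine learning models and knowledge-base lookups; the point here is only that once such signals exist, sorting a comment wasteland becomes straightforward.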

All this has some very big implications. As we’ll discuss in a future post, each of us will be able to use agents and their knowledge bases to increasingly see only what we want to see. That power can lead us into an ignorant, biased, filter bubble hell, as Eli Pariser warns in The Filter Bubble (2012), or into a well-crafted and empowering set of digital living and working spaces, each of which measurably transports us to new heights of insight, empathy, and productivity. Which way we use our personal agents will be ours to choose.

Google’s Knowledge Graph. Announced May, 2012.

As these knowledge bases and their brain-like networks grow, we can expect not only knowledge, but also truth, opinion, reputation, probability, goals, values, and other information graphs to become available on the public and private web, in both open and proprietary forms. In 2005, to honor futurist George Gilder’s seminal 20th-century thinking on technology, I called that near-future world a “valuecosm,” a time when increasingly granular maps of the values, interests, and goals of participating users become part of the open public web. The valuecosm is a predictable outgrowth of today’s “datacosm” (cheap and abundant big data), the “telecosm” (cheap and abundant telecommunications) of the 1990s, and the “microcosm” (cheap and abundant microprocessors) of the 1980s. It is almost upon us.

Microsoft CEO Satya Nadella says “smart agents will supplant the web browser,” and “bots are the new apps.” Such statements are aspirational at present, but they will become increasingly true in coming years. Looking back twenty years hence, we will come to see today’s web of social big data (Web 2.0) as the precursor to a “Knowledge-Mapped” and “Agent Web”, a near future in which agents and their knowledge bases become the main way we choose to interface with the world. That may be the world we’ll call Web 3.0: a world where semantic knowledge bases, brain-like machine learning, and smart agents all emerge in one big transition, essentially at the same time.

There are no general-interest books on the present and future of smart agents and their knowledge bases yet, to my knowledge. But Scoble and Israel’s The Age of Context: Mobile, Sensors, Data, and the Future of Privacy (2013) is a great intro to context-aware technologies and some of their social implications. Eric Siegel’s Predictive Analytics (revised for 2016) is a great intro to all the industries and strategies presently building knowledge and anticipation from big data. Mayer-Schonberger and Cukier’s Big Data (2014) is also a good read. More recently, Chris Brauer of UCL led a near-future study on smart agents in 2015 that is also a great place to start.

The name used most often to describe smart agents today is virtual assistants (“VAs”). But that term is clunky, and it gets confused with living virtual assistants, people who work online for others. Computer scientists have been calling these intelligent agents for years, so, like Nadella, smart agents is the main term I’d recommend for describing these systems. Let’s settle on a good term soon, because we’re going to be talking about these for a long, long time.

In a recent Slate article, Will Oremus predicts smart agents will increasingly be “the prisms through which we interact with the online world.” Consider next what happens when we add wearable audio and video augmented reality to our agents, and our knowledge bases get a bit deeper and smarter, aided greatly by the internet of things. In that future, it’s obvious our agents will become the main software interfaces we use to interact with the world, period. Oremus’s article is titled “Terrifyingly Convenient,” a phrase that is a great way to highlight both the disturbing and the enticing aspects of agent technology. As humans, our minds naturally go to the dystopian aspects of agents first, for deep evolutionary reasons. Only secondly, and warily, do we contemplate their positive aspects. But both negative and positive outcomes are likely, and we’ll do our best to cover both futures in this series.

Our most intimate agents will be highly personalized to us, building accurate internal models of our current context, preferences and values. That makes them different enough from smart agents that I think they deserve their own unique name as well. In 2014 I began calling highly personalized agents “personal sims”, or simply, “sims”. Anyone who knows the definition of simulation, or has seen The Sims, a game played with graphical representations that can look like the user, understands the idea.

Our coming sims won’t have to look like us in order to have an accurate internal model of who we are. In academic labs, a sim that acts like a great butler, secretary, humorist, guide, or coach, and that is not a visual copy of us, is usually more popular than a sim that looks like the user, which users often find narcissistic or creepy, at least today. But any highly personalized smart agent, though it may have its own appearance and personality, perhaps like Carson in Downton Abbey, must also have a large portion of its mental architecture dedicated to being a software simulation of us. Thus “sim” is an apt term for the mental architecture of any personalized smart agent, whatever its appearance. In the not-too-distant future, I can imagine us saying “my sim said this” or “my sim did that” when we talk about social events in this brave new world.

Charles Carson, Downton Abbey
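
The point that a sim’s own persona (a butler, say) is separate from its internal model of the user can be sketched in code. This is a minimal, hypothetical illustration of the idea; the class names, fields, and update rule below are my own inventions, not any real system’s architecture:

```python
# Hypothetical sketch: a sim whose persona is separate from its user model.
from dataclasses import dataclass, field

@dataclass
class UserModel:
    interests: dict = field(default_factory=dict)  # topic -> strength (0..1)
    goals: list = field(default_factory=list)      # open-ended goal statements
    values: dict = field(default_factory=dict)     # value -> importance (0..1)

@dataclass
class PersonalSim:
    persona: str                                   # the sim's own presentation, e.g. "butler"
    user: UserModel = field(default_factory=UserModel)

    def observe(self, topic: str, engagement: float) -> None:
        """Update the user model from one observed interaction."""
        prior = self.user.interests.get(topic, 0.0)
        # simple exponential moving average toward the new signal
        self.user.interests[topic] = 0.8 * prior + 0.2 * engagement

    def recommend(self):
        """Suggest the user's strongest current interest, if any."""
        if not self.user.interests:
            return None
        return max(self.user.interests, key=self.user.interests.get)

sim = PersonalSim(persona="butler")
sim.observe("astronomy", 0.9)
sim.observe("cooking", 0.4)
```

Whatever the persona looks like, the assistive value lives in the user model it maintains, which is why “sim” fits any such agent regardless of appearance.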

I’ve been thinking about sims and their knowledge bases for about fifteen years, since the start of my career in strategic foresight. I gave my first tentative talks on them at a Foresight Institute gathering in 2001. In 2003, I published an extended interview and a popular web article on them, and the conversational interface they would need to build good semantic models of us. Prior to Sept. 2014 I called hyperpersonalized agents “digital twins”, to signify that they would become like software twins to best assist us and act as our agents in the world. I now find “sim” the simplest and most useful term.

After science fiction authors and futurists, two groups of thinkers who always get there first, as you can see in Wikipedia’s awesome AI in Fiction page, Apple was the first big company to bring the idea of the personal sim to the general public, in their Knowledge Navigator concept video in 1987. In that video, which was set in 2011, a user talks to a bow-tie-wearing personal sim on an iPad-like device. The real iPad debuted in 2010, and Siri was launched on the iPhone in 2011. Pretty good foresight, don’t you think?

Apple’s Knowledge Navigator (1987)

As their smartness grows, we will increasingly use our most trusted sims to advise us, and even to act on our behalf. As our most personal agents, our sims will be continually conversationally trained by us, and have private personal data about us that we don’t share with the outside world. They’ll help us make choices that better reflect our personal interests, goals, and values, and they will increasingly encode interests, goals, and values of their own.

In this series I will explore the five- to twenty-year future of sims and their knowledge bases, to do my bit to improve our public conversation and open foresight about their future. Please add your comments as we go, and I’ll do my best to learn from the conversations. As a technology futurist and a systems thinker, my own bias will be to stretch our discussion horizon, to offer provocations, what-ifs, predictions (bets), and questions. I’m convinced that by talking constructively and respectfully together, engaging in open collaborative foresight while acknowledging and championing each other’s different values and ways of thinking, we can see much further, and craft better strategies and plans, than any of us ever could alone. See Markova and McArthur’s Collaborative Intelligence: Thinking With People Who Think Differently (2015) for more on that very powerful idea.

At this point in our intro, a host of sim-related questions may spring to mind:

  • When you act in the world in coming years, how will you know when to trust your sim’s recommendations for whom to date, what to read, buy, or invest in, and how to vote?
  • How will you judge when its intelligence exceeds its wisdom (common sense), and when it is serving your interests, rather than the company that created it?
  • How early should children be allowed to use sims? How early should educational sims, via smartphones, be given to youth in emerging nations?
  • How many “virtual immigrants,” working online in tomorrow’s startups, can we expect when global youth learn English, other leading languages, and technical skills, from birth from their wearable sims, via what futurist Thomas Frey calls teacherless education?
  • How intimate will you let your sims get with you? What do we do when people start to fall in love with their sims (see Her, 2013, for one excellent scenario)?
  • What will be the impact of therapy sims? Correctional sims? Shopping sims? Financial management sims? Voting sims? Activism sims?
  • If your mother dies in 2030, will you find it helpful to talk once in a while to the sim she herself talked to for the last ten years of her life? Will you let Google, Facebook, Microsoft, or whoever provided her sim keep improving her AI, and even let it talk to surviving friends and family, so her sim can become an ever better contextual interface to all the data of her life? There are even a few startups working on that idea today, like Eterni.me. How will this kind of “simmortality” change our culture?
Eterni.me

These are just a few of the big social questions raised by sims, and we’ll try to take a good early look at many of them in this series.

Surprisingly, if accelerating computer hardware and software trends continue, sometime between now and mid-century our sims will begin to seem generally intelligent to their users, intelligent both in the human sense and in a number of senses wholly new. At the same time, our most powerful sims will increasingly come to be seen, by their users, as digital versions, and indistinguishable extensions, of us.

In fact, I think that’s what the long-discussed technological singularity will primarily look like, to the typical person, some time in the middle of this century. Each of us will experience our own “personal singularities” as our increasingly intelligent sims, and the data and machines they control, start to reach and then exceed us in their understanding and mastery of the world.

In this view, we are heading for a primarily bottom-up, diverse, and massively parallel world of distributed sim intelligence, with a small amount of ideally well-intentioned but ultimately secondary top-down efforts at control of the gathering intelligence storm by various authorities. In my opinion, a very open, distributed, and highly bottom-up approach to machine intelligence is also the only way we’ll actually create all the experiments, data, and training necessary for human-surpassing machine intelligence (also called “general AI”) to emerge, both quickly and (for the most part) safely in coming years.

At the same time, to balance all this new personal empowerment and collaboration capacity, individuals, teams, and nations will need ever better security, privacy, and adaptive political systems. I think those better rules and systems will also emerge by primarily bottom-up means, again with a small fraction of top-down strategies as well.

Understanding all complex adaptive systems, whether they are organisms, organizations, societies, technologies, or even universes, as primarily bottom-up, experimental, and selective, and only very secondarily top-down, rational, and planned, is a way of systems thinking that has a name. It is called evolutionary development, or “evo devo”, and it comes from the field of evo-devo biology, which I believe is the best current framework to understand change in living systems. In 2008, philosopher Clement Vidal and I formed a small research community, Evo Devo Universe, to study this particular approach to complexity and change. A great early book on evo devo thinking, applied to societies and technologies, is futurist Kevin Kelly’s Out of Control (1994). If we live in an evo devo universe, then most processes and events will always be evolutionary, unpredictable, and “out of control,” while a special few things will be developmental, top-down, and predictable. Both unpredictable and predictable futures lie in front of us, waiting to be seen.

Evo Devo Universe Blog

Unfortunately, every popular book I’ve read on the future of artificial intelligence either ignores or discounts the likelihood of a mostly bottom-up, divergent, creative, unpredictable, and “evolutionary” agent-driven future of AI, combined with a much smaller set of top-down, convergent, conservative, predictable, and “developmental” architectures, priorities, and controls. Yet as I will argue in this series, given the impressive advances we’ve seen in deep learning since 2012, that bottom-up approach now looks to be the most probable future, and it lies directly ahead of us.

A world of exponentially more intelligent agents and sims acting as proxies for us, in deep harmony with our robots and machines, will be a tremendously empowering but also a disruptive and potentially dangerous future. Deciding who controls their construction and training, and the sensors and data they have access to, will be among the most important social, commercial, and political choices of the coming generation.

But this is also a future I don’t think we can avoid. It seems a developmental inevitability, so we had better get good at thinking and talking about it. Let’s end this post with a version of a prescription from one of my favorite futurists, Stewart Brand, editor of the Whole Earth Catalog and co-founder of the Long Now Foundation: “We gain new superpowers every month now, whether we want them or not. So let’s get good at using them, to help each other thrive, as best we can.”

Stewart Brand, and The Long Now Foundation

As you think about agents and sims in the coming week, I’d like to suggest three questions for conversation:

  • Who will build the most trustable and popular agents? Big corps? Open source? Government? What about sims?
  • What does our future economy look like, in a world of ever-smarter personal sims?
  • What is the future of politics, as our agents and sims increasingly understand, assist, and advise us?

I’ll post Part 2 of this series, Deep Agents, on Wednesday next week. It covers the emergence of deep learning, and why tomorrow’s agents and their knowledge bases are going to advance a lot faster, and in a much wider variety of ways, than most people presently think. Meanwhile, if you are near San Jose this week and have the means, consider taking a day at Nvidia’s 2016 GPU Technology Conference. With 5,000 academics, technologists, and entrepreneurs in attendance, GTC is presently the builder’s deep learning event of the year. It’s got the excitement of Macworld in the 1980s. A whole new frontier of human-machine partnership is emerging, right now.

A highly recommended skim, for all the tech curious, is Monday’s keynote from CEO Jen-Hsun Huang. If that doesn’t blow your mind and give you a severe case of future shock, I don’t know what will.

Nvidia’s Pascal Hardware for Deep Learning (Source: WCCFTech.com)

I look forward to our conversations. Thanks for reading.

For more on John, visit his speakers page, JohnMSmart.com.