Your Personal Sim: Pt 2 — Why Agents Matter
The Brave New World of Smart Agents and their Data
A Multi-Part Series
This is an excerpt from my book, The Foresight Guide, free online at ForesightGuide.com. The Guide intros the field of professional foresight and offers a Big Picture view of our accelerating 21st century future.
This is Part 2 in a multi-week series on the five- to twenty-five-year future of smart agents (aka intelligent software agents, conversational agents, virtual assistants, bots, etc.) and their knowledge bases. I think smart agents and their knowledge bases are the most important IT development humanity can expect in the next generation. I recently made this claim, with this wording, to one of my favorite futurists, Kevin Kelly, who just finished a fantastic new story on (computer-)mediated reality, The Untold Story of Magic Leap, Wired 4.19.16, and he responded with two words: “I agree.” (For those who know him, Kevin is a model of brevity in his email exchanges :).
As a futurist, I’ll try to do my bit to stretch our thinking around these issues, and improve our public conversation about their future. Please add your comments as we go, so we can all learn and grow our foresight around these issues. I’m convinced that by talking constructively and respectfully together, engaging in open collaborative foresight while acknowledging each other’s different values and ways of thinking, we can see further, and craft better strategy, than any of us ever could alone. See Markova and McArthur’s Collaborative Intelligence: Thinking With People Who Think Differently (2015) for more on that very powerful idea.
Why Agents Matter: Empowerment, and Much More
This post considers the “So What?” question: why agents, and especially sims, will be increasingly valuable to us and our society. In Part 3, The Agent Environment, we’ll explore our increasingly mediated reality, ranging from universal to personal perspectives. Part 4, Deep Agents, will examine how our smartest agents and sims are most likely to be built, and why deep learning, a biologically-inspired approach to artificial intelligence, will grow in usefulness far faster and more broadly than most people now realize. Part 5, Safe Agents, will propose the most evidence-based strategies I know for keeping our agents trustable and dependable as their smartness accelerates, and for using them to grow global security while maintaining personal privacy and freedom.
In every post after that, we’ll turn our attention to how agents will increasingly influence each of the following eight societal domains:
1. Personal Agents — News & Entertainment, Education, Personal Growth
2. Social Agents — Teams, Relationships, Poverty and Social Justice
3. Political Agents — Lobbying, Representation & Taxation, Basic Income & Tech Unemployment
4. Economic Agents — Shopping, Financial Management, Funding and Startups
5. Builder Agents — Built Environment, Innovation & Productivity, Science
6. Environmental Agents — Population, Pollution, Biodiversity & Sustainability
7. Health Agents — Health, Wellness, Dying and Grieving
8. Security Agents — Security, Privacy & Transparency, War, Crime & Corrections
In each, we’ll explore two scenarios for how agent development and adoption might play out. The first will be a dystopia, a sample of social outcomes we’d like to avoid. The second will be what Kelly calls a protopia, a better world that the large majority of us would like to reach.
A Good Society — Five E’s of Social Progress
Before defending this claim about the uniquely important social power (for good or ill) of smart agents and their knowledge bases, I should further define my terms. What makes for a Good Society? How do we define social progress? These are questions we should all ask ourselves, as citizens and as future-thinkers. Even our imperfect answers can help us as we strive to make better futures, each in our own way. The variables that I would argue best define social progress, if we limit ourselves to one hand, are the following:
- More entertainment (creativity, experiment, play, re-creation, awe, fun)
- More empowerment (abilities, wealth, and rights that are “freedoms to”)
- More empathy (cooperation, connectedness, love, understanding, ethics)
- More equity (security, fairness, and rights that are “freedoms from”)
- More evidence-seeking (sustainability, science, data, rationality)
These are just a modern version of Plato’s Transcendental Triad of social values: “The Beautiful, the Good, the True”. The middle three are the “Good.” We’ll keep each of these “Five E’s” in mind in our agent stories.
In the long run, our personal sims, which we will come to see as the most rapidly improving, fastest-learning aspects of ourselves, will be critical in helping us achieve major progress on these variables. But in the short run, lots of bad things can and some unfortunately will happen. So as with any powerful new technology, we can expect many social problems with first-gen sims, including new forms of political and commercial manipulation, polarization, insularity, mob behavior, distraction, dependency, and crime.
But as they exponentially gain intelligence, and we select them for symbiosis with us, it is easy to foresee ways in which agents and sims will improve our activism, and guide us to more representative, fair, equitable, safe, green, and innovative societies. At the same time, sims deserve particular scrutiny in a social and ethical sense, as they will increasingly act as assistants, coaches, and proxies for us, and become so intimate that we’ll start to view them as natural extensions of ourselves. That will change the nature of personal identity, and not all of that change will be good either, particularly at first.
So we’ll try to tell both positive and cautionary stories in this series, and please let me know what I am missing or getting wrong.
A Taxonomy of Agent Types
Classifying agents by their ability to help us, I see three primary dimensions. First is their autonomy, or self-model, their ability to act on their own, with internal goals. Second is their intelligence, or world-model, what they know of and can do in the world. Third is their personalization, or user-model, how well they know us and our desires.
- Agents with autonomy but little awareness or personalization are “bots”.
- Agents with intelligence (knowledge bases, learning ability) are “smart”.
- Agents with user personalization are “sims”.
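For readers who like code, the three dimensions can be made concrete with a minimal sketch. The `Agent` class, its scores, and the 0.5 cutoff below are purely illustrative assumptions, not any existing product’s API:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Toy model of the three-axis agent taxonomy (all names illustrative)."""
    autonomy: float         # self-model: ability to act on its own goals (0-1)
    intelligence: float     # world-model: what it knows and can do (0-1)
    personalization: float  # user-model: how well it knows this user (0-1)

    THRESHOLD = 0.5  # arbitrary cutoff for "has this dimension"

    def labels(self) -> list[str]:
        out = []
        if self.autonomy > self.THRESHOLD:
            out.append("bot")
        if self.intelligence > self.THRESHOLD:
            out.append("smart")
        if self.personalization > self.THRESHOLD:
            out.append("sim")
        return out

# A personal sim sits at the intersection of all three dimensions:
assistant = Agent(autonomy=0.8, intelligence=0.9, personalization=0.9)
print(assistant.labels())  # ['bot', 'smart', 'sim']
```

The interesting products live where all three scores are high; in a real system the thresholds would of course be fuzzy and multidimensional rather than a single cutoff.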
Each of these three dimensions, and any two in combination, allow us to make great software products. But the intersection of all three is where our future software and hardware’s greatest “agency”, or ability to help us, lies. We can call that intersection “empowerment”, if we’re talking about broad personal empowerment, not just empowering the 1%. Improving equity and balance in our power growth are critical to also improving empathy and evidence-based thinking, as all our kleptocracies and autocracies clearly show.
I’ll mention many agent examples in our series, but the ones I’ll focus on, as I think they’ll be the most individually empowering and socially transformative, are personal sims, agents with semantic understanding of our unique digital histories, conversations, emotional states, and behavior, with personalized models of our interests, intentions, goals, and values, and which persistently interact with us to help us live our lives.
Certain agent and sim uses, including commerce, learning, entertainment, performance, security, and customer service, will be much more desirable and achievable, and emerge much earlier than others. We’ll take a guess at what some of those early uses may be in this series.
Web 3.0: The Agent Web
In Part 1, I argued the coming era of deep learning-backed smart agents and their semantic knowledge bases will be viewed in retrospect as Web 3.0, the next major evolutionary development of the web. Web 2.0, the social web, is just ten years old, if we define its arrival (perhaps a crossover from early adopters to early majority) as Sept 2006, when Facebook opened their profiles to everyone with an email address. Web 2.0 still has legs. It gets more powerful all the time, as the late majority billions finally join the web, and we move society into an “Internet of Everything” — a popular term for the Internet of (slightly intelligent) Things.
Notwithstanding its title, Steve Case’s thoughtful new book, The Third Wave (2016), is really about the ongoing Web 2.0 story, which will continue to unfold for a decade. His title is an homage to the futurist Alvin Toffler’s excellent The Third Wave (1980). Case cites the knowledge web, but misses the explosive convergence happening in deep learning, semantic and virtual knowledge bases, and agents. That’s Web 3.0, as I see it, and it’s in prototype today. This convergence is a blend of what futurist Nova Spivack called the Semantic Web and Intelligent Web in a great post in 2007.
An agent is software with a minimal level of self-, world-, and user-knowledge, that tries to act on its own in the user’s interest. A smart agent has all this, plus intrinsic learning ability. Impressive smart agents are now being built by a few leading IT giants, and by scores of startups in various verticals. Self-driving cars, home robots, and apps with machine learning-backed intelligence and goal-driven interactions are all smart agents. Those systems, and many others, will be semiautonomous nodes in the landscape of Web 3.0. Some of today’s chatbots (aka chatterbots) are agents, but most aren’t yet smart. We can have only very basic transactional conversations with them, and it’s the teams behind them, not the systems themselves, that do the learning. Likewise, almost all of today’s collaborative-filtering-based recommendation systems, like Amazon’s or Netflix’s, and the tools that “pull” personalized info or ads to us, are also not quite “agent-like” today.
But tomorrow’s best versions of such software, which will surely have deep learning-based intelligence (Part 4 of our series), will offer us that agency power. The conversations we have with them will be like the ones we have with our young children. They’ll continually get better at knowing themselves, the world, and us. Each time we talk to them we’ll be amazed at how much smarter, in certain ways, they’ve gotten, and we’ll be concerned at what they still don’t understand. Sometime over the next ten years, enough of these deeply anticipatory systems will reach majority adoption that we’ll suddenly realize the rules of the IT environment have changed, and we are in a new era.
Anyone who has used Google Now, Siri, Cortana, or Amazon’s Echo has seen how fast these agents are improving their speech recognition. But we haven’t really seen anything yet. Within the next five years, our leading agents and sims will very likely engage in turn-taking conversations with us. Having that capacity for natural language understanding (NLU) will give them many helpful yet also invasive new abilities.
As the conversational interface to agents gets better in years to come, we’ll be able to “talk” to ever more objects in the world around us. They won’t necessarily talk back — often what we want instead of return conversation is visual information, or some kind of action — but they’ll increasingly listen, observe, and understand. We’ll find it very convenient to communicate with agents using the same evolutionary channels, including language, visuals, gestures, and emotional tone — called “prosody” by computer scientists — that we use when communicating with each other.
So just as we all began using smartphones after 2008, I bet many of us, and most of our youth, will use sims almost continuously after 2020. We will be continually improving and personalizing them via gestures, emotional tone, and conversation. Increasingly, they’ll advise us on how to spend our limited and precious time and attention, and on which products, services, organizations, and individuals most deserve our money. At the same time they’ll even begin to lobby for us, making it easy to get involved in boycotts, mass feedback, initiative politics, and other civic actions. That’s going to be quite an interesting world, as you can imagine.
An agent with advanced NLU abilities can listen to our conversations in realtime, and display potentially relevant text, images, and links. I call that capacity “memeshows”, the ability to display potentially relevant and shareworthy memes, in their appropriate context during conversation, as a coming aspect of ambient intelligence in the 2020’s. I first wrote about memeshows in an industry study, The Metaverse Roadmap (2007), co-authored with futurists Jamais Cascio and Jerry Paffendorf, on the future of virtual worlds, mirror worlds (like Google Maps, Earth, and Street View), augmented reality, and lifelogs, a study that still offers good foresight today (let me know if you disagree).
With poor agent design, our early memeshows will just distract us into low-value activities, and create more attention deficit disorder. But with good design and intelligence, memeshows can focus and deepen our conversations, foster intimacy and collaboration, grow our “further learning” lists, and play well with our education agents, so we can keep getting better at everything we care to measure and improve. Deep NLU capability will also allow agents to model our interests, contextual needs, preferences, and even values. But even with today’s shallow NLU capabilities, agents can begin to filter our information floods, helping us better select what to read, watch, and buy.
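As a toy illustration of the shallow end of this capability, a memeshow can be framed as a retrieval problem: given the words of a live conversation, surface the stored items whose tags overlap most. The bag-of-words sketch below (all item names and tags invented) is far cruder than real NLU, but shows the basic contextual-display loop:

```python
# Minimal "memeshow" sketch: rank stored items by tag overlap with the
# live utterance. A real system would use deep NLU, not word overlap.
def memeshow(utterance: str, library: dict[str, set[str]], top_n: int = 2):
    words = set(utterance.lower().split())
    scored = [(len(tags & words), meme) for meme, tags in library.items()]
    scored = [(s, m) for s, m in scored if s > 0]  # drop irrelevant items
    return [m for s, m in sorted(scored, reverse=True)[:top_n]]

library = {
    "Metaverse Roadmap link": {"virtual", "worlds", "augmented", "reality"},
    "Her (2013) trailer":     {"ai", "assistant", "voice", "film"},
    "AR headset review":      {"augmented", "reality", "headset"},
}
print(memeshow("have you tried augmented reality yet", library))
```

With poor design this is exactly the distraction engine described above; with good design, the ranking would weight depth of relevance to the conversation, not just overlap.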
Just as with people, having the ability to choose between multiple modes of communication with our agents is ideal, as face to face, voice, images, or text are each preferable for different uses. But as researchers like Mark Sagar and his teams at the Auckland Face Simulator Project are now exploring, once an emotionally expressive face and voice are added to a conversational agent, and it understands a user’s emotional and facial cues, that brings a whole new level of empathy, connectedness, and motivation to the table. So-called embodied (conversational) agents will be great at establishing initial trust, managing emotional states, helping those who dislike faceless machines, and any other conditions where we prefer face-to-face interaction. Our sims will try face-to-face interactions, in a range of sizes and resolutions, when our sensors, context, and the looks on our own faces suggest that might be helpful. The data will show if that facetime was helpful or distracting.
The interactive entertainment industry is accelerating the growth and sophistication of embodied agents, and home virtual reality (VR) and augmented reality (AR) will soon take that growth to new levels. At a certain point, people won’t shut off an embodied agent when it gets annoying. They’ll ask it to come back later instead. Clippy and Bob would be proud.
For a motivating vision, imagine an agent that can act as your first-pass answering machine. To do this it will need not only good language understanding but to communicate with other agents in spam-identifying message reputation bases. Such agents will finally banish unwanted telemarketers and robocalls. See Fertik and Thompson’s The Reputation Economy (2015) for more on the reputation bases and frameworks that will enable that agent, and many others like it, to emerge.
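A hedged sketch of how such a screening agent might consult a shared reputation base follows; the phone numbers, scores, and threshold are all hypothetical:

```python
# First-pass answering-machine sketch: screen an incoming caller against a
# shared reputation base before ringing through. All data is hypothetical.
REPUTATION_BASE = {           # caller id -> community spam score in [0, 1]
    "+1-555-0100": 0.95,      # known robocaller
    "+1-555-0123": 0.05,      # well-reviewed local business
}

def screen_call(caller_id: str, spam_threshold: float = 0.8) -> str:
    score = REPUTATION_BASE.get(caller_id)
    if score is None:
        return "take a message"   # unknown caller: agent answers first
    if score >= spam_threshold:
        return "block"            # community says spam: never rings
    return "ring through"

print(screen_call("+1-555-0100"))  # block
```

The hard part, of course, is not this lookup but building and governing the shared reputation base itself, which is where Fertik and Thompson’s frameworks come in.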
Soon after they can converse with us, some of our most useful educational agents will automatically teach leading languages and other knowledge to young children, through their (nearly free) wearable smartphones. That will result in a mass de-Babelization of the planet, an inevitable development we can call Global Language. In the decades after these agents arrive we’ll see hundreds of millions of new “virtual immigrants” to urban and leading nations’ economies from emerging nations. All the leading languages, French, Spanish, Chinese, Hindi, Arabic, German, Russian, and others, will gain millions of new virtual immigrants. But it is English that will gain the most new speakers by far, as it has the largest vocabulary (over a million general and technical words), is presently the global language of business, and is much easier to learn and use than Chinese, another business contender.
Of course, the “lean back” approach for parents will be to let their kids simply use simultaneous automatic language translation. But whoever “leans forward” and learns a leading language while learning their local language — as well as some technical or creative skill — will be a lifetime consumer, as a cognitive native, of that language’s media, ideas, and education. Those kids will get the best ratings and the best jobs on startups and in freelance work on the global jobs platforms of the 2020s.
This new surge of economic immigrants will greatly empower global entrepreneurship and collaboration. I think this move toward a universal language will grow our empathy for each other as well. It is sad to say this, but any barrier between us tends to decrease our identification with each other. Meanwhile, the accelerated decline of all our least-spoken languages, and with them local customs, will also cause us to lose some of our cultural diversity, and bring new stresses, inequities, and social challenges.
As wearable technology and sensors keep improving, sims will increasingly watch our actions. They’ll even start listening to and archiving what we say, in searchable text, image, and video lifelogs. But long before this, they’ll map our activities and interests by indexing our existing troves of digital data, turning it into the various knowledge graphs we talked about in Part 1. All this quantification will allow many new forms of evidence-based thinking.
Even first-gen sims will quantify and visualize what we say to them, so we better understand who we are and what we say we want. They’ll show us tag clouds and infographics of our interests and values, and we’ll be able to easily compare them to our friends’ private clouds, borrowing what we like, while viewing the public clouds of folks we are not yet or no longer close with, folks we dislike, and folks we are competing against.
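The tag-cloud and comparison step is easy to sketch even with today’s tools. This toy version (stopword list and sample utterances invented) counts words and intersects two clouds; real sims would weight by semantics, not raw counts:

```python
from collections import Counter

STOPWORDS = {"i", "the", "a", "to", "and", "of", "about", "are"}

def tag_cloud(utterances: list[str]) -> Counter:
    """Reduce what a user says to a weighted bag of non-stopword tags."""
    words = (w for u in utterances for w in u.lower().split())
    return Counter(w for w in words if w not in STOPWORDS)

def shared_interests(mine: Counter, theirs: Counter) -> list[str]:
    return sorted((mine & theirs).keys())  # tags present in both clouds

me = tag_cloud(["I keep talking about foresight and agents", "agents again"])
friend = tag_cloud(["agents are the future of foresight"])
print(shared_interests(me, friend))  # ['agents', 'foresight']
```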
As software architect and futurist Samantha Atkins observes, today’s personal information management platforms, like Evernote, GDrive, Calendars, Email, and Tasks, will become increasingly agent-like personal knowledge management software, helping us better understand ourselves, organize, prioritize, plan, create, learn, network, and collaborate. The better we see and understand our personal and social knowledge graphs, the more we move society from datacosm to valuecosm. We’ll increasingly find others who share parts of our graph, and collaborate in ways we can’t yet imagine. We see the beginnings of the valuecosm in all our groups on social networks, in group question answering platforms like Quora, in group wiki editing (“wiki raids”), in crowd prediction markets like Metaculus, and in the ways we all pile on to support ventures with shared interest, in crowdfunding and crowdfounding. This is all going to get ever richer and more exciting.
We’re presently seeing a current activities graph emerge with platforms like Facebook (social activities) and LinkedIn (business activities). The activities graph is reaching new heights of immediacy and immersiveness, with visual blogging platforms like Periscope, Facebook Live, and Snapchat — the latter being the most user-empowering and rapidly growing at present — which let us experience what others are experiencing in realtime.
Opinion graphs will be rampant on the agent-enabled web. Politicians and corporations will know much more accurately where people stand on issues. Wal-Mart will know which cities and neighborhoods will pay to have a new superstore, and how much, and which will fight against it, and how hard. Individuals will be able to adjust their goals, values, and spending priorities relative to friends and their favorite opinion leaders, using infographics that summarize and compare differences on various graphs, all mediated by verbal and nonverbal engagement with our agents.
The Medium platform — what you’re reading now — offers a much better, and presently less ad-infested, topical interest reading graph than Facebook, and makes it incredibly easy to write and edit beautiful posts. I view these as the two main reasons that readers and posters are presently moving to Medium from Facebook in droves. I hope Medium keeps smartening and personalizing their graph, and its NLU interface! If not, readers and posters will move again, to a platform closer to that vision of a reading sim. That’s what we want.
Better NLU will also allow us to create goals and values graphs on our platforms, maps of both current and long-term intentions. I like to say that everyone with cloud-based email is “a blogger who doesn’t yet realize it.” Consider how, for cloud-based email, texting platforms, and social networks, activities, goals, and values graphs will inevitably emerge on the back end. Facebook already uses primitive versions of such graphs today to market to you, and I’m sure Google and Microsoft use them as well.
Want to meet the ten other people in the country, or in the world, who are currently interested in the same niche topic, social action, startup idea, or other collaboration-enhanceable behavior as you? Opt-in to the goals and values graphs on Google, Microsoft, Facebook’s or whomever’s platform, and you’ll be able to do so, either publicly or privately, as you prefer, without the email provider sharing the private contents of your emails (unless you agree to permission them in, email by email). As our interest and values maps and agents improve, we’ll increasingly associate with those few individuals around the world who are presently most interested in, or can most effectively help us with our current activities and goals.
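One plausible matching mechanic under the hood is simple set similarity over opt-in interest graphs. This sketch uses Jaccard similarity, with made-up users and interests; a production matcher would obviously use much richer graph features:

```python
# Opt-in interest matching over goals-and-values graphs (all data invented).
def jaccard(a: set, b: set) -> float:
    """Similarity of two interest sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_matches(me: set, others: dict[str, set], top_n: int = 2):
    ranked = sorted(others, key=lambda u: jaccard(me, others[u]), reverse=True)
    return ranked[:top_n]

me = {"smart-agents", "foresight", "basic-income"}
others = {
    "alice": {"smart-agents", "foresight", "vr"},
    "bob":   {"gardening", "chess"},
    "carol": {"basic-income", "foresight", "smart-agents"},
}
print(best_matches(me, others))  # carol and alice rank above bob
```

The privacy point in the paragraph above maps directly onto this sketch: the matching can run on derived interest sets without the platform ever sharing the underlying emails that produced them.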
In the Web 3.0 world, we can say “everyone will be famous to fifteen people”, rather than for fifteen minutes, as Andy Warhol described 20th century fame. The public and private valuecosm, and your agent, will increasingly guide you to the fifteen people in the world most interested in buying, or paying top dollar, for the service you’re selling, or associating with you for some voluntary activity. Subspecialization, cooperativity, and team and subcultural diversity will shift into hyperdrive in that world, as agents use these emerging goals and values graphs to suggest a vast number of new positive sum interactions (“win-win and cooperative games”) to us. Presently, the social friction (time and resources involved) in finding, visualizing, and presenting such opportunities is quite high.
While Boomers might be horrified, Centennials will jump in with both feet, to see what benefits might emerge from this new knowledge-enabled collaboration. More extreme and negative-sum behavior will surely also emerge as our public activities, goals, and values graphs grow. That will force us to develop better transparency and security to manage the downsides of this new personal empowerment.
There will of course be a fat head of shopping and customer service agents, led by Amazon, Wal-Mart, and others, assisting us with what we buy. But there will be a very long tail as well. As exponentially deflationary IT products, the marginal cost of adding conversational agents to physical and digital commerce will rapidly become negligible. IT platform leaders will make basic agents available to every company for free or nearly so. The best agents will improve their semantic understanding so rapidly, and offer us so many things in conversation, that within ten years of their emergence, I bet consumers will start bypassing “agentless” products and services.
Imagine, sometime in the mid-to-late 2020s, getting an instant monetary credit for your negative verbal feedback on the failure or shortcoming of any product or service (your car, your home appliances, your bank), at the time of use, via that product or service’s agent, through your wearable smartphone, and the personal satisfaction of knowing your feedback went straight to that company’s product design team. In that world, what will you think of companies, or institutions, that still don’t offer agent-based feedback systems as part of their customer service? Would you continue to patronize any agentless companies that you don’t have to?
As the price of and technical challenges to developing agents drop rapidly every year, and as the performance and abilities of our existing agents continue to double every eighteen months due to exponential technology trends, strategic thinking about smart agent development should begin now, for institutions, companies, and organizations, large or small, seeking to be leaders in their industries.
Smart agents are already being built by small companies. Soundhound, for example, started as a very small technical team that built a fantastic mobile smart agent, Hound, for song identification, and they are now going after very big markets with their agent technology. Even open source agents, like Mycroft.ai, are in early development.
A single scrum team (seven plus or minus two) can presently stand up, sell, and iterate a simpler agent, and bot frameworks from Microsoft, Facebook and others are rolling out right now, so this is a great time for entrepreneurs to find the pain points that agents can uniquely solve today.
Audio Augmented Reality
Augmented reality (AR) will be a key knowledge base — and interface — in the coming web. We’ll explore the future of AR in our next post, but let’s just say a little about the audio component of it now.
I expect our sims will begin to listen in on our lives in realtime at some point during the (late?) 2020s, recording just our side of our conversations (an easy technical feat), and those of any others who have given us permission, in our personal knowledge bases. Sims will use that continuous textual data (a trivial amount of storage) to keep improving their personalized knowledge of us. They’ll be able to use this data, and their growing general knowledge, to whisper into our ears during our conversations via audio AR, just like Samantha in the brilliant sci-fi film Her (2013). You might be surprised to discover how useful these abilities will make them, even long before they have any kind of higher neural-network based intelligence.
I once heard a claim, from a computational linguist whose name escapes me, that if you have two years of recorded conversation of a typical user, even with non-deep learning based (statistical) NLU, your natural language model will be able to predict, with up to 80% accuracy, the word they are struggling to say next if they are having a tip-of-the-tongue experience (“senior moment”). One year isn’t a complete enough conversational map of our common sayings, but two is, and it presumably gets even a bit better from there over time, at least for adults, who learn much less rapidly than children. Will our deep learning-backed sims be able to whisper those completion words in many folks’ aging ears in the 2030s? Will many seniors enable this feature? I would bet they will, even though it will be creepy at first.
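For the curious, the kind of statistical (non-deep-learning) model that claim describes can be sketched as a bigram predictor trained on a user’s conversation history. The history string below is invented, and a real system would need years of data and longer context windows, but the mechanic is the same:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str):
    """Count, for each word, which words have followed it in the history."""
    model = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, prev_word: str):
    """Suggest the word that most often followed prev_word, if any."""
    options = model.get(prev_word.lower())
    return options.most_common(1)[0][0] if options else None

history = "pass the salt please . pass the salt again . pass the pepper"
model = train_bigrams(history)
print(predict_next(model, "the"))  # 'salt' followed 'the' most often
```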
Long before this, we will use audio AR, and secondarily, visual AR, as a very efficient and natural way of interacting with each other and our sims while doing things in the world. See John Li’s great Medium post, “What if the Future of Technology is in Your Ear?” 4.22.16 for a similar view. Li’s post reminds us it is simply an oversight, a lack of vision, that no smartphone maker has yet developed a magnetically docking and autocharging mini-earpiece, integrated into the end of our phones, that beeps if it gets too far away from the phone, so we don’t have to keep track of it. They can make big bulbous ones for parents with small children at home (a choking hazard), and little ones for the rest of us. Dear engineers, please deliver this to us. It’s overdue!
From Apps to Operating Systems
I hope this background convinces you that sims and their knowledge bases and hardware may begin as apps, from the developer perspective, but the smarter they get, the more attached to them we’ll become. I think we’ll eventually come to see them as much more like operating systems. Like our smartphones today, we’ll think of them as the new root-level hardware and software mediating the interface between us and the world.
As our sims get to know us and “test” us conversationally, they’ll increasingly advise us on how to best interact with others, how to work well in teams, who to associate with, how to learn new skills, which skills are most marketable given our current skills, how improve our health and performance, and even what to invest in, how to vote, and what political actions are in our best interests.
People who use, talk to, and train their sims well will be increasingly better off than those who don’t. If this vision comes true, the use and deployment of increasingly smart agents of all types will be central to individual, team, and organizational strategy and competitiveness.
The Quest for Lock-In
Big companies will spend fortunes in pursuit of consumer lock-in with bots, agents, and sims. Some of this lock-in should even be more economically valuable than previous lock-ins. But at the same time, the smarter our agents get, the easier switching will be, even for highly personalized sims. There will always be an agent retraining cost, in time and money, but the wealthier society gets, as we’ll see in our sim economics section, the more users will value other things besides just having the fastest and smartest sims on the block.
So as sims get smarter, I’ll argue, the IT platform “lock-in” that occurs with them is likely to be even lower than we’ve seen with every antecedent IT platform (operating systems, social networks, ecommerce and entertainment platforms) to date. We had lock-in for a couple of decades with Microsoft Windows on our desktops, but eventually Apple’s OS X, Linux, Android, etc. also emerged as viable alternatives. Alternatives will emerge even faster in the world of sims.
Basic Income, Sim Trust, and Open Source
Eventually, our sims will get smart enough to show us the benefits of voting in a basic income to combat ever-accelerating technological unemployment, a coming form of personal empowerment and one of the big topics we’ll discuss in this series.
Read Brynjolfsson and McAfee’s excellent The Second Machine Age (2014) for more on basic income. Consider how sims that can lobby for us, combined with disappearing population growth and accelerating technical productivity, will make a basic income inevitable in coming years, in every country still based on one person, one vote. Of course, just because it’s inevitable doesn’t mean it will arrive in a timely manner, or in the most socially empowering way.
If we give out income at too high a level, too suddenly, and without incentives to personal growth, we can take away personal incentive to work, as we see in dependency economies like Saudi Arabia. If we aren’t fiscally responsible, we can also drive our country to insolvency with benefit bloat as in Spain, Greece, and other countries where leaders have grown benefits faster than technical productivity. So there are lots of ways to mess up a basic income. But lots of evidence, including Canada’s Mincome experiment, and income incentives like the U.S. GI Bill, have shown it can be greatly socially empowering as well. I’m personally convinced that lobby sims will play a key role in bringing it about, as we will discuss in a later post. So let’s make it happen well.
Because the trust we need to have in our sims will grow as a direct function of their power and intimacy, there will always be a core of users, from the very beginning, who find open source sims a more desirable and controllable solution than proprietary alternatives. I’m sure a few open source sim projects will gain the kind of traction that Linux gained with developers in the 1990s. As the technologies behind sims get commoditized, and once basic income rolls out, open source sims seem likely to be particularly powerful, serving a major role in keeping proprietary sims honest and reasonably priced, just as open source does with existing commercial software today.
Consider that once we have a basic income, the need for any of us to have a sim that can outcompete our neighbor’s sim becomes far less compelling than it is today. Eventually, the need to have the “best sim on the block” will be as uninteresting to most of us as having the “best desktop on the block” has become today. Even just five years ago, when processors weren’t as powerful, desktop computer performance was still a “thing”. At a certain point, the average 21st century citizen will have enough personal technical capacity, with their agents and home automation, enough financial savvy, with sims smart enough to help them live within their means, and enough social equity, with basic income and the sharing economy, to cover their living essentials. In that world, our values will change, and agent trust, personalization, and trainability will be key. We’ll be less inclined than ever to use agents whose loyalties we suspect are divided between the interests of their makers and our own, even if that gets us additional personal competitiveness.
Meanwhile, we have a long road to a basic income society. On the way there, control over our sims’ private data will be a right we all have in principle, though the extent to which we exercise that right remains to be seen. The ability to export our private data, and to contest and correct public data, may be the greatest extent of data control that the average citizen ever gets. In the coming world, what matters most to us, I believe, is not democratic control over big data, which may remain unreachable, but control over our most intimate algorithmic interface to the world, our personal sim.
Our Next Post
In Part 3, The Agent Environment, we explore the accelerating arrival of mediated reality, the virtual component of the knowledge web, and a few of its many implications for agent evolution and development.
Calls to Action
- If you’d like an email reminder on this series, enter your email address to get our newsletter, Accelerating Times.
- Consider making Reddit Futurology your homepage. They’ve got 6M+ “Futurists” now. It’s a great open discussion space for what’s next.
- If you know an agent resource that should join this story, please let me know (john@foresightU.com) and I may share it in future posts.
Thanks for reading!
CC 4.0. Anyone may share or adapt, but please with link and attribution.
Think others might like this? If so, give it a clap, thanks!