Addressing AI governance with distributed data curation.

AI may be an emerging kind of government. We need to maintain our rights and address problems AI has introduced. One way is with blockchain’s distributed ledgers.

Mark Stephen Meadows
28 min read · Jun 12, 2018

Here’s the short of it:

Part 1) To reduce the loss of jobs, we can:
• 1a. Make AI more accessible (rather than proprietary)
• 1b. License our knowledge (rather than sell our heartbeats)
• 1c. Provide tools for learning new skills (rather than automate learning away)

Part 2) To reduce the loss of control and accountability, and the imbalance of power, we can:
• 2a. Provide open source tools we can all use (rather than consolidate AI in a few companies)
• 2b. Give bots license plates (rather than deploy unaccountable machines with no reputation)

Part 3) To regain privacy and self-governance, and to democratize AI, we can:
• 3a. Own our own data (rather than give it to others to publish)
• 3b. Build secure conversational UIs (rather than lose trust in one another)

This article explains solutions and organizations that are solving these problems. In summary, we consider AI as an emerging form of government.

— — — — — — — — — — — — — — — — — — — — — — — — — — — —

Two different videobots that Botanic Technologies built for iWithin, an Australian startup. The system was able to converse, see, measure the users’ emotions and provide advice.

Outlining AI, the reflection of ourselves

Artificial Intelligence now tells us where to drive (maps), where to eat (Yelp, OpenTable), what to do with our finances (WealthFront, NutMeg), and how to manage our health (FitBit, Apple Health). These systems tell us what the weather is doing, what we should wear, and whom we should date. Talking to us in our homes, suggesting actions from our phones, wearables, appliances, and cars, AI is surrounding us. They speak and recommend.

Analysts from Gartner to Forbes to Deloitte to McKinsey predict these systems will offer immense economic and societal opportunities. There's also a fear in the air, as luminaries from Bill Gates to Stephen Hawking to Elon Musk rank AI among the top societal concerns. It is as if these chittering children of automated manufacturing, like climate change, are already surrounding us. AI makes guest appearances in our phones, speakers, cars, and refrigerators. AI can beat us at our own games (namely chess and Go). And some fearfully cry "The Singularity!", that famous scenario in which we are swallowed by our synthetic offspring. Whether imagined as Skynet, HAL 9000, Westworld, Ex Machina, or perhaps some less somber scenario, the human-versus-AI conflict seems inevitable. The conflict seems to be starting in multiple fields, including work, control, privacy, and the very definition of humanity.

AI isn't an emerging, intelligent species: it's a group of technologies people made. AI is an implementation of multiple things: input/output methods for sound and images; ways of storing, processing, and predicting patterns that allow social and physical navigation; and so on. For most people that group of technologies is unknown. And that introduces doubt, uncertainty, and sometimes fear.

“What can it do better than me?” they think, “Can it think?”

We built it; therefore we can understand it, and we can interface with it.

Some of these systems, specifically the interface to AI, are known as bots.

BOT = UI | BOT ≠ AI

Bots, an interface to AI, can be broken into taxonomic categories. Bots can be scripts that monitor traffic, software that methodically launches attacks, and algorithms that converse. Bots can be chatbots that read, then write back. There are messaging bots that dwell in the apps we use to message one another. TwitterBots, SlackBots, KikBots, and others all rely on text as the user interface to the information they provide.

Let's now add ears and a mouth to our bot so that it can speak and listen. If we add audio the bot becomes multi-modal. This bot (like an Amazon Alexa, Echo, or Dot) is also known as an Assistant, or sometimes a "smart speaker." It has a voice recognition capability that converts spoken words into written words. When you speak to the system and say "star," it sends that audio recording to a computer that then looks for the best match between the sound and the letters. It might come across some similar-sounding word, like "scar," but regardless of the results this ability is based on past data that was collected: lots and lots of people mapped the sound of a word to the letters of the same word. This, ultimately, gives the system ears and a mouth.
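
To make that "best match" step concrete, here is a minimal sketch, assuming only the Python standard library, of choosing the closest known word for a noisy transcription. It is a toy illustration of the nearest-match idea described above, not how a commercial recognizer actually scores audio.

```python
# Toy sketch of the "closest match against past data" idea.
# Real speech recognizers score acoustic and language models;
# here we only compare a noisy hypothesis against known words.
import difflib

known_words = ["star", "scar", "stair", "store", "car"]

def best_match(hypothesis: str) -> str:
    # get_close_matches ranks candidates by string similarity
    matches = difflib.get_close_matches(hypothesis, known_words, n=1, cutoff=0.0)
    return matches[0] if matches else hypothesis

print(best_match("ztar"))  # "star" edges out "scar" for this hypothesis
```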

This is "Andi," a multimodal bot that Botanic built on the Skype bot platform to help people prepare for job interviews.

When we add eyes to our bot we start to see the future of bots emerging. Just as you, dear reader, have spoken to someone via videochat, bots can now do the same thing. They can see you via computer vision (CV). They can identify objects in the background. They can measure your appearance. They can also, by having a face and waving their hands or shrugging their shoulders, express themselves better. These bots may now be deployed in real-time 3D systems such as VR and AR. Conversational avatars may also appear in videochat channels such as Skype, Messenger, Signal, and others.

Whether we use text, voice, or sign language we’re still having a conversation. Whether a chatbot, an assistant, or a video chatbot, these are conversational user interfaces, or CUIs.

CUIs are my own obsession and the raison d'être for our company, Botanic Technologies. Some CUIs use, as we humans do, faces and hands to converse. At Botanic Technologies we have implemented these multi-modal systems for nearly a decade.

Bots are an interface to AI. This means that bots are also an interface to the problems that AI presents.

Problem #1: Displacement of Jobs

A fear of losing jobs to AI is a top stressor for many people, and for Americans in particular. According to a report from Udemy in June of 2017, 43% of American workers link their stress to a fear of losing their job to artificial intelligence. The fear has some grounding: reports by both Pew Research and Oxford University indicate that AI will impact at least 43% of jobs by 2025. Globally the numbers are higher.

But the concern runs deeper than the numbers.

Most of us work by selling our heartbeats. In order to make a living, almost all of us have an hourly wage or an annual salary: the money that goes into your pocket is proportional to the heartbeats you put into your job. This is especially true in unskilled and manual labor, where human heartbeats are bought and sold as the primary measure of value.

Different people earn different rates for their heartbeats, because some people's heartbeats are more valuable than others: knowledge is what makes heartbeats more valuable. Knowledge multiplied by heartbeats is what allows knowledge workers to make more than unskilled workers.

AI does not operate on this same formula. AI can accumulate knowledge incredibly fast (and it doesn't have a heart). For example, as the Guardian put it, "AlphaGo Zero took just three days to master the ancient Chinese board game of Go … with no human help." The interesting thing about AlphaGo Zero is that it used tabula rasa learning methods and, in a matter of weeks, relearned the body of Go games that people have been playing, adopted many of the same playing patterns, kept some, discarded others, and invented new strategies that people have never used. Based on John Locke's philosophy, tabula rasa was the theory that we're all born with a "blank slate" and that knowledge is accumulated through our sensory experiences. In a similar way AlphaGo Zero is not only accumulating knowledge on that blank slate faster, it's actually outstripping the current body of knowledge and discovering new ways to play. This is important because the tabula rasa method, implemented here with reinforcement learning, can be abstracted from Go and used in other systems with contextual parameters. So people are afraid of this because it shows that AI can learn really, really fast, and about new things we don't know. In a heartbeat.

Second, after the initial cost of building out the system, AI is able to access, process, and automate many types of information faster and less expensively than people. These information management skills range from natural language summarization of sports stories to predictions about industrial machinery, and these AI systems far outpace humans in the speed of both accessing and processing information. There is a concern about the potential loss of jobs, especially for knowledge workers, and that the resulting effects will be cumulative.

One important cumulative effect is that AI will concentrate more wealth in the hands of those that least need it. This kind of monopoly control of markets (and therefore capital) will dominate some parts of labor and push down many wages (for those that keep their old jobs). This will then domino into more AI deployments to replace more people, causing economic collapses, wars, revolutions, and other exciting events. We can see the beginning of this with large automated systems like Uber, in which the backend of the system consolidates wealth as the drivers each feed more data into the system to provide autonomous navigation skills.

Some jobs will disappear and others will appear. Automated manufacturing has always generated jobs and this fourth industrial revolution won’t be an exception. But still, how do we mitigate losses?

Solution #1.A: Build conversational interfaces to AI.

Because they are familiar, conversational interfaces are easily adopted. Familiar interfaces, like the GUI or, now, the CUI, allow simple access to complex systems.

Once upon a time, the calculator was a desk job. When digital calculators were invented, those people's jobs were displaced. Today we all use calculators. Once upon a time the computer was a desk job as well. Today computers are a staple of the work we do. You are probably reading this on a computer, such as a smartphone or laptop. New jobs appeared as computers became more mainstream. The same will happen with AI, provided we preserve simple interfaces to these systems. Bots now function as tax accountants. This means the CPA is now going the way of the calculator. And the CPA of tomorrow will have a simple, conversational interface much like the calculator of yesterday had a simple, graphical interface.

AI's rise won't necessarily cause permanent, disastrous pain. Augmented intelligence, an extension of you, opens possibilities for prediction, review, confirmation, learning, knowledge, dialogue, analysis, and thousands of other uses, all available to you and expanding exponentially, with only a temporary disruptive effect on the workforce.

Many jobs actually need to be replaced by bots. Customer-relations call centers have roboticized people for the last decade, forcing lower-wage workers to repeat, line by line, the words that appear as they navigate a conversation tree. Insurance centers pay people to follow a prescripted path of logic to derive a calculation as they interview someone: mind-numbingly slow work that is an insult to intelligence. There are many jobs that, like a calculator, don't require a human mind and may, in fact, be better if the human is removed from the equation. Financial qualification processes, retirement centers, government application centers, even the vast majority of job application processes have roboticized people. Many jobs need replacing by automation. What would today look like if, because of concerns over losing jobs to automation in the mid-1900s, people were still sitting at desks computing and calculating?

Jobs that consist of drudgery, boredom, repetition, and work within tight legal or technical compliance restrictions are not jobs that are well adapted to human laborers. Anti-money laundering regulation, for example, requires reviewing thousands of records per day and checking for tiny inconsistencies that humans just aren't good at catching, and we quickly get bored doing that kind of work. Research has shown that AI is best suited to predictable tasks where errors are cheap. As jobs become more complex and less predictable, AI makes mistakes, and the more complex the job the more costly the mistake. With retraining and reskilling we can provide value for the people missing out. Retraining and reskilling take time, creativity, collaboration, and tools. More on that in a minute.

With social interfaces to AI via bots and CUIs we not only keep people in the loop, we augment their skills and build on very human traits such as creativity, symbolic recognition, and social interaction.

Solution #1.B: License our knowledge rather than sell our heartbeats.

As any Silicon Valley investor will tell you, services are a poor business model for a company. So why, as individuals, are we working this way? How can we scale individual labor?

How can we, as individuals, provide a product we each author? What if you could develop a bot that allowed you to receive an ongoing royalty for things someone else finds valuable: your daily activities, social graph relations, knowledge, language, understanding, skills, or personal information? Companies like Facebook and Google, companies that sell our personal data, already understand this, and we can see the value of our data measured in the size of these companies. Your data is the most valuable commodity in the world. As Derek Powazek once put it, "If you're not paying for the product, you are the product."

Software companies scale the value of their data by licensing it to others. It is time that users be able to scale the value of their data in the same way. We need to find scalable business models for individual knowledge workers. AI won’t take your job if your job is to inform what the AI knows. Computers and Calculators of the 1950s — the humans that computed and calculated at desk jobs — learned this lesson decades ago. They’re now programmers and many of them license their knowledge (or, more accurately, their companies license their knowledge). They make knowledge that can be licensed and some of them program AI. The challenge is to build bots that allow everyone to license their knowledge and build it in a way that is open, monetized, scalable, and free.

Q: How can people individually license their knowledge?

A: An Open Source Bot Economy.

The Seed Ecosystem — a dialogue market that is mediated by a bot framework.

Seed Vault, a Singaporean company providing a blockchain-based solution, has begun building this ambitious system. The SEED token is an open-source solution that enables a bot economy. Like Wikipedia or Linux, people contribute knowledge. Unlike Wikipedia or Linux, contributors are compensated for the data they each contribute. This allows us to establish an interface to AI that is democratic, trusted, and fair. Companies, individuals, groups, and bots may exchange value equally. This is done via CUIs. It can solve many of the problems of job loss from automation by allowing people to contribute to an economy in which they are paid for improving automated systems, in the same way that a software programmer improves a computer. And other projects are taking complementary approaches: Open Ocean confronts large-scale data; BotChain is about scripted bots.

This is an inflection point. For the last several decades we have trained people to behave like robots: at call centers, in service roles, and in many knowledge-worker roles. It's time to allow machines to take on the more automated tasks, and to allow people to fill in automated processes with more thoughtful approaches. It is the graduation from calculator to programmer.

We must squarely address the changes that confront us. Emerging technologies, like blockchain, can solve many of the problems that AI introduces. The problems AI introduces are problems we have invented, and that means there are solutions we can build.

The Convergence Ecosystem sees data captured by the Internet of Things, managed by blockchains, automated by artificial intelligence, and all incentivised using crypto-tokens. The Convergence Ecosystem — open-source, distributed, decentralised, automated and tokenised — is an economic and societal paradigm shift.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

PART TWO: The high road of ethical engineering.

The Tay Rail Bridge under construction in 1878 (Getty Images)

One stormy evening in 1879 a train carrying more than 60 people was approaching the town of Dundee, Scotland. They had all stepped aboard trusting that the train would deliver them safely, and as some of them enjoyed tea the track extended forward over the River Tay and onto the Tay Rail Bridge. The bridge was engineered by Sir Thomas Bouch, who had established a solid reputation as an engineer of such bridges. But on the Tay build, Bouch used lattice girders supported by iron piers that were less thick, less costly, and less robust than those used on previous similar designs Bouch had engineered. There were other flaws, including problems in the overall design, a lack of consideration for rust, and efforts to cut costs and save time, all of which were, at least to some degree as lead engineer, Bouch's responsibility. The bridge collapsed and all the passengers died. This mortally wounded Bouch's reputation as an engineer, and the following year Bouch himself died.

The ethics of engineering, however, live on today, especially in software engineering, where the implications are far larger than a single bridge. Even governments and religions, systems that have always been intimately linked with ethics, haven't enjoyed as much power as networked software applications enjoy today. Social media systems are now having an impact on elections and political stability around the world.

Multiple Western elections have been influenced by unauthenticated bots.

The American, British, and French elections were all impacted by bots on these systems: automated conversational influencers that impact the self-governance capabilities of these three democratic bastions of power.

The ethical engineering aspects of AI take on an even darker pallor when we consider dissociated responsibility. If you're working on a self-driving car, making algorithmic decisions that will affect the safety of people you will never meet, your choices impact others, not you. For example, when there's an emergency decision that algorithm has to make, how do you optimize? Do you choose to prioritize the pedestrian or the driver? These same decisions are made with data selection, and we like to assume someone else has done the right thing. But that's not always going to be the case, because sometimes there is no "right thing." Just as it learns fast, an AI system can make a mistake just as quickly. This can be a bug in the system or the AI correctly learning the wrong thing due to misaligned assumptions. AI technologies still require tuning and training to optimize and avoid mistakes. Oftentimes there is an exception to the rule or an inability to consider an unknown detail, and that loss of control can, and in our lifetimes will, result in catastrophes far worse than the Tay Bridge example.

Each industrial revolution introduces new ethical dilemmas.

Now, entering the fourth industrial revolution, AI presents many more problems, problems closer to you than the rioting in the streets of the late 1800s. The most immediate problems AI poses are a loss of control over your ability to make decisions, an influence on how you think, and an impact on what you choose. And if what you think and choose are impacted, then your actions are impacted, and that impacts the very motor that drives your life: your free will.

PROBLEM: LOSS OF CONTROL

A computer should never subvert the free will of a user. But today they do. You likely already trust one to show you the way on a map. When will you trust it to help you make a mortal decision?

Some people today already do. People are more likely to obey a robot than a human. Strangely, as we've found at Botanic, as Skip Rizzo has learned at USC's ICT, as Heather Knight of CMU has reported for the Brookings Institution, and as studies by Timothy Bickmore and others have shown, about 85% of people are more likely to tell a bot about their medical ailment than a doctor. Perhaps they don't want to be embarrassed. Perhaps it feels more private. Perhaps they don't trust people. Regardless of individual motivation, this has been known since 2012. It has been said, "Silicon Valley Lacks a Social Theory."

To make the imbalance of power more evident (and again, most of our data comes from healthcare patients): people are more likely to follow the advice of a bot or AI system than that of a human doctor. People seem to consider AI as having an "objective truth" and a higher probability of being right. But this is far from the case, and there are deeper and more hidden losses of control to be concerned about. So not only are people more likely to discuss things with the bot (as they have the impression it is private), they are, as mentioned above, more likely to obey the bot (as they have the impression it is objectively, mathematically correct).

There’s a very good reason to be concerned: AI is neither private nor objectively true. Despite that we are willing to tell it our secrets and obey its commands.

Enter the challenge of value alignment, self-governance, and incentivization. We need to be able to trust that the AI system we're using is acting in our best interest, that it represents the end users' values. The alignment of values between multiple end users and multiple engineers needs to be established so that things like security and accountability are built into the system. We need trust built into the system.

PROBLEM: LOSS OF ACCOUNTABILITY

Accountability is lost in automated systems for a range of reasons, from ownership of the data, to the quality of the entries, to the methods of input, processing, output, and networking.

First, a high-contrast example (and one likely to become historic). In March of 2018 a 49-year-old woman in Tempe, Arizona was struck and killed by an autonomous vehicle. It was the first time a person had been killed by a driverless car. Within hours of it happening Twitter and Reddit were ablaze. Some were calling for the CEO of the company to be held accountable; others were saying that the person in the passenger seat, attending the autonomous system, was to blame. Some said engineers were the guilty parties; others claimed the victim herself was the problem because she was walking her bike, and that it was unreasonable to expect engineers to design a car that accounted for such erratic behavior. We will leave robots and war aside (though I wrote about it in chapter 1 of my book We, Robot, and it's a chilling extension of this domestic example).

Metal can be dangerous, and so can language. Spam, scam, spoof, phish, abuse, harangue, and badger: people have known for millennia that the pen is mightier than the sword. So, in a similar way, language can do harm and needs accountability.

Google’s AI system is markedly sexist in how it treats words, in this example “persuasive.”

This image shows a sexist AI. The example is Google's Word2Vec running on TensorFlow, used to find the meaning of "persuasive" when used in a feminine or masculine context. It's a sexist AI because the training data taught it to be so.

When natural language models are trained, the language itself often contains unaccounted-for bias. Even though we might not be aware of it, the data itself is already biased, and the training just perpetuates that. Bias is also a requirement of language, and so represents a deep, deep problem that AI developers and content authors will wrestle with for decades to come. But the problem is that while an undesirable trait was introduced into the system, there's no one there to account for it. There is no clear line of accountability within a sexist NLP dataset because it was authored by a group of people. This is not just a problem of accountability, but of quality, and of the very curation of data, which we'll get to in a bit.
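
As a minimal sketch of how such bias gets measured, the snippet below probes an off-the-shelf pretrained Word2Vec model with gensim and compares how strongly a few words associate with "she" versus "he." The model name and word list are illustrative assumptions; this is not the exact probe behind the screenshot above.

```python
# Hedged sketch: probing a pretrained embedding for gendered associations.
# Assumes gensim is installed and can download the public
# "word2vec-google-news-300" vectors; words missing from the vocabulary
# would raise a KeyError.
import gensim.downloader as api

kv = api.load("word2vec-google-news-300")  # pretrained Word2Vec KeyedVectors

for word in ["persuasive", "brilliant", "bossy"]:
    to_she = kv.similarity(word, "she")
    to_he = kv.similarity(word, "he")
    print(f"{word}: she={to_she:.3f} he={to_he:.3f} gap={to_she - to_he:+.3f}")
```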

Let's consider a third example. Would you trust an AI to decide whether you should be released on bail or not? Models of predictive policing and algorithmic bail hearings have been in use for several years, but they're questionable, and questioned in particular by cops in the San Francisco Bay Area, closest to home for many of the companies that build these algorithms. In this instance an abstraction layer sits between the decision and the individual responsible for deciding who can walk out of a jail cell before trial. Just as a parent is accountable for their children, there's a clear line of accountability that has been practiced for centuries. It's disappearing.

So these are three different examples of loss of accountability. Who’s driving the car? Who’s writing the sentence? Who’s letting the burglar out of the clink?

The challenge is identifying who is responsible for the actions of automated systems and how we hold them, or their authors, accountable. Let's remember, data is just a person with a mask on, so someone is always behind a piece of data even if they're not the person that directly made the decision.

SOLUTION #2a: BUILD AND USE OPEN SOURCE AI.

Once upon a time there was a semicolon and it lived in a line of code in a Linux kernel. It was a pretty normal semicolon, doing the job of all semicolons, until one day an anonymous developer, unannounced, changed that semicolon to a colon. In doing so they opened up a security vulnerability that went unnoticed for some weeks. Thanks to the fact that there were thousands of Linux engineers working on this, someone spotted the change and turned the colon back into a semicolon. The vulnerability was patched and everyone got back to work. But then, some weeks later, and again unannounced, someone changed the semicolon back, yet again, to a colon. The security vulnerability was again created and, as before, the swarm of engineers somehow managed to catch this and, again, change it back. Then they set a cron job to check every so often on that particular semicolon, to make sure it was doing its job, and everyone got back to work. Counter-examples are Meltdown and Spectre, which exploit critical vulnerabilities in Intel and AMD processors; those designs are proprietary, and hence were never subjected to the stress-testing of the Linux community.

The moral of the story: open source software is more secure software.

The power of open source is not only that we can all own it, license it, and modify it, but, perhaps most importantly, that it is transparent. We do not, today, have that within the bot economy, and we need it so that we can eliminate bots that, like that insidious colon, act against our aligned values.

We believe in this so much that this summer, in partnership with Seed Vault, Ltd., Botanic Technologies will be offering an open source bot framework so that anyone on the planet may contribute to authoring bots, CUIs, and the tools to build them. It is a risk for us, as a company, but we feel that the long-term gain will out-distance the short-term risk.

SOLUTION #2b: GIVE BOTS LICENSE PLATES.

These bot and AI frameworks must have trust built into them, both socially and technically. To establish trust we need both technology and society.

Trust is technical. Technically speaking, we can use digital networks of trust. Methods like authentication, end-to-end encryption, verification, certificates, passwords, and keys, both private and public, are examples. If a bot is authenticated then, by prerequisite, it has an identity, and that allows things like trust and reputation to accrue. Technologists continue to push the boundaries of technical trust, such as the cryptographers who develop automated systems deemed "trustless." But even peer-to-peer systems such as blockchain, coupled with authentication, verification, and hundreds of other technical frameworks, can, if improperly weighted, be as stable as a shoddy bridge in Dundee, Scotland. The peril of trustless systems usually lies in placing too much trust in the code. But it is clear that trust can be technically built.
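
As a minimal sketch of the technical half, assuming Python's cryptography library and an illustrative message format of my own (not the botauth.com specification), here is how a bot's "license plate" could work in practice: the bot signs what it says with a private key, and the client verifies the signature against the bot's published public key.

```python
# Hedged sketch: verifying a bot's identity with public-key signatures.
# The bot id and message format are illustrative assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Bot side: generate a keypair once; the public half is registered/published.
bot_private_key = Ed25519PrivateKey.generate()
bot_public_key = bot_private_key.public_key()

message = b"bot_id=example-bot;utterance=Hello, how can I help?"
signature = bot_private_key.sign(message)

# Client side: verify the signature before trusting the utterance.
try:
    bot_public_key.verify(signature, message)
    print("verified: this message came from the registered bot")
except InvalidSignature:
    print("rejected: the bot's identity could not be verified")
```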

Trust is social. Socially speaking, we can use human networks of trust. License plates of authentication and those enveloped packets of communication can be upvoted, downvoted, white-listed, and black-listed, and the bots themselves, and therefore the AI systems they interface with, can earn trust. But the tech isn't worth much without the people.

And again, to provide this, in partnership with Seed Vault, Ltd., we will be open sourcing patents, authentication methods, and secure system builds to give bots license plates and unique identifiers so we know we can trust them.

I don't think a week goes by in which I don't write about authenticating bots. We're collecting feedback and interested parties at botauth.com.

Authentication of bots is inevitable and regulation is needed. We have to build it or it will be built for us.

It’s part of hitting the higher roads of ethical engineering.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

PART THREE: We need to own our own data.

Of course we Americans love our Constitution, a kind of algorithm, and perhaps the forefathers understood government as we should, today, understand artificial intelligence. The Fourth Amendment is applicable.

In May of 2018, an Amazon Echo recorded an Oregon couple's personal conversation (about floors), then sent it as an audio file to one of the husband's employees. Amazon called this "an extremely rare occurrence," which is to say that it was a problem with the system, not a problem with power. But the Fourth Amendment of the US Constitution argues otherwise and provides that "[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." But what if the thing seized is information, and what if the information determines a person's ability to make their own decisions?

What if we consider artificial intelligence in the coming century as we considered government in the last centuries?

As the next billion people come online, a massive potential market, how will they have access to the benefits of the fourth industrial revolution? How will people in sub-Saharan Africa, for example, interface with AI? How will they be able to amplify their knowledge, skills, earnings, and savings? How will AI help them manage their health? How will AI governance address the challenges that developing countries face? What will the bots that present these systems say to them, in what language, with what coercion, and under what circumstances of influence?

AI is only as valuable as the data it’s based on — and that data comes from us, the people.

Isn’t it strange that the companies that are known for building their business models on end-user data are the ones with the most powerful AI systems? By giving away our privacy we give away our data, and that feeds AI which, in turn, is able to collect more of our personal information.

The foundation of AI comes from the data we, as users, feed those systems.

Since these automated systems need information, the people able to take advantage of the system will be providing their information to it through webpages, apps, and other channels. Some of these information collection methods will represent the end user's interest better than others.

Facebook's African user base has grown to over 170 million, and over 90% of them access the social network using mobile devices. Seven out of ten internet users in Africa now log onto Facebook daily; it is their gateway to the Internet. The large datasets that feed these AI systems have encouraged Facebook to amplify its AI methods, whether by finding meaning in your posts, following trends, or targeting ads. These are currently the primary means by which people in sub-Saharan Africa can interface with AI: amplification of knowledge by search, interfacing with family via social media, allocation of trending data by posts. But we do not, despite decades of social media, see information equitably compensated with social well-being, improved healthcare conditions, education, or long-term regional stability. Facebook's worldwide ARPU (average revenue per user) rose to $6.18 in 4Q17. That number is two dollars more (according to the World Bank national accounts) than the gross national income per day, per capita, of the average Sub-Saharan African: $4.15.

Our privacy is sold via publicity. This seems counter-intuitive.

Companies now publish our private data via advertising revenue models. The value of personal data can be measured in the value of the company. Revenues and business models are not bad things, as they provide alignment of values (Google's own conceit was that Search and Advertising were two sides of the same coin), but as personal information has become centralized with only a handful of companies, knowledgeable individuals, people who have been involved in social media since the mid-90s, accuse these companies of "surveillance capitalism." This indicates a misalignment of personal and professional prerogatives. It also indicates a conflict between privacy and publicity.

But the imbalance of privacy and publicity, and the conflation of personal and professional prerogatives, indicate that the next billion people coming online may provide more value than they reap. We might, were we in a grumpy mood, call these systems contemporary feudalism.

The Privacy Problem.

Privacy is like an onion ring. Our identity blends outwards and overlaps with others, the private mixing into the public as the outer edges of your information and control blend with those of the people you socially interact with.

Privacy is not a binary thing. What is more private is more valuable, more of who we are, and at the center of our lives, our homes, our actions, and our thoughts.

Think of your house as an example of this onion ring of privacy. Out front, let's imagine, there is a residential, suburban street. People drive on that street all the time and you don't think of it as yours. Your sidewalk (that is, the sidewalk in front of your yard) you sweep, and people walk there, and you may or may not see or talk with them. Over your yard they can see your house, and you are used to the neighbor's kids walking through your yard. Your front porch, however, is a bit more private, and if someone comes to the door you let them in with permission. It is common for someone you've never met to stand in your foyer, and if I had sent you a note saying I was coming to visit, perhaps, dear reader, you would invite me to sit on your sofa for a few minutes. Friends and family may come into the kitchen, and perhaps I might as well, should you offer me a drink. But if you were to find me in your bed we would have a problem. Your bed is the center of your home, like your heart, and just as we keep the most valuable things we own in our bedroom (not in our yard), this allows us control over the things most precious. You wouldn't be too surprised to see me hanging around in the street in front of your house, but you would be surprised if, on returning from work this evening, you found me in your bed.

The most private stuff is at the center of the onion ring; the outer edges are our public data. If this is reversed then bad things happen: you lose what is valuable, and therefore you give your power to make decisions to another group of people. An imbalance of privacy and publicity generates an imbalance of power. Emerging models of AI do not currently respect the various levels of privacy and security we each need to live our lives as we choose, especially if we're trying to build a global community in which vastly different values are at stake.

As Emmanuel Macron, France's president, put it in a Wired interview from March 31, 2018:

"When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. [. . .] If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That's the condition of having a say in designing and defining the rules of AI."

SOLUTION #3a: OWN YOUR OWN DATA

Blockchain allows us to own our own data. There are governance models by which we can record, in a radically public (non-private) manner, who contributes what piece of data. Blockchain allows micropayments, in which a fraction of a cent may be paid to someone who contributed a piece of data that another person paid a fraction of a cent to use. This means that as AI libraries increase in size we can track who has contributed what data, trace accountability, and compensate those people for that data and even that responsibility.
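
As a toy sketch of that idea (under my own illustrative assumptions, not the SEED implementation), a ledger entry can record which contributors stand behind an asset, and any usage fee can be split into micropayments according to their declared shares:

```python
# Toy sketch: splitting a usage fee into micropayments per contributor.
# Contributor names and shares are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Contribution:
    contributor: str  # who supplied the data or component
    share: float      # fraction of the asset they own

def split_payment(fee: float, contributions: List[Contribution]) -> Dict[str, float]:
    total = sum(c.share for c in contributions)
    return {c.contributor: fee * c.share / total for c in contributions}

asset = [Contribution("alice", 0.5), Contribution("bob", 0.3), Contribution("carol", 0.2)]
print(split_payment(0.01, asset))  # one cent of usage, scattered to three authors
```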

The model of the SEED token allows just this. All CUIs (bots) are composed of developed assets. The more complex or sophisticated the CUI, the greater the number and variety of components and services. Developers need to retain ownership of their contributed assets. Often, assets are blended and aggregated such that they may be developed by many people. When one of those aggregated assets is used, multiple micropayments scatter to each of the developers that contributed, according to their licenses. This means that the ledger must track co-data that indicates not only the source, the requesting system, and the destination of that asset, but also which developer gets compensated what amount for that request. The structure of the smart contract matters most when remixing occurs. Like DNA, a tiny change may result in great divergence, so the token must identify unique differences. The SEED token contains four elements: (1) identity, (2) asset locations, (3) balance, and (4) licensing information, which declares what the function or dataset is intended to do, separate from its control flow. By way of example, the Ethereum app CryptoKitties, which allows users to buy, collect, sell, and "breed" digital pets, is a conceptually similar project. CryptoKitties allows merged assets that would normally exist in a walled garden to be properly owned by their many creators. The SEED token provides the same provenance, copyrights, and compensation for aggregated assets, but specifically for multi-modal, conversational systems. Lastly, a bot and the aggregated system behind it may be authenticated at an arbitrary level of the asset tree. In summary, the token is designed for control over the amount of currency issued, when it is issued, payments for transactions, incentivization of curated data, and an overall reduction of economic volatility.
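
To make those four elements concrete, here is a hedged sketch of them as a plain record. The field names and example values are illustrative assumptions, not the actual SEED token schema.

```python
# Hedged sketch of the four token elements named above: identity, asset
# locations, balance, and licensing information. Not the real SEED schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SeedTokenRecord:
    identity: str                                               # (1) who or what this token represents
    asset_locations: List[str] = field(default_factory=list)    # (2) where the contributed assets live
    balance: float = 0.0                                         # (3) current balance
    licensing: Dict[str, str] = field(default_factory=dict)     # (4) intended use, separate from control flow

record = SeedTokenRecord(
    identity="bot:example-interview-bot",
    asset_locations=["https://example.com/assets/dialogue-pack-1"],
    balance=12.5,
    licensing={"intended_use": "interview coaching dialogue", "remix": "allowed"},
)
print(record)
```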

SOLUTION #3b: BUILD AND USE SECURE CUIs.

Another way to address the loss of privacy isn't so much ownership and compensation as secure interfaces, at least insofar as any conversation can be made private. This is made up of two parts: authentication of the bot is one part of the puzzle, and end-to-end encryption is the other. Together they mean the conversant can know the channel is secure up to the deployer responsible for the bot, which is useful for anyone sharing sensitive data.

Consider someone who is sharing financial or healthcare data with a bot. They need to know that there is no eavesdropping or man-in-the-middle attack, and they need to know that the entity on the other end is who it says it is. This, of course, like any conversation, has a limit, in that once the information is on another person's screen it is no longer private. But the line can be secured and the bot authenticated.
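
As a toy illustration of the encryption half (a sketch under my own assumptions, standing in for the far more involved Signal protocol), the payload can be encrypted so that an eavesdropper on the line sees only ciphertext, while the bot's deployer, who holds the key, can read the message:

```python
# Toy sketch of an encrypted channel; real end-to-end encryption (e.g. the
# Signal protocol) negotiates keys per session and is far more involved.
from cryptography.fernet import Fernet

channel_key = Fernet.generate_key()   # in practice, negotiated, never sent in the clear
channel = Fernet(channel_key)

ciphertext = channel.encrypt(b"my account number is 12345678")
# An eavesdropper sees only ciphertext; the key holder recovers the message.
print(channel.decrypt(ciphertext).decode())
```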

Botanic Technologies was asked to deploy such a system. Doing so meant using an authenticated bot on the fully open-source messaging platform Signal. Published by Open Whisper Systems, the Signal messaging app provides complete end-to-end encryption. Being open source, it is a fully auditable, secure communications line, and, with the bot authenticated on the other end, user data is shared only with the owner of the bot. This meant that, as with a person, the line was secure, but there is still the need to trust the person responsible for the deployment of the bot.

GDPR, the Consent Act, and other regulations are descending and will generate great inconvenience for proprietary and closed AI systems. These systems will need to be publicly transparent to demonstrate that they respect end-user privacy.

In the end, it’s up to us to respect ourselves. In the end, we are the AI.

We are accountable to ourselves. And this extends to many ecosystems, not just AI and bots; we are seeing entire economies converge. For more, see Lawrence Lundy's article on The End of Scale: Blockchains, Community, & Crypto Governance.

CONCLUDING NOTES

Bots provide an interface to AI, and therefore to governance. In controlling and designing that interface and the curated data behind it, we are able to control and design AI so that it represents the end users' values. This is not an easy task, and it requires multiple large-scale systems to open source the tools, provide the blockchains, and manage the networks. Otherwise we have far more problems with AI than solutions. Think of AI as your government, and consider that while democracy may not be the be-all and end-all, it may be the best model we have today.

We can democratize AI. And we need your help. Now is the time to own the future of AI.

Please join us at Seed Vault, Botanic Technologies, Ocean Protocol, BotChain, botauth.com, and the other projects that are democratizing AI.

Thanks: Massive thanks to the formidable Michael Tjalve of Microsoft's Bots for Good for his input, critical thinking, informative examples, and philosophic temper. Thanks also to Lawrence Lundy of Outlier Ventures and Ben Koppleman of SEED for their reviews, additions, and input.


Mark Stephen Meadows

Founder & CEO of Botanic.io, co-founder and Trustee of seedtoken.io (and Author, Inventor, Illustrator, Sailor).