Tinkering toward AGI
I’m a tinkerer. I try to do things I am entirely unqualified to do. This makes me an outsider and an infidel to those well-trained in the arts of, well, anything: AI and Deep Learning, in this case.
From where I sit, the modern quest for Artificial General Intelligence is strong on Artificial but weak on Intelligence. AI experts can bring to bear tens of thousands of computers and amazing distributed neural-net algorithms while at the same time struggling to define intelligence or figure out where it came from.
I’ve chosen to attack the problem of AGI from a different angle: strong on General Intelligence but initially weak on Artificial.
While I may not know much about Deep Learning, or Machine Learning, or statistics for that matter, I know plenty about intelligence. Everyone knows a lot about intelligence, although most of us are as conscious of this as a fish is of water. Each person is both generally intelligent and immersed in intelligence. When intelligence is everywhere, we have little need to think specifically about it. But we certainly notice what isn’t intelligent, like a fish notices when it’s in a boat.
With the approach I’m taking, there’s no need to explicitly define or formally reproduce general intelligence because that’s precisely what will be converged upon.
Pairing a computer with a single person
My approach to AGI pairs a computer with a single person.
The computer’s role is to:
- Gradually become a digital mirror of the person’s behavior.
- Make the person’s role rewarding and ever-easier.
The person’s role is to:
- Continue being a person.
- Use the computer to digitize their behavior.
The result of this pairing is a hybrid human-computer system that is an AGI on day one. It differs from the usual approach in that it begins strong in General Intelligence (the person) but weak in Artificial (the digital side). So we get something like a seed from which an AGI will grow.
While the Artificial side begins very small, it grows as the person digitizes their behavior. The more behavior that is digitized, the more information the computer has to work with to better fulfill its role.
The initial capabilities and algorithms are mind-bendingly simple, leaving plenty of future opportunity to integrate existing Artificial-side research: deep learning, machine learning, and everything AI. The sort of stuff I know little about.
From the person’s perspective, they are engaged in a feedback loop with an ever-improving digital mirror of their behavior. While this mirror is laughably simple and incomplete at the beginning, the person perceives it to be intelligent because of who and what it is reflecting. This mirror won’t tell the person something they don’t already know, so there’s little to disagree with. Nor is it going to “do” anything stupid, because the perfectly-matching perspective, justifications, and rationalizations are already built into the one-person audience.
From the beginning, the digital mirror is capable of offering insight into the person’s future, making it seem actively intelligent. It becomes a prediction engine fueled by self-provided behavior data. Since human intelligence is based largely on predictive ability, we are especially impressed by accurate and detailed predictions from credible sources about our future.
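To make the idea concrete, here is a minimal sketch of what a prediction engine fueled by self-provided behavior data might look like. This is a hypothetical illustration, not Benome’s actual algorithm: it assumes the person logs actions as simple labels and that the mirror predicts the next action from observed action-to-action transition frequencies.

```python
from collections import Counter

class BehaviorMirror:
    """Hypothetical sketch: predict a person's next action from their
    own logged behavior, using simple transition-frequency counts."""

    def __init__(self):
        self.log = []                 # ordered history of logged actions
        self.transitions = Counter()  # counts of (previous, next) pairs

    def record(self, action):
        """Digitize one unit of behavior."""
        if self.log:
            self.transitions[(self.log[-1], action)] += 1
        self.log.append(action)

    def predict_next(self):
        """Predict the most likely next action given the last one."""
        if not self.log:
            return None
        last = self.log[-1]
        candidates = {nxt: n for (prev, nxt), n in self.transitions.items()
                      if prev == last}
        if not candidates:
            return None
        return max(candidates, key=candidates.get)
```

Even something this crude exhibits the property described above: the more behavior the person digitizes, the richer the transition counts become, and the more often the mirror’s predictions land.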
To get what we want, we often need to behave differently than we currently do. Whether we want to raise our status, advance our career, have a happy family, live long and healthy, solve a problem, or achieve any goal, we have to behave accordingly. It takes a lot of persuasion to change anyone’s behavior, even our own.
Predictions from a credible source are highly persuasive. The assessments of our future from our doctor, lawyer, and even our favorite psychic are much more influential than those from Jim the delivery guy. Our doctor has credibility because he bases his predictions on high quality information — education and experience combined with observations and tests — and so they’re quite accurate. Same with our lawyer and our psychic: the more they get right, the more credible their future predictions are to us.
At the beginning, our advisors’ predictions are fairly simple and general because they have only basic information about us. If they get enough right early on, we allow the relationship to develop, enabling them to better understand us and make more detailed and insightful predictions. Good predictions further cement their credibility and value to us, amplifying their influence.
In this hybrid system, the predictions begin almost immediately, and they’re surprisingly accurate because they’re based on high-quality information. Each prediction gets tested for quality against the person’s thoughts. It works a bit like this: “The computer is predicting that I’ll wash the dishes soon and, wow, the dishes really do need to be done and it’s literally on my mind right now!” As those kinds of confirmed predictions accumulate, the computer side’s credibility increases, and with it its persuasive influence.
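One simple way to model that accumulation of credibility (a hypothetical sketch, not how the system necessarily does it) is an exponential moving average: each confirmed prediction nudges a credibility score toward 1, and each miss nudges it toward 0.

```python
def update_credibility(score, hit, weight=0.1):
    """Nudge a credibility score in [0, 1] toward 1 on a confirmed
    prediction (hit=True) and toward 0 on a miss (hit=False)."""
    target = 1.0 if hit else 0.0
    return score + weight * (target - score)

# Start neutral; a run of confirmed predictions builds credibility,
# while a miss erodes it only partially.
score = 0.5
for hit in [True, True, True, False, True]:
    score = update_credibility(score, hit)
```

The weight controls how fast trust is won or lost; a small value matches the essay’s point that credibility is cemented gradually, over many small confirmations.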
Quantitative reflections — like your monthly electricity bill, your rising fuel costs, or a breakdown of your food expenditures — are also persuasive, and an interactive digital mirror is an ideal source of them. The mirror can even be distorted slightly, and we know how effective reality distortion fields are. Because the system deals with digitized behavior, methods like social pressure can come into play by sharing selected views of the information via the Internet.
As a burgeoning predictive engine based on information about a single person, this system has tremendous persuasive potential. And since we must change our behavior to get what we want (or we’d already have it), this is how we get it.
Converging upon intelligence
We humans are especially skilled in noticing when things are wrong. Nonsense and stupidity provoke intense emotional responses, such as the rage we feel when our computer (or customer service agent) goes off the rails. In conversation, we feel out our counterpart to assess whether their status and intelligence are on the same plane as ours. It’s an innate ability to tease out minute differences and quite normal to make value judgments accordingly.
By tightly integrating with a person in an interactive feedback loop, this hybrid system has an innate and active bias toward greater intelligence. That’s because there’s always an expert to judge whether they’re experiencing intelligence or something else. Coming from this perspective, anything about the system that evokes a negative emotional response is a critical bug to be fixed as soon as possible.
The more intelligent the system, the more persuasive it becomes. The more persuasive it is, the tighter the alignment between the person’s behavior and what their goals require. When it’s clear the system helps us get what we want, we’ll value it more and are motivated to continue the feedback cycle.
The digital side’s only objective is to align ever-better with the highly refined expectations of the person. If perfect alignment is ever achieved, we’ll have a purely digital standalone AGI on our hands. Until then, we have a hybrid human-computer AGI of ever-increasing intelligence that we can all tinker with to our heart’s content.
Fellow tinkerers, feel free to get in touch via Twitter or poke around for my e-mail address. I’ve been at this for too many years now, so there’s plenty more to talk about!
The digital side of my AGI effort is Benome. It does a pretty good job of capturing a person’s behavioral essence and reflecting it back, all in service of the person’s goals.
For more detail, read this four-part series that covers the fundamental data structure, algorithm, user interface, and real-time visualization: