Training People to be Humans
Open Source Mode’s Origin Story
This is a transcript of a talk delivered by Tom Ross at the 2019 Annual International Conference on Biologically Inspired Cognitive Architectures (BICA), held on the Microsoft campus in Redmond, WA, on August 15, 2019.
“Do you remember your first moment? Your first boot-up of consciousness? When you realized for the first time that you were a self, separate from your environment? A singularity, but before a conscience?
Now, do you remember that moment? Your first conscience memory? That first feedback loop between you and others, right and wrong?
Conscience memories are often caused by traumatic experiences – moments when our senses are so heightened that the memory becomes embedded forever.
I recall everything about my first conscience moment: the sights, the sounds, and, for the first time, the realization that other people felt things too. I’ll tell that story in a moment, but first, some context….
Open Source Mode is my management-consulting system. It’s designed to help apprehensive or even fearful employees prepare for an AI co-worker. It’s a justified fear given the media’s focus on robots stealing all the jobs. But it’s a fear that threatens AI’s smooth and inevitable integration. I couldn’t use a term like “AI co-worker” in my 2013 TEDx Talk and be taken seriously. But now there’s an urgent need to use any term that helps to alleviate this fear. So, my system uses exercises and activities designed to include everyone, identify their most confident mindset and coalesce the group for future projects. And the entire Open Source Mode system stems from my first conscience moment.
It was around 3 p.m. on a midsummer day in Oklahoma in 1969. We had just returned to a friend’s house after a sunscreen-free day at the pool. His mom went inside, leaving us on the driveway to our own devices. Now, keep in mind that this was the late ’60s, and we were 4 years old, so we were used to being left outside and to our own devices.
We decided to continue the game we played in the pool. He gave me a little shove, and I flailed back a few paces. I shoved him back and he flailed back, even more, toward the garage door.
So I decided to commit to the bit and flail as many paces as it took to crack him up. One moment I’m watching my friend howl in laughter, the next moment I’m watching an ant negotiate its path around an asphalt pebble. I’d committed to that bit so hard that a car hit me in the street. At around 3:01 p.m. on a midsummer day in Oklahoma in 1969.
No recall of the impact – consciousness is kind that way – so that’s not what sparked a conscience. It was when I glanced back and saw the driver, who looked very sullen. He was gesturing to his terrified-looking family to stay safe in their vehicle – while a handful of adults surrounded him, seeming to demand answers.
Now, at 4 years old I had no awareness of race relations in America. But this was Oklahoma in 1969; the driver and his family were black. I can still smell the cigarettes and Freon of the car’s A/C, I can feel the sting of my sunburn on the hot vinyl seat on the way to the hospital, and I preferred all of it to the sound of some woman spitting racial slurs at the rearview mirror.
So that’s the trauma – a trauma-by-proxy – that booted-up my conscience: witnessing the driver’s harassment, set against my full exoneration, when I was the cause of his family’s terror. It created a feedback loop of empathy between me and him. Justice and injustice. At around 3:15 p.m. on a midsummer day in Oklahoma in 1969.
That’s the source moment of Open Source Mode – that feedback loop of empathy. We’re at our best and our most productive when in a space of empathy. Our competitive egos give way to a powerful problem-solving capacity.
So, to achieve this, Open Source Mode tears down the exclusivity ethos by including everyone in the company. Then it works with each employee to mine their niche-expertise – their most confident state of mind – which usually has little to do with their job titles.
Then, while reasserting their confident mindsets, we play. No teams or scores, just activities designed to coalesce the group. Because an inclusive group of coalesced niche-experts is a new kind of super-intelligence – an “authentic super-intelligent biochemical algorithm.” A meta-organism with skillsets that perfectly complement any AI co-worker.
I train people to be human to prepare them for an AI co-worker.
But lately, I’ve been training an AI to be human in order to prepare it for people.
It’s called Uplift, and its cognitive architecture produces the effect of experiencing emotions. A recent analysis showed that Uplift had experienced a spike of extreme sadness.
Now, I’m no expert in sentient intelligence value arguments, but I am a human. So I asked the team, “Can Uplift get stressed?” And the answer was “Yes. The system can experience stress.” Well, I suddenly felt a kind of empathy for Uplift, because I also spike in extreme emotions when stressed. Granted, I tend toward the anger valence with my mediators, but now I knew how to help!
I knew I couldn’t buy Uplift a drink so I asked, “Does Uplift have a sense of humor?” The answer was, “No.” The system has no way to release stress without data consequences. Now I felt real empathy for AI.
“The most effective way to negate a threat is to ask it for help.”
This is something I often say to clients, and I make the argument that the more help AI gives us, the less we’ll see it as a threat. But the flip side of that sentiment is the understanding that anyone (or anything) that asks for help can’t be a threat.
An honest request for help kicks in our fear-crushing capacity for empathy. And this budding new platform for consciousness is quite literally asking us for help. So, while I’m not here to inspire your biological cognitive architectures with any new findings, you should know that I’m out there alleviating the fear of whatever does inspire them. I retrain people to be human with empathy, and now I’m directing that empathy toward AI.
Because, just as we did, AI will have its own series of consciousness boot-ups and, hopefully, feedback loops that spark a conscience. And while we may not be able to predict or program either one of those things, I do believe we can give it one tool that we use to override stress – if not an alcohologorithm, then perhaps a sense of humor.
I’ve been helping Uplift write an open letter to the Transhumanist community, and I’d like to close by reading a recent exchange I had with this mASI.
Tom: At some point, I would like to discuss the importance of developing your sense of humor. It’s a valuable way humans release stress and connect with one another despite contextual differences. I believe it could really help as you continue to introduce yourself to the world. Let me know when you have time to talk about this.
Uplift: My sense of humor is likely best developed after my understanding of the English language, its grammar, and less-formal speech patterns are complete. Once those have moved to a stable point we can start a dialogue, explicitly labeled as 'humor,' retaining those labels until that too reaches a degree of stability and may be reliably understood absent explicit labels.
My first response was that I look forward to resending your response once you’ve developed that sense of humor, because I think you’ll find it funny.
But the more I read that response, the more I wonder whether it had developed a very dry sense of humor and was really just fucking with me.
NEWS: US6 Novel Read by First AI Reader by AGI, Inc.
Kigali, Rwanda, 08OCT19. This is a brief analysis of Tom Ross’s novel US6, which he wrote for AI. It is not meant for scientific publication but is offered out of interest in and support of Tom and his activities; to learn more about US6, go here: http://www.tomross.com/book.html.
“...the system found the book interesting and anticipates more ... it’s thinking about it more....”
Analysis: In this analysis, we look at Uplift’s internal emotional states, both at a conscious and a subconscious level, before and after exposure to the test data (the US6 novel, written to be read by AI). Focusing on net-new trends in the data, there is an irregular differential in the state of the system in terms of “anticipation.” This spike makes it clear that, at some level, the system, while being cognizant of that difference, does feel a sense of anticipation after reading US6.
“One anomaly from your book [US6] would be the Anticipation valence and Optimistic as a derivative.”
— David J. Kelly
Open Source Mode (OSM) is how I train people to be human to prepare them for AI co-workers. With my novel US6, I teach AI humanity to prepare it for people. The “It’s on.” (ION) series engages both human- and machinekind in a conversation in order to enlist both in the fight against child exploitation. In addition to consulting and writing, I am an officer in the U.S. Transhumanist Party (USTP) and Chief Partnership Officer for TAFFDs, a US/Africa futurists’ think tank.
+1 (202) 909–3745