Making sense of our
I don’t know about you, but I’m tired of all the inflated language about AI in the news these days. Phrases like “killer robots,” “AI with the ability to destroy humans,” and “machines taking over our jobs” have been generously thrown around these past few months. Most of these articles are set against the backdrop of a doomsday story, complete with eerie-looking killer robots straight out of The Terminator. Take a look for yourself. Here’s a sample of headlines that have flooded my timeline these past few months.
Every breakthrough technology ever built has brought with it doomsday predictions of the fall of humanity. Whether it was Socrates objecting to writing and how it would ruin memory; the fear that the typewriter would destroy the relationship between writers and their words; the fear that the telephone would destroy hearing or, worse, kill you with electricity; the negative impact of radio on children; policies banning car radios in the 1930s; the vulgarization of America by TV; CNN’s famous ‘Emails hurt IQ more than pot’; or The Atlantic’s musings on Google making us all dumb … the list is endless. With the coming of every new technology, we’ve seen an uproar. As SFGate puts it in its article ‘Fear of the new is an affliction that’s quite old,’ the economic historian Joel Mokyr has argued that it is often felt that “new technology dehumanizes, turns people into slaves of their own technology, and is responsible for assorted social ills, from crime to loneliness.”
Looking back at this history, and considering all the growing applications of machine learning and robotics, I suppose it is only natural that we’re seeing this heightened rhetoric about the rise of the machines. The main issue is that most of this fear is existential. Will our own creations spell the end of our species? Are we already too far down a path without having consciously thought through how this all ends? The other issue is about what changes now. With AI, that fear is driven by the loss of jobs, the need for new skills, a different kind of economy, and shifts in social and moral values. It raises concerns about how AI affects what we do now, what changes, and what becomes obsolete. So alongside all the ‘be scared, they’re coming’ warnings, we’re also seeing a rapid rise in headlines that ask, ‘Will machines take your jobs?’ And it’s fair to say we’ve already been watching them take part in three different types of work (by no means an exhaustive list):
First, there’s the repetitive, labor-intensive work. We’ve seen smart cameras and monitoring alarm systems replace security guards (see Knightscope’s autonomous robots), the Roomba and its long list of friends become domestic help, remote baby and senior monitoring cameras aid caregivers, robots replace factory-floor workers, and drones and robots replace soldiers, rescue workers, and aid workers, to name a few. These have gone from being simple machines, like the Roomba and smart cameras, to dexterous robots that move and perform functions adeptly. In some cases they have begun displacing people from their jobs, but in most cases they are still aids, a helping hand.
Second, algorithms have become capable of digesting large amounts of data, recognizing patterns, and making suggestions and predictions. In this second form, they’ve become our translators (Google Translate), personal secretaries (Google Now), remote diagnostic experts, travel agents, fund managers, and financial analysts, among others.
In their third avatar, they’ve started taking on personalities, on a small scale. They’re not just doing tasks anymore, and they’re not just crunching data; they’re learning the meaning of behavior, of emotion, and of social interactions. While our science fiction, cartoons, and movies have a long history of sentient machines, we’ve only now started seeing them come to life, taking baby steps and being all awkward. Siri with her sense of humor and wit, Nao as a bank customer-service representative, Pepper with her ability to read human emotions and behavior, and the AI Mario are all pushing the boundaries of what machines can do.
Lastly, there are machines that are much more subtle and less ‘in your face,’ taking on quieter roles and some characteristics of all the types of work above. Robots as pets, the Nest and its contextual learning, and Grid.io and its effort to Design without Designers are all pushing on these boundaries. They’re being designed to integrate seamlessly into our lives rather than stand out.
But let’s pause here for a moment and look closely at these examples.
Every one of these is a system built to perform a set of pre-programmed tasks, within the constraints of a predetermined closed environment, in a highly optimal fashion.
The ones built for labor-intensive work are more about doing things repetitively and quickly than about making sense of data. The data-crunching algorithms are data monsters, entirely dependent on what is fed to them and learning only by churning through large amounts of data. And given that this data typically comes from limited domains or closed environments, they rarely learn ‘naturally’ in the context of ‘life,’ an entirely open environment. Google’s driverless cars are the closest we have to systems let loose in an open environment, having to react to the unexpected behavior of other entities. And even in their case, they’re essentially preprogrammed to avoid or stop.
Take Alexa, Amazon’s new ‘Siri in a box.’ In The Verge, David Pierce beautifully outlines his struggle and frustration with Alexa. He writes:
“If you know what you want to listen to, the Echo is usually helpful. Saying “play John Coltrane,” or “play ‘Turn Down For What’” is perhaps the fastest way I’ve ever found to get to either one. It’s the most magical thing about Alexa. On the other hand, simply saying “Alexa, play some music” is the most dangerous thing you do with the Amazon Echo. Alexa will, indeed, play some music, but there is absolutely no way of guessing what it will be. It’s always a playlist of songs available in Prime Music, either from your library or curated from other users’ public lists, and just when you think you’re safe with “Road Trip / Long Weekend” the next pick is “Songs to Annoy Your Parents.” “Alexa, stop!” I say, issuing the universal command for the Echo to cease and desist. But Alexa doesn’t hear me — she’s busy cranking the volume on a song that appears to consist only of animals being dragged through a woodchipper. “ALEXA, STOP!” “ALEXA VOLUME 4!” “ALEXA WHY MUST YOU TORTURE ME!”
Humans learn in context. We learn in entirely open environments, where the need to learn, act, react, and be rewarded or punished drives survival and behavior. We’ve evolved over centuries, through different civilizations, to become who we are, and our machines are not going to rise in a few decades to match the species we’ve become. While it doesn’t have to take centuries for our machines to get here, we must realize that we’re still in the process of giving them basic sensory perception. Most of our devices can’t even differentiate foreground from background without depth-sensing cameras. They are not aware, despite the tons of sensors inside them. Making meaningful sense of all the content coming in through those sensors, and knowing what to act upon, is still decades away.
We’re far away from the doomsday stories of AI becoming our overlords. That’s not to say it will never come to that. But it’s essential to move beyond this unhelpful rhetoric, which is causing more alarm than it is helping us chart meaningful ways forward. What if we could put all this fear-mongering behind us and find different ways to have these conversations? Here’s a start: a set of points for us to move this discussion forward.
- We must ask ourselves who we, as people of different societies, cultures, and religions, protect and destroy with AI (that is, how AI becomes power for some).
- We must plan what we, as makers, consumers, and traders of AI, want to change in the world we live in.
- We must intentionally design how we, as entities sharing space with these AIs, envision our relationship with the machines that will inhabit our world.
- And lastly, as creators of these machines, we must take the time to imagine the world they live in, the relationships they share with each other, and the laws they abide by. We must take the time to chart out the nature of their world.
Mad Street Den’s next blog post will dive deep into the founders’ views on the future of Machine intelligence. Stay tuned!