They say you can’t halt progress. But that doesn’t mean we can’t redefine it.
Back in January 2014, Google acquired the Artificial Intelligence (AI) company DeepMind. As part of the announcement, both companies promised the formation of an AI Ethics Board to guide the creation of AI technologies. However, as a Huffington Post piece about the acquisition noted, DeepMind's Shane Legg was quoted in a 2011 interview as saying:
“Eventually, I think human extinction will probably occur, and technology will likely play a part in this… (AI) is the ‘number 1 risk for this century.’”
First off, props to DeepMind for making the formation of an ethics board part of the agreement when being acquired by Google. However, since the acquisition was announced in January, I haven’t been able to find any updates about the Board whatsoever: who is on it, what it may cover, or any other information. If I’ve missed recent announcements, please let me know; it’s also fair to assume that even if the Board has been created, Google/DeepMind may not announce any updates until there’s actually something to share.
That said, especially considering Legg’s concerns, why hasn’t the Board, or at least a basic set of rules about the ethics of creating AI, been established by now? I can think of a number of reasons:
- Google isn’t legally obligated to create the Ethics Board, unless there’s some aspect of its contract with DeepMind, unknown to the public, that it’s flagrantly violating.
- While ostensibly this Ethics Board and its findings would be made public (if only for the potential positive PR), I’m not aware that transparency was a condition DeepMind placed on Google.
- Revealing any guidelines around the ethics of Google’s AI efforts would create a firestorm of negative PR from academic and privacy circles, among others. It’s easier to simply move forward with the work, launch an AI product at some point, and deal with the consequences then. As with the ongoing scandals over technology like Street View, Google’s modus operandi is to push the envelope technologically and ethically without worrying about the ramifications, relying on its deep pockets and lobbyist army.
An Appeal to Google
I’m tired of being angry at Google. It’s an amazing company, and I use its technology every day in the form of search and other products. But I don’t agree that creating technologies that take away people’s freedoms is good business. It may be smart business, meaning savvy and money-making. But it’s not good business in the sense of fostering relationships that are transparent and build trust.
So here’s my appeal, Googlers: please announce some specifics about the AI Ethics Board ASAP. Delight us all with the experts you’ve asked to join this ground-breaking group who aren’t all board members, Silicon Valley types, or technocrats. Surprise us with choices of multi-racial, women-heavy, globally represented voices who can spark a conversation about humanity and ethics that captivates culture as much as it reaps profits. While defining our technological future, incorporate a breadth of wisdom not grounded solely in ones and zeroes, acknowledging humanity’s spiritual nature along with our foibles, passions, and desires.
Just because you can build something doesn’t mean you have to, or that you should. Just because “you can’t stop progress” doesn’t mean we can’t consider how to manage AI in the near and present future within a context not driven solely by revenue. Most of the research I’ve done on AI suggests that nobody truly knows if or when AI will reach human-level sentience, but automation is already among us. Militarized AI is already here. Rules or ethics surrounding AI don’t exist in any common form beyond Asimov’s outdated and fictional laws, which didn’t even work in the short stories where they were first envisaged.
And, not that you’ll likely ask, but if you’re looking for volunteers for your Ethics Board, I’d love to throw my hat in the ring. I’m a big fan of humanity, and I’d be keen to be part of the discussions that decide how it will or won’t move forward.