Tony Stark
Jul 28, 2017

The Good Machine

“One day, I realized all the dumb, selfish things people do… it’s not our fault. No one designed us. We’re just an accident, Harold. We’re just bad code. But the thing you built… It’s perfect. Rational. Beautiful. By design.”

— Root, Person of Interest

Image Credit: Person of Interest (CBS)

News coverage (hysteria) of artificial intelligence research is the hottest it has ever been. From China buying up Silicon Valley research startups to billionaire inventors feuding over the coming apocalypse, it’s easy to lose sight of what is actually at stake in the race to develop smarter and more complicated AI systems. Our world is violent and returning to its traditional state of affairs. There is no everlasting peace and there is no end of history. AI-assisted weapons and machines are fueling an arms race most people don’t even know is happening. For this piece, I would like to talk about our responsibilities concerning artificial intelligence.

First, some housekeeping: the term AI is thrown around as a catch-all without its users understanding or explaining that AI can mean a lot of things. The specifics can get pretty messy, but for the sake of simplicity we can break artificial intelligence down into two groups: strong and weak. What we have right now, in AI like Alexa or in self-driving cars, is weak AI — systems developed for very specific and narrow purposes. These AI do not have sentience and are not capable of the wide range of tasks that an artificial general intelligence (AGI), or strong AI, would be able to do. Where weak AI are designed for tasks, strong AI would be designed to replicate and/or improve upon the full range of human cognitive abilities. There is some philosophical debate over whether this would include replicating consciousness; in other words, would an AGI have consciousness, or would it simply be replicating the coded functions of consciousness? If you want to understand this argument in further detail, see the Chinese Room thought experiment.

Now, thought experiments aside, I’d like to take some time to explore AI in the public eye, what it means to create AI, and our responsibilities as creators.


Sentient machines have long had a place in science fiction. In fact, I like to argue that Mary Shelley’s Frankenstein (the first science fiction novel) is not only a horror story of science gone too far, but a commentary on artificial life that can be read in the 21st century as a story about artificial intelligence. I’ll save that argument for another time, though. In the 20th century, Isaac Asimov’s novels and short stories filled readers’ heads with dreams and nightmares alike of thinking machines. However, the concept of AI run amok really came to the public’s attention with James Cameron’s Terminator in 1984. The nuclear apocalypse and AI enslavement of humanity by the artificial super-intelligence (ASI) known as Skynet terrified the public and inspired defense contractors the world over. Since then, there have been dozens if not hundreds of films and TV shows about AI run amok. Rare is the good AI on the television or movie screen, and even though video game franchises like the iconic Halo feature helpful and good AI, few can keep themselves from diving into the “AI must destroy/control humanity” story line.

There is one show that I think does a good job of reflecting my personal beliefs on AI, and that is Person of Interest. Beneath its sci-fi crime drama premise, the show tells the story of the trials of an ASI’s creation, the philosophical struggles facing human and machine alike, and what a war between good and evil ASIs might look like. In a market flooded with “bad AI” story lines, Person of Interest shows that no person or machine is inherently good or evil, and that life is defined by the choices we make. Unfortunately, shows like POI are drowned out by our darkest fears and ignorance, and the public perception has become that humanity will eventually destroy itself through its creations — which, frankly, is a rehashed story line as old as the written word.

AI has become the modern vessel for the generational fear of our children — our creations — destroying all that we have built. There is little difference between the warnings of old traditionalists and conservatives against the inherent destruction of society by evolving norms and culture, and the apocalyptic warnings against creating an artificial intelligence. In my opinion, AI, like our children, are neither inherently good nor evil. While certain code or genes can predispose us to violence and destruction, our environment and upbringing matter a great deal. Now, would I inherently trust AI? No, but then again, I don’t inherently trust people either. AI are and will be a human creation, and therefore they come into the world imperfect. Like our children, they reflect our virtues and ingenuity just as much as they reflect our imperfections and darkest natures.

Therefore, when people like Elon Musk say that we must be proactive in our development and regulation of AI, I agree 100 percent, because if we’re not careful we could bring about our own destruction. However, many people take Mr. Musk’s warnings as visions of inherently evil AI that are only capable of harm, and conclude that good AI are impossible. That’s not what he’s saying, but it is what many hear. On the other hand, when people like Mark Zuckerberg say we have nothing to fear (for now), it is irresponsible. AI research is growing at an exponential rate, and we are in unknown territory with regard to how to handle an artificial intelligence as we teach it and watch it grow. The fact is we don’t know where or when the point of no return is for artificial intelligence.

In fact, Facebook’s own researchers are struggling with this very problem: their experimental AI are becoming uncontrollable creations in the lab, and we have no idea, let alone rules, about how to handle an evolving AI in the wild. And while that scenario has yet to come to fruition, it is irresponsible and inconceivably stupid to assume that an AI of a caliber equal to or greater than what is being created in labs around the world right now will not eventually escape or be freed. While I believe AI should philosophically be treated like children and can be raised to be good or evil, I am still fearful of our recklessness. I may believe in their capacity for good, because I believe in humanity’s capacity for good, but I still think they would be like a child with a box of grenades if released into the wild. Eventually a bad AI will get loose, whether by design or by accident, and when that time comes the only way to stop it will be with the assistance of good AI.

As a techie and national security wonk, it is for the very security of humanity that I argue for AI development, and it is because of that same security — and the chaos of human nature — that I also argue for responsible AI development and a set of standards to ensure that our children do not destroy us. Skynet is not waiting in the wings to destroy us all, but there is no line of code that gives us a benevolent machine, either. We can build a merciful God or we can build Ultron, and which we get is entirely up to how we decide to handle our creations. The survival of humanity may depend upon the success of artificial intelligence, but we won’t always be in control, so we had better make the right choices while we still can.

