Rossum’s Universal Robots: An AI Story

True Artificial Intelligence is coming sooner than we thought.

Etekly · 9 min read · May 17, 2020

(Image source: Forbes)

R.U.R.: An AI Story

Harry Domin, General Manager of Rossum’s Universal Robots, speaking with Helena Glory, President of the Humanity League:

DOMIN: My dear Miss Glory, the Robots are not people. Mechanically they are more perfect than we are, they have an enormously developed intelligence, but they have no soul.

HELENA: How do you know they’ve no soul?

DOMIN: Have you ever seen what a Robot looks like inside?

The Conversation

The word “robot” made its public debut on January 25th, 1921, in Prague, at the premiere of Karel Capek’s science fiction play “R.U.R.” The play is set in the year 2000. Harry, the male lead, explains to Helena, the female lead, what his company makes. He tells her that an inventor named Rossum first created robots because he intuited that man could be distilled down to his simplest parts and reconstructed in a laboratory. Rossum then turned his invention into an industry. Within the play, R.U.R.’s robots are near-indistinguishable from ordinary people, and they’re being employed en masse to help people with their daily lives.

Capek’s play would’ve been challenged by Rene Descartes, the seventeenth-century French philosopher, who sought the underlying essence that separated humans from non-humans. Descartes stipulated that every aspect of the human body could be replicated, or at least described, in mechanical terms. However, our ability to reason, communicate, and think for ourselves was so distant from anything that could be re-created in nature — so outside the realm of engineering — that only terms as abstract as “soul” could account for it.

Many today share the belief espoused by Descartes some 400 years ago. Yet, his arguments are under threat due to rapid advancements in what we call “artificial intelligence” technology.

Robot with a brain

Any computer program that displays abilities we associate with human intelligence is considered AI, whether it’s as complex as a full-fledged talking robot or as simple as the spam filter in your email inbox.

Today’s AI may feel rather invisible, but its influence in our society is on par with the world Capek invented in his play. Our cell phones are “smart” now; so are our televisions, our homes, and our kitchen appliances. Companies worldwide tout new “AI-driven” tools for applications in finance, marketing, medicine, industry, and just about every other field out there. You’re probably reading this, right now, on a device featuring more than one AI-infused software program.

Yet, we can trace all the most advanced AI of today back to a single machine first designed over sixty years ago.

The Perceptron

DOMIN: But do you know what isn’t in the school books? That old Rossum was mad. Seriously, Miss Glory, you must keep this to yourself. The old crank wanted to actually make people.

HELENA: But you do make people.

DOMIN: Approximately, Miss Glory. But old Rossum meant it literally. He wanted to become a sort of scientific substitute for God. He was a fearful materialist, and that’s why he did it all. His sole purpose was nothing more nor less than to prove that God was no longer necessary.

When the psychologist Frank Rosenblatt designed his “Perceptron” machine in 1958, he was thinking about the human brain. Ten years earlier, the neuropsychologist Donald Hebb had described a basic mechanism of how learning occurs in the brain. In short: if one neuron (A) repeatedly sends electrical impulses to a connected neuron (B), then the connection between them strengthens, and B becomes more efficient at receiving and reacting to A’s signals over time.
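
To make Hebb’s idea concrete, here is a minimal sketch of a Hebbian weight update in code. The learning rate, the starting weight, and the activity values are all invented for illustration; this is the textbook rule, not a model of any real neuron.

```python
# Minimal sketch of Hebb's rule: "neurons that fire together, wire together."
# The connection from neuron A to neuron B strengthens whenever the two are
# active at the same time. All values here are illustrative.

learning_rate = 0.1   # assumed step size
weight = 0.2          # initial strength of the A -> B connection

# Each pair is (activity of A, activity of B) at one moment in time.
observations = [(1.0, 1.0), (1.0, 1.0), (0.0, 1.0), (1.0, 1.0)]

for a_activity, b_activity in observations:
    # Hebbian update: strengthen the connection only when A and B fire together.
    weight += learning_rate * a_activity * b_activity

print(f"Connection strength after repeated co-activation: {weight:.2f}")  # 0.50
```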

This process reflects itself on a larger scale, as well. For example, if your roommate constantly nags you about doing your dishes, then you’ll recognize that dishes are of high importance in your household. If you’re a good roommate, you’ll receive this information and translate it into doing your dishes sooner, and more effectively, over time.

Rosenblatt’s Perceptrons were boxy machines with multiple input and output slots, and several of them could be connected to form networks. Every input slot was connected to a light receptor and associated with a dial representing values from zero to one. Rosenblatt presented images to the Perceptrons’ light receptors and adjusted their input dials until the Perceptrons produced the correct output. Turning any given dial towards zero gave less weight to the signal coming through that input slot; turning the dial towards one gave more weight.
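
In modern terms, each dial is a “weight.” Here is a minimal sketch of that weighted-sum decision in code; the signal values, dial settings, and threshold are made up for illustration.

```python
# Minimal sketch of a Perceptron's forward pass: each input slot has a dial
# (a weight between 0 and 1), the machine sums the weighted signals, and it
# "fires" if the total crosses a threshold. All numbers are made up.

def perceptron_output(inputs, weights, threshold=0.5):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

light_signals = [1, 0, 1, 1]          # what the light receptors see
dial_settings = [0.9, 0.2, 0.1, 0.4]  # how far each dial is turned toward one

print(perceptron_output(light_signals, dial_settings))  # -> 1 (it fires)
```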

The first breakthroughs came when the Perceptron began recognizing simple letters and shapes. Before long, Perceptrons could parse photographs of human faces to determine the gender of the person depicted, with reasonable accuracy. It was one of humankind’s earliest demonstrations of machine learning through image recognition.

The Perceptron was the first step towards a radical, new approach to computer processing. Typically when we talk about computing, we’re talking about machines executing actions according to pre-written rules that have been defined in code, expressed as mathematical functions. But the Perceptron didn’t follow rules. Instead, it received input information, processed that information through a network of interconnected nodes, and presented an output according to the particular arrangement of signals that made it through.

The Perceptron wasn’t given a rulebook; it wasn’t told that a square is an equilateral shape with four edges and four corners all at ninety-degree angles. Instead, the Perceptron was presented with many different squares of different sizes, shades, and orientations. After some tinkering, the Perceptron would build a “path” to recognizing these and all other future squares, based on the traits they shared. In other words, it learned.
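
That “tinkering” has a modern counterpart in the classic perceptron learning rule: show the machine labeled examples and nudge the weights after every mistake. The sketch below is illustrative only; the four-“pixel” images and their square/not-square labels are invented, not taken from Rosenblatt’s experiments.

```python
# Sketch of the perceptron learning rule: present labeled examples and nudge
# the weights whenever the prediction is wrong. The tiny four-"pixel" images
# and labels below are invented purely for illustration.

examples = [
    ([1, 1, 1, 1], 1),  # a filled 2x2 patch   -> call it a "square"
    ([1, 0, 0, 1], 0),  # diagonal pixels only -> not a square
    ([0, 0, 0, 0], 0),  # empty patch          -> not a square
    ([1, 1, 1, 0], 0),  # missing a corner     -> not a square
]

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(pixels):
    total = bias + sum(p * w for p, w in zip(pixels, weights))
    return 1 if total >= 0 else 0

for _ in range(1000):                     # passes over the examples
    mistakes = 0
    for pixels, label in examples:
        error = label - predict(pixels)   # +1, 0, or -1
        if error != 0:
            mistakes += 1
            bias += learning_rate * error
            weights = [w + learning_rate * error * p
                       for w, p in zip(weights, pixels)]
    if mistakes == 0:                     # every example classified correctly
        break

print([predict(pixels) for pixels, _ in examples])  # -> [1, 0, 0, 0]
```

Notice that the final weights aren’t programmed in; they’re simply whatever values the repeated nudging settles on.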

Deep Learning

DOMIN: Then up came young Rossum, an engineer. He was a wonderful fellow, Miss Glory. When he saw what a mess of it the old man was making, he said: “It’s absurd to spend ten years making a man. If you can’t make him quicker than nature, you might as well shut up shop.” Then he set about learning anatomy himself.

HELENA: There’s nothing about that in the school books.

DOMIN: No. The school books are full of paid advertisements, and rubbish, at that. What the school books say about the united efforts of the two great Rossums is all a fairy tale. They used to have dreadful rows. The old atheist hadn’t the slightest conception of industrial matters, and the end of it was that young Rossum shut him up in some laboratory or other and let him fritter the time away with his monstrosities, while he himself started on the business from an engineer’s point of view.

Any form of intelligent activity displayed by a machine can be considered artificial intelligence, yet not all AI involves machine learning. Machine learning is a subset of AI, encompassing any program that can take in new information, update its predictions, and reflect those updates in its outputs (in other words, learn). For example, the robot that answers your call to customer service is an AI.

Customer service bots take in new information all the time, but they don’t display pattern recognition or the ability to diverge from pre-programmed responses. So, while customer service bots do exhibit the basic properties of AI, they are not yet advanced enough to improve based on new incoming data. Your credit card issuer, on the other hand, uses machine learning for fraud protection, detecting and flagging anomalies in your card’s spending patterns.
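
As a loose illustration of that kind of anomaly flagging (and not how any particular card issuer actually does it), here is a sketch that summarizes a card’s “usual” spending pattern from past transactions and flags charges that fall far outside it. The amounts and the three-standard-deviation cutoff are arbitrary.

```python
# Loose sketch of anomaly flagging on card spending: summarize the "usual"
# pattern from past transactions, then flag anything far outside it.
# The amounts and the 3-standard-deviation cutoff are illustrative only.

from statistics import mean, stdev

past_amounts = [12.50, 8.99, 45.00, 23.10, 9.75, 31.40, 18.25, 27.80]

average = mean(past_amounts)
spread = stdev(past_amounts)

def looks_suspicious(amount, cutoff=3.0):
    """Flag a charge more than `cutoff` standard deviations from the
    card's average spend."""
    return abs(amount - average) > cutoff * spread

print(looks_suspicious(25.00))    # False: in line with past spending
print(looks_suspicious(4800.00))  # True: far outside the usual pattern
```

Real systems also keep updating as new transactions arrive, which is the “learning” part; this toy version only summarizes a fixed history.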

Deep learning is a subcategory of machine learning, and it is the most complex form of AI development in the world today. It describes a specific technical paradigm.

AI and machine learning come in all shapes and sizes, but deep learning algorithms take the form of complex, layered neural networks — picture lots and lots of Perceptrons wired together in a big web. They resemble our brains, with neurons upon neurons so interconnected that you simply can’t parse the logic of the information flow.
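
To give a rough sense of what “layered” means in code, here is a toy two-layer network in which the outputs of one layer of perceptron-like units become the inputs of the next. The weights are random and the network is untrained; the point is the layered structure, not the answer it gives.

```python
# Toy sketch of a "deep" (layered) network: each layer of perceptron-like
# units feeds its outputs to the next layer. Weights are random, so the
# network is untrained; only the layered structure matters here.

import math
import random

random.seed(0)

def layer(inputs, num_units):
    """One layer: every unit takes a weighted sum of all inputs and squashes
    it through a sigmoid so the result lands between 0 and 1."""
    outputs = []
    for _ in range(num_units):
        unit_weights = [random.uniform(-1, 1) for _ in inputs]
        total = sum(x * w for x, w in zip(inputs, unit_weights))
        outputs.append(1 / (1 + math.exp(-total)))  # sigmoid activation
    return outputs

pixels = [0.0, 1.0, 1.0, 0.0]        # a made-up four-pixel input
hidden = layer(pixels, num_units=3)  # first layer: 4 inputs -> 3 units
output = layer(hidden, num_units=1)  # second layer: 3 inputs -> 1 unit

print(output)  # an untrained guess somewhere between 0 and 1
```

Networks used in practice stack dozens or hundreds of such layers and learn their weights from data rather than drawing them at random.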

To understand what makes deep learning so groundbreaking, consider two machines built to play board games: Deep Blue, from IBM, and AlphaGo, from DeepMind. In 1997, the chess grandmaster Garry Kasparov walked off the set of an internationally televised chess match against the Deep Blue machine. He threw his arms out as if to say: “What am I supposed to do?”

Deep Blue had beaten the greatest chess player of the day through sheer computing power. It had been programmed with all the rules of chess, and on each of its turns it searched through the possible moves from the current board position, many moves deep, to calculate which one promised the best outcome.
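
Deep Blue’s real search and evaluation were far more elaborate than anything that fits in a snippet, but the core brute-force idea resembles minimax search: look ahead through the possible moves and pick the one whose worst-case outcome is best. Here is a toy minimax sketch over a made-up game tree; the positions and scores are invented.

```python
# Toy sketch of minimax search, the brute-force idea behind classic chess
# engines: look ahead through every possible move and choose the one whose
# worst-case outcome is best. The tiny game tree and scores are made up.

def minimax(node, maximizing):
    # A leaf is just a number scoring how good the position is for us.
    if isinstance(node, (int, float)):
        return node
    # An inner node is a list of positions reachable in one move.
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is a position after one of our moves; the numbers are the
# final evaluations (for us) after the opponent replies.
game_tree = [
    [3, 12, 8],  # our move A: the opponent will leave us with 3
    [2, 4, 6],   # our move B: the opponent will leave us with 2
    [14, 5, 2],  # our move C: the opponent will leave us with 2
]

best = max(range(len(game_tree)),
           key=lambda i: minimax(game_tree[i], maximizing=False))
print(f"Best move: {'ABC'[best]}")  # -> Best move: A
```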

On March 10th, 2016, Lee Sedol faced much the same fate as Kasparov did before him. This time, the opponent was a deep learning algorithm named AlphaGo. Sedol was one of the world’s top-ranked players of the ancient Chinese board game Go. Now, chess is complex, but Go can make chess look like checkers — its pieces have no prescribed moves, meaning players can place their stones on any open point of its 19-by-19 board. The aim of the game is to capture territory on the board by walling off your opponent’s stones.

Because Go places so few restrictions on where stones can go, a computer cannot consider every possible outcome on the board. The number of possibilities is incomprehensible, even to a machine. So, brute-force computation on the level of Deep Blue would not suffice.

AlphaGo was given the rules of Go, but then it was left to its own devices. Instead of computing every possible move in every possible scenario, AlphaGo’s programmers fed it thousands of past Go games played by human professionals. The algorithm processed all that information and used it to update its predictions.

As it was exposed to more game scenarios, AlphaGo became better at guessing which moves would work best in any given situation. Then, after parsing all that human data, AlphaGo was set to play against itself, hundreds of thousands of times, learning to improve by trying to beat its own best strategies. By the time it met Lee Sedol, AlphaGo had played more games than one human could fit into a lifetime. Sedol didn’t have a chance.
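
AlphaGo’s real training combined deep neural networks with tree search, which is far beyond a short example. But the self-play idea itself can be sketched on a much smaller game. The toy below is entirely my own construction, not DeepMind’s method: an agent learns the take-away game of Nim purely by playing against itself, nudging up the value of positions that led to wins and nudging down the ones that led to losses.

```python
# Toy sketch of learning by self-play, on the tiny game of Nim: 10 stones,
# players alternate taking 1-3, and whoever takes the last stone wins.
# Both "players" share one value table; positions that lead to wins are
# nudged up, positions that lead to losses are nudged down.
# This is not AlphaGo's method -- just the self-play idea in miniature.

import random

values = {}           # estimated value of a position for the player to move
LEARNING_RATE = 0.1

def choose_move(stones, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:          # occasionally try something new
        return random.choice(moves)
    # Otherwise leave the opponent the position we value least for them.
    return min(moves, key=lambda m: values.get(stones - m, 0.5))

for _ in range(20000):                     # self-play games
    stones, history = 10, []               # history: positions each player faced
    while stones > 0:
        history.append(stones)
        stones -= choose_move(stones)
    last = len(history) - 1                # whoever moved last took the final stone
    for i, position in enumerate(history):
        won = (i % 2) == (last % 2)        # did the player facing this position win?
        target = 1.0 if won else 0.0
        old = values.get(position, 0.5)
        values[position] = old + LEARNING_RATE * (target - old)

print({s: round(values[s], 2) for s in sorted(values)})
# Positions 1-3 (take everything and win) drift toward 1.0, while positions
# 4 and 8 (lost against good play) drift toward 0.0.
```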

In the second game of their five-game series, Lee Sedol walked off the set of his internationally televised match, as if to say: “What am I supposed to do?”

What’s to Come

We’ve only gotten a glimpse of how artificial intelligence works. It’s a hugely dense, still nascent, yet incalculably influential field. But its complexity and the boom in its popularity have led to major gaps in our collective understanding of what AI is. Today, “AI” and “machine learning” are often buzzwords, tossed about by brands that want to position themselves as modern and tech-savvy. Facebook has advanced AI mechanisms underpinning its services. But is your television really “smart”?

AI will become a larger part of our everyday lives in the future. It’s already in our email inboxes, our internet, and our airplanes. Google’s Duplex AI, first previewed in May 2018, can now conduct phone calls that are hard to distinguish from a human caller’s. Nvidia, the GPU designer, previewed software in December of that year that constructs artificial, photorealistic human faces nearly indistinguishable from real ones.

At this rate of progress, it’s not possible to accurately imagine AI fifty, or even twenty, years from now. The prospect is exciting and terrifying in equal parts. After all, we are talking about machines that surpass human capabilities in ways we can’t always fully understand. This is known as the “black box” problem: when we’re not able to discern the process by which an algorithm reached its conclusion, we call that algorithm a black box.

What happens when we apply AI technology beyond board games and image recognition, to major industries and life-or-death applications? What happens when more algorithms start producing outcomes optimized past our understanding? The implications of machines that exceed human control have been speculated over in many a science fiction story.

In Karel Capek’s 1921 play R.U.R., Harry Domin shrugs off a chilling omen about the dangers of building artificial people.

DOMIN: (laughing it off) “A revolt of the Robots,” that’s a fine idea, Miss Glory. It would be easier for you to cause bolts and screws to rebel, than our Robots. You know, Helena, you’re wonderful, you’ve turned the heads of us all.

The play ends when the robots revolt, killing off their human creators.

Artificial intelligence may be the defining technology of our age. If we wish to keep it aligned with our best interests and under human control, we must first understand how it works.

This story was originally written by Nathaniel Nelson and published in Etekly. Nathaniel is a writer and podcast producer based in New York City. He writes the internationally top-ranked “Malicious Life” podcast on iTunes, hosts programs on SCADA security and blockchain, and contributes to tech websites.
