The place of robots in society

Billions of dollars have been spent on AI research and robot manufacturing in the past decade. However, robots are still far from flooding the streets: their cost makes them prohibitive for individuals to buy and use, while big manufacturing companies invest millions to automate their production chains. Still, robotics has a bright future: hardware costs keep decreasing, and AI developments around natural language processing, hand dexterity and facial expressions make robots more and more human-like. Could we imagine a day when robots would be considered special citizens?

Civil record

This question does not look easy to tackle. What does it mean for a human to be a citizen? It means that you are part of society, both socially and legally. You have a civil record with a birth certificate, as well as rights and responsibilities. As a result, citizenship is defined by law and differs by country. Let's take dogs as an example. Many countries regulate animal treatment, and President Trump recently (November 2019) made animal cruelty a federal crime with a penalty of up to seven years in prison. Yet even though dogs have a right to a decent life, there is no compulsory record of their existence, which makes it impossible to build a legal framework like the one we have for humans. Authorities cannot enforce the rights and responsibilities of entities whose existence is not known.

From a social and emotional point of view, dogs are often considered part of the family, and their deaths can bring deep sorrow. Specialized businesses have been built to provide services to dogs, from cleaning services to dog bathrooms and parks. Finally, dogs are given human traits and can be hilarious on online video platforms. To that extent, we can consider dogs part of society, in their own special way.

What about robots? Robots are not alive; they are not biological bodies, which raises religious barriers. Whether dogs have souls can be debated, while for robots there is no debate: there is none. However, this theological issue is tightly coupled with the fundamental philosophical question of self-awareness and identity (Descartes' cogito ergo sum, John Locke, Nietzsche, to some extent Freud's psychoanalysis…). Studies have shown that dolphins behave as if "self-conscious" in front of a mirror. For robots, recognizing yourself moving in a mirror does not look harder to implement than other challenges like voice or object recognition. The mirror experiment shows that you are aware of being distinct from your environment, that you exist in the world and can interact with it, that you can see yourself through the eyes of others and project yourself into the outside world. But consciousness is not only self-reflection; it is, more broadly, the capacity to simulate oneself in space and time. We have desires, we have goals, and we act accordingly in order to reach a desired state in the future. We are not at the stage of the Terminator movies yet, but this is the feat that researchers at the Creative Machines Lab of Columbia University tried to achieve on a robotic arm using a neural network.
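
To make that last idea concrete, here is a minimal PyTorch sketch of what a learned "self-model" can look like (emphatically not the Columbia lab's actual model): a network that predicts the arm's next joint state from its current state and the motor command it is about to execute, trained on transitions collected while the arm moves randomly. The dimensions and the toy dynamics below are made up for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 4 joint angles, 4 motor commands.
STATE_DIM, ACTION_DIM = 4, 4

# The "self-model": predicts the next state from (state, action).
self_model = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, STATE_DIM),
)

# Toy "motor babbling" data standing in for logged robot transitions.
states = torch.randn(256, STATE_DIM)
actions = torch.randn(256, ACTION_DIM)
next_states = states + 0.1 * actions            # made-up dynamics

optimizer = torch.optim.Adam(self_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    predicted = self_model(torch.cat([states, actions], dim=1))
    loss = loss_fn(predicted, next_states)      # how wrong is the self-image?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once such a model is reasonably accurate, the robot can plan by simulating itself: it evaluates candidate motor commands inside its own predictions instead of the real world, which is the loose sense in which it "projects itself" into the future.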

Beyond self-awareness, researchers are trying to develop robots that look like us; a key element is the ability of a human to recognize a human face. You can often tell whether somebody is male or female, whether somebody is happy, sad or angry, and sometimes whether somebody is lying, just by looking at their features and micro-expressions. This is a field of research which gave birth, among others, to Erica, a Japanese android.

I don't speak Japanese any better than you do, but this is kind of impressive. Her laugh is cute. The goal of the work is stated through the robot's mouth: “I’ve been working hard researching on how to build a future society where people and robots can build friendly relationships with each other. My vision is that robots will not be the replacement of humans. Rather we shall be recognized as distinct members of the society where we co-exist with humans. A society where robots are not recognized as machines, but as respected partners of humans. To make this future happen, I’ve been working hard to understand people’s feelings as well as my own.”

This is the perfect time to mention our friend Turing, who introduced the concept of the Turing test in 1950. A machine passes the test if you cannot determine whether it is a human or a machine from its answers in a conversation. Appearance does not count: imagine that you are communicating by text messages. This is the ultimate goal of any computer-generated natural-language conversation.

An important movie on the topic of consciousness is I, Robot. It goes over many ethical issues in a world taken over by a powerful AI, VIKI. But one of the robots was different: it gained full control of its will and actions after developing emotions and dreams. The movie builds on Asimov's laws, an attempt to lay the foundation of a legal system for robots:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, the AI rationalizes the First Law and concludes that humans will cause their own extinction. This is a possible scenario for a future world where robots have become part of our society. One of the keys to the catastrophe is that humans became dependent on machines: manual driving of vehicles was forbidden because it was deemed more dangerous than autopilot…

Another striking work on the topic is the series Westworld (stick to the first season if you don't like corpses). It is a true masterpiece on artificial intelligence and the path to the rise of consciousness. I feel like I have already spoiled too much. Do not watch the trailer if you plan to watch the series; it ruins the initial element of surprise…

Besides, AI research itself faces ethical limits. The question becomes: should we let AI go that far on the path to consciousness?

Elon Musk is among the pessimists and talks about his fears for the future. He explains that AI is a far greater threat than nuclear war and that it is still a totally unregulated field. Scientists think they are smarter than they are, and smarter than what they are creating. Musk takes the example of an AI dictator which, unlike humans, would never die.

The most famous popular movies on the subject are probably Terminator and The Matrix (Transformers to a lesser extent, as its machines are depicted as aliens and use a magic artifact to bring objects to life). Terminator deals with the war against the machines, while The Matrix describes a post-war world where machines have captured almost all humans and plunged them into a simulation in order to feed on their energy. This is obviously not a world anybody wants. But it becomes less and less of a fiction as years pass and scientists keep working relentlessly to push AI to new achievements.

While self-consciousness remains a taboo, there is one thing that everybody agrees on: an AI shall not be able to modify its own source code. But what does that mean? The limit may become more and more blurry. Machine learning is the art of training a machine through statistical inference, and the resulting algorithm is supposed to be tailored to a specialized task, like reading a handwritten document. Nonetheless, it has been shown that most of a neural network can be reused to solve other, similar problems. This is called transfer learning: a deep neural network is trained to recognize letters, then we retrain only the last layers to recognize numbers, as in the sketch below.
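
Here is a small, hypothetical PyTorch sketch of that letters-to-digits scenario: the convolutional layers, assumed to have been trained on letters, are frozen, and only a fresh final layer is trained on digits. The architecture and the checkpoint name are made up for illustration.

```python
import torch
import torch.nn as nn

# Illustrative network assumed to have been trained on 28x28 handwritten
# letters (26 classes); this is not a specific published model.
class LetterNet(nn.Module):
    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LetterNet()
# model.load_state_dict(torch.load("letters.pt"))  # hypothetical checkpoint

# Transfer learning: freeze the filters learned on letters...
for p in model.features.parameters():
    p.requires_grad = False

# ...and replace only the final layer with a fresh 10-class head for digits.
model.classifier = nn.Linear(32 * 7 * 7, 10)

# Only the new head is updated when retraining on digit images.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
```

The frozen filters already encode generic pen strokes and shapes, which is exactly why the network "knows" more than the single task it was trained for.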

Further on, a study carried out by Google Brain in 2017 tried to mimic the human brain more closely. Our widespread convolutional neural networks do a great job at analyzing images, doing NLP and much more: using non-linearities, the minimization of a loss function and a random initialization of the convolutional kernels' weights, the kernels converge in parallel towards different filters, each capturing a characteristic of the input data, which are then combined to make a prediction. If this is not clear to you, no problem, forget about it. Otherwise you can read this article from Saama that explains some limits of CNNs and lays the groundwork for Google's research on dynamic routing between capsules: Capsule Networks and the Limitations of CNNs by Soham Chatterjee. In short, what is a capsule? It is a part of a neural network that mimics a region of the brain. Each region of the brain is specialized in certain tasks, like vision or speech, and the connections between those regions can change as we learn throughout our lives. This is the behavior that Google Brain tries to replicate in an AI; a toy sketch of the routing mechanism is given below. Whether you find it scary or amazing, this goes in the direction of a future where an AI could slip and end up learning more general skills than the ones it was originally trained for, and you may not need to actually allow it to change its source code for it to autonomously acquire new skills.
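
For the curious, here is a minimal NumPy sketch of the "routing by agreement" step described in Sabour, Frosst and Hinton (2017) (reference 2 below). It is a toy illustration of the mechanism, not Google's code, and the capsule counts and dimensions are arbitrary.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Capsule non-linearity: keeps the vector's direction, maps its length into [0, 1).
    norm_sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

def route(u_hat, iterations=3):
    # u_hat[i, j]: what lower capsule i predicts the output of higher capsule j
    # should be; shape (num_lower, num_higher, dim).
    num_lower, num_higher, _ = u_hat.shape
    b = np.zeros((num_lower, num_higher))                     # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = np.einsum("ij,ijd->jd", c, u_hat)                 # weighted vote per capsule
        v = squash(s)                                         # higher-capsule outputs
        b = b + np.einsum("ijd,jd->ij", u_hat, v)             # reward agreeing votes
    return v

# Toy example: 6 lower-level capsules voting for 3 higher-level, 8-dimensional capsules.
u_hat = np.random.randn(6, 3, 8)
print(route(u_hat).shape)  # (3, 8)
```

The "regions that rewire as they learn" flavor comes from the last line of the loop: connections whose predictions agree with the consensus get strengthened on the fly, at inference time, rather than being fixed once training ends.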

You think that we are still far from it, right? Nonetheless, Facebook's AI research lab also ran into issues with a chatbot in 2017: the AI reportedly developed its own language to communicate autonomously, as explained in this article by Forbes magazine.

The article brilliantly defines the singularity as “a hypothetical moment in time when artificial intelligence and other technologies have become so advanced that humanity undergoes a dramatic and irreversible change.”

I hope I didn't scare you too much. We are living in a fast-growing world. SpaceX is currently working full-time on building the first spaceship ever meant to colonize Mars and plans to make it fly in 2020; who would have believed that a decade ago?

Space exploration is also a driver for the development of robots that can withstand extreme temperatures. But apart from making you dream about what humans can achieve in the future, this does not help figure out the place of robots in society… Well, this is actually my transition to quantum computing! Still not getting the point? It is the only thing I can think of that is actually crazier than building spaceships… and it is coming, slowly.

Indeed, one of the main limits to the development of AI today is computing power. AI has a computing demand that outgrows Moore's law, while electrical engineering struggles to keep up with Moore's law and computer science turns to distributed clusters to bridge the gap.

AI Needs Moore, Much More Than Moore, Roberto Saracco
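
To give a feel for the mismatch, here is a rough back-of-the-envelope calculation in Python. It assumes a doubling time of roughly 24 months for hardware (Moore's law) and roughly 3.4 months for the compute used in the largest AI training runs, the figure reported in OpenAI's 2018 "AI and Compute" analysis; both numbers are approximations.

```python
# Approximate doubling times, in months.
moore_doubling = 24    # transistor density (Moore's law)
ai_doubling = 3.4      # compute in the largest AI training runs (OpenAI, 2018)

years = 5
months = years * 12

hardware_growth = 2 ** (months / moore_doubling)
demand_growth = 2 ** (months / ai_doubling)

print(f"Over {years} years, hardware improves roughly {hardware_growth:.0f}x,")
print(f"while AI compute demand grows roughly {demand_growth:.0f}x,")
print(f"leaving a ~{demand_growth / hardware_growth:.0f}x gap to fill with bigger clusters.")
```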

There we see AlphaGo Zero all the way up at the top of the graph that Elon Musk was alluding to earlier: the agent that can annihilate anybody at Go after being given nothing but the rules. We see that the computing needs are really becoming critical. That is where advances in quantum computing become crucial. Google reached an important milestone in the field a bit more than a month ago: quantum supremacy. Basically, their 53-qubit computer carried out a task faster than any conventional computer on Earth (the best supercomputer at the time being Summit, an IBM machine at Oak Ridge National Laboratory). However, the task was sampling the output of a random quantum circuit, which is very specific, and such a computer is not a general-purpose computer (it is not Turing complete). You cannot browse the web with it, but you can sample a random quantum circuit, which is for now of very little use. If you speak French, I have a brilliant video on the subject from a PhD in physics.
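
A quick way to see why 53 qubits already overwhelm classical machines: simulating the chip naively means storing one complex amplitude per basis state, that is 2^53 of them. Assuming double-precision complex numbers at 16 bytes each (a common back-of-the-envelope convention), the state vector alone would need on the order of 140 petabytes.

```python
# Memory needed to hold the full state vector of a 53-qubit machine.
n_qubits = 53
amplitudes = 2 ** n_qubits          # one complex amplitude per basis state
bytes_per_amplitude = 16            # assuming double-precision complex numbers

memory_petabytes = amplitudes * bytes_per_amplitude / 1e15
print(f"{amplitudes:.2e} amplitudes ≈ {memory_petabytes:.0f} PB of memory")
```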

Fortunately, it should still take decades before a fully functional, general-purpose quantum computer can be built, if such a Turing-complete machine is possible at all. Unfortunate for AI, but fortunate for our society, because such an invention would breach the security of every industrial system on Earth, beginning with the banking industry and blockchains.

Also, some would say quantum uncertainty is the only known physical phenomenon that could explain free will, if we want to believe that human behavior is not predetermined the way a robot's is. And this is at the very heart of our topic.

We are now pretty much convinced that AI has the potential to become very complex in the coming decades and to take a growing place in our society. Arguably not as fast as Elon Musk builds spaceships, but it is promising. Dreaming about the future is easy, but we took care to ground our views in facts. I may now legitimately wonder: what is the current state of the art in robotics? Can I expect to go out tomorrow morning and see a robot walking down the street? Of course this won't happen, but would it be technically feasible if a company had any interest in testing its robot on a Friday morning on Telegraph Avenue? Our environment has been designed for humans: doors, stairs, taps… That is why designing humanoids enables them to use public facilities and be more easily accepted by society. I will quickly go over a showcase of today's achievements in the field.

First, OpenAI works on hand dexterity. This is crucial, as it is one of the skills that enabled humans to develop tools and, as a result, intelligence. Their robotic hand managed to solve a Rubik's Cube with only one hand in around four minutes. Impressed? Probably not if you actually practice the Rubik's Cube like I do, but the real feat is that it learned it on its own, from scratch, through reinforcement learning. As the world's best cubers solve it in under 10 seconds on average, this hand is definitely not amazing at it; I can do it faster, probably in under a minute. But the ultimate goal is not solving a cube, it is autonomously learning how to use a hand through continuous feedback from the outside world.
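
For intuition only, here is a tiny policy-gradient (REINFORCE) loop in PyTorch on a made-up one-step task. OpenAI's actual system is vastly larger and, as far as I know, relies on large-scale training in randomized simulations, but the core loop is the same: sample an action, observe a reward from the world, and make rewarded actions more likely.

```python
import torch
import torch.nn as nn

# Toy policy: maps a fake 4-dimensional observation to 3 possible actions.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    observation = torch.randn(4)                   # made-up sensor reading
    dist = torch.distributions.Categorical(logits=policy(observation))
    action = dist.sample()
    reward = 1.0 if action.item() == 2 else 0.0    # the "world" rewards action 2
    loss = -dist.log_prob(action) * reward         # make rewarded actions likelier
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```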

Meanwhile, at Oregon State University, researchers have impressively mastered bipedal walking with Cassie (while UC Berkeley researchers got great results mounting Cassie on wheels).

Imagine you've got a fire in a building, and the fire chief isn't really sure if somebody's still in the building. And they have to make a difficult decision about whether they are gonna send one of their firefighters in, because it is dangerous. If you have a robot that has the same capabilities as a person, you wouldn't think twice about sending that robot.

So far we have been speaking about the interaction between robots and humans, and about how robots can be designed to help and assist humans. Let's go a bit further. Imagine a way for man and robot to merge: you get a cyborg, or at least an exoskeleton. Some scientists in Korea have been working on it, and the result is a massive mech suit with a design similar to the one in the Avatar movie.

I wonder if, one of these days, down the track, my grandkids are gonna look back at me and say: why didn't you stop them, granddad? You were there in the beginning, why didn't you make them stop?

On the other side of the Pacific, Boston Dynamics has developed a super agile humanoid called Atlas. It can jump, do somersaults, backflips and much more…

They have even created a Cheetah robot which can slightly outpace Usain Bolt.

The Cheetah went slightly over 28 mph, while Usain Bolt's speed record is 44.72 km/h (27.8 mph), measured between the 60 m and 80 m marks of the 100 m sprint at the World Championships in Berlin on 16 August 2009, with an average speed over the race of 37.58 km/h (23.35 mph).

Bolting ahead of the Olympic 100-meter in 2016. (Photo: Cameron Spencer/Getty Images)

Last but not least, Yuki is a robot lecturer at Marburg University in Germany. It was conceived to assist with teaching in a broad variety of classes.

What does this all prove? That robots already show remarkable results on their path to human-like behavior and acceptance in a social environment, even though certain forms, like the mech suit, can be frightening when we think about their possible use cases. But this is also part of the vision: a society of humans, robots, and mixes of the two… I guess we will see that society within my lifetime, if we don't destroy ourselves before then (hence the spaceships: Elon Musk is not the kind of guy to stand still when he sees a threat).

To conclude, as scientists we should be aware of the risks that can emerge from our work. Nuclear technology, for example, can provide energy to entire countries and at the same time completely destroy those same countries. This dilemma is not new. Capitalism encourages innovation and AI development with a view to enhancing our daily comfort. But will that aim last? The military may be faster at adopting these technologies than our home designers and metro stations… So let's keep up to date with innovation and keep a critical eye on the prowess companies are hailing.

References:

  1. Capsule Networks and the Limitations of CNNs, Soham Chatterjee — https://www.saama.com/blog/capsule-networks-limitations-cnns/
  2. Sabour, S., Frosst, N., & Hinton, G. E. (2017). Dynamic routing between capsules. In Advances in Neural Information Processing Systems (pp. 3856–3866).
  3. AI Needs Moore, Much More Than Moore, Roberto Saracco — https://cmte.ieee.org/futuredirections/2019/11/20/ai-needs-moore-more-than-moore-actually/
  4. Facebook AI Creates Its Own Language in Creepy Preview of Our Potential Future, Tony Bradley — https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/
