
Living with Artificial Intelligence | Part 1

BBC Reith Lectures, 2021

Here I am, back again, with another interesting blog post. Today, we will be exploring the views of Stuart Russell as laid out in the BBC Reith Lectures, 2021. To begin, allow me to give a proper introduction to the Reith Lectures and to the speaker, none other than Stuart J. Russell.


The Reith Lectures were inaugurated in 1948 by the BBC to mark the historic contribution made to public service broadcasting by Sir John (later Lord) Reith, the corporation’s first Director-General.

Source: https://www.turing.ac.uk/news/living-ai-alan-turing-institute-hosts-prestigious-bbc-radio-4-reith-lecture-stuart-russell

Stuart Jonathan Russell, a British computer scientist known for his contributions to Artificial Intelligence (AI), is a professor of computer science at the University of California, Berkeley and adjunct professor of neurological surgery at the University of California, San Francisco.

After reading the introductions, I am pretty sure that some of you are thrilled enough to go and explore the lecture series. If you are one of them, then this is the perfect blog for you, since there is nothing better than a nice little introduction before diving into anything. But if you are from the other team, then I must tell you that this is the perfect blog for you as well. If you have ever heard of AI, then you must have wondered, at least once in your life, about the questions that are answered in this series. So, without any further ado, let’s dive in!

The Biggest Event in Human History


The First Reith Lecture of 2021 took place at the Alan Turing Institute at the British Library in London. In this lecture, Stuart explores the future of AI and asks: how can we get our relationship with it right? He reflects on the birth of AI, tracing our thinking about it back to Aristotle. He outlines the definition of AI, its successes and failures, and the risks it poses for the future. Referencing the representation of AI systems in film and popular culture, Professor Russell examines whether our fears are well founded. He explains what led him, alongside previous Reith Lecturer Professor Stephen Hawking, to say that “Success would be the biggest event in human history, and perhaps the last event in human history!” Stuart asks how this risk arises and whether it can be avoided, allowing humanity and AI to coexist successfully.

Key Insights of the First Reith Lecture

  • Machines don’t have an IQ. A common mistake some commentators make is to assume that machine IQ will exceed human IQ at some point in time. A trivial example: the Google Search Engine remembers everything, but still can’t plan its way out of a paper bag.
  • Turing’s 1950 paper, “Computing Machinery and Intelligence” is one of the stepping stones for AI, which introduced many of the core ideas of AI, including Machine Learning (ML). The paper also proposed what we now call the Turing Test as a thought experiment, and it demolished several standard objections to the very possibility of machine intelligence.
  • Stuart elaborates on the meaning of “success in AI”. Noting that intelligence in machines has always been defined as “Machines are intelligent to the extent that their actions can be expected to achieve their objectives”, he explains that machines, unlike humans, have no objectives of their own; instead, humans give them objectives to achieve, and, operating within this model, AI has achieved many breakthroughs over the past seven decades.
  • As AI moves into the real world, it collides with Francis Bacon’s observation from the Wisdom of the Ancients in 1609, “The mechanical arts may be turned either way and serve as well for the cure as for the hurt”. “The hurt” with AI includes racial and gender bias, disinformation, deep-fakes, and cyber-crime.
  • The goal of AI is and always has been general-purpose AI, i.e., machines that can quickly learn to perform well across the full range of tasks that humans can perform. Undoubtedly, general-purpose AI systems would far exceed human capabilities in many important dimensions, but at the same time, we are a long way from achieving general-purpose AI. Several conceptual breakthroughs are still needed, and those are very hard to predict.
  • He gives a plausible solution to Alan Turing’s warning, i.e., how to ensure that general-purpose AI systems (entities far more powerful than humans) never have power over us.
  • The real problem with making AI better is the objectives that humans specify for the machines. Once we move out of the lab and into the real world, we find that we are unable to specify these objectives completely and correctly. Stuart supported this statement with the examples of King Midas and Goethe’s Sorcerer’s Apprentice; the toy sketch after this list illustrates the same point.
  • Stuart concludes the first lecture by saying that if making AI better and better makes the problem worse and worse, then we’ve got the whole thing wrong. We think we want machines that achieve the objectives we give them, but actually we want something else.
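
To make that misspecification point concrete, here is a minimal Python sketch (my own hypothetical illustration, not code from the lecture). An agent that greedily maximises whatever objective it is handed will happily exploit the gap between what we specified and what we actually meant:

```python
# Toy illustration (hypothetical, not from the lecture) of the "standard model":
# the machine optimises exactly the objective it is given, loopholes included.

def act_greedily(objective, actions):
    """Pick the action that maximises the supplied objective function."""
    return max(actions, key=objective)

# Intended goal: deliver coffee quickly *without* breaking anything.
# Specified goal: only "minimise delivery time" (the damage term is forgotten).
actions = {
    "walk around the vase": {"time": 30, "damage": 0},
    "knock over the vase":  {"time": 20, "damage": 1},
}

specified_objective = lambda a: -actions[a]["time"]
intended_objective = lambda a: -actions[a]["time"] - 100 * actions[a]["damage"]

print(act_greedily(specified_objective, actions))  # -> "knock over the vase"
print(act_greedily(intended_objective, actions))   # -> "walk around the vase"
```

Making the optimiser better only makes it exploit that gap more reliably, which is exactly the King Midas problem Stuart describes.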

The Future Role of AI in Warfare


The Second Reith Lecture of 2021 took place in the splendour of the Whitworth Hall at the University of Manchester, England. In this lecture, Stuart warns of the dangers of developing autonomous weapon systems, arguing for a system of global control. He poses a very important question: “Will future wars be fought entirely by machines, or will one side surrender only when its real losses, military or civilian, become unacceptable?” He goes on to examine the motivation of major powers developing these types of weapons, the morality of creating algorithms that decide to kill humans, and possible ways forward for the international community as it struggles with these questions.

Key Insights of the Second Reith Lecture

  • Stuart recounts an incident from 20ᵗʰ February 2013, when he received an email from Human Rights Watch (HRW) asking him to support a new campaign to ban “killer robots”. The letter raised the possibility of children playing with toy guns being accidentally targeted by the killer robots. At this point, Stuart presents his opinion that we could begin with a professional code of conduct for computer scientists, for instance, “Do not design algorithms that can decide to kill humans”, though we would also need clearer arguments to convince people to sign on.
  • The goal of the second lecture is to explain those “clearer arguments” and how they have evolved. The lecture does not address all the uses of AI in military applications. In fact, Stuart clarifies that some uses, such as better detection of surprise attacks, could actually be beneficial. The speech is not about the general morality of defence research. And lastly, the lecture doesn’t consider drones that are remotely piloted by humans (since the US is very sensitive on this matter).
  • The subject of the lecture is lethal autonomous weapons systems, which the United Nations (UN) defines as “weapons that locate, select, and engage human targets without human supervision”.
  • Most of us by now are imagining a rampaging Terminator robot, and Stuart highlights the fallacies in that picture. Firstly, there is the implausible detail that the Terminators fire a lot of bullets that miss their targets. Secondly, the picture makes people think that autonomous weapons are science fiction, when in fact they are not; we can buy them right now. Thirdly, the picture makes people think that the problem is Skynet (the global software system that controls the Terminators), but Skynet was never the problem.
  • According to Stuart, the focus on accidental targeting was a mistake, but in 2013 it was the primary concern, and it led to the first discussion of autonomous weapons under the CCW (the Convention on Certain Conventional Weapons) in Geneva in 2014. Stuart adds that in 2015 he was invited to the CCW meeting in Geneva as an AI expert, where he had three jobs: clear up the mess around autonomy, assess the technological feasibility of autonomous weapons, and evaluate the pros and cons as best he could.
  • In the lecture, Stuart presented many examples showing that by 2015 all the component technologies for autonomous weapons already existed, and thus autonomous weapons were technologically feasible. According to Stuart, the only advantage of AI systems is that they will be better than humans at recognizing legitimate targets; when it comes to disadvantages, cyber-infiltration and accidental escalation of hostilities are serious concerns.
  • AI would enable a lethal unit to be far smaller, cheaper and more agile than a tank, or an attack helicopter, or even a soldier carrying a gun.
  • In 2017, a government-owned manufacturer in Turkey announced the Kargu drone, advertising its capabilities for “anti-personnel autonomous hits” with “targets selected on images and face recognition”. According to the UN, Kargu drones were used in 2020 in the Libya conflict, despite a strict arms embargo.
  • In 2019, before COVID, a small group of experts met in a house in Boston, and after considerable thought they arrived at a solution: a ban that would require a minimum weight and explosive payload so as to rule out small anti-personnel weapons. This would not only eliminate swarms as weapons of mass destruction, but at the same time would allow the major powers to keep their big-boy toys: submarines, tanks, fighter aircraft, etc.
  • Stuart concludes the second lecture by addressing the diplomats and their political masters: “There are 8 billion people wondering why you cannot give them some protection against being hunted down and killed by robots. If the technical issues are too complicated, your children can probably explain them”.

A little about ME 👋

You can safely skip this section if you have no interest in knowing the author, or if you already know me. I promise that there is no hidden treasure in this section 😆.

I am an Artificial Intelligence Enthusiast. If you liked this blog, do put your hands together 👏 and if you would like to read more blogs based on Artificial Intelligence #StayTuned.
