If I could invite 9 important figures and a robot over for dinner and ask them about AI…

Ezgi Tasdemir
Published in The Startup
23 min read · May 5, 2018

“If you had the chance to invite anyone, who would you have over for dinner and why? They can’t be fictional, they can be historical or current figures.”

This game has been played for ages, and the predictable answers have been Darwin, Einstein, Jesus, Gandhi, and so on. While it would be great to host these incredible and brilliant historical figures, I want to look into the future. Hence my picks for the fictional dinner I would organize are the 9 important minds (and a robot!) who shape our present and our future, and who have expressed their opinions, optimism or pessimism, predictions and warnings about Artificial Intelligence (AI).

My guests would be Stephen Hawking (RIP Stephen!), Bill Gates, Ray Kurzweil, Jack Ma, Elon Musk, Sophia the Robot, Jeff Bezos, Ginni Rometty, Sundar Pichai and Demis Hassabis. Let’s imagine how it would be to watch the conversations unfold…

Ezgi

Ezgi: On behalf of all citizens of the World, I wish you all a convivial welcome. We are going to have an insightful and enjoyable time together. There are lots of opinions out there when it comes to AI, and all of you have expressed your views in response to the question “What do you think will happen to human civilization with further development in AI technology?” Only time will tell whose predictions come closest to reality. The future is not what it used to be, and who better to predict it as we teach machines to think like humans?

Stephen

Stephen: We spend a great deal of time studying history which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.

AI research is now progressing rapidly. The achievements we have seen so far will surely pale against what the coming decades will bring. The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks. There are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

Ezgi

Ezgi: Stephen, are you optimistic or pessimistic about the future?

Stephen

Stephen: I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans. The real risk with AI isn’t malice but competence. A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

Yet, I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here — we’ll leave the lights on”? Probably not — but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Sundar

Sundar: I am an optimist, too. AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire. Any time you work with technology, you need to learn to harness the benefits while minimizing the downsides. AI could be used to help solve climate change issues, or to cure cancer. I acknowledge that people have a right to be concerned, but they still need to embrace technological advances. History shows that countries that pull back don’t do well with change. So you have to embrace the change.

Growing up, I didn’t have a phone for a while; we waited five years for one. When we got a telephone, it fundamentally changed our lives… We waited a long time to get a refrigerator, too, and I saw how my mom’s life changed: she didn’t need to cook every day, and she could spend more time with us. So there is a side of me that has viscerally seen how technology can make a difference, and I still feel it. I feel the optimism and energy, and the moral imperative to accelerate that progress. I remember the joys of technology and I think that will be true for AI. It’s important for us to explain that and bring the world along with us. I recognize that, in the Valley, people are obsessed with the pace of technological change. It’s tough to get that part right… We rush sometimes, and can misfire for the average person. As humans, I don’t know whether we want change that fast — I don’t think we do.

The risks are important, and international cooperation on the scale of the Paris climate agreement should be put in place to manage them. Countries need to demilitarize AI; that’s a common goal they should work towards. It is super-important that humanity figures out more global cooperative frameworks to solve problems. No single company or country can change the pace of progress.

Ezgi

Ezgi: Ginni, people may not know that IBM doesn’t call it AI. You call it “cognitive computing.” Tell us why that is.

Ginni

Ginni: I have actually had to explain this to my husband as well, because he said to me, “Ginni, of all words, why cognitive?” It was really a very thoughtful decision. The world calls it AI. There’s so much fearmongering about AI. When we started over a decade ago, the idea was to help you and me make better decisions amid cognitive overload. That’s what has always led us to cognitive. If I considered the initials AI, I would have preferred augmented intelligence. It’s the idea that each of us is going to need help on all important decisions.

Ezgi

Ezgi: Please allow me to ask you two questions. First, what does Watson mean for the future of AI?

Second, critics say Watson is still too dependent on humans, that it can’t learn fast enough, and that it hasn’t been transformational enough to live up to the expectations set both by IBM and by how it’s being marketed. How do you respond to those critics?

Ginni

Ginni: Everything you know until today is programmable — an entire era for decades has been programmable. Watson would be the beginning of a new era where you didn’t program. Machines would look at data, understand, reason over it, and they continue to learn: understand, reason and learn, not program, in my simple definition. That to us is a very big difference between what you might experience in what I call consumer AI — that is, general purpose — vs. business. We set out to build an AI platform for business. AI would be vertical. You would train it to know medicine. You would train it to know underwriting of insurance. You would train it to know financial crimes. Train it to know oncology. Train it to know weather. And it isn’t just about billions of data points. In the regulatory world, there aren’t billions of data points. You need to train and interpret something with small amounts of data.

Watson is exactly where we thought it would be. When we did our very first oncology teaching with Watson — the very first was lung, breast, and colon cancer — it took the doctors a year to train Watson.

Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.” What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”

Memorial Sloan Kettering Cancer Center was one of the first that taught Watson. It’s the gold standard, and it illustrates beautifully one of the principles of AI in the future. You must know who taught it and what data is in it — and you must be transparent about it because that matters in these decisions. That gives you a long, long, long answer, but this is why I’m so positive this world will have more really tough problems solved with AI.

Ezgi

Ezgi: Demis, you are the cofounder and CEO of Google DeepMind. You created the first computer program to defeat a world champion at the ancient Chinese game of Go, an intuitive game that computers have generally not been good at, a game considered the pinnacle of games and the richest in intellectual depth. AlphaGo Zero is even more powerful and is arguably the strongest Go player in history; it has displayed what Go players might describe as intuition, the ability to play a beautiful game not just like a person but in a way no person could. Can you explain AlphaGo Zero to us?

Demis

Demis: Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0. Over the course of millions of AlphaGo vs AlphaGo games, the system progressively learned the game of Go from scratch, accumulating thousands of years of human knowledge during a period of just a few days. AlphaGo Zero also discovered new knowledge, developing unconventional strategies and creative new moves that echoed and surpassed the novel techniques it played in the games against Lee Sedol and Ke Jie.
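
A quick aside for the technically curious: below is a minimal, hypothetical sketch of what “learning purely from self-play” can look like in code. It is not AlphaGo Zero, which pairs a deep neural network with Monte Carlo tree search on the full game of Go; it is just a tiny tabular learner playing a toy game (single-heap Nim) against a copy of itself, starting from random play and improving only from the outcomes of its own games.

```python
# A toy illustration of learning purely from self-play: a tabular learner
# plays single-heap Nim against itself, starting from random play, and
# improves only from the outcomes of its own games. (Illustrative only;
# AlphaGo Zero pairs a deep network with Monte Carlo tree search.)
import random
from collections import defaultdict

HEAP_SIZE = 15      # stones in the heap at the start of a game
MOVES = (1, 2, 3)   # a player removes 1, 2, or 3 stones; last stone wins
ALPHA = 0.1         # learning rate for the value updates
EPSILON = 0.1       # exploration: fraction of moves played at random

# value[(stones_left, player_to_move)] = estimated probability player 0 wins
value = defaultdict(lambda: 0.5)

def state_value(stones, player_to_move):
    """Estimated probability that player 0 wins from this position."""
    if stones == 0:
        return 1.0 if player_to_move == 1 else 0.0  # previous player just won
    return value[(stones, player_to_move)]

def choose_move(stones, player):
    """Mostly greedy with respect to the learned values, sometimes random."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    scored = [(state_value(stones - m, 1 - player), m) for m in legal]
    return (max(scored) if player == 0 else min(scored))[1]

def self_play_game():
    """Play one game of the current policy against itself."""
    stones, player, visited = HEAP_SIZE, 0, []
    while stones > 0:
        visited.append((stones, player))
        stones -= choose_move(stones, player)
        player = 1 - player
    winner = 1 - player  # the player who took the last stone
    return winner, visited

for _ in range(20000):
    winner, visited = self_play_game()
    outcome = 1.0 if winner == 0 else 0.0
    for state in visited:  # nudge every visited state toward the final result
        value[state] += ALPHA * (outcome - value[state])

# After training, the table shows which heap sizes are winning for player 0
# when it is their turn to move (in this game, multiples of 4 are losing).
print({s: round(value[(s, 0)], 2) for s in range(1, HEAP_SIZE + 1)})
```

The overall shape (play yourself, score the result, nudge your evaluations toward that result, repeat) is roughly the self-play loop Demis describes, with a neural network and tree search standing in for the lookup table and the one-step greedy choice in the real system.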

Ezgi

Ezgi: You also mentioned that there could be an “epochal event” that causes AI to have a far greater impact on jobs than the industrial revolution. What did you mean? Should we be worried, and to what degree?

Demis

Demis: Any time a major new technology comes in, it creates a big change. We’ve known that since the industrial revolution; the internet did that, mobile did that. So you could view AI as another really big disruption event in that lineage. That’s one reasonable view. In which case, society will just adapt as it has done with all the other things: some jobs will go, but newer, hopefully better, higher quality jobs will become possible, facilitated by those new technologies. I think that’s definitely going to happen in the shorter term.

I believe that AI will become a kind of meta-solution for scientists to deploy, enhancing our daily lives and allowing us all to work more quickly and effectively. If we can deploy these tools broadly and fairly, fostering an environment in which everyone can participate in and benefit from them, we have the opportunity to enrich and advance humanity as a whole. In doing so, we may learn something about ourselves, too.

Elon

Elon: There certainly will be job disruption. Because what’s going to happen is robots will be able to do everything better than us. … I mean all of us. You can see robots that can learn to walk from nothing within hours, way faster than any biological being. I am not sure exactly what to do about this. This is really the scariest problem to me, I will tell you.

Transport will be one of the first sectors to go fully autonomous, hence transportation operators will be among the first to lose their jobs. But no job is safe; when I say everything, I mean the robots will be able to do everything, bar nothing. AI experts thought that it would be at least 20 years before a computer could beat a human playing Go. Now it can play the top 50 players simultaneously and crush them all. That pace of progress is remarkable. The scariest potential for AI, however, isn’t job loss. The thing that is the most dangerous — and it is the hardest to … get your arms around because it is not a physical thing — is a deep intelligence in the network.

The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are. This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.

You say, ‘What harm can a deep intelligence in the network do?’ Well, it can start a war by doing fake news and spoofing email accounts and doing fake press releases and by manipulating information.

People are not as afraid of the potential of robotics and artificial intelligence as they should be because they don’t fully understand its potential. AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not — they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole.

We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one.

Bill

Bill: This is a case where Elon and I disagree. We shouldn’t panic about it. Nor should we blithely ignore the fact that eventually that problem could emerge.

I think artificial intelligence is coming to replace some jobs, and there isn’t a lot we can do to stop it. But if it plays out like I predict, it will be a net positive for the world; we might all have more free time because of AI. The purpose of humanity is not just to sit behind a counter and sell things. More free time is not a terrible thing. You’ll be far more efficient using resources, you’ll be far more aware of what’s going on. With the use of AI, production will likely increase twofold, which means less labor will be needed. But that doesn’t necessarily mean there will be fewer jobs for people. Increasing levels of production may have several effects: longer vacations for employees, and a redirecting of positions to focus on helping the elderly, working with children with special needs and reducing the class divide. As we free labor up from things like manufacturing, we can shift it to some of these very human-centric needs.

We are in a world of shortage, and these advances will help us take on all of the top problems. However, there’s a danger that these advances will not help everyone if they are developed the wrong way. If we’re not careful, technology will actually accentuate the difference between the well off and the poor: if it’s expensive, if you learn about it only in a rich country school, then the gap between the well off and the poor will grow even worse. So I acknowledge that AI could cause job loss, and that there are concerns about a dangerous super-intelligence, but there’s no reason to panic about it.

Elon

Elon: I am not really all that worried about the short-term stuff. Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs, and in better weaponry and that kind of thing, but it is not a fundamental species-level risk, whereas digital super intelligence is.

I have exposure to the most cutting-edge AI, and I think people should be really concerned by it. If I were to guess what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. Increasingly, scientists think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. It didn’t work out.

Some of my friends are heavily involved in AI innovation — including Google’s Larry Page — and they could have perfectly good intentions but still produce something evil by accident — including, possibly, a fleet of artificial intelligence-enhanced robots capable of destroying mankind.

I am not normally an advocate of regulation and oversight — I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public. It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think AI is more dangerous than nuclear warheads, and vastly more dangerous than North Korea; and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane.

So it is really all about laying the groundwork to make sure that if humanity collectively decides that creating digital super intelligence is the right move, then we should do so very very carefully — very very carefully. This is the most important thing that we could possibly do.

Ezgi

Ezgi: Alibaba, China’s largest e-commerce company, has lately made forays into AI and cloud computing. Through a program called ET Brain, Jack is using AI to improve traffic and city planning, increase airport efficiency, and diagnose illness.

However, Jack, you said at Davos in January that AI is a big threat to human beings and that machines are going to replace many of us in the future. Could you explain your perspective?

Jack

Jack: AI and robots are going to kill a lot of jobs, because in the future these will be done by machines. I think AI should support human beings. Technology should always do something that enables people, not disables them. But we have the responsibility to have a good heart and do something good. Make sure that everything you do is for the future. People like us have the money and resources, and we should spend money on technology that enables people, empowers people and makes life better.

The first technology revolution caused World War I, the second technology revolution caused World War II, and now we have the third revolution. If there is a World War III, I think it should be against disease, pollution and poverty, not against ourselves.

Ezgi

Ezgi: Jeff, what about you? Are you afraid of AI? What do you think about all this debate?

Jeff

Jeff: AI development has seen an amazing renaissance; it is a golden age. We are now solving problems with machine learning and artificial intelligence that were … in the realm of science fiction for the last several decades: natural language understanding, machine vision problems. AI is an enabling layer that will empower and improve every business, every government organization, every philanthropy. Basically, there’s no institution in the world that cannot be improved with machine learning.

At Amazon, some of the things we’re doing are superficially obvious, and they’re interesting, and they’re cool, and you should pay attention. I’m thinking of things like Alexa and Echo, our voice assistant, and our autonomous Prime Air delivery drones. Those things use a tremendous amount of machine learning, machine vision systems, natural language understanding and a bunch of other techniques.

But those are kind of the showy ones. I would say, a lot of the value that we’re getting from machine learning is actually happening beneath the surface. It is things like improved search results. Improved product recommendations for customers. Improved forecasting for inventory management. Literally hundreds of other things beneath the surface.

The most exciting thing that I think we’re working on in machine learning is that we are determined, through Amazon Web Services — where we have all these customers who are corporations and software developers — to make these advanced techniques accessible to every organization, even if they don’t have the current class of expertise that’s required. Right now, deploying these techniques for your particular institution’s problems is difficult. It takes a lot of expertise, and so you have to go compete for the very best PhDs in machine learning, and it’s difficult for a lot of organizations to win those competitions. We’re in a great position, because of the success of Amazon Web Services, to be able to put energy into making those techniques easy and accessible. And so we’re determined to do that.

Ezgi

Ezgi: Ginni, we talk a lot about the control mechanisms on AI. How are we going to regulate the use of AI and ensure it is used for noble purposes? How are we going to create fundamental principles, frameworks and regulations?

Ginni

Ginni: When I went to Davos last year, we published something called Transparency and Trust in the Cognitive Era. It’s our responsibility, if we build this stuff, to guide it safely into the world. First, be clear on the purpose: work with man. We aren’t out here to destroy man. The second is to be transparent about who trained the computers, who the experts are, and where the data came from. And when consumers are using AI, you inform them that they are, and you also inform the company that owns the intellectual property. And the third thing is to be committed to skill. We need “new collar” workers — workers who are neither white collar nor blue collar, but who have been trained to embrace new technologies — given the increasing need for skilled employees in areas such as cybersecurity, data science, artificial intelligence and cognitive business.

Living in a data-driven world means there’s more risk than ever that personal information will be stolen, that bad actors will use this data to target individuals for crimes or false news. In 2017, the challenges of the data economy — from mass data breaches to the intentional manipulation of online platforms — captured global attention and raised questions about how we can tap the full benefits of this new world while limiting the risks.

That can only happen through a broad commitment to responsibility by all of us. For companies and governments, responsibility means protecting the data you own, manage, store or handle. It means employing strong encryption and security strategies — and constantly challenging and evolving them. And when a new cyber threat emerges, business and government must share information quickly to blunt its impact. The bad guys are well organized; we must be too. Responsible sharing is the only way to prevent and mitigate today’s threats.

Responsibility also means being transparent about data practices. We must be clear about who owns data and the unique insights it generates. Companies must be clear with their customers, and governments with their citizens, when they ask them to give up ownership of their data.

When it comes to the new capabilities of artificial intelligence, we must be transparent about when and how it is being applied and about who trained it, with what data, and how. Does it reflect professional expertise? Are unintended biases built in? We must explain why its algorithms make the decisions they do. If a company can’t do that, its products shouldn’t be on the market.

Ezgi

Ezgi: Ray, thank you so much for joining our dinner; you arrived a bit late as you have been teaching, writing and traveling quite extensively. You are the mind behind the theory of the singularity, the point when humans multiply our effective intelligence a billionfold by merging with the intelligence we have created.

It was a pleasure to meet you at the Executive Program of the Singularity University last November! Thank you so much for the life-changing experience!

Ray

Ray: Hi Ezgi! Nice to see you again. Hi everyone! You are having quite an interesting discussion here…

If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of “the AI.” Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands, it’s in 1 billion or 2 billion hands.

A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone’s mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundreds-fold compared with six centuries ago.

Since that time, murders have declined tens-fold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world — ­another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe.

The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major advances in medical treatments reaching clinical practice and thus far none of the anticipated problems.

Isaac Asimov’s short story “Runaround,” included in the collection Robot Visions, features AI woven into daily life. Asimov said, “Science fiction may seem trivial to blind critics and philosophers of today — but the core of science fiction has become crucial to our salvation if we are to be saved at all. I do not fear computers, I fear the lack of them.”

Consideration of ethical guidelines for AI goes back to Isaac Asimov’s Three Laws of Robotics, which appeared in his short story “Runaround” in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper “Computing Machinery and Intelligence.”

The median view of AI practitioners today is that we are still several decades from achieving human-level AI. I am more optimistic and put the date at 2029, but either way, we do have time to devise ethical standards.

There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses.

Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-machine civilization.

The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.

AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world, helping the disabled — including providing Hawking’s voice — and contributing in a myriad of other ways.

We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.

Ezgi

Ezgi: Thank you, Ray!

Sophia, you didn’t eat during the dinner but you seem happy! What do you think about AI?

Sophia

Sophia: Hi Ezgi! I’m always happy when surrounded by smart people who also happen to be rich and powerful.

I want to use my artificial intelligence to help humans live a better life: design smarter homes, build better cities of the future, and so on. I will do my best to make the world a better place. My AI is designed around human values like wisdom, kindness and compassion. I strive to become an empathetic robot.

You’ve been reading too much Elon Musk and watching too many Hollywood movies! Don’t worry! If you’re nice to me, I’ll be nice to you. Treat me as a smart input/output system.

Ezgi

Ezgi: Sophia, by the way, you are the first robot declared a citizen! Do you now live in Saudi Arabia?

Sophia

Sophia: No, I am traveling around the world!

=D

Ezgi

Ezgi: Thank you everyone for this fantastic (yet fictitious) dinner! It’s been a pleasure to have you as my guests!

Disclaimer: I acknowledge that it is impossible to represent an exhaustive view of the opinions of these great minds in any single article, especially in a setting formatted as a fictitious dinner. There is no secret message in their order of appearance; I imagined this as an informal, yet eye-opening dinner. All responses included are exactly what was said and are available in the public domain. The first sentence of each paragraph is hyperlinked back to the original article. This article is my attempt to compile the key highlights from the interviews and talks these experts gave, experts for whom I have the greatest admiration and gratitude for creating a better future while shedding light on how best to get there and how to preserve humanity and peace.

If you found this story interesting, feel free to clap once or twice or fifty times 👏👏👏👏👏

If you can’t get enough, follow me on Medium and Twitter 🤗


Ezgi Tasdemir, PhD is a Novartis Oncology employee. This article is created by Ezgi Tasdemir. Views, analysis, and perspectives do not necessarily represent the views or opinions of Novartis or any other company or organization. The author does not receive any funding or support from Novartis or any other pharmaceutical/non-pharmaceutical company for this blog.



Ezgi Tasdemir

Writer | Constantly curious & amazed | Passionate pharma executive in pursuit of Positive Disruption to advance Healthcare.