Fei-Fei Li & Yuval Noah Harari in Conversation — The Coming AI Upheaval

Chi-Jui Wu 吳啟瑞
7 min read · Jun 7, 2019


The Historian and The Technologist: Yuval Harari and Fei-Fei Li in Discussion. Source: Stanford, https://hai.stanford.edu/news/historian-and-technologist-yuval-harari-and-fei-fei-li-discussion
My follow-up blog post after reading the book is online: https://medium.com/@chijuiwu/coming-to-know-thyself-on-21-lessons-for-the-21st-century-by-yuval-noah-harari-68b87761b692

Before reading the book 21 Lessons for the 21st Century, I watched a recent conversation between Fei-Fei Li (computer scientist) and Yuval Noah Harari (historian) on YouTube: https://youtu.be/d4rBh6DBHyw. It made me think about two sides of humanity's future: dictatorship and freedom.

Fei-Fei Li is optimistic about the future of AI, and we can easily see the benefits of AI technologies in health care, transportation, and social services such as elderly care. Yuval Noah Harari, on the other hand, is cautious about the impacts of AI, in particular how powerful corporations and governments will use AI technologies to hack humans.

I agree with Yuval’s sentiments, and I believe we are entering a point in history where questions of philosophy and free will are starting to matter and shape our everyday lives, through the technologies that we will inevitably use, from social media to self-driving cars.

This blog post outlines some ideas from this conversation.

The ability to hack humans

Photo by NASA on Unsplash

Yuval set forth a simple equation: B * C * D = HH.

Biological knowledge * Computing power * Data = The ability to hack humans.
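Harari's formula is metaphorical rather than mathematical, but its multiplicative form carries a real point: hacking humans requires all three ingredients at once, and if any one factor is missing, the capability collapses to nothing. A toy sketch of that reading (all numbers invented for illustration):

```python
def hacking_ability(biology: float, compute: float, data: float) -> float:
    """Toy reading of Harari's B * C * D = HH, with each factor
    scored on a 0-to-1 scale. Purely illustrative, not a real metric."""
    return biology * compute * data

# Plenty of compute and data, but zero biological insight:
print(hacking_ability(0.0, 0.9, 0.9))  # 0.0 -- the product collapses

# All three factors present:
print(hacking_ability(0.5, 0.9, 0.9))  # ~0.405
```

The multiplication, unlike a sum, encodes Harari's warning: progress in any one factor amplifies the others, and the danger arrives only when all three mature together.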

Much of big data comes from our external behaviors: what we see, hear, touch, write, say, and do in online and offline worlds. It's not hard to track which websites we visit, what kind of content we prefer, and who we talk to about what, wherever and whenever we are. This is the kind of fucked-up world we live in. The large-scale information systems that generate and use these human data are in the hands of very few people in corporations and governments. We live under surveillance capitalism. Information privacy is non-existent. We are the product.

But this is only the beginning. Scientists have always been interested in human biology and chemistry, so it's not hard to imagine that we will eventually use biotechnologies that track the physiological systems inside our bodies, including our feelings and emotions. The future seems doomed when infotech and biotech merge to manipulate, exploit, and hack humans.

The AI doesn't have to be perfect. It just has to appear better than us. Personal decisions are already being outsourced to algorithms. Google Maps takes us to places we want to go. Netflix shows us movies and TV series we want to watch. Facebook, Instagram, and Twitter keep us engaged and influence not only what we eat and what we do, but also where we work, what we study, and who we vote for. We have certainly made our lives easier through technology, and communities have come together to achieve great things, but are we living a better life? Information technology is not just about selling us advertisements; it is fundamentally changing our behaviors, values, and beliefs.

In the past, we believed in free will, but will we keep this luxury in the future? As the development of technology accelerates, very few parts of us will remain unhackable (love, too, is a biological process). Technologies can be made to manipulate or enhance us, but who decides what counts as a good or bad enhancement? Which qualities do we want to enhance? Who do we trust? Who can we trust? This is precisely the point in history when we need technologists, social scientists, humanists, and everyone else to understand the implications of disruptive technologies and agree on the ethics.

How can we develop human-centered AI?

Photo by Franck V. on Unsplash

AI is a transformative technology, but it faces many problems, including diversity, bias, fairness, explainability, privacy, inequality, and effects on the labour market. This is why Stanford is bringing humanists, ethicists, psychologists, neuroscientists, policy makers, and others into the study and development of AI. The future of AI depends greatly on such interdisciplinary research and collaboration. AI technologies should augment and reflect the kind of human intelligence we aspire to. We need to understand the impacts of AI, and we need rules, moral codes, and regulations. Many things need to happen.

Technological disruption is a global issue that requires a global solution. Unfortunately, I am skeptical that technologists and humanists will ever reach an agreement in the real world. How do we maximize human flourishing? How do we maximize universal love? Everyone will have a different answer, and not everyone cares. What about the trolley problem? Philosophers have not reached agreement after centuries, but engineers will not wait. How do we eliminate racial and gender inequality in big-data algorithms? What does it mean to own data? We have a poor understanding of data ownership, or we willingly give up data and privacy for improved user experience. The corporate world is neither human-centered nor user-centered; it is profit-driven.

In Human-Computer Interaction (HCI), a field concerned with how people use technologies and how to make technologies with and for people, many researchers look down on AI as the only solution. Continued hostility and ignorance will drive the two disciplines further apart, even though they once shared the common goal of supporting and augmenting the human intellect. At the field's latest flagship conference, CHI 2019, at least two papers advocated for human empathy and meaning. Accessibility research will probably have the most impact on people's lives (for under-represented populations and for people with impairments and stigma), but as a field, I think people have become so obsessed with novelty that they rarely ask the important and difficult human questions. What is the human condition, and what is the role of technology for humanity? How do we actually make technologies that serve us through personal growth, self-fulfillment, depression, and life's challenges? HCI remains limited at solving human and social issues. Do people need more technologies that track their bodies, devices, and environments? To be truly human-centered, we need to value each person as a whole, with a unique identity and life story.

In reality, humans are better at looking outward (e.g., sensing and judging others) than at looking inward (e.g., understanding who we are). The bar for hacking humans is low, because what people see and do can be manipulated through ubiquitous mobile devices and internet connections. To hack us, AI only needs to know us better than we know ourselves, which is not so difficult: most people don't know themselves very well and often make huge mistakes in critical life decisions (finance, career, and love). People may shift the authoritative voice from themselves to AI if it appears to understand us better than we understand ourselves (think personality tests and confirmation bias).

Who is in control? How can we avoid AI dictatorship?

Photo by Samuel Zeller on Unsplash

Will you give up personal data for better health care? What happens when this information is shared not with you but with advertisers, your employer, the government, or even your family? Technology holds great promise to predict, if not cure, cancers and Alzheimer's disease, so we cannot resist the temptation to invest in health and AI research. When AI scientists collaborate with biochemists and neuroscientists, we will witness great progress in technology, but at the same time we as a society will have to reach a consensus about the applications of bio-info technologies, much as we did for nuclear weapons. In 2018, CRISPR was used to edit the human genome in newborn babies. The future is here.

Is global collaboration possible? Nationalism is rising just as we face our greatest problems: nuclear war, climate change, and technological disruption. We are building walls in a digital age of hyper-interconnectivity. In many places around the world, we are small citizens facing big corporations and governments. The dystopia is here.

Hope and optimism

Photo by Aleksandr Ledogorov on Unsplash

It’s natural to feel hopeless, powerless, and lost. Despite the challenges ahead, Fei-Fei and Yuval are optimistic. With optimism, change is possible. We can do whatever we want in the free world (although nothing is free). We can build AI to serve and protect us humans. Or we can get to know ourselves and the human condition better. We can decide for ourselves what to think and value. We can create the world we want to live in, and we can become the person we want to be. The path is unclear, but we will get there.

If this article has made you think a little, please press the clap button.

peace.

- 2019/06/07 CJW (https://chijuiwu.space/)


Chi-Jui Wu 吳啟瑞

I read, write, and reflect on human lives. Previously HCI Researcher @ Lancaster, UCL, and St Andrews. Website: https://chijuiwu.space/