The Worlds I See: Navigating the Intersection of Identity & AI — A Book Review

Joanne Lee
Writ 340 Econ, Spring 2024
Apr 29, 2024 · 8 min read

Best known as the creator of ImageNet, the landmark database of labeled images that helped catalyze what we now call the AI boom, Fei-Fei Li has written a heartfelt and insightful memoir titled The Worlds I See. In it, Li unveils a new side of herself, tracing the unsteady arc of her immigrant journey from daughter of a middle-class Chinese family to a teenager grappling with the hardships of American poverty. She draws connections between her upbringing and her passions for physics and technology, and she advances the perspective that, looking toward the future, the use and development of AI should be centered on the good of humanity.

She argues that steering AI and machine learning in a maximally beneficial direction will require more than a handful of influential scientists and researchers. To make pivotal changes in these rapidly growing fields, governments, regulators, and major companies will need to work together toward a common goal and standard for human-centered AI. Rather than serving ulterior motives or the advancement of business interests, AI should be strictly focused on improving the quality of human life, inquiry, and creativity. With algorithmic bias still looming over AI technologies, alongside the string of representation problems so often associated with them, there is much work to be done in the field. Li shares that what she had once viewed purely as science is now taking on a new definition: a responsibility for all of us.

Li draws parallels between her roles as a professor, founder of AI4ALL, and chief AI scientist at Google Cloud and her upbringing as a Chinese immigrant to make a compelling case that AI and technology still hold the power to positively change the world when centered on the right principles and values. She brings robust knowledge of the tech scene, having encountered both the extreme possibilities and the dangers of AI and deep learning. She describes AI as "an increasingly divisive technology" that has the power to exclude or include entire communities and populations. Even so, she believes AI could do much to close the world's gaps in opportunity. The book balances critical and optimistic views of what the coming decades will look like as AI develops. We are living in an exciting, nascent moment. Li characterizes AI as a phenomenon still being observed, cataloged, and predicted; unified models have yet to be formalized, and there remain many unknowns and much more room to explore.

Growing up in China, Dr. Li observed firsthand the cultural and societal pressures that shaped her family dynamics and the aspirations of her parents. Raised in a culture that placed a premium on academic achievement and conformity, she navigated the delicate balance between honoring her cultural heritage and forging her own path in pursuit of her dreams. She encountered challenges in a system where gender biases were prevalent, and opportunities for girls were often underestimated compared to boys. Li’s teachers in China, for instance, exemplified these biases, underestimating the capabilities of girls and setting higher expectations for the boys in the class. As she grappled with the tensions and repressed emotions inherent in immigrant life, Dr. Li found solace and inspiration in her love for science and technology, recognizing the transformative power of education as a means of re-examining the world around her.

At twelve, she immigrated to the United States with her parents and discovered a far more supportive system of teachers who eventually pushed her to pursue her passion for the sciences. She reflects on the formative experiences that shaped her perspective on identity, ambition, and the transformative potential of technology. This upbringing laid the groundwork for Dr. Li's pioneering work in AI, where she has made significant advances in computer vision and deep learning. As one of the creators of ImageNet, the groundbreaking database of labeled images that revolutionized the field of computer vision, Dr. Li played a pivotal role in advancing the frontiers of AI research. Despite the huge and continuous advances in AI technology, Dr. Li contends that true progress lies not in the pursuit of technological supremacy but in the creation of AI systems that empower and enhance the human experience.

Seeking to challenge the prevailing notion of AI as a disembodied force detached from human values and aspirations, Li details her role as co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Since its founding in 2019, the institute has emerged as a leading hub for AI research, education, and policy, fostering collaboration across disciplines to address the complex challenges and opportunities posed by AI. The human-centered AI framework proposed by Li and her colleagues at HAI encompasses three key aspects. First, it recognizes AI as a multidisciplinary field with far-reaching implications for scientific discovery, economic impact, and education. Second, it prioritizes human well-being and dignity, emphasizing the augmentation rather than the replacement of human capabilities. Finally, it acknowledges the nuanced and complex nature of human intelligence, advocating for AI systems that reflect that nuance. Through interdisciplinary collaboration and engagement with stakeholders from academia, industry, government, and civil society, HAI endeavors to shape the future of AI in ways that promote equity, inclusion, and human well-being.

Furthermore, Li makes a point of educating lawmakers through HAI initiatives and policy efforts centered on informed governance and regulation of AI. Through scholarly research, policy discussions, and engagements with governmental bodies, the institute's faculty have played a pivotal role in shaping AI policy at the state, national, and global levels. Notable engagements include testimony before various Senate and House committees, meetings with agencies including the U.S. Department of Commerce and the Federal Trade Commission, and a meeting with President Biden. Alongside government engagements, Li continues to fund numerous labs with the belief that "AI should always strive to enhance human capabilities, not compete with them. Now it's a fundamental value of our lab" (309). Within the institute, she strongly advocates for an approach that places humans at the center of the AI ecosystem, ensuring that technological innovation is guided by principles of inclusivity, equity, and ethical stewardship.

“AI should always strive to enhance human capabilities, not compete with them. Now it’s a fundamental value of our lab.”

However, despite her advocacy for human-centered AI, Li faces resistance from entities within the AI community that prioritize technological innovation over societal and human impact. This resistance is reflected in prevailing attitudes toward privacy and transparency in AI, where issues such as algorithmic bias and opacity in how models are trained persist. Moreover, disregard for individuals' rights in the development and deployment of AI technologies deepens concerns about the ethical implications of AI-driven decision-making. Against the backdrop of a burgeoning "techlash," she underscores the importance of addressing bias and representation within AI systems so as not to perpetuate the inequalities of the past. A 2024 quote from the Los Angeles Times captures this sentiment well: "We're in an almost gold rush where everybody's building an AI company model." The sentiment reflects the rapid proliferation of AI startups and initiatives, fueled by the promise of revolutionary advances in technology. Yet amid the frenzy of activity there is a sobering reality: very little AI is actually deployed in production. This observation underscores the pivotal moment we are standing at, a dawn of job creation rather than reduction, where the potential of AI to transform industries is seemingly close yet still largely untapped. As echoed by Wyne, founder and CEO of Ariglad, a software company in San Francisco, the convergence of AI with practical, real-world applications holds immense promise for enhancing productivity and streamlining daily tasks. The allure lies in leveraging AI to augment human capabilities and alleviate the burden of mundane, repetitive tasks. Realizing that potential, however, demands more than technological innovation; it requires a constant, collaborative effort to ensure that AI systems are aligned with human values and priorities. This is where Li's concept of human-centered AI asserts itself as especially important and timely.

Human-centered AI emphasizes the development of AI technologies that prioritize human well-being, inclusivity, and ethical considerations.

Recent events within the AI community, such as the dismissal and reinstatement of Sam Altman, CEO of OpenAI, underscore the complexities of internal AI governance and regulation. Li's perspective shines through in these discussions as she emphasizes nuanced, thoughtful approaches to AI governance that balance innovation with ethical considerations. She advocates for regulatory frameworks that prioritize human values and rights while fostering collaboration among policymakers, researchers, and industry leaders. In these contexts, corporate policy emerges as a critical factor in bridging the gap between AI development and real-world implementation. Effective corporate policies serve as a framework for guiding AI initiatives, ensuring that they are not only technically robust but also ethically and socially responsible. By fostering a culture of accountability and transparency, corporate policies can help mitigate the risks of AI deployment while maximizing its potential benefits.

Looking ahead, Dr. Li envisions a future where AI serves as a catalyst for positive social change, closing gaps of opportunity and fostering greater equity and inclusion. Yet she also acknowledges the uncertainties and challenges that lie ahead, from algorithmic bias to the rapid pace of technological advancement. It is within this gray area of possibility and peril that Dr. Li sees AI as a field in the midst of a global storm, brimming with possibilities and challenges yet to be explored. As she navigates these complexities, Dr. Li reflects on AI's evolution from a purely scientific endeavor to a moral and ethical imperative for all of humanity. What was once viewed as a tool for innovation has taken on a new definition: a responsibility that transcends individual interests and demands collective action. In this unfolding narrative of discovery and innovation, Dr. Li reminds us that the future of AI is not merely a product of technology, but of something far deeper and more consequential: the boundless potential of the human spirit.

Works Cited

Li, Fei-Fei. The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI. Flatiron Books, 2024.

Lynch, Shana. “Stanford Hai at Five: Pioneering the Future of Human-Centered AI.” Stanford HAI, hai.stanford.edu/news/stanford-hai-five-pioneering-future-human-centered-ai.

Masunaga, Samantha. "AI a Job Killer? In California It's Complicated." Los Angeles Times, 20 Mar. 2024, www.latimes.com/business/story/2024-03-20/for-hard-hit-tech-workers-ai-is-a-silver-lining.

"What Do We Do about the Biases in AI?" Harvard Business Review, Oct. 2019, hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.
