Google’s Doodles: AI Brain Drain, Med-PaLM 2 Challenges and Implications for AI in Healthcare

Recent high-profile AI departures have sparked discussions about Google’s internal dynamics and have drawn comparisons to IBM’s past, where the decline of its Watson technology followed the dismissal of its original research team.

Sergei Polevikov
9 min read · Sep 8, 2023
© Andreas Prott — stock.adobe.com

Google has recently experienced a series of high-profile departures in the realm of artificial intelligence (AI). Three prominent AI scientists — Timnit Gebru, Geoffrey Hinton, and, most recently, Cassie Kozyrkov — have exited the company in quick succession. This trend has sparked discussions about Google’s internal dynamics and has drawn comparisons to IBM’s past, where the decline of its Watson technology followed the dismissal of its original research team.

Yesterday’s exit of Cassie Kozyrkov, Google’s Chief Decision Scientist, was a significant blow for the company, especially as it aims to (re)establish a foothold in critical sectors like healthcare. Kozyrkov is not merely a data scientist. She brings a unique blend of expertise that spans multiple disciplines, making her irreplaceable, in my view. More crucially, she has become a symbol of AI ethics in the industry. It’s worth noting that millions of people have wrongly equated Google with ethical AI, largely because of prominent figures like Kozyrkov.

1. The Key AI Departures at Google

  • Timnit Gebru:

Timnit Gebru’s departure from Google stirred significant controversy. As a co-leader of Google’s Ethical AI research team, she was renowned for her insights into algorithmic bias. Her exit was reportedly connected to a research paper she co-authored, which voiced concerns about Google’s new AI language models. Beyond this, she highlighted workplace discrimination issues. Google terminated her employment in December 2020. Following this, Dr. Margaret Mitchell, a colleague of Dr. Gebru, was also dismissed after probing into the treatment of Gebru. This incident underscored broader concerns about academic freedom, AI ethics, and corporate influence on research.

  • Geoffrey Hinton:

Hinton, a pioneer of deep learning and neural networks, has had an influence on the AI community that cannot be overstated, and his departure dealt a significant blow to Google’s AI capabilities. While the reasons for his exit might not have garnered as much attention as Gebru’s, the loss of such an esteemed figure raised questions about Google’s internal dynamics and its compatibility with academic, research-focused individuals.

  • Cassie Kozyrkov:

As Google’s Chief Decision Scientist, Cassie Kozyrkov was a prominent figure at major AI and data science events and an inspiration and role model for seasoned AI professionals and newcomers alike. She epitomized AI ethics and the ‘correct’ approach to AI. Consequently, many mistakenly believed that Google was the gold standard in AI ethics and practices.

The departure of these pivotal AI figures might lead to doubts about Google’s dedication to ethically sound AI solutions, which is especially vital in industries like healthcare.

While Google might counter by emphasizing its ‘deep bench’, suggesting that for every AI leader who departs, another equally talented one steps in, it’s undeniable that such departures impact Google’s image and potentially its prospects for long-term success.

2. Google’s Internal Politics and Authoritative Corporate Culture

Historically, Google has been known for its open culture, championing ‘moonshot’ ideas and fostering internal debates. Yet, as the company expanded, the intricacies of overseeing its vast operations intensified. This growth has resulted in a more authoritative management style, which could potentially hinder innovation and spark internal conflicts. Such a setting may not be ideal for pioneers who thrive in environments of autonomy and academic freedom.

The workplace discrimination Dr. Gebru highlighted has been rooted in Google’s ethos since the company’s inception: if you were not an engineer, you were nobody, and your voice didn’t matter. Such a mindset was propagated from the company’s pinnacle, notably by founders Larry Page and Sergey Brin.

In recent years, Google has been embroiled in allegations of biased hiring and promotional practices. For instance, in 2018, the tech giant faced a lawsuit from ex-employees accusing it of gender pay disparity. Moreover, some staff members have alleged retaliation after reporting workplace concerns or engaging in activism.

Google has also cracked down on its employees’ political speech at work, restricting its historically open work culture.

While some might contend that this authoritative culture and cutthroat atmosphere were instrumental in Google’s rise as a search technology leader, numerous studies indicate that companies that champion diversity and equal opportunity tend to achieve sustained success. In this regard, Google still has a lot of room for improvement.

3. (Kind of) Historical Parallels: Watson Health and Google Health

Drawing parallels between Google’s recent talent exodus and IBM’s management of its Watson team offers some intriguing insights. To be clear, I’m not suggesting that Google’s AI journey will mirror the rather shocking downfall of IBM’s Watson. Instead, I aim to highlight the potential pitfalls that arise when senior management loses touch with the rapid pace of AI development.

Years before the inception of Watson Health, IBM took the contentious step of dismissing the entire research team responsible for the foundational Watson AI technology. This decision suggested a potential disconnect between the company’s leadership and its pioneering minds. The challenges faced in adapting Watson for the healthcare sector, combined with technical setbacks, marked the gradual decline of Watson Health. This decline culminated in IBM selling this AI-centric division at a significant loss on January 21, 2022, shortly after the dissolution of Google Health.

Healthcare has proven to be hard for Big Tech. I discuss the challenges and failures of the digital health industry, including Watson Health and Google Health, in great detail (spanning 36 pages) in my recent article:

4. Google’s Med-PaLM 2: A Skeptical Perspective

Google’s endeavor to revolutionize healthcare with generative AI is commendable, but the medical realm presents numerous challenges for such technology. A primary concern is the algorithm’s struggle with transparency and ‘hallucination’.

As Google gears up to introduce Med-PaLM 2, its latest large language model (LLM), to a broader healthcare audience, I have my share of skepticism. Google’s PaLM may trail far behind OpenAI’s GPT-4 on various metrics, but my apprehension stems less from raw performance than from the broader question of how LLMs will be accepted, and how they will perform, within the healthcare industry.

Many clinicians have reservations about Med-PaLM 2 and similar medical LLMs even before diving into the intricate technicalities. Without delving too deeply into those technicalities, which might not appeal to everyone, here’s a brief overview:

The Med-PaLM paper left me underwhelmed, except for the model’s sheer size: 540 billion parameters, trained on 780 billion tokens. The ‘Med’ component is integrated through fine-tuning, utilizing a mere 40 examples of the most frequently Googled symptom and disease queries, reviewed by five clinicians. The limited sample size for fine-tuning might not be inherently problematic, but the potential bias within this sample is highly worrisome: it is likely skewed towards the most searched or extreme conditions, overshadowing the routine issues patients face daily.
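To make the sampling-bias concern concrete, here is a toy sketch. Every condition name and visit count below is invented for illustration, not taken from the Med-PaLM paper or any real dataset; the point is simply that a tuning sample drawn from the most-searched, often alarming queries can cover only a sliver of what a routine clinic actually sees.

```python
from collections import Counter

# Hypothetical fine-tuning sample: the most-Googled, often extreme,
# symptom/disease queries (invented for illustration).
finetune_sample = {"chest pain", "brain tumor", "stroke", "heart attack", "meningitis"}

# Hypothetical distribution of complaints at a routine primary-care clinic,
# with rough visit counts (also invented).
clinic_visits = Counter({
    "upper respiratory infection": 120,
    "hypertension follow-up": 90,
    "lower back pain": 75,
    "type 2 diabetes management": 60,
    "anxiety": 45,
    "chest pain": 20,  # the only overlap with the search-driven sample
    "skin rash": 35,
})

total = sum(clinic_visits.values())
covered = sum(n for cond, n in clinic_visits.items() if cond in finetune_sample)
coverage = covered / total

print(f"Share of routine visits covered by the tuning sample: {coverage:.1%}")
# With these made-up numbers, roughly 4.5% of everyday visits are covered.
```

Under these assumptions, a model tuned on search-driven examples would see almost none of the bread-and-butter cases that dominate daily practice, which is exactly the skew I worry about.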

Moreover, the ‘hallucination’ problem, inherent to most large language models, remains a significant concern. Doctors observed Med-PaLM 2 producing more erroneous or unrelated content in its replies compared to its counterparts, indicating it shares the common pitfalls of chatbots that often generate unrelated or false information with undue confidence.
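One common mitigation for hallucination, sketched below in deliberately simplified form, is a grounding check: flag any sentence in a model’s answer whose content words barely overlap with the source passages it was supposed to draw from. The function names, stopword list, and threshold here are my own illustration, not anything from the Med-PaLM paper, and real systems use far more sophisticated entailment checks.

```python
import re

def content_words(text: str) -> set[str]:
    """Lowercase word tokens, minus a tiny stopword list."""
    stop = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "for", "with", "it"}
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in stop}

def ungrounded_sentences(answer: str, sources: list[str], threshold: float = 0.5) -> list[str]:
    """Return sentences whose content-word overlap with the sources falls
    below `threshold` -- a crude proxy for possible hallucination."""
    source_vocab = set().union(*(content_words(s) for s in sources))
    flagged = []
    for sent in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sent)
        if words and len(words & source_vocab) / len(words) < threshold:
            flagged.append(sent)
    return flagged

sources = ["Metformin is a first-line treatment for type 2 diabetes."]
answer = ("Metformin is a first-line treatment for type 2 diabetes. "
          "It also cures migraines within hours.")
print(ungrounded_sentences(answer, sources))
# The confident but unsupported second sentence gets flagged.
```

A lexical-overlap check like this is cheap but shallow; it cannot catch a fluent fabrication that happens to reuse the source’s vocabulary, which is part of why clinicians remain wary.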

Despite its prowess as an LLM, Med-PaLM 2 isn’t quite ready to assist in daily medical operations, a sentiment I regretfully share.

On a brighter note, kudos to Med-PaLM 2 for acing a multiple-choice medical test. This accolade will undoubtedly enhance its credentials. However, given the aforementioned concerns, its journey in the practical medical world might be arduous.

Any esteemed clinician, dedicated to genuine patient care, would currently hesitate to depend on an AI prone to hallucinations and untested in real clinical settings. In that regard, I eagerly await a scouting report from the Mayo Clinic and HCA regarding their experience with Med-PaLM 2.

5. Google’s Data Breaches

Google has come under scrutiny for issues related to personal data privacy. A notable example is Project Nightingale, where a whistleblower revealed the covert transfer of personal medical data of up to 50 million Americans from Ascension, a leading healthcare provider in the U.S., to Google. This revelation raised concerns because patients were not informed about this extensive data-sharing arrangement. The shared data encompassed patients’ names, birth dates, medical histories, lab results, diagnoses, and other confidential details. In response to the backlash, Project Nightingale was promptly discontinued.

Over the years, Google has also incurred substantial fines in the EU due to GDPR-related violations concerning data protection.

6. Google’s Antitrust Issues

In October 2020, the U.S. Department of Justice, joined by 11 state attorneys general, filed a civil antitrust lawsuit against Google. The complaint alleges that Google holds monopolies in both general internet search and search-related advertising, in violation of Section 2 of the Sherman Antitrust Act. This dominance extends to searches for health information and the advertising of health products.

Implications for AI in Healthcare

Despite the challenges, it’s essential to note that Google remains a formidable player in the tech industry with vast resources. While the departures of key AI figures are significant, the company still houses a wealth of talent. However, to reestablish its foothold in healthcare, Google must address the concerns surrounding its corporate culture, commitment to AI ethics, and data privacy practices. Only by doing so can it hope to regain trust and drive meaningful advancements in healthcare AI.

Google’s recent AI-related challenges serve as a reminder of the delicate balance between innovation, ethics, and corporate responsibility. As the company navigates these turbulent waters, the broader tech and healthcare industries will be watching closely, hoping for a future where AI can be harnessed safely and ethically to improve patient outcomes.

“Healing is a matter of time, but it is sometimes also a matter of opportunity.” — Hippocrates

🌟 Your Support is Crucial! 🌟 If you enjoyed this story, please consider following me and subscribing to my publications. The sustainability and future of my writing depend on readers like you, and every follower helps keep this project growing. 🙏


Sergei Polevikov

Math geek, author & health AI founder. I share raw insights from digital health, drawing from my experiences, mistakes & triumphs, at sergeiAI.substack.com