Karim Galil Of Mendel AI On the Future of Artificial Intelligence

An Interview With David Leichner

David Leichner, CMO at Cybellum
Authority Magazine
14 min read · Mar 10, 2024


Thoughtful deployment of AI in specific domains: Artificial general intelligence is a huge topic in the AI world. While AI can be powerful, it’s not one-size-fits-all. In areas like healthcare, for instance, correctness can be a matter of life and death. This is why explainability and traceability of how AI thinks is so critical. Moving forward, it will be important to ensure AI is designed for the specific needs of particular domains.

As a part of our series about the future of Artificial Intelligence, I had the pleasure of interviewing Karim Galil, MD.

Karim Galil, MD, is co-founder and CEO of Mendel AI. Mendel’s mission is to make medicine objective by enabling the largest index of patient journeys, leveraging AI that understands medicine like a physician. Dr. Galil’s experience as a physician demonstrated that medicine does not advance at the same rate as technology. With Mendel, he aims to bridge this gap, facilitate clinical research at scale, and make medicine truly objective. Dr. Galil is an entrepreneur by spirit; his first company, Kryptonworx, led health tech in the MENA region with customers including Fortune 500 companies.

Thank you so much for joining us in this interview series! Can you share with us the “backstory” of how you decided to pursue this career path in AI?

The story of how I became involved in developing a clinical AI begins with a career as a practicing physician. I attended medical school in Egypt and began seeing patients during my residency. And while I found myself drawn to providing care to patients, I also increasingly noticed the ways in which the practice of medicine felt governed by subjective decisions. It never sat well with me that we can and do offer diagnoses and treatments to patients without full certainty that we’re making the right call. In parallel with this, I became interested in how technology could enhance the work physicians do and improve confidence in the decisions we make. While in medical school, I ended up working at a startup medical software company and that experience was pivotal to changing the direction of my career.

Fast forward many years: I was living in the United States and was accepted into a startup development program. By then I had come up with the seed of the idea for what would become Mendel AI, and an investor believed in my vision; together, we turned that vision into reality. At Mendel, we have built a unique and novel AI platform designed from the ground up not only to analyze and decipher vast amounts of health data but also to apply common-sense reasoning that mimics the training and thought process of a physician. Mendel’s innovative combination of generative AI and a symbolic AI layer — developed by a team of top global AI experts and expert physicians — allows Mendel’s system to truly understand the nuances of medicine and produce accessible analysis based on medical understanding of a patient or cohort of patients.

What lessons can others learn from your story?

The lesson that I’ve had to learn — and that I hope others will take to heart — is that there is real power in never giving up. As an entrepreneur, I’ve come to realize that unwavering persistence isn’t just helpful, it’s essential. There were many times when obstacles seemed insurmountable, but it was my conviction that we needed a “Mendel”-like product that carried me through. I had to have a strong belief in what I was doing, especially during the most challenging times. I remember vividly the moment of desperation — I was faced with the possibility that I may have to leave the U.S. — only to receive the life-changing news that I was accepted into a startup program. That was a turning point for me, a testament to the fact that perseverance pays off.

Another critical lesson from my experience is the value of mentors. I wouldn’t be where I am today without the guidance, support, and wisdom of those who have walked the path before me. They provided insights that helped me navigate the complexities of entrepreneurship and helped me grow and succeed.

My journey is a testament to the belief that with determination and the right support, it’s possible to overcome even the most daunting challenges.

Can you tell our readers about the most interesting projects you are working on now?

In October, Mendel launched Hypercube, our groundbreaking tool that sits at the intersection of AI and healthcare. Hypercube allows users in healthcare to analyze large swaths of patient data simply by asking questions in everyday language.

Healthcare and life sciences organizations are sitting on mountains of structured and unstructured real-world data they can’t effectively use today. Deciphering clinical data takes physician intelligence, which machines haven’t been able to replicate — forcing organizations to rely on manual data curation, which doesn’t scale, or on brute-force attempts to extract information with code, which are slow and error-prone. Using real-world data to answer a question as simple as identifying cancer patients is extremely difficult, let alone understanding what actually happened to them.

We knew we needed to throw out the standard playbook to build a true clinical AI that could reason and interpret clinical data like a physician. To do that, we combined the power of deep learning with symbolic AI, a set of processes that involve using logic to steer the AI’s responses and provide explainability to decision making. This is a significant technical feat that AI experts believed to be impossible, but we knew this was necessary in order to achieve human intelligence, specifically in the clinical domain. Ultimately, it means that when our AI is right, it’s consistently right — and when there is an error, we can pinpoint the exact issue with the clinical logic and fix it.
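
As a rough, hypothetical sketch of this neuro-symbolic pattern (not our actual implementation; every name, rule, and record below is invented for illustration), imagine a deep-learning model proposing structured facts and a symbolic rule layer accepting or rejecting each one against explicit clinical logic, so that every decision is traceable to a rule:

```python
# Hypothetical sketch of a neuro-symbolic check: a deep-learning model proposes
# structured facts, and a symbolic rule layer validates them against explicit
# clinical logic so every acceptance or rejection is traceable to a rule.

from dataclasses import dataclass

@dataclass
class ExtractedFact:
    patient_id: str
    attribute: str     # e.g. "biomarker:ER"
    value: str         # e.g. "positive"
    source_span: str   # the text the model pointed to, kept for traceability

def er_requires_breast_cancer(fact: ExtractedFact, known_diagnoses: set) -> tuple:
    """Toy rule: an ER biomarker result should co-occur with a breast-cancer diagnosis."""
    if fact.attribute == "biomarker:ER" and "breast cancer" not in known_diagnoses:
        return False, "ER status asserted without a supporting breast-cancer diagnosis"
    return True, "consistent with recorded diagnoses"

def validate(facts, known_diagnoses):
    """Run each proposed fact through the symbolic rule and keep the reasoning."""
    report = []
    for fact in facts:
        accepted, reason = er_requires_breast_cancer(fact, known_diagnoses)
        report.append({"fact": fact, "accepted": accepted,
                       "why": reason, "evidence": fact.source_span})
    return report

# Example: the neural layer extracted an ER-positive result from a pathology note.
facts = [ExtractedFact("p1", "biomarker:ER", "positive", "ER: positive (IHC)")]
print(validate(facts, known_diagnoses={"breast cancer"}))
```

The value of the pattern is traceability: when a fact is rejected, the rule that rejected it and the source text it came from are recorded together, which is what makes errors diagnosable and fixable.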

At Mendel, we’re betting on domain-specific intelligence. Within domains like healthcare, there are both high stakes for accuracy and a lot we can do on the technical side to ensure AI has the knowledge and understanding of a human, but at the scale of a machine. Now, in seconds, users can get answers to previously unanswerable questions like the following (a toy sketch of how the first question might map to structured filters appears after the list):

  • Which patients in my population have breast cancer, have been tested for ER or PR, and have taken any hormone therapy? What were their outcomes?
  • How many patients with cancer took Aromasin and were tested for HER2/Neu?
  • What were the outcomes of patients who had prostate cancer, took Leuprorelin, and were tested for the JAK2 biomarker?
  • For patients who have breast cancer with BRCA1 tested and took Tamoxifen, did any of them have a mastectomy?
  • What side effects of my drug make doctors stop prescribing it?
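
Purely as an illustration of the idea (not Hypercube’s actual interface or data model; the schema, field names, and therapy list are invented), the first question above might decompose into structured filters over an indexed patient table:

```python
# Hypothetical sketch: the first question above, expressed as structured filters
# over a toy patient index. Field names and schema are invented for illustration.

patients = [
    {"id": "p1", "diagnoses": {"breast cancer"}, "biomarkers_tested": {"ER", "PR"},
     "therapies": {"tamoxifen"}, "outcome": "remission"},
    {"id": "p2", "diagnoses": {"prostate cancer"}, "biomarkers_tested": {"JAK2"},
     "therapies": {"leuprorelin"}, "outcome": "stable"},
]

def breast_cancer_hormone_therapy_cohort(rows):
    """Patients with breast cancer, tested for ER or PR, who took any hormone therapy."""
    hormone_therapies = {"tamoxifen", "aromasin", "letrozole"}  # illustrative list only
    return [
        {"id": r["id"], "outcome": r["outcome"]}
        for r in rows
        if "breast cancer" in r["diagnoses"]
        and r["biomarkers_tested"] & {"ER", "PR"}
        and r["therapies"] & hormone_therapies
    ]

print(breast_cancer_hormone_therapy_cohort(patients))  # [{'id': 'p1', 'outcome': 'remission'}]
```

In practice, the hard part isn’t the final filter; it’s producing a reliable index like this from free-text records in the first place, which is where the clinical reasoning comes in.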

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?

From the onset of my entrepreneurial journey, I embraced the conviction that you need just one person to believe in you. Mark Goldstein — the investor who saw the promise in my vision for clinical AI at IndieBio — embodies that pivotal role in my story. Mark saw potential where others hesitated.

His investment was more than financial — it was deeply personal. Mark extended a level of mentorship that went beyond transactional interactions; he invested his time, wisdom, and unwavering support in me. Treating me like a son, he provided guidance that was instrumental in the development of Mendel, as he helped navigate the early complexities of startup growth and strategy. Mentors like Mark are the unsung heroes behind many success stories.

What are the 5 things that most excite you about the AI industry? Why?

It’s obvious that AI is having a moment in the spotlight, and there are many ways that it will be revolutionary across many industries. In healthcare, specifically, we’ve been talking about the promise of real-world evidence for decades but have yet to make it a reality because we haven’t been able to scale clinical reasoning.

We believe true clinical AI will finally open the door to large-scale real-world evidence generation, and we’re excited to be at the forefront with Hypercube. A few areas where we expect AI to revolutionize healthcare include:

  • Unlocking unstructured data: 80% of health data is unstructured and, until now, has been largely unusable. Fundamentally, medical records are developed by and for clinical experts: context is missing, along with nuances that only physicians can interpret. The rise of clinical AI that applies physician intelligence at machine scale unlocks exciting possibilities for the future of medical knowledge.
  • Next-generation indexing: Today’s AI is able to read individual medical documents but not whole medical records (which can span hundreds of documents). Next-generation indexing of clinical data means interpreting and organizing the entire patient journey — across the entire medical record — to create a single view. What’s unique about Mendel’s approach is that our AI’s clinical understanding is such that we can reconcile clinical data across different data sources down to the biomarker level (a toy sketch of this idea follows this list).
  • Next-generation querying: The ability to interrogate clinical data with the ease and speed of a simple conversation — regardless of the database size — will unlock the next wave of medical innovation. Clinical AI can not only interpret, organize, and understand millions of medical records but also reason over them and answer questions about them in seconds.

It’s important to note that the querying capability alone is not sufficient for clinical applications; answers must also be timely and accurate. This is why, at Mendel, we focused not only on leveraging large language models but also on building the reasoning capacity needed to take millions of medical records and organize them in such a way that the system can read and navigate them in seconds.

  • Lowering Total Cost of Ownership (TCO): Because clinical AI automatically understands and organizes clinical data sets, there’s no need to spend time and resources on manual database mapping and ETL (extract, transform, load) workflows.
  • Democratizing analytics: Replacing coding with chatting means anyone can explore clinical data, regardless of clinical or technical expertise.
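
On the indexing point above, and purely as a hypothetical illustration (the data model and merge behavior are invented and are not our system), reconciling the same biomarker reported by different sources into one patient-level view might look like this:

```python
# Hypothetical sketch: grouping the same biomarker reported by two sources
# into a single patient-level view. The data model is invented purely to
# illustrate the indexing idea described above.

from collections import defaultdict

records = [
    {"patient_id": "p1", "source": "pathology_report", "date": "2023-05-02",
     "biomarker": "HER2", "result": "negative"},
    {"patient_id": "p1", "source": "oncology_note", "date": "2023-06-11",
     "biomarker": "HER2", "result": "negative"},
    {"patient_id": "p1", "source": "oncology_note", "date": "2023-06-11",
     "biomarker": "ER", "result": "positive"},
]

def build_patient_index(rows):
    """Group findings per patient and biomarker, keeping every source for traceability."""
    index = defaultdict(lambda: defaultdict(list))
    for r in rows:
        index[r["patient_id"]][r["biomarker"]].append(
            {"result": r["result"], "source": r["source"], "date": r["date"]}
        )
    return index

index = build_patient_index(records)
# One consolidated view per patient: two concordant HER2 findings plus one ER finding.
print(dict(index["p1"]))
```

The sketch only groups findings per patient and biomarker; the genuinely hard part is deciding, with clinical context, when two findings describe the same event and which to trust, which is what reconciling data down to the biomarker level actually requires.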

What are the 5 things that concern you about the AI industry? Why?

AI is one of the most dynamic and transformative areas of technological advancement today. But it is still an imperfect technology and one that needs to be understood before it’s adopted. Many of the AI tools that have grabbed headlines deliver incredibly impressive results but also need to be looked at with a healthy level of skepticism before we place important tasks in their hands. Here are the five things about AI that worry me most:

  • Thoughtful deployment of AI in specific domains: Artificial general intelligence is a huge topic in the AI world. While AI can be powerful, it’s not one-size-fits-all. In areas like healthcare, for instance, correctness can be a matter of life and death. This is why explainability and traceability of how AI thinks is so critical. Moving forward, it will be important to ensure AI is designed for the specific needs of particular domains.
  • Steering Deep Learning: Deep learning relies heavily on identifying patterns in data to make predictions, which means it’s only as good as the data it’s fed. For instance, an AI trained on data that says 2+2=3 will only ever give you the wrong answer to that simple problem. It will never spontaneously discover the rules of math and correct that error. For areas where domain-specific knowledge and intelligence are required, it’s important to pair deep learning with logic and guardrails to keep it from hallucinating (returning fabricated or inaccurate results).
  • General Misunderstanding of AI: There is a widespread lack of understanding about what AI is capable of and its limitations. For the average person, differentiating between AI myths and realities is challenging, and this can lead to misconceptions about the technology’s role and impact. Education and transparent communication about how AI works, the various types of AI and how to check the work of AI are needed to bridge this knowledge gap to ensure that expectations are realistic and grounded in actual capabilities.
  • False Claims and AI Hype: The market is saturated with overblown claims about what AI can do, fueled by a combination of marketing hype and genuine enthusiasm for the technology. Companies may overpromise the abilities of their AI systems, and when that promise falls short, it will lead to a lack of trust amongst consumers and potentially harm the reputation of the field as a whole. It is crucial for there to be honesty and clarity about the current state of AI capabilities and the difference between the companies, like Mendel, that are building their own novel AI systems and the companies that are simply wrapping their application around current technology like ChatGPT.
  • Data compliance and privacy: When it comes to health data, compliance and data privacy are paramount. Many technologies claim redaction accuracy above 99%, but they were tested on data that’s cleaner and better formatted than what typically exists in the real world — creating real privacy and compliance risks (a toy sketch of how brittle simple pattern-based redaction can be follows this list). From the very early days of Mendel, one of the first use cases for our clinical reasoning that we heavily invested in was de-identifying patient data for research purposes. We’re incredibly proud that our de-identification capability is certified to perform above the 99% HIPAA threshold across different data types.
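
To make that brittleness concrete, here is a deliberately naive, hypothetical pattern-based redactor; it has nothing to do with our actual de-identification system, and its patterns cover only a small slice of the identifiers HIPAA cares about:

```python
# Toy sketch of naive pattern-based de-identification. Rules like these are
# brittle on messy real-world notes (unusual date formats, names, free text),
# which is why accuracy measured on clean test data can overstate real-world
# redaction performance.

import re

PATTERNS = {
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt seen 03/10/2024, MRN 448211, callback 415-555-0133."
print(redact(note))  # -> "Pt seen [DATE], [MRN], callback [PHONE]."
```

An approach like this immediately misses names, addresses, dates written out in words, and identifiers buried in narrative text, which is exactly why redaction numbers measured on clean, well-formatted data can fall apart on real-world records.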

As you know, there is an ongoing debate between prominent scientists (personified as a debate between Elon Musk and Mark Zuckerberg) about whether advanced AI poses an existential danger to humanity. What is your position on this?

AI and machine learning are tools — and tools are value neutral. A hammer can build and a hammer can destroy. It’s the people programming and using AI whom we must look to for ethical behavior. Currently, a machine learning system trained on information that is harmful to humans may well produce results that are harmful to humans, because it is simply analyzing patterns and reproducing what it finds. Where this becomes more challenging is the move from machines being pattern-based to machines having reasoning. And again, it’s up to us, as humans, to create systems of reasoning that prioritize ethics and safety. We’re in the very early days of AI, a stage where AI can do some specific tasks very well but is limited in other areas.

So, when we talk about AI risks, we’re really talking about ensuring it performs safely within the boundaries we set. As AI gets more sophisticated, we’ll need to keep a close eye on its development and make sure we’re building in ethical considerations and fail-safes. But as for AI posing a risk to humanity as a whole? We’re not there yet.

What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?

The exciting thing about being at the forefront of AI, particularly in its infancy, is that we’re shaping not just technology but the future of society. The decisions we make now will determine both the commercial viability of AI and the effects it has on our culture and society.

In healthcare, where the stakes are as high as they get — literally life and death — the need for regulation is especially critical. We’re dealing with highly sensitive personal data, diagnostic tools, and treatment plans that AI is increasingly playing a role in. We need a robust regulatory framework that can keep pace with the rapid development of AI in healthcare.

What does responsible AI development look like in this context? It means ensuring that AI systems are transparent, equitable, and operate within ethical guidelines that protect patient rights and data privacy.

As you know, there are not that many women in your industry. Can you advise what is needed to engage more women into the AI industry?

Absolutely, addressing the underrepresentation of women in the AI industry is a critical issue that requires multifaceted strategies. Challenges such as barriers to entry and advancement, workplace culture, gender gaps in STEM education, and inherent gender biases within AI itself significantly contribute to this disparity. It’s also important to note that AI bias is not just about programming; it’s also about the people behind the technology.

At Mendel, we’re acutely conscious of these challenges and are actively working to mitigate them. Our approach begins with a firm commitment to attracting diverse talent. We understand that diversity is not just beneficial for our workplace culture but is also crucial for fostering innovation and reducing biases in AI solutions. By creating a more inclusive environment, we aim to empower women to pursue careers in AI and feel supported throughout their professional journey. By involving a diverse group of talents in the development process, we hope to scrutinize and challenge underlying assumptions, leading to more equitable and effective AI solutions.

What is your favorite “Life Lesson Quote”? Can you share a story of how that had relevance to your own life?

I like to approach things in life “like a beast” — a mantra I often teach my kids. It means diving in completely and passionately to do the best you can do — the kind of drive that’s required to start a business like Mendel.

“Like a beast” implies a non-compromising attitude toward achieving excellence. It suggests that the person is willing to go above and beyond the norm to deliver outstanding performance, showcasing resilience, overcoming obstacles and remaining focused and disciplined.

How have you used your success to bring goodness to the world? Can you share a story?

Our work at Mendel is focused on distilling truth from clinical data, making it easier to learn from every patient’s life. While many perceive healthcare and medicine as being data-driven and objective, in reality, we’re not actually learning from the millions of medical decisions that are made every day because we can’t effectively access and use the data. As a physician, I found the limitations of medical knowledge deeply frustrating. I first started Mendel AI to match patients to the best research, diagnostic, and treatment options available. It’s been deeply rewarding to help match patients to clinical trials more quickly than they would have been without Mendel’s technology.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)

The movement would be to democratize access to learnings from every patient journey — a movement that’s already underway at Mendel. Going forward, my hope is that removing the barriers around analyzing large patient datasets will help advance the goal of truly personalized medicine and make medical practice more current and objective.

How can our readers further follow your work online?

You can learn more about the innovative work of Mendel at www.mendel.ai. You can also connect with me on LinkedIn at go.mendel.ai/karim-galil-linkedin.

This was very inspiring. Thank you so much for joining us!

About The Interviewer: David Leichner is a veteran of the Israeli high-tech industry with significant experience in the areas of cyber and security, enterprise software and communications. At Cybellum, a leading provider of Product Security Lifecycle Management, David is responsible for creating and executing the marketing strategy and managing the global marketing team that forms the foundation for Cybellum’s product and market penetration. Prior to Cybellum, David was CMO at SQream and VP Sales and Marketing at endpoint protection vendor, Cynet. David is the Chairman of the Friends of Israel and Member of the Board of Trustees of the Jerusalem Technology College. He holds a BA in Information Systems Management and an MBA in International Business from the City University of New York.
