6 obstacles any AI-for-health startup must overcome
All the money in the world can’t solve these challenges — at least, not yet
By Paul Lee, M.D.
The amount of money flowing into healthcare artificial intelligence startups is staggering. Since 2013, venture capitalists have dumped $4.3 billion into startups looking to improve patient outcomes and reduce health care expenses with AI applications, according to research firm CB Insights.
Judging by that investment, healthcare is the most anticipated application of artificial intelligence. But building a healthcare technology startup presents unique challenges that entrepreneurs don’t encounter in other fields.
Even the best-funded startups will face six key challenges in bringing AI into health care.
The hype around the technology exceeds its capabilities.
In 2013, I was running a startup developing a telehealth platform. Our aim was to help patients get quick opinions and recommendations from multiple doctors all over the world. To make our app even more effective, we wanted to employ artificial intelligence, but quickly found that the technology did not live up to the hype. Current leaders in AI can offer algorithms to help with sentiment analysis, speech-to-text transcription, and some types of image recognition, but we don’t have a system that can tie those tools together and provide real insight.
We wanted to offer doctors a system that could read through all the latest medical journals and studies, and help them make their recommendations, but that technology just doesn’t exist yet.
Signal fatigue can lead to worse patient outcomes.
The dream in many AI-for-health implementations is to create an algorithm that keeps patients healthier by finding correlations between genes, pre-existing conditions, and lifestyle choices and using them to make recommendations for preventative treatment.
In theory, this could dramatically improve health care outcomes. But in practice, there are many ways it could go wrong, and it may already be off track. Atul Gawande recently wrote in The New Yorker that doctors who interact with machine learning algorithms designed to find such correlations are already experiencing signal fatigue. The algorithms are black boxes that give users no insight into how they reach their findings; they surface many alarming correlations and connections with no ability to rank them by urgency.
Gawande writes of current EHR systems’ attempts to employ machine learning: “Just ordering medications and lab tests triggers dozens of alerts each day, most of them irrelevant, and all in need of human reviewing and sorting.”
If AI systems are actually going to improve patient outcomes, the technologies they are built on must become better at distinguishing between random genetic correlations and real indicators of health problems.
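One standard statistical tool for separating chance correlations from likely real ones is false-discovery-rate control. As an illustrative sketch only (this is not a method described in the article, and the p-values are hypothetical), here is a minimal Benjamini–Hochberg filter in Python that keeps only the candidate correlations unlikely to be random noise:

```python
# Minimal Benjamini-Hochberg false-discovery-rate filter (illustrative only).
# Given p-values for many candidate gene/outcome correlations, keep only
# those unlikely to be chance findings at the chosen FDR level.

def benjamini_hochberg(pvalues, fdr=0.05):
    """Return the original indices of hypotheses that pass the BH procedure."""
    m = len(pvalues)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k such that p_(k) <= (k / m) * fdr.
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * fdr:
            threshold_rank = rank
    # Every hypothesis up to that rank is declared a discovery.
    return sorted(order[:threshold_rank])

# Hypothetical p-values for five candidate correlations:
pvals = [0.001, 0.008, 0.039, 0.041, 0.6]
print(benjamini_hochberg(pvals, fdr=0.05))  # → [0, 1]
```

In this toy run, only the first two correlations survive the filter; the rest are treated as likely noise rather than surfaced as alerts. Real alerting systems would need far more than this, but the principle — rank findings and suppress the ones indistinguishable from chance — is the missing piece the paragraph above describes.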
Data collection is expensive, and existing data sets are fragmented.
To build algorithms that don’t churn out dozens of false positives for every valid diagnosis, you need good healthcare data: relevant, representative, and accurate. And that’s the problem. Comprehensive, high-quality health data is hard to come by. As CB Insights points out, the U.S. has no standard format for recording patient data and no central repository for it.
There are a few publicly available datasets, such as those compiled by Kaggle, a company that organizes data science competitions, and those kept by the CDC. Because medical imaging is already captured digitally, there is also a growing database of medical images; in fact, 90 percent of all health care data comes from medical imaging. This concentration of data has led to some compelling advances in AI tools that help doctors analyze imaging results.
As more hospitals migrate to using electronic health records, it will become easier to collect more patient data that can be compiled into high-quality datasets. Epic, one of the largest electronic health record companies in the U.S., has hinted that it is investing in AI-powered tools to improve doctors’ abilities to capture accurate patient data in their systems, which would then improve the kind of datasets that could be compiled from these records.
Some AI startups are also exploring the role of crowdsourcing to compile datasets. At my company, Mind AI, we see an opportunity to use blockchain and cryptocurrency to motivate millions of members of the public to contribute information of all kinds, including health data, to our reasoning engine, giving us an enormous head start in educating our engine compared to working with existing datasets.
People are reactive, not proactive, with their own health.
If your healthcare startup is consumer-facing, you will need to convince patients to download your app and develop a habit of using it. This can prove much harder than you expect.
We found that doctors immediately embraced our telehealth platform, seeing it as a great way to reach more people with potentially life-saving health information at a reasonable cost. Patients, however, were reluctant to substitute a video consultation for an in-person visit.
A survey from late 2017 indicated that patients like the idea of telemedicine in general, but don’t actually trust it when it comes to specific health concerns.
Seventy-seven percent of respondents said they would likely choose a doctor who offered telemedicine over one who didn’t, but when asked about specific ailments like skin conditions, respiratory issues, or joint pain, they showed a clear preference for an in-person visit.
With our platform, we noticed that users didn’t seem to know what kind of questions were appropriate to ask of the doctors in the system. The space between a traditional in-person visit and just looking up your symptoms on Google is murky. Patients liked the idea of a tool to communicate with doctors, but they didn’t know what questions to ask.
Startups developing telehealth apps, whether they involve AI or not, will need to work hard to educate the public about how to use them.
Snake oil sellers will always beat you to market, eroding public trust.
If the public has learned anything in the past few decades, it’s that the internet is full of fraudsters and hucksters who are more than happy to exploit a patient’s desperation or paranoia in the middle of a health crisis.
This is already happening on a small scale with the growing popularity of online DNA analysis companies, as this paper from The MITRE Corp. explains. After discovering that they have a certain gene that has been linked with an increased risk of developing blood clots (and other diseases), some consumers become worried about their health and look for preventative treatments. But if they confine their search to the internet, rather than talking to their regular doctor, they will often end up on predatory websites that offer treatments that are unproven, at best, for outrageous prices.
Entrepreneurs developing AI for health care applications will need to be proactive about winning the public’s trust. This could require everything from developing industry-wide standards or certifications to self-policing bad actors.
Regulations have little to no framework for dealing with AI.
Fraudsters aren’t just harming the public perception of technological innovation in healthcare; they’re also giving regulators good reason to be suspicious of the role AI plays in it.
Regulation is important in health care; it’s supposed to keep fraudsters out of the market, but it also slows down innovation.
Even without integrating AI into our application, we hit regulatory roadblocks with our telehealth platform. Regulations prohibit some health services, like prescribing medications on the controlled substances list, without an in-person visit.
To get regulatory approval for a new medical device in the U.S., you have to prove that it is safe, useful, and produces predictable, repeatable results. Much of the development around AI is focused on machine learning, whose outputs change every time the model is trained on more data. Under today’s FDA rules, such a device could never get approved.
There are also big questions around liability. Who is responsible if an algorithm makes a mistake? Who is responsible for maintaining and protecting health care databases that all AI for health applications need to have access to?
Designers of any healthcare algorithm will have to tread cautiously to avoid violating regulations.
AI will have a major impact on health care, but will it be a positive one?
AI has the potential to revolutionize health care if it can overcome each of these roadblocks. Without a careful approach that recognizes all of these obstacles, AI could have a net negative impact on patient outcomes by eroding trust in modern medicine, increasing burnout rates among doctors, and raising costs through litigation and regulation. But with patient collaboration among the tech sector, health care administrators, regulators, and practitioners, these problems can be solved.