6 obstacles any AI-for-health startup must overcome

All the money in the world can’t solve these challenges — at least, not yet

Mind AI
Nov 20, 2018 · 7 min read
AI has promising applications in healthcare, but innovators face an uphill battle to turn them into reality.

By Paul Lee, M.D.

The amount of money flowing into healthcare artificial intelligence startups is staggering. Since 2013, venture capitalists have dumped $4.3 billion into startups looking to improve patient outcomes and reduce health care expenses with AI applications, according to research firm CB Insights.

Judging by that flow of capital, healthcare is among the most anticipated applications of artificial intelligence. But building a healthcare technology startup presents unique challenges that entrepreneurs don’t encounter in other fields.

Even the most well-funded startups will face six key challenges in bringing AI into health care.

With the technology itself, the hype exceeds the capabilities.

We wanted to offer doctors a system that could read through all the latest medical journals and studies and help inform their recommendations, but that technology just doesn’t exist yet.

Signal fatigue can lead to worse patient outcomes.

In theory, algorithms that comb patient data for hidden correlations could dramatically improve health care outcomes. But in practice, there are many ways this could go wrong, and it may already be off track. Atul Gawande recently wrote in the New Yorker that doctors who interact with machine learning algorithms designed to find such correlations are already experiencing signal fatigue. The algorithms, which are black boxes that give users no insight into how they reach their findings, surface many alarming correlations and connections but have no ability to rank them by urgency.

Gawande writes of current EHR systems’ attempts to employ machine learning: “Just ordering medications and lab tests triggers dozens of alerts each day, most of them irrelevant, and all in need of human reviewing and sorting.”

If AI systems are actually going to improve patient outcomes, the technologies they are built on must become better at distinguishing between random genetic correlations and real indicators of health problems.
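The article doesn’t specify how a system would do that filtering, but one standard statistical safeguard is false discovery rate control: when an algorithm tests thousands of candidate correlations, most "significant" hits at a naive threshold are chance findings. Below is a minimal sketch of the Benjamini–Hochberg procedure, assuming a simple list of p-values from independent tests; the numbers are illustrative, not real patient data.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of hypotheses that survive FDR control at level alpha."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k such that p_(k) <= (k / m) * alpha.
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * alpha:
            threshold_rank = rank
    # Everything at or below that rank is reported; the rest is discarded.
    return sorted(order[:threshold_rank])

# Hypothetical p-values for five candidate gene-outcome correlations.
p_vals = [0.001, 0.009, 0.04, 0.2, 0.8]
print(benjamini_hochberg(p_vals))  # only the first two findings survive
```

A filter like this doesn’t prove a surviving correlation is clinically meaningful, but it shrinks the flood of alerts a clinician has to review, which is exactly the triage problem Gawande describes.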

Data collection is expensive, and existing data sets are fragmented.

There are a few publicly available datasets, such as those compiled by Kaggle, a company that organizes data science competitions, and those kept by the CDC. Because medical imaging is already captured digitally, there is a growing database of medical images; in fact, 90 percent of all health care data comes from medical imaging. This concentration of data has led to some compelling advances in AI for health care aimed at helping doctors analyze medical imaging results.

As more hospitals migrate to using electronic health records, it will become easier to collect more patient data that can be compiled into high-quality datasets. Epic, one of the largest electronic health record companies in the U.S., has hinted that it is investing in AI-powered tools to improve doctors’ abilities to capture accurate patient data in their systems, which would then improve the kind of datasets that could be compiled from these records.

Some AI startups are also exploring the role of crowdsourcing to compile datasets. At my company, Mind AI, we see an opportunity to use blockchain and cryptocurrency to motivate millions of members of the public to contribute information of all kinds, including health data, to our reasoning engine, giving us an enormous head start in educating our engine compared to working with existing datasets.

People are reactive — not proactive — with their own health.

We found that doctors immediately embraced our telehealth platform, seeing it as a great way to reach more people with potentially life-saving health information at a reasonable cost.

But consumers were less enthusiastic about the service, which surprised us, because survey after survey has shown that the public is eager to see their doctors adopt telemedicine.

We found that, when the option was made available, patients were reluctant to substitute a video consultation for an in-person visit.

A survey from late 2017 indicated that patients like the idea of telemedicine in general, but don’t actually trust it when it comes to specific health concerns.

While 77 percent of respondents said they would likely choose a doctor who offered telemedicine over one who didn’t, when asked about specific ailments like skin conditions, respiratory issues, or joint pain, respondents showed a clear preference for an in-person visit.

With our platform, we noticed that users didn’t seem to know what kind of questions were appropriate to ask of the doctors in the system. The space between a traditional in-person visit and just looking up your symptoms on Google is murky. Patients liked the idea of a tool to communicate with doctors, but they didn’t know what questions to ask.

Startups developing telehealth apps — whether they involve AI or not — will need to work hard to educate the public about how to use them.

Snake oil sellers will always beat you to market, eroding public trust.

This is already happening on a small scale with the growing popularity of online DNA analysis companies, as this paper from The MITRE Corp. explains. After discovering that they have a certain gene that has been linked with an increased risk of developing blood clots (and other diseases), some consumers become worried about their health and look for preventative treatments. But if they confine their search to the internet, rather than talking to their regular doctor, they will often end up on predatory websites that offer treatments that are unproven, at best, for outrageous prices.

Entrepreneurs developing AI for health care applications will need to be proactive about winning the public’s trust. This could require everything from developing industry-wide standards or certifications to self-policing bad actors.

Regulations have little to no framework for dealing with AI.

Regulation is important in health care — it’s what’s supposed to keep fraudsters out of the market — but it also slows down innovation.

Even without integrating AI into our application, we hit regulatory roadblocks with our telehealth platform. Regulations prohibit some health services, like prescribing medications on the controlled substances list, without an in-person visit.

To get regulatory approval of a new medical device in the U.S., you have to prove that it is safe and useful and produces predictable, repeatable results. Much of the development around AI is focused on machine learning, which can produce different results every time more data is added. Under today’s FDA rules, a device built on a continuously learning model could never get approved.

There are also big questions around liability. Who is responsible if an algorithm makes a mistake? Who is responsible for maintaining and protecting health care databases that all AI for health applications need to have access to?

Designers of any healthcare algorithm will have to tread cautiously to avoid violating regulations.

AI will have a major impact on health care, but will it be a positive one?

To stay up-to-date on our progress, sign up for our email list, talk to us on Telegram, or follow us here on Medium.
