AI in healthcare: keeping data safe and building trust

New technologies should enhance, not replace doctors, writes Antoinette Price

IEC
e-tech
4 min read · Jan 25, 2019

Medical personnel at William Beaumont Army Medical Center using a robotic surgical system. Photo: Marcy Sanchez, Wikimedia Commons

Our approach to healthcare is changing rapidly, thanks to the Internet of Things (IoT), which continues to drive demand for services offering more intelligent analytics. As machine learning advances, AI is finding ever broader application.

In an increasingly digitized world of connected devices and intelligent systems, international standards play a key role in addressing the ethical, technical, safety and security aspects of the technologies we encounter in daily life.

Work is already underway in a joint committee for AI established by IEC and ISO, the first of its kind to consider the entire AI ecosystem rather than focusing on individual technical aspects. Headed by Wael Diab, a senior director at Huawei, it draws on the breadth of application areas covered by IEC and ISO, bringing together IT and domain experts from different sectors.

“Connected products and services such as medical devices and automated healthcare systems must be safe and secure or no one will want to use them. Trustworthiness and related areas such as resiliency, reliability, accuracy, explainability, safety, security and privacy must be considered from a systems perspective from the get-go. Standardization will need to adopt a broad approach to cover the AI technologies and consider synergies with analytics, big data, IoT and more,” says Diab.

An apple a day keeps the algorithm away

From robotically assisted surgery, virtual nursing assistants, dosage-error reduction and connected devices to image analysis and clinical trials, AI technologies already play many different roles in the delivery of healthcare treatments, surgeries and services. These roles include improving diagnostics and helping doctors make better decisions for their patients.

Health insurance is a critical part of the industry and is also making use of AI. For example, some software platforms use machine learning to identify and reduce inefficiencies in the claims management process, such as fraudulent or inaccurate billing and waste through under-utilization of services. Others help patients choose tailored insurance coverage to reduce healthcare costs, or assist employers looking for group coverage options.
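The article does not describe how these platforms work internally, but a common technique for this kind of task is anomaly detection, which flags claims that look statistically unusual and routes them for manual review. The sketch below is a minimal, hypothetical illustration using scikit-learn's isolation forest; the claim fields, figures and contamination rate are invented for the example.

```python
# A minimal, hypothetical sketch of anomaly detection on billing data.
# The claim fields and numbers are illustrative, not from any real platform.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical claims: [billed_amount, procedures_per_claim, patient_visits]
normal_claims = rng.normal(loc=[200.0, 2.0, 1.0],
                           scale=[50.0, 1.0, 0.5],
                           size=(500, 3))
suspect_claims = np.array([[5000.0, 40.0, 1.0],   # implausibly large bill
                           [150.0, 30.0, 0.0]])   # many procedures, no visits
claims = np.vstack([normal_claims, suspect_claims])

# Fit an isolation forest: points that are easy to isolate score as anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(claims)  # -1 = flagged as anomalous, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} claims flagged for manual review, e.g. indices {flagged[:5]}")
```

In practice such flags would feed a human review queue rather than trigger automatic denials, which is consistent with the article's theme that AI should assist rather than replace professionals.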

Digitizing healthcare

The personal data of millions of patients worldwide is being gathered, stored and shared electronically in healthcare management and delivery systems, clinical research and medical consultations. Doctors and researchers alone cannot leverage all this information to enhance patient care, but in a growing number of trials, algorithms have mined huge numbers of patient files and medical images quickly enough to detect and diagnose diverse conditions. Examples include certain cancers, the risk of heart disease and eye-related conditions.

AI-powered imaging technology has learned to read thousands of anonymized, complex eye scans and can successfully detect more than 50 eye conditions. With an accuracy level of 94%, the algorithms matched or beat the performance of world-leading eye specialists. The argument is that sifting through big data this rapidly could shorten the time patients wait to see a consultant, and possibly save a person's sight, but many hurdles remain before such trials are fully approved.
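The article does not say how the 94% figure was derived, but accuracy claims of this kind are typically computed by comparing a model's predictions against specialist-assigned diagnoses on a held-out set of scans. A minimal sketch, with entirely made-up labels:

```python
# A minimal sketch of how a scan classifier's accuracy might be measured
# against specialist-assigned diagnoses. All labels here are made up.
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical diagnoses for 10 held-out scans (0 = healthy, 1 = condition present)
specialist_labels = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
model_predictions = [0, 1, 1, 0, 1, 0, 1, 1, 1, 0]

accuracy = accuracy_score(specialist_labels, model_predictions)
print(f"Agreement with specialists: {accuracy:.0%}")          # 90% in this toy case
print(confusion_matrix(specialist_labels, model_predictions)) # false alarms vs misses
```

The confusion matrix matters as much as the headline accuracy: in a clinical setting, a missed condition and a false alarm carry very different costs.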

How safe is AI in the medical context?

What happens if we are not in the 94% accuracy group? What if the algorithm developers get it wrong and create biases that negatively affect patients?

While technology is widely acknowledged to have great potential to improve patient care and reduce costs, some physicians and scientists are warning the AI community to get its ethics right first. In a healthcare context, errors can harm patients or even prove fatal. Unless those ethical questions are resolved, we run the risk of introducing automated systems blindly. And when errors do occur, who will be accountable: machines or healthcare professionals?

Recent research by Stanford University, published in the New England Journal of Medicine, raises a number of key issues that need to be addressed thoroughly before AI is rolled out into healthcare. They include:

  • Ensuring that data bias in algorithms doesn’t skew results (a minimal audit sketch follows this list)
  • Making sure physicians have an adequate understanding of how algorithms are developed and don’t over-rely on them
  • Maintaining regard for clinical experience, so that the human aspect of patient care is not lost
  • Maintaining confidentiality as the dynamics of doctor-patient relationships change
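
To make the first point concrete: one basic bias check is to measure a model's accuracy separately for each patient subgroup, since a single headline figure can hide poor performance on under-represented groups. The sketch below is hypothetical; the group names, labels and predictions are invented for illustration.

```python
# A hypothetical sketch of a per-subgroup accuracy audit. Group names,
# labels and predictions are invented for illustration only.
from collections import defaultdict

records = [
    # (patient_group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in sorted(total):
    rate = correct[group] / total[group]
    print(f"{group}: accuracy {rate:.0%}")
# A large gap between groups (here 100% vs 50%) signals skewed training data.
```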

Find out more by reading the article Eliminating bias from algorithms in this issue.

Looking ahead

Disruptive technologies like artificial intelligence present both challenges and opportunities across all sectors. AI has already changed many aspects of daily life and will continue to have a massive impact on people and on entire societies. Ironing out the many ethical questions already raised is vital to the successful adoption of these innovative technologies.

IEC also contributes to this effort as a founding member of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS). This community provides a space for interested organizations from around the world to share information and collaborate on initiatives and programmes, while enhancing understanding of the role of standards in facilitating innovation.

“Consensus-based international standards will play a crucial role in accelerating adoption of AI technology in industry application verticals,” says Diab. “End user societal concerns, ethical and trustworthiness considerations are being discussed and incorporated from the ground up.”
