What happens when advances in technology raise the ethical bar?

Nicholas Culbertson
Published in tech-protenus
Jan 9, 2018


The HIPAA privacy statute and its related interpretive regulations, together known as the "Privacy Rule," create a federal set of rights for patients to access their protected health information. The rule translates to a simple premise: by default, the only person who can access your medical information is you.

Structured as a rule of exceptions, the Privacy Rule starts with a broad prohibition against the use or disclosure of protected patient information unless the use or disclosure meets one or more exceptions. Most obviously, anyone providing medical treatment, such as your doctor, fits within the 'treatment' exception to this broad prohibition. The Privacy Rule's counterpart, the HIPAA Security Rule, requires organizations to audit access to patient data to ensure that such accesses are proper.
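
As a rough illustration of that deny-by-default structure, the sketch below models the access check in Python. The exception names and fields are a hypothetical, simplified subset for illustration, not the regulation's actual taxonomy:

```python
# Hypothetical sketch of the Privacy Rule's deny-by-default structure:
# every use or disclosure is prohibited unless it fits a recognized exception.
from dataclasses import dataclass

# An illustrative subset of permitted-purpose exceptions.
PERMITTED_PURPOSES = {"treatment", "payment", "healthcare_operations", "patient_request"}

@dataclass
class AccessRequest:
    user_id: str
    patient_id: str
    stated_purpose: str

def is_permitted(request: AccessRequest) -> bool:
    """Deny by default; allow only if the access fits an exception."""
    return request.stated_purpose in PERMITTED_PURPOSES

# Example: a clinician opening a chart for treatment fits the 'treatment' exception.
req = AccessRequest(user_id="dr_smith", patient_id="p123", stated_purpose="treatment")
assert is_permitted(req)
```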

Not too long ago, when health data rarely extended beyond what your doctor wrote down in a paper chart, auditing was relatively simple: access to patient information was generally limited to those directly responsible for patient care.

Today, health data is overwhelmingly electronic, and federal mandates, including Meaningful Use requirements, have encouraged the proliferation of electronic health records. In turn, the number of people who can access health data continues to grow. Through data sharing, interoperability, and health information exchanges, nearly every health system employee, vendor, and business associate can touch your digital health record.

At a large academic or regional medical center, a single day's work can generate more than 10 million actions inside an electronic medical record (EMR) system. Compliance officers have been doing their best to keep up with the tools available: reactive, manual audits in response to a suspected patient privacy violation, or routine reports such as "show me the top 10 most accessed patients." Though meant to surface the riskiest scenarios, these tools lack clinical context and ultimately review an insignificant fraction of audit logs, in an industry where 41% of all data breaches are attributable to insiders.
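
To make concrete how little such a report reveals, here is a minimal pandas sketch against a hypothetical audit-log schema (the column names and data are invented):

```python
import pandas as pd

# Hypothetical EMR audit log: one row per access event.
audit_log = pd.DataFrame({
    "user_id":    ["u1", "u2", "u1", "u3", "u2", "u1"],
    "patient_id": ["p9", "p9", "p9", "p4", "p4", "p7"],
    "action":     ["view", "view", "edit", "view", "view", "view"],
})

# The classic routine report: top 10 most accessed patients.
# Note what is missing: no sense of who accessed, why, or whether
# the access fit the user's role. Just a raw count.
top_10 = audit_log["patient_id"].value_counts().head(10)
print(top_10)
```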

These approaches have been deemed acceptable because health systems don’t have the time or human capital to audit more. Unsurprisingly, patient privacy violations, like the theft of hundreds of abortion records by a hospital employee taking part in an anti-abortion campaign, continue to rise.

Yet advances in technology, especially the ability of artificial intelligence to analyze large amounts of unstructured data and serve that analysis up to augment human expertise, pose a new question for healthcare compliance officials: is it ethical to continue auditing only a portion of accesses to health data when artificial intelligence makes it possible to protect patients by auditing every single one?

In other industries, artificial intelligence already handles mundane, repetitive operations at enormous scale, freeing subject matter experts to focus on the anomalies that matter. Law enforcement agencies use AI-powered fingerprint recognition software to identify criminals instead of flipping through massive binders of known prints. Credit card companies use artificial intelligence to monitor our accounts for fraud rather than having analysts review every single purchase: the AI elevates suspicious transactions for review, the cardholder confirms or disputes them, and overall fraud drops in a consumer population that would never review every purchase manually.

Yet in healthcare, we still ask compliance officers to manually review a mere fraction of accesses to some of our most sensitive patient data.

Artificial intelligence in patient privacy would work like this: imagine that every access by a workforce member to an individual's health data is reviewed by artificial intelligence algorithms and given two scores. The Suspicion Score represents how suspicious the access is, given the user's roles, historical behavioral patterns, and the behavioral patterns of the user's peers. The Risk Score represents how much risk the access carries: what information was viewed, whose information it was, and what was done with it.
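
A minimal, rule-based sketch of the two-score idea follows. The features and weights are invented for illustration; a production system would learn these patterns from data rather than hard-code them:

```python
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user_role: str               # e.g. "nurse", "billing"
    dept_matches_patient: bool   # does the user's department overlap the patient's care team?
    accesses_today: int          # the user's access volume today
    typical_accesses: int        # the user's historical daily baseline
    viewed_sensitive: bool       # e.g. psychiatric or reproductive-health notes
    is_vip_patient: bool
    exported_data: bool          # printed or downloaded the record

def suspicion_score(e: AccessEvent) -> float:
    """How unusual is this access for this user? (0..1, invented weights)"""
    score = 0.0
    if not e.dept_matches_patient:
        score += 0.5   # outside the user's normal care context
    if e.accesses_today > 2 * max(e.typical_accesses, 1):
        score += 0.3   # far above the user's own baseline
    return min(score, 1.0)

def risk_score(e: AccessEvent) -> float:
    """How much harm could this access cause? (0..1, invented weights)"""
    score = 0.0
    if e.viewed_sensitive:
        score += 0.4
    if e.is_vip_patient:
        score += 0.3
    if e.exported_data:
        score += 0.3
    return min(score, 1.0)
```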

In this way, compliance professionals can audit every single access to patient data, an unprecedented advance in industry best practices, while applying their expertise only to the accesses that rank high in both Suspicion and Risk. That combination transforms compliance, privacy, and security auditors in healthcare into powerful, comprehensive stewards of health data.
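
Continuing the sketch above, triage then reduces to a simple filter: only accesses high in both dimensions reach a human investigator. The cutoffs below are illustrative, not recommended values:

```python
# Continuing the AccessEvent sketch: score every access, investigate only
# the events that are high in *both* suspicion and risk.
SUSPICION_THRESHOLD = 0.7   # illustrative cutoffs a compliance team would tune
RISK_THRESHOLD = 0.6

def needs_investigation(e: AccessEvent) -> bool:
    return (suspicion_score(e) >= SUSPICION_THRESHOLD
            and risk_score(e) >= RISK_THRESHOLD)

events = [
    AccessEvent("billing", dept_matches_patient=False, accesses_today=40,
                typical_accesses=10, viewed_sensitive=True,
                is_vip_patient=True, exported_data=True),
]
cases_to_review = [e for e in events if needs_investigation(e)]
```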

This effect, sometimes called AI efficiency, lets privacy teams reduce overall risk to patients by applying their uniquely human skills where they matter most, and, more importantly, lets health systems demonstrate to patients, for the first time, that every single access to their most sensitive data is being monitored.

The value of artificial intelligence doesn't stop there. First, when auditors review a case and determine whether or not it's a violation, their verdict is fed back into the AI through a machine learning feedback loop. Better still, the feedback of every auditor using the same platform continuously refines the AI's ability to accurately identify patient privacy violations. This is how some analytical platforms achieve accuracy of up to 97%.
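
One way such a feedback loop might be wired up, sketched here with scikit-learn's incremental learning API; the features and verdicts are stand-ins, and real platforms will differ:

```python
from sklearn.linear_model import SGDClassifier
import numpy as np

# A model that scores accesses can learn incrementally from auditor verdicts.
model = SGDClassifier(loss="log_loss")

# Each feature vector describes one access (invented features, e.g.
# [outside_dept, above_baseline, sensitive_record]); each label is the
# auditor's verdict after review: 1 = violation, 0 = not a violation.
X_reviewed = np.array([[1, 1, 1], [0, 0, 1], [1, 0, 0], [0, 1, 1]])
y_verdicts = np.array([1, 0, 0, 1])

# partial_fit absorbs each new batch of verdicts without retraining
# from scratch: the "machine learning feedback loop."
model.partial_fit(X_reviewed, y_verdicts, classes=np.array([0, 1]))

# New accesses are then scored with the refined model.
new_access = np.array([[1, 1, 0]])
print(model.predict_proba(new_access))
```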

Second, organizations covered under HIPAA are also required to provide individuals with an accounting of health data disclosures, for instance when a patient wants to know whether anyone inappropriately accessed their medical record. Today, the odds that an organization has already reviewed those accesses for privacy violations before such a request arrives are virtually zero. With artificial intelligence, auditors can report back instantly on any suspicious behavior, and on any corrective actions taken, because every single access to every single record has already been reviewed.
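
Because every access has already been scored and reviewed, answering a patient's accounting request becomes a lookup rather than a fresh investigation. Here is a sketch against a hypothetical table of pre-scored access records (the schema is invented):

```python
import pandas as pd

# Hypothetical store of already-scored, already-reviewed access records.
scored_accesses = pd.DataFrame({
    "patient_id":        ["p123", "p123", "p456"],
    "user_id":           ["u1", "u2", "u1"],
    "suspicion":         [0.1, 0.9, 0.2],
    "risk":              [0.2, 0.8, 0.1],
    "verdict":           ["cleared", "violation", "cleared"],
    "corrective_action": [None, "access revoked; retraining", None],
})

def accounting_report(patient_id: str) -> pd.DataFrame:
    """Everything a patient asks for: who looked, how it scored, what was done."""
    return scored_accesses[scored_accesses["patient_id"] == patient_id]

print(accounting_report("p123"))
```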

Lastly, AI allows healthcare organizations not only to report to OCR exactly how many violations have occurred, but also to demonstrate how they are measuring a reduction in policy violations through their analytical platform, and which strategic initiatives are in place to address concerns. The best defense is a strong toolkit that demonstrates the very real concerns of patients are being addressed.
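
In its simplest form, demonstrating a measured reduction is a trend line over confirmed violations. A hypothetical monthly aggregation might look like this:

```python
import pandas as pd

# Hypothetical log of confirmed violations with the date each was found.
violations = pd.DataFrame({
    "found_on": pd.to_datetime(["2017-09-03", "2017-09-20",
                                "2017-10-11", "2017-11-02"]),
})

# Violations per month: the kind of trend a compliance team can put
# in front of OCR to show whether its interventions are working.
monthly = violations.set_index("found_on").resample("MS").size()
print(monthly)
```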

The intent of the Privacy Rule is to strike a balance between keeping patient data private and secure and allowing health systems to provide high-quality care. On one hand, that balance requires strong limits on who may access data and when. On the other, members of large, distributed care teams need to access patient data dynamically and broadly, without delay. Artificial intelligence is a powerful tool that can bridge the gap.

Is there an ethical obligation to use artificial intelligence to monitor access to health data?

In our nation's top hospitals, the answer appears to be yes. In fact, change is already under way, largely because embedded in the ethical consideration is a legal one: advances in technology make it possible for patients, and regulators, to hold healthcare organizations accountable to a willful-neglect standard when it comes to protecting, or failing to protect, health data. As a result, compliance departments are already rethinking what it means to demonstrate that access to health data is comprehensively monitored, and are using artificial intelligence to develop a new set of standards for protecting patient information.

This piece was originally published in the January/February 2018 issue of ethikos, an SCCE publication.


Nicholas Culbertson
CEO @Protenus, @ICITorg Fellow, @The6thBranch Board Treasurer, former @USArmy Green Beret