<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Doug Fridsma on Medium]]></title>
        <description><![CDATA[Stories by Doug Fridsma on Medium]]></description>
        <link>https://medium.com/@fridsma?source=rss-5c680c74cc34------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*a_1rQ8FC4Cm0vMxyoggqcA.jpeg</url>
            <title>Stories by Doug Fridsma on Medium</title>
            <link>https://medium.com/@fridsma?source=rss-5c680c74cc34------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 07 May 2026 19:01:48 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@fridsma/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Revolutionizing Health AI: Lessons from Past “Rodeos” and the Path Forward]]></title>
            <link>https://fridsma.medium.com/revolutionizing-health-ai-lessons-from-past-rodeos-and-the-path-forward-15279bfd9b18?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/15279bfd9b18</guid>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Tue, 17 Jun 2025 15:02:33 GMT</pubDate>
            <atom:updated>2025-06-17T15:02:33.840Z</atom:updated>
<content:encoded><![CDATA[<p><em>Doug Fridsma, MD PhD</em></p><p>Healthcare AI isn’t new. In fact, we’ve been here before — multiple times. As we stand at the precipice of another AI revolution, it’s worth examining what we can learn from previous “AI rodeos” and how this time might be different.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/483/1*541vDC_isgLWm_Anhlaajg.png" /></figure><h3>This Isn’t Our First AI Rodeo</h3><p>Healthcare has a rich history of AI experimentation dating back decades:</p><p><strong>The First Mental Health Chatbot (1966)</strong> ELIZA, developed by Joseph Weizenbaum at MIT, was a mock Rogerian psychotherapist that could engage in surprisingly human-like conversations. While primitive by today’s standards, it demonstrated the potential for computers to interact with humans in healthcare contexts — and how systems without real intelligence can look smarter than they actually are.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/415/1*h3YFJLRzf1boIfOqRanWyg.png" /></figure><p><strong>MYCIN: The Pioneering Expert System</strong> Developed at Stanford in the 1970s, MYCIN performed bacteriological diagnosis and treatment recommendations. Remarkably, when tested against doctors, interns, medical teachers, and medical students across 80 different cases, MYCIN outperformed them all. It used explicit knowledge representation with rule-based reasoning — you could actually see and understand how it made decisions. But rule-based systems often had a “plateau and cliff” performance profile — they would perform well on things that they knew, but would quickly degrade when they got outside of their narrow area of expertise.</p><h3>Neural Networks and the Limits of “Computer Thinking”</h3><p>In the second wave of AI, researchers created neural networks — so named because they resembled the neural connections in the human brain. 
Some felt that this resemblance would create systems that could reason in the same ways that humans reasoned.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/685/1*1f_deHLCxm6Nxeu3mjIZmQ.png" /></figure><p>But here’s the thing: computers don’t think like people. This became painfully clear with the advent of deep learning and neural networks. While these systems achieved impressive accuracy, they also exhibited bizarre failure modes. For example, adding carefully targeted but imperceptible noise to an image could produce wildly unpredictable results. When presented with a picture of a panda, the additional noise could make a neural network confidently classify it as something entirely different (i.e., a gibbon). When you dive deeper, you realize that neural networks see objects as completely abstract patterns — some characteristic features are present, but nothing a human would ever mistake for the real objects.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4ap5bP4V1ZNfvIuwH71z2A.png" /><figcaption>EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES Ian J. Goodfellow, Jonathon Shlens &amp; Christian Szegedy</figcaption></figure><h3>A Cautionary Tale: The Pneumonia Risk Predictor</h3><p>A perfect example of AI’s hidden biases came from a pneumonia risk prediction model developed at Carnegie Mellon University. The goal was simple: classify patients as low-risk (send home with antibiotics) or high-risk (admit to hospital, as ~10% of pneumonia patients die).</p><p>The most accurate model was a neural network, but researchers were uncomfortable deploying it without understanding its reasoning. 
When they compared it to an interpretable rule-based model instead, they discovered something alarming: the system had learned that “HasAsthma(x) =&gt; LessRisk(x).”</p><p>While this pattern was technically correct in the data — they hypothesized that asthmatics notice symptoms sooner and seek care faster, leading to better outcomes — it would be dangerous to deploy that rule clinically. A doctor using this system might send asthmatic pneumonia patients home based on the AI’s recommendation, potentially causing harm.</p><p>The model (and data) was initially collected for insurance risk assessment — highlighting the risk of reusing data collected for one purpose to aid in the development of a system for clinical decision-making.</p><h3>So What’s Different About the Current AI Wave?</h3><p>So is this just a repeat of the previous hype and disappointment, or is this somehow fundamentally different? I would argue that several factors distinguish our current AI moment:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*A0sG67Hs824iIgyqQqp5Lg.jpeg" /></figure><p>The computational advances, the availability of data, and the real problems that current AI can solve make this a very different moment. AI leaders are putting real resources into solving fundamental problems within health care organizations, and it is likely that this round of AI “hype” will fundamentally change health care IT.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/612/1*L15ZXO2nZOHCB7NjspXFMQ.png" /></figure><h3>What Healthcare Leaders Are Thinking</h3><p>This is borne out in what health care leaders are thinking. 
According to a recent study by Bessemer Venture Partners, AWS, and Bain &amp; Company surveying 400+ healthcare buyers:</p><ul><li><strong>95% believe GenAI will transform healthcare</strong> revenue, costs, or administrative burden</li><li><strong>60% report AI budgets outpacing total IT spend</strong>, with C-suite making funding decisions</li><li><strong>Only 30% of AI pilots reach production</strong>, held back by security, data readiness, integration costs, and limited expertise</li><li><strong>48% prefer working with startups</strong> over established players for AI solutions</li><li><strong>64% are open to co-developing</strong> with early-stage partners</li></ul><p>The message is clear: healthcare organizations are bullish on AI but struggling with implementation.</p><h3>Predictions for the Future</h3><p>But I think there still will be changes as AI adoption in health care accelerates. We can look to previous technology adoption to get a sense for what might be coming.</p><h3>1. EHRs Won’t Be the Center of AI Innovation</h3><p>Just as Thomas Watson famously (and incorrectly) predicted a market for “maybe five computers” in 1943, I predict that <strong>EHRs will not be the most important place for AI in the future.</strong></p><p>Current EHR technology resembles those room-sized computers of Watson’s era — powerful but monolithic, difficult to customize, and slow to innovate. The future belongs to more nimble, specialized approaches.</p><h3>2. 
There Will Never Be One AI Model to Rule Them All</h3><p>Modern AI is following the pattern of other digital platforms (databases, operating systems, search engines): <strong>a small set of general-purpose foundations surrounded by many specialized or locally-run derivatives.</strong></p><p>The landscape is converging on:</p><p><strong>A handful of universal foundations</strong></p><ul><li>OpenAI, Anthropic, Google, Meta, and open-source communities training large, general models as “OS kernels” for language reasoning</li></ul><p><strong>Layers of fine-tuned vertical models</strong></p><ul><li>Health-specific variants (Med-PaLM for question-answering)</li><li>Narrower subspecialties (radiation-oncology planners, rare-disease counselors)</li></ul><p><strong>Local/edge deployments</strong></p><ul><li>Clinics, wearables, and ambulances running trimmed models offline for privacy and real-time decisions</li></ul><p><strong>Multi-agent orchestration</strong></p><ul><li>Workflow engines chaining specialized agents: one pulls FHIR data, another reasons over guidelines, a third drafts notes, a fourth checks compliance</li></ul><h3>Final Thoughts: Implications for Healthcare Leaders</h3><p>To succeed in this multi-model future, healthcare organizations should:</p><p><strong>Architect for plurality</strong> — Design APIs and governance systems that allow swapping or ensembling models without rewiring the EHR</p><p><strong>Invest in evaluation</strong> — With multiple models, you’ll need automated fairness, drift, and robustness testing (preparing for EU AI Act requirements)</p><p><strong>Balance buy-and-build</strong> — License best-in-class foundations, then fine-tune lightweight adapters on institutional data to retain IP and prevent leakage</p><p><strong>Stay agile with regulation</strong> — Adaptive-AI rules assume continuous updates; multi-model platforms fit this future better than monolithic EHRs</p><p>Finally, technological, regulatory, and economic realities all 
point toward an ecosystem of cooperating — and competing — AI models and agents, not a single dominant system. Healthcare leaders who prepare for this plurality, rather than betting on one AI solution, will be best positioned for success.</p><p>The AI revolution in healthcare is inevitable, but its form will be diverse, distributed, and dynamic. By learning from past AI “rodeos” and preparing for a multi-model future, we can harness this technology’s potential while avoiding the pitfalls that have tripped us up before.</p><p><em>This transformation won’t happen overnight, but it’s happening. The question isn’t whether AI will revolutionize healthcare — it’s whether we’ll be ready when it does.</em></p><p><em>For more insights on healthcare AI and digital transformation, connect with Doug Fridsma at Doug.Fridsma@HealthUniverse.com</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=15279bfd9b18" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Hello, Health Universe]]></title>
            <link>https://fridsma.medium.com/hello-health-universe-e181128b3a97?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/e181128b3a97</guid>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Wed, 23 Aug 2023 19:08:47 GMT</pubDate>
            <atom:updated>2023-08-23T19:08:47.807Z</atom:updated>
<content:encoded><![CDATA[<p><em>A faster way to build and share health apps</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/963/1*9RJwumHn1E5rMLaf95h0Yw.png" /></figure><p>It’s been a very long time, but I still remember writing one of my first programs in BASIC on a TRS-80. I would turn on the machine, wait for the prompt, and then type in my program by hand from the command line. I remember typing in the code for the simple chatbot ELIZA, and writing programs that would create banners and designs from simple lines of code. It was amazing to me that I could turn these words and lines of code into something that actually did something.</p><p>But every time I turned off the computer and turned it back on again, I would have to type my program back into the machine. I eventually got a “tape drive” — an analog tape that could be used to save a program — but it was very hard to take the ideas that I had and turn them into action without the laborious task of re-creating the code each time I wanted to use it. But the potential that computers had to change how we did things was clear to me from the moment I first started to play with my TRS-80.</p><p>I kept that interest through my studies in medical school and saw the potential for how computers could change medical research and health care delivery. And I’ve been fortunate to be able to make contributions to the use of computer technology in health care.</p><p>Fast forward to 2023, and we now have cloud storage of data, the ability to move petabytes of data around the internet, and nearly every doctor in the country using electronic records to support patient care. We have data scientists and machine learning systems that can identify patterns from data and anticipate patients who may be at risk for complications or adverse outcomes. And we have billions of patient care records being exchanged for patient care.</p><p>Using this data, we are seeing more tools used to improve care. 
Researchers are developing applications from this data that improve our ability to diagnose disease, plan for interventions, and manage populations at risk for complications and adverse events. Increasingly, researchers are publishing not only their findings, but also the data used in their research and the code used to generate those insights. Transparency in research has helped us understand the underlying biases of algorithms, identify safety issues, and engender trust in the systems used to support patient care.</p><p>But in many ways, we are still at the “TRS-80” stage of making these algorithms actionable. Even if the code is available as open source through publications or code repositories, it can be challenging to “type it back in” and configure your development environment to run the code effectively. This affects both researchers and clinicians — researchers are unable to get their ideas into the real world to test, and clinicians struggle to use those technology advances in their care settings.</p><p>We have tried multiple solutions to get actionable algorithms (and not just data) into care settings. Sometimes it is a walled garden of apps that work only within a particular EHR environment. In other settings, it is a bespoke integration of a specific tool within a specific environment. And in still other settings, it is a host of different apps and platforms and analytics tools that all need to be maintained and integrated in complex, changing healthcare environments. And even then, what seem like simple black-box algorithms to support sepsis care can sometimes mislead clinicians about their effectiveness.</p><p>We have gotten good at moving data around, but we have not gotten good at moving algorithms and apps in a marketplace of ideas.</p><p>I think there is a better way. 
We’ve made remarkable progress in the open-source community, using its collective energy to drive transparency, trust, and re-use of code. We’ve made it easy to compile code into actionable knowledge that can be disseminated easily. And I think the same can — and should — be true in healthcare.</p><p>What we need is the same kind of sharing of actionable knowledge as we have for static data. No more black boxes, or walled gardens, or bespoke one-off integrations into health IT systems. We need an open-source community driven by a desire to connect cutting-edge research with forward-leaning clinicians, without requiring the work of system configuration, debugging, and integration before it can be used.</p><p>We’ve done a good job of reducing the barriers to moving data. Now we need to reduce the barriers to moving computable knowledge in apps and machine learning algorithms.</p><p>Welcome to <a href="http://www.healthuniverse.com">Health Universe</a>!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e181128b3a97" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Health care as an ultra large-scale system]]></title>
            <link>https://fridsma.medium.com/health-care-as-an-ultra-large-scale-system-e977f07d9d70?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/e977f07d9d70</guid>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Fri, 16 Sep 2022 16:29:11 GMT</pubDate>
            <atom:updated>2022-09-16T16:29:11.532Z</atom:updated>
<content:encoded><![CDATA[<p><strong><em>To solve the problems of interoperability and health IT, we should think city planning, rather than building architecture</em></strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JRYG-LwtMBdRp-hzlDttlQ.jpeg" /></figure><p>Last month, Senator Romney proposed a new agency within HHS focused specifically on public health data. <a href="https://www.romney.senate.gov/romney-proposes-new-data-agency-for-the-protection-of-public-health/">The Center for Public Health Data</a> would be a stand-alone agency focused on collecting de-identified data to inform the general public and support government and public health decision makers. In the wake of the CDC’s performance during the COVID pandemic (and the influx of money for data modernization), there are a number of similar initiatives aimed at modernizing the public health infrastructure (<a href="https://www.himss.org/resources/public-health-information-and-technology-infrastructure-modernization-funding-report">HIMSS</a>, <a href="https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2021/12/BPC_Public-Health-Forward_R01_WEB.pdf">Bipartisan Policy Center report</a>, <a href="https://www.ehidc.org/resources/report-creating-modern-public-health-system">Executives for Health Innovation</a>, and many more).</p><p>Some propose a one-size-fits-all, closed-garden approach with public health as a one-off, separate from the rest of the health IT infrastructure. Others propose that we should take what we know about enterprise architecture and “super size it” to fit into a national framework.</p><p>While there isn’t a “wrong” framing, some ways to frame the problem are better than others. Einstein is quoted as having said, “If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.” The same is true of framing. 
How you frame the problem is critical to what solutions you get.</p><p>I believe that we have been framing the problem incorrectly, and until we change our perspective, we will continue to get solutions that don’t solve the problem.</p><p>My assertion: we need to frame health care as an <a href="https://resources.sei.cmu.edu/asset_files/Book/2006_014_001_635801.pdf">ultra large-scale system</a>.</p><p>We’ve all interacted with a ULS before — the world wide web is an example of a highly distributed, ultra-large-scale system that handles billions of websites, searches, and pieces of information. Other examples of ultra-large-scale challenges include solutions for climate change, networked transportation (with autonomous vehicles), homeland security, and military preparedness. Most hard, interconnected, complex problems are ultra-large-scale problems. And framing the problem as an ultra-large-scale system gives you a set of underlying features of the problem, and a different way to evaluate solutions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RSfVHDeXWej_dOlo2fYPig.png" /><figcaption>Five characteristics of an ultra-large-scale system</figcaption></figure><p><strong>Decentralized</strong></p><p>Health care in the US is decentralized. Public health is decentralized, with states and local agencies having wide latitude in how to address public health issues. This means that any solution for health IT or for public health will require a decentralized solution. Creating centralized databases or development approaches won’t match the way that our health care system is organized.</p><p><strong>Conflicting, unknowable, and diverse requirements</strong></p><p>In software and architecture development, we often want to get the requirements of the system first, and then build a system that is capable of meeting those requirements. 
The problem is that it is nearly impossible to get the requirements for health IT systems before you begin, and even if you do, the requirements are likely to change. This means that we need to take an incremental, modular design approach that allows for flexibility as the systems evolve and grow. Otherwise we end up with a “rip and replace” solution that never achieves the kind of success that is needed.</p><p><strong>Continuous evolution with heterogeneous capabilities</strong></p><p>Not every hospital or public health agency is at the same level of sophistication when it comes to electronic data. Some academic medical centers have fully digital solutions, sophisticated data analytics, and interoperable systems that can communicate seamlessly with the outside world. Other hospitals and public health agencies are trying to get their fax machines to work more efficiently, still struggle with the simplest reports, and revert to sneakernet systems to work around problems. We need a health IT system that is capable of letting everyone participate where they are, and of supporting evolution and growth for those organizations that are lagging.</p><p><strong>Normal failures</strong></p><p>Someone wise once asked “how do fail-safe systems fail?” — the answer: “Fail-safe systems fail by failing to fail safe.” When systems are complex, it is normal for things to fail from time to time. What that means is that we need to build for resilience and recovery, not just pull up the drawbridge. Systems need to be able to prevent feedback loops that propagate failures, and need to have the resilience to recover from a data breach or a system failure.</p><p><strong>Sociotechnical systems</strong></p><p>This is perhaps the most important feature of an ultra-large-scale system. Patients and health care providers are not just interacting with health IT systems; they are part of those systems. 
We need to build systems that include people in the processes and the technology, and make sure that human-computer interaction is not just an interface but a part of the system.</p><p><strong>What does this framing mean for health IT?</strong></p><p>We need to change the way we approach the problem. Often we think that if we build a single system, we can solve the health care problem. This is the equivalent of trying to build one enormous building and having it work for every purpose that a person might need. I believe that healthcare IT is not a problem of “architecture” (building a particular building) but of “city planning”. It’s the difference between a blueprint for a building, and the elements that help a city thrive: basic underlying infrastructure (water, roads, security), incentives for certain kinds of behaviors (zoning incentives for growth, building codes for safety), and a focus not on building a particular building, but on creating a rich ecosystem that creates value for everyone.</p><p>What does this mean for healthcare?</p><ul><li><strong>Healthcare is decentralized</strong>: Fragmentation of data is a natural side effect of our fragmented and decentralized health care system. Rather than trying to centralize data, we need to find ways to link and aggregate data in dynamic ways that match how healthcare delivery is organized.</li><li><strong>Understanding the technical needs of healthcare is hard:</strong> Because it is hard to know the requirements a priori, we need to build incrementally in a flexible and modular way. All-in-one solutions that try to integrate all aspects of data, linking, normalization, and analysis may be attractive at first glance, but as requirements change, they become less resilient. 
Modular, standards-based approaches to integrating data are far more effective in the long term.</li><li><strong>Make health IT systems simple, interoperable, and extensible</strong>: We need to make sure that sophisticated organizations are not held back in the work that they do, while we enable those with fewer resources to participate in the health IT ecosystem. Systems that provide backward (and forward) compatibility allow individual hospitals and public health agencies to mature at their own pace.</li><li><strong>Data breaches will happen — build in privacy from the ground up:</strong> We should anticipate that identifiable data is at risk for breach, and should do everything we can to mitigate that. This means we need to build privacy into the foundation of the health IT system, and recognize that it’s not if, but when, data may become compromised. Reducing that risk by limiting the amount of PII that is shared is essential to preventing a failure of our systems to ensure privacy.</li><li><strong>Never forget that people are part of the system:</strong> In everything we do, we need to see these systems from the perspective of patients and health care providers. Patients are not objects that we “do” things to in health care; they should be active participants in their care, and the technology that we use to support them should acknowledge the way that patients and health care providers interact with these systems.</li></ul><p><strong>Everyone has a role in an ultra-large-scale system</strong></p><p>When we frame healthcare IT as an ultra-large-scale system, everyone plays a role: We need basic standards for collecting, securing, moving, and understanding health care data from standards organizations and public-private partnerships. We need the government to establish rules for safety and access, and to enforce those rules. 
We need incentives for organizations to build useful tools and systems that fit into this healthcare city. And we need to recognize that we are building this system to support the providers and patients who live in it every day.</p><p><strong>Framing health care systems right will lead to better solutions</strong></p><p>The Romney proposal is on the right track, but it needs to rely on a distributed health care system, protect the privacy and security of patient data, and create a valuable tool for public health to use. It shouldn’t try to create a one-size-fits-all solution, but should instead recognize the importance of data in informing decision making.</p><p>Framing matters. And when we get the framing wrong, we run the risk of suggesting solutions that don’t match the problem we are trying to solve. Framing the health care system as an ultra-large-scale system allows us to consider the unique characteristics of such a system, and create resilient, capable health IT systems that can adapt and grow as our data and health IT needs change.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e977f07d9d70" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Getting rid of lazy data]]></title>
            <link>https://fridsma.medium.com/getting-rid-of-lazy-data-c16ddf67a88b?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/c16ddf67a88b</guid>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Fri, 26 Aug 2022 16:26:33 GMT</pubDate>
            <atom:updated>2022-08-26T16:28:54.554Z</atom:updated>
<content:encoded><![CDATA[<h4>Sharing research results and data will power a learning health care system, but we need to ensure that privacy is protected</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*-z_R-PIGveONf2zp.jpg" /></figure><p>In my recent collection of blogs, I’ve been exploring data privacy, and how important protecting a patient’s privacy is to public trust. As more data is available in electronic form, it becomes even more critical that we protect the privacy of patient data, maintain public trust that we are using health data responsibly, and share what we learn.</p><h3>Get rid of lazy data</h3><p>When I worked at ONC and with Todd Park (when he was the White House CTO), he used to talk about “lazy data” — data that didn’t really do anything, but just sat there. Much of the urgency to get health data off of paper records and into electronic health record systems was driven by the need to get rid of lazy data — to make it possible to use health data at scale for understanding population health and improving the health care system. Many people talk about the idea of a learning health care system in which every data point collected as part of research and care delivery can be used to improve research and patient care. And my previous blogs have pointed out how important protecting patient privacy is to both public trust and the ability to turn lazy data into something useful.</p><p>So this week, I want to point to an <a href="https://www.whitehouse.gov/ostp/news-updates/2022/08/25/breakthroughs-for-alldelivering-equitable-access-to-americas-research/">important announcement out of the Office of Science and Technology Policy</a> that has a direct impact on research results, research data, equity, and the learning health care system. 
While there have been incremental steps to move toward more access and transparency to research results, important research results are often embargoed behind a paywall, or the data used for that research is difficult to find and share. That has changed with this <a href="https://www.whitehouse.gov/ostp/news-updates/2022/08/25/breakthroughs-for-alldelivering-equitable-access-to-americas-research/">OSTP announcement</a>.</p><h3>Research results belong to the public</h3><p>Now, any research that is federally funded (think, NIH) must make research results available without paywalls, payment barriers, or embargoes. Results must be published in open access journals. For anyone who has a family member with a complicated diagnosis or rare disease, it can be frustrating when important research results are locked behind paywalls or require significant costs to access. This policy makes those results available not only to research institutions, but to anyone.</p><p>It also helps level the playing field so that organizations that may lack the resources for costly journal subscriptions now have access to the same information as their better-funded counterparts. Differential access to data creates an unfair advantage for researchers at academic institutions who can afford those subscriptions. Under this policy, if the public pays for the research, the public should have access to the results.</p><h3>Data should be shared equitably</h3><p>Even more importantly, the OSTP announcement strengthens the requirements for sharing the data that was used to generate the results. In the past, there has been a lag between when the results of a study are published and when the data used for those results is made available to other researchers. Now, when results are published, the data must be made available as well.</p><p>This is important in a number of ways. First, having access to the data allows other researchers to replicate the findings. 
This creates more transparency and trust in the science when the results of the first study can be replicated in the second. Studies of research replicability in medicine suggest that <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3655010/">50</a>–<a href="https://www.nature.com/articles/483531a">75</a>% of cancer study results could not be replicated when a different researcher tried the same experiment. While there are many reasons this is the case (more than I can convey in this overview), having data available at the time that research is published will improve the ability to more rapidly “check the findings” of a study and assure the public that the scientific results are valid.</p><p>Second, having equitable access to the data for all researchers will make it easier for under-resourced institutions and early investigators to jump-start their research. Often students and early investigators are delayed in starting their research because they don’t have access to good research data. Many academic medical centers have established research databases that are available to investigators to test out hypotheses. But if a student or investigator is at an institution that lacks these resources, they are at a disadvantage in competing for federal grants and funding. Having more data available for research purposes will level the playing field for young investigators or institutions that lack the resources for these large research repositories.</p><h3>The learning health care system</h3><p>So what does this mean for our goal of developing a learning health care system? First, it makes sure that research results and research data aren’t lazy — people can use data for secondary purposes, accelerate follow-on research studies, and confirm that the results of a study are indeed valid. In this way, every federal research dollar contributes to new insights and new learning in how to take care of patients better. 
It is an important step, and one in a long line of other changes that need to happen to make sure every research dollar and every patient encounter contributes new knowledge into how to take care of patients better.</p><h3><strong>Privacy, trust, and data</strong></h3><p>I am fully supportive of government efforts to get rid of lazy data and make sure that data that is collected as part of government research is made into a public resource. I once asked Francis Collins to estimate what percentage of research dollars are used not for analysis, but to collect data — often collecting the same or similar data again, and again, and again across multiple federally funded grants. While he didn’t have a number, he acknowledged that as data collection becomes more expensive (and research dollars remain level), we continue to collect data, use it once, and then collect it again. Now, we are seeing across the NIH, the FDA, and other agencies a desire for real world evidence — evidence collected as part of patient care — to be repurposed to improve research, health, and health care. These efforts have the potential to accelerate drug discovery and lower the cost of research across the life sciences.</p><p>But we cannot forget in all of these data sharing plans that much of the clinical research data that we use is fundamentally data about people: individuals whose data we have an obligation to protect and keep private. We must do everything we can to protect the privacy of patient data while we repurpose it for public good. We need to build privacy into the learning health care system from the ground up.</p><p>The OSTP announcement charges government agencies with developing new policies to beef up data sharing plans and create new incentives to make sure the data isn’t lazy. But we must also beef up our technology to preserve patient privacy while we link and combine and analyze the data in new ways to generate new insights. 
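As a rough sketch of how such privacy-preserving technology can work (a simplified illustration only; the field choices, normalization, and key handling below are my own assumptions, not any particular product), records can be reduced to keyed one-way tokens that match across organizations without exposing the underlying identifiers:

```python
import hashlib
import hmac

def linkage_token(record: dict, secret_key: bytes) -> str:
    """Derive a one-way linkage token from quasi-identifiers.

    Two records that agree on (normalized) name, date of birth, and
    ZIP code yield the same token, so datasets can be joined on the
    token without either side exchanging the identifiers themselves.
    """
    # Normalize so trivial formatting differences don't block a match.
    material = "|".join([
        record["name"].strip().lower(),
        record["dob"],        # assumed YYYY-MM-DD
        record["zip"][:5],    # 5-digit ZIP only
    ])
    # A keyed hash (HMAC) rather than a plain hash: without the key,
    # tokens can't be rebuilt by brute-forcing common names and dates.
    return hmac.new(secret_key, material.encode("utf-8"), hashlib.sha256).hexdigest()

# Two organizations tokenize their records with the same shared key...
key = b"secret-held-by-a-trusted-party"
token_a = linkage_token({"name": "Jane Doe ", "dob": "1970-01-01", "zip": "48104-1234"}, key)
token_b = linkage_token({"name": "jane doe", "dob": "1970-01-01", "zip": "48104"}, key)

# ...and the records link on the token, with no names or birth dates exchanged.
assert token_a == token_b
```

Real linkage systems add safeguards this sketch omits, such as a trusted party to hold the key and tolerance for typos, but the core idea is the same: link on tokens, never on raw identifiers. 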
Privacy enhancing technologies (PET) are a focus of an ongoing <a href="https://www.whitehouse.gov/ostp/news-updates/2021/12/08/us-and-uk-to-partner-on-a-prize-challenges-to-advance-privacy-enhancing-technologies/">White House challenge in the US and the UK </a>to accelerate research while being responsible stewards of private patient data. Privacy-preserving linkage technology that allows patient records from different organizations to be linked without risking patient re-identification will be a key ingredient in beefed-up data sharing plans, and a foundational aspect of the learning health care system to which we all aspire.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c16ddf67a88b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Protecting privacy is a first step toward public trust]]></title>
            <link>https://fridsma.medium.com/protecting-privacy-is-a-first-step-toward-public-trust-13d457f2d401?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/13d457f2d401</guid>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Fri, 19 Aug 2022 17:36:00 GMT</pubDate>
            <atom:updated>2022-08-19T17:36:00.143Z</atom:updated>
            <content:encoded><![CDATA[<p><em>Protecting patients’ data — both keeping it private and keeping it secure — is a fundamental part of creating public trust in health IT</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*TItszLpK6Djm_MCl.jpg" /></figure><p>A lot of my Friday blogs in the last couple of weeks have focused on privacy — <a href="https://fridsma.medium.com/metapixels-and-megatrends-f6d93fbedc17">companies not keeping health data protected</a>, <a href="https://fridsma.medium.com/privacy-is-global-d245546b345c">international laws that can sometimes delay regulatory and oversight issues</a>, and <a href="https://fridsma.medium.com/july-25-29-a-week-in-review-4f00fa9385ce">different perspectives on privacy across patients, consumers and the government</a>.</p><p>Protecting a patient’s privacy is important, but it is part of a much bigger issue — and that is one of trust. In Nordic countries, there is a much higher level of trust in the government than in the US. That trust makes it possible for Denmark to have a national database of patient health information — something that seems unthinkable in the US. The US relies more on the private sector for health data collection and analysis, and the public tends to put more trust in the private sector to manage health data — despite recent calls for more regulation and oversight for sensitive data.</p><p>Protecting patients’ information — both their privacy and the security of their data — is a fundamental part of establishing trust.</p><p>This idea of trust makes me want to highlight two things that have happened this week.</p><p>First, the <a href="https://www.gao.gov/blog/new-gao-report-suggests-many-ways-improve-covid-response">GAO report on the CDC</a> and <a href="https://www.politico.com/news/2022/08/17/cdc-agency-overhaul-covid-19-response-00052384">its response</a> to the Covid-19 pandemic highlighted some of the failures that the CDC had in its response to the pandemic. 
Data was not collected correctly (or not at all), messaging was confusing to the public, and the culture within the CDC contributed to a blunted response to the pandemic.</p><p>The CDC has a trust problem. And it’s a problem that will need a comprehensive approach to solve. While I can’t speak to the specifics of the CDC culture (except in my external interactions with the CDC when I was in the government), we can (and should) talk about trust, the CDC, and the health data that they collect.</p><p>Part of the solution being proposed by the CDC is that they need broader authority to collect more data, to compel states and individuals to send data to them. The CDC argues that this will help address the problems identified in the CDC response to Covid-19.</p><p>But if trust is the fundamental issue, then broader authorities will not solve the trust problem, and in fact, may make things worse. The CDC has often not seen itself as a part of the broader healthcare ecosystem — the CDC uses different health IT vocabularies, different technologies, and different data formats to collect data — many of which are not well integrated into the rest of the health care system. These systems hampered the CDC response to the pandemic. And broader authorities to create yet another one-off system will not solve the CDC data or trust issues.</p><p>And this brings me to my second observation this week: the ONC’s “<a href="https://www.healthit.gov/buzz-blog/interoperability/e-pluribus-unum">E Pluribus Unum</a>” blog post. The Secretary of HHS has issued a new management policy that puts ONC in the center of health IT — for interactions with EHRs, HIEs, and hospitals and providers AND for coordination across all agencies within HHS.</p><p><strong>This is a big deal.</strong></p><p>This means that ONC is charged with coordinating HIT efforts with CMS, NIH, FDA, AND CDC. 
It means that the ONC will be charged with restoring public trust in how the public’s health data is used both within HHS (in agencies like CDC) and across the health IT sector.</p><p>Trust is easy to lose and hard to earn. On Wednesday, the ONC presented its <a href="https://www.healthit.gov/sites/default/files/facas/2022-08-17_PHDS_TF_Micky_Tripathi_Presentation.pdf">plan for public health at its HITAC committee meeting</a> — and is beginning to articulate a plan for how the CDC can restore trust.</p><p>Broader authorities are not the answer when the public perception is that the CDC did not use its existing authorities effectively. But I’m pleased with the ONC efforts to bring the CDC (and other HHS agencies) into the broader health ecosystem, and begin the process of restoring trust in our health data systems. New approaches to protecting patient privacy that didn’t exist when I was in the federal government should drive novel ways to protect that privacy while still enabling the population level analytics that inform analysis and policy.</p><p>It’s a daunting charge for the ONC and one that will require that the public trusts its approach. Since its inception, ONC has used the patient as a north star for its approach to how health data should be used. If it continues that approach — putting patient privacy first, and giving patients both input and transparency — these new authorities should not only create a more resilient and responsive way to manage health care data, but also restore public trust in how health data is protected and used for the public good.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=13d457f2d401" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Privacy is Global]]></title>
            <link>https://fridsma.medium.com/privacy-is-global-d245546b345c?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/d245546b345c</guid>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Sat, 13 Aug 2022 01:02:35 GMT</pubDate>
            <atom:updated>2022-08-13T01:02:35.954Z</atom:updated>
            <content:encoded><![CDATA[<p>Multinational clinical trials with different privacy rules complicate data sharing and regulatory oversight</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*OGqAXZ5uzF3WOfzR" /></figure><p>A brief, but very interesting <a href="https://www.fda.gov/international-programs/global-perspective/how-european-data-law-impacting-fda">report from the FDA</a> dropped this week that highlights how different privacy rules in the US and EU affect clinical trials and the work of the FDA. I’ve always been fascinated by the patchwork of privacy rules for data in the US, and the more comprehensive rules that exist in the EU. But I had never really thought about the implications of these different rules for the important functions of organizations like the FDA.</p><p>The <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679">GDPR</a> is a comprehensive regulation that applies to how organizations handle personal information (including health data) of EU data subjects — regardless of where those organizations are located. The GDPR is not limited to health data, but includes all data classified as personal. The GDPR empowers EU residents with the right to know what data is held about them, requires consent from individuals for many cases of data collection, and gives individuals “the right to be forgotten” and to have their digital data deleted.</p><p>In the U.S., HIPAA rules allow health data to be used for research purposes without patient consent, as long as the data is either limited to specific non-identifiable data elements, or the data sets have been certified through expert determination to have a negligible risk of re-identification. For clinical trials and data that is to be used for regulatory decision making, the FDA has no such exclusion. 
It requires investigators to submit patient-level data (which is considered identifiable) to ensure the integrity of the data and the safety of the participants.</p><p>However, the GDPR limits the sharing of personal data with third parties — including the FDA. According to the FDA report, this restriction can complicate or delay the ability of the FDA to assess the results of multi-national studies. In an effort to increase the diversity of clinical trial participants, the FDA requires many clinical trials to collect race, ethnicity, and other demographic data. However, the GDPR regulations prohibit the collection and processing of race or ethnicity data except in specific cases for scientific or research purposes — and this again can delay or complicate the FDA review of clinical studies if that information is not routinely collected. Even critical inspections of factories (like those that produce monkeypox vaccines) can be delayed, with significant impacts on the public.</p><p>And because GDPR applies to all entities that collect or hold data from EU data subjects, even US companies are subject to the rules. In a previous life, I worked to convert our entire US-based membership organization to be compliant with GDPR because it was easier to update our information systems for everyone than it was to try to single out EU residents for separate treatment.</p><p>For the FDA, the requirement to accommodate EU data subjects affects their adverse event reporting systems — while there is implicit consent when a patient enters information on their own, adverse events on EU data subjects reported by a third party are subject to the GDPR rules.</p><p>Finally, these rules are not static — and within the EU, even privacy rules are evolving. 
For example, in addition to the GDPR, the <a href="https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space_en">European Health Data Space </a>specifies how data used for both primary and secondary purposes is protected:</p><blockquote>1) empowering individuals through increased digital access to and control of their electronic personal health data, at national level and EU-wide, and support to their free movement, as well as fostering a genuine single market for electronic health record systems, relevant medical devices and high risk AI systems (<a href="https://health.ec.europa.eu/ehealth-digital-health-and-care/electronic-cross-border-health-services_en">primary use of data</a> )</blockquote><blockquote>2) providing a consistent, trustworthy and efficient set-up for the use of health data for research, innovation, policy-making and regulatory activities (<a href="https://tehdas.eu/">secondary use of data</a>)</blockquote><p>A “single market for electronic health record systems, relevant medical devices and high risk AI systems” could have remarkable effects on EU-US collaborations or data sharing.</p><p>As more clinical trials become multi-national, and we increase the diversity of populations (and geographies) that we study, we can expect that privacy rules will continue to impact the work of US organizations like the FDA and NIH. We should continue to monitor these changes and work to create frameworks that reduce the barriers to multi-national clinical research, preserve patient privacy, and reduce data fragmentation.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d245546b345c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[MetaPixels and Megatrends]]></title>
            <link>https://fridsma.medium.com/metapixels-and-megatrends-f6d93fbedc17?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/f6d93fbedc17</guid>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Fri, 05 Aug 2022 17:50:45 GMT</pubDate>
            <atom:updated>2022-08-05T19:12:31.714Z</atom:updated>
            <content:encoded><![CDATA[<p>Lawsuits and public concern lead to legislative proposals</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/612/1*SjZcgK4r0JfzFVj7ZEZj_g.jpeg" /></figure><h3>MetaPixels and Patient Portals</h3><p>This has been an interesting week in health data privacy news. The biggest bombshell has been a <a href="https://www.documentcloud.org/documents/22123376-meta-lawsuit?responsive=1&amp;title=1&amp;utm_source=STAT+Newsletters&amp;utm_campaign=2bad9a1a8a-health_tech_COPY_01&amp;utm_medium=email&amp;utm_term=0_8cab1d7961-2bad9a1a8a-154639068">lawsuit</a> filed against Meta, UCSF, and Dignity Health, alleging that Meta placed a “MetaPixel” on various health portals (including UCSF and Dignity Health) that allowed Meta to harvest patient information from patient portals, without a patient’s knowledge or consent. A MetaPixel is a snippet of code that can be embedded on third party sites (like patient portals) to track and log data from the portal’s users. The lawsuit states that in a study done by <a href="https://themarkup.org">The Markup</a>, 33 of the top 100 hospitals in the US had a MetaPixel embedded in their website and at least 7 had the MetaPixel behind the password-protected portions.</p><h3>How does a MetaPixel work?</h3><p>If a patient went to a patient portal to schedule an appointment, not only could personally identifiable information (“Jane Doe”, “555–857–5309”) be sent back to Meta, but in some cases, so could the specific information that the user wanted treatment for (“discuss pregnancy termination”) and health conditions (“HIV positive”). This information could then be used by Meta for targeted ads related to the sensitive information from the portal. Patients who had Facebook accounts would find that after scheduling a visit or accessing their patient portal, ads targeted at their medical conditions would begin to appear in their Facebook feeds.</p><p>The scope of this is enormous. 
With 33 of 100 hospitals having this MetaPixel harvesting patient information (and linking it back to the Meta ad targeting engine), they estimate that <strong>26 million patient admissions and outpatient visits in 2020 alone</strong> have been compromised. The expectation is that the scope is even bigger — The Markup only sampled 100 of the over 6000 hospitals in the country.</p><p>Other FTC complaints against Meta have become even more significant after the Dobbs decision: In 2021, the FTC had complaints that Meta was receiving pregnancy data from popular women’s health apps. Things like “abortion pill” were not filtered and were sent directly to Meta. These breaches of patient trust and uses of their identified data have become increasingly problematic, and companies like Meta are not subject to HIPAA rules.</p><h3>But why are Health Systems included in the Lawsuit?</h3><p>It seemed strange that in addition to Meta, the lawsuit included UCSF and Dignity. Why are health systems also named?</p><p>The suit alleges that UCSF and Dignity knew that Meta had placed a MetaPixel on their patient portals, and despite knowing that this would allow sensitive information to be given to Meta, did not remove the MetaPixel. They knew that patient appointment and scheduling information would be sent back to Meta, and did not inform patients that this would occur. Some EHR vendors (like MyChart) specifically warned hospitals to be careful with custom analytics.</p><p>And although only these two hospitals are cited in the suit, given that this is only a 100 hospital sample, it is unclear how many other health care patient portals were benefiting from the Meta advertising. And all of these hospitals are at risk both of failing to disclose how their patients’ health data is being used, and likely of more serious HIPAA violations. 
The current cost of HIPAA violations is between $100 and $50,000 per individual violation (with a maximum cap of $1.5M/year) — so this could be a costly problem if the allegations are true.</p><h3>The government is paying attention</h3><p>While companies like Meta are not covered by HIPAA rules, patients are often unaware that health data held by consumer (i.e., non-healthcare) organizations is not protected by HIPAA. The only thing that companies (like Meta) need to do is disclose that in their terms of use agreement (you know, that 100-page legal webpage that we just click through on our way to installing an app).</p><p>The recent Supreme Court decision in Dobbs based its analysis (among other things) on the absence of a right to privacy explicitly written into the constitution. In his concurrence, Clarence Thomas suggested that other rights based on the right of privacy — LGBTQ+ rights, gay marriage — are also potential targets for revision. (Remarkably, Thomas did not cite the Loving decision on interracial marriage, which is also based on the right of privacy, as a potential target for revision.)</p><p>This has accelerated the interest in privacy regulations within Congress to help protect individual and patient privacy. <a href="https://www.congress.gov/bill/117th-congress/house-bill/8152">The American Data Privacy and Protection Act (ADPPA)</a> passed the House commerce committee in July 2022, but has not yet passed Congress, although a <a href="https://www.commerce.senate.gov/services/files/6CB3B500-3DB4-4FCC-BB15-9E6A52738B6C">discussion draft</a> is under review.</p><p>In the past week, we’ve seen two additional initiatives. First, the Romney proposal for a <a href="https://www.romney.senate.gov/romney-proposes-new-data-agency-for-the-protection-of-public-health/">Center for Public Health Data</a> emphasized that data collected for public health purposes must be de-identified, and cannot have personally identifiable data included. 
The proposal recognized the value of health data, but also the importance of protecting the personal, identifiable information of a patient.</p><p>Amy Klobuchar has taken it one step further, in direct response to the MetaPixel lawsuit. She has proposed a law called the <a href="https://www.congress.gov/bill/117th-congress/senate-bill/4738?q=%7B%22search%22%3A%5B%22s+4738%22%2C%22s%22%2C%224738%22%5D%7D&amp;s=1&amp;r=15"><strong>Stop Commercial Use of Health Data Act</strong></a><strong> </strong>which would prohibit the use of personally-identifiable health data for commercial advertising. In many ways, it has elements of the GDPR rules in the EU, in that patients would have the <strong>right to access</strong> the information that an organization has on them in both human- and machine-readable formats and would have <strong>rights of deletion</strong> to have their data removed.</p><p>It specifically states that this law would not supplant or abrogate any part of HIPAA, and that public health data is excluded from the requirements. It prevents taking de-identified data and re-identifying it, and would have organizations only keep the data as long as is necessary — and no longer.</p><p>The law is focused on personally identifiable data — the de-identification safe harbors that are described in HIPAA remain intact.</p><h3>The MegaTrend — Protect Identifiable Health Information</h3><p>On the black market, identifiable health data remains one of the most valuable commodities. While a single social security number might cost a little more than $0.50, a complete, identifiable health record sells for $250.00. 
It’s the reason why data breaches and ransomware attacks are increasing — the average data breach will <a href="https://healthitsecurity.com/news/average-healthcare-data-breach-costs-surpass-10m-ibm-finds?eid=CXTEL000000661285&amp;elqCampaignId=26915&amp;utm_source=nl&amp;utm_medium=email&amp;utm_campaign=newsletter&amp;elqTrackId=ba3719b25fbe4620a7f4a149c638b6ef&amp;elq=79f7925d4e4747f9b0eb8144b63d805d&amp;elqaid=27760&amp;elqat=1&amp;elqCampaignId=26915">cost a healthcare organization over $10M</a>.</p><p>We all know that health data is valuable — when properly de-identified, it can be used for new drug development, safety monitoring, population health analytics, and the development of new and novel decision support tools. We can expect to see more use of de-identified health data to provide social benefit to the public. But we can also anticipate that patients will expect — and demand — that their privacy is protected, and that organizations do not use their identifiable data without their permission.</p><p>Stay tuned for more legislation, regulation, and technology to help ensure health data is used responsibly.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f6d93fbedc17" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[July 25–29 — A week in review]]></title>
            <link>https://fridsma.medium.com/july-25-29-a-week-in-review-4f00fa9385ce?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/4f00fa9385ce</guid>
            <category><![CDATA[patients]]></category>
            <category><![CDATA[privacy]]></category>
            <category><![CDATA[health-data]]></category>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Sat, 30 Jul 2022 00:37:55 GMT</pubDate>
            <atom:updated>2022-07-30T00:40:24.447Z</atom:updated>
            <content:encoded><![CDATA[<h3>July 25–29 — A week in review</h3><h4>Health data privacy from three perspectives.</h4><p>As more health data is collected, stored, and shared electronically, people are starting to pay more attention to keeping that data private. This week, a number of announcements related to health data privacy have been circulating, and they give us patient, company, and government perspectives.</p><h3>AMA patient survey on privacy</h3><p>The AMA recently partnered with <a href="https://www.savvy.coop/?hsLang=en">Savvy Cooperative </a>(an interesting data company in its own right) to release a <a href="https://www.ama-assn.org/system/files/ama-patient-data-privacy-survey-results.pdf">survey of patient perspectives around data privacy</a>. While I wasn’t able to find a copy of the survey questions (to understand exactly how the questions were asked), 92% of patients believe that privacy is a right — with many unclear about the privacy rules and who has access to their data.</p><p>The survey seemed to focus most specifically on data that is shared outside of the confines of the HIPAA framework (which would be consistent with the AMA <a href="https://www.ama-assn.org/system/files/2020-05/privacy-principles.pdf">privacy principles</a>, which focus on non-HIPAA covered entities), and showed that patients felt most comfortable with physicians having access to their data, and least comfortable with social media, big tech, and prospective employers having access to their data.</p><p>A fundamental belief of the AMA is that the “primary purpose of increasing data privacy is to build public trust, not to inhibit data exchange” and this reflects a focus on data that falls outside of those entities that are covered under HIPAA regulations. 
It was not clear where medical research, de-identified data (which protects privacy while putting health care data to good use), or other data issues are addressed.</p><p>The remedies suggested by the AMA are aligned with their privacy principles: transparency, control, and rules against discrimination that would disadvantage individuals — all good goals to restore public trust without inhibiting data exchange for public good.</p><h3>Romney proposal for a new data agency for the protection of public health</h3><p>Romney’s Senate office announced on Thursday a proposal to create an independent, HHS-wide agency called the <a href="https://www.romney.senate.gov/romney-proposes-new-data-agency-for-the-protection-of-public-health/">Center for Public Health Data</a> (CPHD). According to the Romney website,</p><blockquote>The Center for Public Health Data (CPHD) would be a modern data agency, focused exclusively on aggregating comprehensive, de-identified public health data from diverse sources, including local, state, and federal public health units; state health data utilities and exchanges; hospital systems; public and commercial laboratories; and academic and research institutions.</blockquote><blockquote>CPHD will be structured as an independent data subagency inside the Department of Health and Human Services (HHS), and led by a Chief Data Engineer. It will serve as an open and transparent repository of information to provide the public, academics, and policymakers objective, unbiased data in real time. A clear picture of the state of public health and disease spread will help policymakers develop and implement informed and proactive policy solutions.</blockquote><p>What is interesting about this proposal is that it emphasizes a comprehensive, de-identified approach to aggregating federated data sources. It requires that personally identifiable data be de-identified at the source before it is sent to CPHD, and limited to infectious disease information. 
Such a solution would need to leverage technology that can link data across different datasets while still maintaining a patient’s privacy.</p><p>The AMA survey didn’t include public health use cases, or a direct discussion of de-identified data to be used for the public good, but this is another approach to keeping patients’ data private while still allowing it to be used to improve health and healthcare.</p><h3>Finally, a cautionary tale</h3><p>STAT+ reported on an investigation of IQVIA in which internal documents show privacy lapses in the company’s relationship with Experian. Experian is a credit reporting agency with detailed consumer buying data, and IQVIA has over 1.2 billion patient records from around the world. This data is used to accelerate pharmaceutical research, but can also be used for other purposes such as developing specific marketing campaigns for those drugs, or targeting specific communities.</p><p>Companies like IQVIA follow HIPAA regulations to ensure that personally identifiable data is removed, but sometimes an individual can be re-identified when data from one de-identified data set is combined with another de-identified dataset. A <a href="https://blog.petrieflom.law.harvard.edu/symposia/law-ethics-science-of-re-identification-demonstrations/">example</a> of this (not associated with IQVIA) was when Latanya Sweeney (at the time an MIT graduate student) was able to identify the records of the former Massachusetts governor by combining two de-identified data sets.</p><p>What the investigation identified is the importance of having — and following — established practices of expert privacy review for processes within an organization. Between 2009 and 2016, IQVIA failed to do an independent privacy assessment. To their credit, when these privacy issues were identified, they reinstated the privacy review, using the firm Privacy Analytics, which they acquired in 2016. 
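The mechanics of that kind of combination attack are simple enough to sketch (a toy illustration; every record and name below is invented, not from any real dataset):

```python
# Toy illustration: two datasets that are each "de-identified" on their
# own can re-identify a person when joined on shared quasi-identifiers
# (ZIP code, date of birth, and sex). All records here are invented.
deidentified_medical = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1962-03-12", "sex": "M", "diagnosis": "asthma"},
]
public_roster = [  # e.g. a voter roll or other public list that carries names
    {"name": "A. Example", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "B. Sample", "zip": "02139", "dob": "1981-11-02", "sex": "M"},
]

def reidentify(medical, roster):
    """Join the datasets on (zip, dob, sex) and attach names to diagnoses."""
    index = {(r["zip"], r["dob"], r["sex"]): r["name"] for r in roster}
    matches = []
    for m in medical:
        name = index.get((m["zip"], m["dob"], m["sex"]))
        if name is not None:  # a unique quasi-identifier combination leaks identity
            matches.append((name, m["diagnosis"]))
    return matches

# One record shares all three quasi-identifiers, so its "anonymous"
# diagnosis acquires a name; the other record has no match.
print(reidentify(deidentified_medical, public_roster))  # [('A. Example', 'hypertension')]
```

This is why rigorous de-identification has to consider combinations of fields: a ZIP code, a birth date, and a sex are each anonymous on their own but jointly close to unique. 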
The two organizations continue to operate independently, but the authors raised the issue of potential favorable treatment of IQVIA.</p><h3>Three perspectives, one bottom line</h3><p>As health data becomes more ubiquitous, the public must trust that their data is being kept safe and private. But there are good reasons that data — when properly de-identified — can be used to support pandemic response and the public good. Finally, it is the responsibility of all organizations — hospitals, providers, data aggregators, and the government — to ensure transparency, conduct independent privacy audits, and assure a skeptical public that their data is being used responsibly.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4f00fa9385ce" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[UMSI Commencement Address]]></title>
            <link>https://fridsma.medium.com/umsi-commencement-address-e31a10b2e33c?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/e31a10b2e33c</guid>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Tue, 03 May 2022 19:15:48 GMT</pubDate>
            <atom:updated>2022-05-03T19:15:48.845Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LesNB1D7wfPVHebIc53hPQ.png" /></figure><p>I had the rare privilege of going back to my alma mater, the University of Michigan, and addressing the University of Michigan School of Information. When I graduated from Michigan, UMSI didn’t exist in its current, modern form — but it has been a great privilege to address the students, and understand more deeply the incredible impact that individuals skilled in information sciences will have on our future world. I want to thank the faculty of UMSI, the dean Tom Finholt, the staff, the students, and everyone who made this incredible opportunity possible.</p><p>The actual video presentation can be found <a href="https://www.youtube.com/watch?v=qvHSgLrWlI4">here</a>. I’ve included my remarks below for those interested!</p><p>I would like to congratulate the class of 2022 on their remarkable achievements in obtaining their University of Michigan School of Information degrees. And I want to welcome and thank all the friends, family, and supporters of these graduates who have been so instrumental in their success.</p><p>I suspect for many of you this has been a long and challenging journey to get to this point of graduation — it hasn’t been an easy couple of years.</p><p>It’s taken me nearly 3 years to get to this day myself.</p><p>You see, I was first asked to give the commencement speech in early 2020. I was totally excited to give a commencement speech back at UM, my alma mater.</p><p>So I was going to talk about the “information economy” — how the raw materials of personal information are processed in the social media platforms, and how algorithms turned those raw materials into enormous wealth. I was going to note the insightful similarities between the information economy and the industrial revolution. I was going to end with a grand call to action for the information sciences community.
It was going to be great.</p><p>Of course, I never got to give that address.</p><p>By March, we all realized that we were entering a once-in-a-century pandemic, and all in-person meetings — including the 2020 commencement — were cancelled. I didn’t get to give a commencement address but went about hoarding toilet paper and learning to make a sourdough starter like everyone else. I thought the opportunity to give this commencement speech had passed.</p><p>So, in 2021 I was optimistic about the possibility of an in-person commencement. The vaccines were just being administered, I had mastered sourdough bread, and had gained the required COVID-19 pounds from all that bread. Even though I was depressed that my sourdough starter eventually died, I was optimistic that with the vaccines, we’d get the virus under control, and we’d be able to get back to in-person meetings. Including the 2021 commencement.</p><p>For the 2021 commencement, I was ready to talk about how these information economy platforms and algorithms were distorting the truth and spreading disinformation. I had all sorts of statistics, and anecdotes, and some really funny stories to tell you about bleach and horse worm pills.</p><p>But unfortunately, disinformation about the virus, slowing mitigation efforts to control the virus, and mistrust of the vaccine led to a slowing of vaccination rates, a rise in infections and in viral variants, and… the cancellation of the 2021 in-person commencement. While I’m sure the first talk would have been good, I’m sure you would have enjoyed the stories about horse pills and bleach.</p><p>So like everyone else, I went back to hoarding toilet paper, trying to find KN95 masks, and working through hundreds of Zoom calls with my dress shirt on the top, and sweats on the bottom.</p><p>So, this talk represents the third commencement speech that I’ve written.
And it’s hard after three years, a huge pandemic, and writing two great commencement speeches to think about what I’m going to say today. And frankly, I’m out of ideas. This will clearly be the least interesting of the talks.</p><p>So rather than something grand and visionary, I’m going to just try to give you a few words of advice that have helped me after I finished school.</p><p>And I’m going to talk fast, because I don’t want a fire drill, or a flood, or a plague of frogs to keep me from giving this talk.</p><p>So my first word of advice is: <strong>Seek out the intersection of fields that don’t naturally overlap.</strong></p><p>What I want to challenge you with is that the most innovative, creative, and impactful areas in which you can work are found when you take two seemingly disparate fields and figure out how to make them overlap.</p><p>I am a physician, but I also have a computer science degree. And I got it at a time in which most of medicine was practiced with pen and paper and very little data was collected and used electronically. But I knew that somehow the intersection of these two fields — two fields that wouldn’t normally overlap — was going to be a critical part of health care in the future. And now we couldn’t imagine it any other way.</p><p>Finding those intersections forces you to think about how to take a solution in one area and apply it to a different problem. Climate change and healthcare. Energy independence and information sciences. The information economy and economic justice.</p><p>Find things you are passionate about and make them overlap. That’s where the best problems — and solutions — are.</p><p>My second word of advice: <strong>We will all make mistakes.
Just try to make new ones.</strong></p><p>When I was at the Office of the National Coordinator for Health IT, we were charged with taking 20% of the largest GDP in the world and changing it from paper records to electronic records in 5 years — something that took the financial industry 25 years to do. I was the chief science officer, so I had the hard task of all the technical work: figuring out the standards, the certification criteria, and trying to convert those technical specifications into regulations.</p><p>We studied lots of other countries — Germany and Canada and Australia and Denmark — to try to understand how they did things and learn from their experiences. Lots of different models and approaches, with varying success.</p><p>But we decided to do things a little bit differently. We didn’t award contracts to build an electronic health record. We didn’t centralize the data into one giant record. We crowd-sourced the standards development process. We focused on the connections between systems, rather than the systems themselves. We built a stack of standards, modeled after the internet and the world wide web, that we hoped would be resilient to future changes.</p><p>See, when the WWW was first developed, no one imagined that we would eventually do most of our banking and holiday shopping on the WWW. And a system developed years ago to send email and files and academic documents now has Netflix streaming accounting for over 50% of all internet traffic. And in the past year, those same standards supported all those Zoom calls in all my sweats.</p><p>And I know that some of what we did at ONC will work, and some of it won’t. And I know that someone smarter than me will find new uses that we never imagined for the standards that we developed. We specifically tried to chart a new path and a new direction that has so far — fingers crossed — created a new robust industry in health IT.
We still have a long way to go with interoperability and health IT, but I hope our future mistakes are all new.</p><p>So learn from what others have done, take a chance on something different, make mistakes, but always try to make new ones.</p><p>Third — <strong>Take the path of least regret.</strong></p><p>This has served me well on a couple of occasions. When I was an academic professor in Biomedical Informatics, I knew what my career path would be. I would be an assistant professor, then I would become an associate professor, then eventually a full professor and potentially the head of a department. It all seemed pretty predictable.</p><p>But when David Blumenthal called me in 2009 and asked me to join the Office of the National Coordinator for Health IT, it was a big decision. It would mean leaving a tenure-track position in a university and moving to the federal government, in which there was a tremendous amount of uncertainty.</p><p>I had never managed a $600,000 budget, let alone a $60 million one, and the challenges that we had were daunting in terms of trying to get the United States to move to electronic health records. But I also knew that if I didn’t take that job, if I didn’t move to Washington DC, and if I didn’t work on this problem that was big and hairy and hard, I would always wonder: what if?</p><p>So I left my tenure-track job, I jumped into the federal government, and it has opened up opportunities that I never thought would have existed.</p><p>I used to use the same phrase when I was at ONC — and my colleagues would laugh at me, because I would also judge our initiatives on whether they represented a “path of least regret”. I knew that all the things that we thought were going to be relevant in 2010 were likely going to change in the next 10 or 15 years. And so we couldn’t lock ourselves into a particular approach — we had to be resilient to change.
So just like the internet, we needed to create an infrastructure that could weather changes in technology and medicine as well.</p><p>So I want to close with a final thought. One of <strong>resilience.</strong> These past 2 years have been hard. I made a lot of sourdough bread, I gained and lost a lot of weight, and I still have toilet paper that I haven’t used. But we’ve all gotten through the Zoom calls, and the remote work, and the masks, and the vaccines, and all the changes that have forced us all to be resilient. You as a group have demonstrated resilience to get to this point in your careers.</p><p>So my final thought comes from the late Madeleine Albright in an article she wrote a few years ago. She had a brilliant career — she was smart, tenacious, successful, and had a broad impact on the people who came after her. But when she looked back at her life, she said:</p><p>“Genius is often defined as the ability to be right the first time.”</p><p>But she went on to say, “No matter how smart we are, we can either allow sorrows and grievances to overwhelm us, or we can respond positively to setbacks — either by our own misjudgments or by forces beyond our control.”</p><p>And she was right. She was describing resilience.</p><p>So my final recommendation comes from Madeleine Albright — if you have to choose, <strong>always choose resilience of spirit over brilliance of the mind.</strong> The past two years have made that so true — we have all had to show resilience in the face of political unrest, wars, the pandemic, and social injustice. And we will face more challenges in self-governance, climate change, and preventing advanced information technologies from causing more harm than good. And you, as a cohort, will be better prepared than previous generations to be resilient. Because you had to live it, these past 2 years.</p><p>So I don’t know if this is a better commencement speech than the other two that I wrote.
But I want to leave you with this:</p><ul><li><strong>Seek out the intersection of fields that don’t naturally overlap.</strong></li><li><strong>Make mistakes, but always make new ones.</strong></li><li><strong>Take the path of least regret.</strong></li><li><strong>And always choose resilience, because it’s not whether you fall down, but how you get up.</strong></li></ul><p>Thank you for the honor of addressing you today, and congratulations to you and your families for all your tremendous achievements in graduating today. Thank you.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e31a10b2e33c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[We need the Internet of Health Data]]></title>
            <link>https://fridsma.medium.com/we-need-the-internet-of-health-data-fe11b0d9f19f?source=rss-5c680c74cc34------2</link>
            <guid isPermaLink="false">https://medium.com/p/fe11b0d9f19f</guid>
            <dc:creator><![CDATA[Doug Fridsma]]></dc:creator>
            <pubDate>Tue, 26 Apr 2022 14:30:15 GMT</pubDate>
            <atom:updated>2022-04-26T14:30:15.453Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>(and not the walled garden of AOL)</strong></p><p>When I was the Chief Science Officer at the HHS Office of the National Coordinator for Health IT (ONC), I can’t tell you the number of times that a company would come into my office with the solution to interoperability, or information exchange, or data analytics, or any number of challenging problems that we need to solve in health IT.</p><p>Invariably, what they meant to say was “if the government would require everyone to use our solution, then all of your problems would go away”.</p><p>It can be seductive to think that with a single contract, or a single approach, you could make all the problems of data exchange and interoperability in health IT go away. But the success of those solutions is often short-lived. In conversations with other industries (and countries), I learned that those who had taken a “one size fits all” approach to data exchange and interoperability often failed to deliver on the long-term value of health IT. Sometimes, a messy, vibrant ecosystem of solutions can better drive long-term value and sustained benefit for patients, providers, and the health of the country.</p><p>There are historical examples of how this approach can limit innovation. Back in the 1990s, AOL wanted to be the central portal for access to online services. It aggregated content across the internet, created a single interface for email, developed communities and social networks, and tried (with limited success) to integrate the WWW and a web browser into its platform. With millions of users, there was some initial success. But as technology and the use of the internet grew, AOL was unable to keep up with the innovation. At one point, you could get everything you needed from AOL — except the internet.
The flexibility of the internet, its openness, neutrality, and simple set of standards made innovation more rapid than a single platform could accommodate.</p><p>We need to learn the lessons of AOL (and the internet) as we think about building a stack of connectivity technology to support public health and health care IT. While creating a single solution that integrates centralized data sources, privacy-preserving linkages, and analytics can be seductively simple, we know that ultimately this approach limits future capabilities, slows the pace of innovation, and makes the public health community less able to respond to rapid changes in technology, data, or the health of the public.</p><p>What is needed is not a “one ring to rule them all” solution, but a flexible stack of technology that allows for interoperable privacy-preserving linkages, a neutral approach to data sources, and an extensible design that can leverage new analytics techniques and resources and is resilient to new types of data and new health questions. We should not build walled gardens but instead encourage interoperability and a competitive ecosystem of solutions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ck03toCWGS_boBgW" /><figcaption>Figure 1. We should resist the temptation to create a single solution that is not resilient to technology or health care needs.</figcaption></figure><p>With our wireless phones, international calling capabilities, and unlimited data and phone services, we benefit from an ecosystem of providers and telecommunication options. But that was not always the case. For a very long time, AT&amp;T was the singular network for phone services in the US. Phone services and telephones were expensive. Innovation was limited. And customers had few options.</p><p>Fast forward after the break-up of AT&amp;T and the introduction of competition within telecommunications.
Now, if I have a cell phone with connectivity powered by T-Mobile, I can still call a landline, an international number, or another cell phone supported by a different company — and they all work together seamlessly. Interoperability occurs in the background to support connectivity.</p><p>Similarly, the neutrality of the internet (and later the World Wide Web) allowed the network and technology stack to grow and evolve as new uses became available. When the initial stack of standards for the internet was developed, no one imagined that we would eventually do our banking, stream movies and entertainment, and ultimately support team collaboration, remote education, and telemedicine in the face of a pandemic. These are functions that would not have developed within a monopolistic, singular platform. Given the dynamic and changing nature of public health, and the growing appreciation for the importance of social determinants of health, novel data sources for analysis, and the evolving health care IT ecosystem, it becomes even more important to build a flexible, neutral approach that is resilient to new use cases.</p><p><strong>A Future-Proofed, Resilient Health Data Network</strong></p><p>This history suggests that what we need for a scalable, future-proof health data infrastructure is a stack of interoperable technologies that are resilient and can incorporate new and unanticipated innovations:</p><ul><li><strong>a neutral and inclusive approach to data providers</strong>,</li><li><strong>coordination and bridging between different privacy-preserving technologies</strong>,</li><li><strong>ways of moving information from one place to another that don’t necessarily require centralized aggregation</strong>,</li><li><strong>the ability to transform formats and semantics from one information model to another</strong>,</li><li><strong>and accommodating different analytics and population health approaches that fit the problem to be solved.</strong></li></ul><p>This
can be accomplished through establishing a neutral stack of technology that multiple interoperable standards can roll up into.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*_I-VaWIrRdrtnCLL" /><figcaption>Figure 2. The Interoperability Stack. Each of the vertical use cases can be supported by a common set of standards that are layered to allow for flexibility and resilience to changes in the use cases, the technology, or other aspects of health interoperability.</figcaption></figure><p>As the graphic illustrates, there is tremendous value in developing a dynamic portfolio of technologies (and policies) that allow for diversity and innovation in public health. As privacy-preserving approaches to exchanging and using data become important with increased data literacy, we need to consider this “layer” in the stack of technology, and support a neutral, interoperable approach to preserving patient privacy in public health data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*x8u8yEQJ6hPqySLY" /><figcaption>Figure 3. An illustration of how interoperable privacy-preserving health data networks can function, either by building bridges between different networks or by providing two token keys (one from each network) to support linkages across different privacy-preserving health data networks.</figcaption></figure><p>Using a stack of standards allows networks to communicate with each other. As the figure above illustrates, we should avoid data networks that have interoperability within a network but prevent data from flowing between networks. Instead, we need to have interoperability and data exchange <strong>between</strong> networks, based on a consistent stack of standards. Patients (and their data) rarely exist within only one ecosystem.</p><p>Taking a “one size fits all” approach will severely impede other interoperable approaches that have started to flourish.
Had ONC done that at the start of Meaningful Use, we would never have seen the adoption of interoperable data exchange standards such as FHIR that are data source and technology agnostic. FHIR and the API infrastructure that we see today didn’t exist when we began to adopt EHRs. Encouraging interoperability and data exchange at the beginning will lead to new innovations that are often unforeseen when we start.</p><p>For care delivery and identifiable data exchange, such as TEFCA, the Sequoia Project, DirectTrust, and multiple health information exchanges, data can flow freely across different networks to give a full picture of a patient’s care. De-identified data networks should be no different. Every de-identified data network should be interoperable with all others, so that data can flow across networks in a neutral manner to serve all use cases. In no case should public health agencies build data networks that are closed systems, as that inflexibility will doom the projects built upon them to fail to adapt to changing research and surveillance needs, and to be subject to monopolistic contracting that is inefficient in the best case and wasteful in the common case.</p><p>Such an approach will drive innovation by decoupling exchange networks from innovations in analytics and allowing data providers and data analysts to evolve independently. It allows customers to manage their risk tolerance by using methods that are less susceptible to re-identification risks. And it makes the data ecosystem more diverse and reliable.</p><p>All-in-one solutions often have limited access to different kinds of data due to basic market competition. If a company both de-identifies and aggregates data for sale, it will be unlikely to be able to work with other data aggregators who view it as a competitor, which impedes the ability to link all necessary datasets together.
Neutral approaches do not interfere with the free flow of information because they do not compete with any data source nor any data user, and can bring together competitors in ways that allow the customer to have access to the biggest network of data.</p><p><strong>Build a future infrastructure that enables innovation, not stifles it</strong></p><p>We must be thoughtful in how we move forward with a health data infrastructure. Part of the analysis that ONC did early in Meaningful Use was to study the successes (and failures) of other countries and their health IT efforts. Countries that adopted a single stack of standards were often unable to change as new standards and technologies emerged; this limited competition made innovation more difficult. We should continue to learn from what works and what doesn’t in health IT.</p><p>Ultimately, there is rarely a “one size fits all” approach to data networks. Data needs will change over time and we need systems that are <em>resilient</em> to change. Analytics capabilities will improve, and analysts will want to apply new techniques to existing (and new) data sources. And we want health data infrastructure that will drive continued innovation and create more benefit for the public. This will require a stack of interoperable technology and a heterogeneity of approaches that allow innovative solutions to work together in ways that benefit health and health care. Ultimately, we want to build a system for health IT infrastructure that has the robust, dynamic, and innovative features of the world wide web, and not (the now) quaint idea of AOL, “you’ve got mail”, and very little else.</p><p>Our health IT future requires a resilient, interoperable, privacy-preserving, and dynamic data exchange infrastructure. The public deserves nothing less.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=fe11b0d9f19f" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>