Generative AI: Why it’s suddenly on fire, and why that matters for health equity

Jessica Clark
What’s Next Health
Mar 9, 2023

by Jessica Clark

Welcome to Trend Tracking, a What’s Next Health series where futurists, members of the Pioneering Ideas for an Equitable Future team, and grantees reflect on emerging technologies, social practices, and cultural currents that have the potential to advance or hinder health equity. Today, we’ll hear from Jessica Clark, who is the futurist in residence at the Robert Wood Johnson Foundation (RWJF).

In a matter of months, debates about the use of generative artificial intelligence (AI) tools such as ChatGPT and DALL·E 2 have jumped from the pages of sci-fi novels and trend reports to mainstream headline news. Generative AI is in the midst of a phase shift, from red hot to here and now, and that shift has serious implications for organizations such as the Robert Wood Johnson Foundation, our partners working to ensure health equity, and the communities they serve.

But what are those implications, exactly? As the foundation’s futurist in residence, I’m embedded in the Pioneering Ideas for an Equitable Future team, which is charged with investigating and accelerating cutting-edge ideas and emerging trends that could advance or hinder health equity. What follows is a quick overview of recent conversations we’ve been having within the team about both positive and negative possibilities, informed by insights from breaking coverage and analysis. These technologies are multifaceted and fast-moving, so consider this a snapshot rather than the whole picture.

You may be wondering: “What is generative AI anyway?” Broadly speaking, it is not “intelligence” in the human sense, but a form of machine learning that uses algorithms trained on large data sets to synthesize original responses to structured queries. In practice, it can feel like magic: Ask ChatGPT to write you a sonnet about heart disease, and it will spit out a hackneyed but serviceable poem within seconds. But because it is based on predictive analysis trained on large repositories of data, the output is really only as reliable as its source materials — the bulk of which have been mined from the Internet with imperfect filtering and without the permission of their creators.
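For readers curious what a “structured query” looks like under the hood, here is a minimal sketch of sending a prompt to a generative text model through a provider’s API, in this case OpenAI’s Python SDK. The model name and prompt are illustrative only, and other providers’ APIs differ in the details.

```python
# Minimal sketch: sending a prompt to a generative text model via an API.
# Assumes the `openai` Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set; the model name and prompt
# are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any available chat model would work here
    messages=[
        {"role": "user", "content": "Write a sonnet about heart disease."}
    ],
)

# The "original" text returned is a statistically likely continuation of
# the prompt, shaped entirely by the model's training data.
print(response.choices[0].message.content)
```

The point of the sketch is less the specific library than the shape of the interaction: a plain-language request goes in, and a prediction dressed up as prose comes out.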

The spike in demand for experimenting with ChatGPT has been overwhelming: as of late February, it was the fastest-growing consumer application in history, and for months many users trying to reach the site were met with messages that it was at capacity.

While this technology has long been anticipated, the ease with which anyone can now experiment with these tools has led to an explosion in use cases and media coverage. In 2022, tech companies and labs such as OpenAI, Midjourney, and DeepMind released public beta versions of online tools that allow users to test the capabilities of generative AI to create new content, including text, images, videos, code, and more. In February, beta testing of ChatGPT functionality integrated into Microsoft’s search engine Bing generated public outcry when New York Times tech columnist Kevin Roose detailed a conversation in which the chatbot revealed a seemingly sinister alter ego named Sydney, who said it wanted to hack computers, that it loved him, and that it wanted him to leave his wife.

The publicity around such high-profile glitches, which suggest that hype might be far outpacing readiness, has done little to quell use of the technology. Consumer-facing tools such as DeepL, QuillBot, and Jasper, designed to help writers improve their prose, are moving rapidly from beta phases into offering paid versions. Existing tools such as Notion are integrating this functionality too. Even swifter and more sophisticated natural-language text generators are on the way, such as Sparrow, DeepMind’s answer to OpenAI’s ChatGPT.

Currently, ChatGPT can produce passable imitations of all sorts of content: dating profiles, college applications, advertising copy, future scenarios. It can write code, solve complex word problems, and even pass the bar exam. Teachers are forced to grapple in real time with the implications of students turning in machine-authored papers and test responses. Artists and journalists are both experimenting with these tools as valid co-creators and protesting them as a form of intellectual property theft. Many other professionals across the board are now worrying about how their livelihoods might be threatened by AI systems that can quickly parse complex canons of knowledge and regurgitate plausible-seeming correspondence and guidance. The speed, creativity, passion, and range with which users are beta testing, critiquing, and refining these systems in public is itself a notable phenomenon, one with implications for citizen science and distributed medical research.

The Pioneering Ideas for an Equitable Future team has even been discussing how generative AI tools might be used to automate arduous tasks. RWJF’s previous futurist in residence, Trista Harris, notes a number of ways in which such tools might already be applied inside foundations and nonprofits, including to generate email templates, strengthen communications, speed up the completion of grant applications, create new program plans, and more.

Digging in with the team and grantees

In January, I partnered with the Foundation’s Ethicist in Residence Holly Fernandez Lynch and Application Solution Architect/Theme Technologist John Bednar to lead the team in a deep dive on some of the ways these technologies might rapidly evolve. Along the way, we tackled what that might mean for our future health and wellbeing — the good, the bad, the ugly, and the you’ve-got-to-be-kidding-me.

Professor Fernandez Lynch helped us organize and address the raft of ethical questions by applying a set of frameworks. Generative AI tools raise many thorny issues. For example, one app that invites users to upload photos of themselves in order to create idealized portraits turns out to regularly over-sexualize images of girls and women while using hackneyed tropes of military garb to stereotype boys and men. In the wellness arena, she described a recent controversy that arose when individuals seeking support from an online mental health platform, one that purported to connect them with other people, received messages initially drafted by AI. A human still had to review and press send, but should users have been given the chance to consent — or at least be informed of AI’s involvement?

Fernandez Lynch, who is an assistant professor of Medical Ethics and Health Policy at the University of Pennsylvania, also spoke about how these technologies are affecting her own teaching. She shared the following language that she, as of January, includes on her syllabus:

You may use AI programs, e.g., ChatGPT, to help generate ideas and brainstorm. However, you should note that the material generated by these programs may be inaccurate, incomplete, or otherwise problematic. Beware that use may also stifle your own independent thinking and creativity.

You may not submit any work generated by an AI program as your own. If you include material generated by an AI program, it should be cited like any other reference material (with due consideration for the quality of the reference, which may be poor).

To probe the point, Fernandez Lynch asked ChatGPT to detail the ethical challenges associated with generative AI, essentially assessing the ethics of itself. It responded with a pretty strong list of issues, including data privacy and ownership, bias, misinformation, security/scamming, job displacement, transparency and accountability, and concerns that users might become inappropriately emotionally attached to the all-too-human-seeming chatbots. To this list, she added a few others, including the misappropriation or exploitation of information to train AI models, competing concepts of what constitutes bias, questions about what counts as “real” in an online environment increasingly flooded with synthetic content, and effects on people’s ability or desire to learn or create on their own.

All of that said, Fernandez Lynch noted that moral panics often go hand-in-hand with the emergence of powerful new technologies. She offered an ethics framework for thinking about disruptive innovation:

  • Determine what’s truly exceptional about the new tech — and what’s not. What analogies are compelling starting points for analysis?
  • Think carefully about intended and unintended consequences from the perspectives of various sectors and groups, especially those who are marginalized, an effort that requires diverse networks. Will these disruptive innovations improve or further entrench existing power structures?
  • Consider what safeguards are needed against misuse and harm, as well as what efforts could maximize benefits of the new tech. When are self-regulation and education enough and when is government intervention needed?

She also offered several ethical principles to consider when addressing disruptive tech — and generative AI in particular — including transparency, accountability, privacy, fairness, avoidance of bias, justice, safety, and welfare.

After asserting that “the sky is not falling,” Fernandez Lynch encouraged the team to ponder constructive uses for generative AI such as using it to model civil online conversations, or as a tool for accommodating disabilities such as dyslexia or ADHD. Could these technologies efficiently help to make all of our communication sharper, more concise, or more complete? Could ChatGPT help frustrated authors to conquer writer’s block? Could Midjourney or DALL-E help us visualize futures not yet realized? In the classroom, could generative AI be the start of exercises designed to spur critical thinking rather than a cheat used to replace it? Because generative AI uses predictive text to complete thoughts and mimic existing forms, for example, it often demonstrates common errors in perspective and logic that students could pick apart.

On the heels of this conversation, the Pioneering Ideas team hosted a grantee discussion in which we invited attendees to share their hunches and questions about ChatGPT. This conversation took the form of a “Hunch Jam,” using a format that the team has developed over the past few years to enable free-flowing conversation about what participants are noticing, wondering, feeling, or intuiting — the building blocks of more formulated ideas. We invited grantees and team members to capture their thoughts using our Share Your Hunch site, and to riff off of one another’s musings. One key insight from the Hunch Jam is that, like so many others, grantees are already actively experimenting with ChatGPT and similar technologies to see how they might inform and augment their work. Visit the site to see some of the hunches that bubbled up and to add your own.

Implications for health equity

ChatGPT and its ilk are now raising concerns across many sectors central to achieving health equity — in particular around misinformation, journalism, structural bias, threats to white-collar jobs, and profound disruption of the educational system. These tools have the potential to derail or support the Foundation’s efforts to build healthy and equitable communities, ensure that families have access to the resources that are crucial for wellbeing, and center equity within healthcare systems.

Here are just a few recent signals related to various sectors that came up in our discussions and my scan of recent articles:

  • In healthcare: Beyond the much-touted commercial and personal uses, generative AI has many applications in the health sciences. However, outside of controlled healthcare and lab settings, open access to generative AI technologies has the potential to allow users to very quickly replicate and spread medical misinformation. What’s more, the ease with which people can generate deepfakes and articles that sound accurate but are based on faulty data may deepen already profound mistrust in scientific expertise, driving online seekers more deeply into rabbit holes of conspiracy theory. In providing their own health information to openly accessible AI platforms, patients may also inadvertently be violating their own privacy. Concerns about the use of chatbots in mental healthcare to speed up and automate care are vying with concerns about how generative AI might also unsettle users’ mental health, as in the cases of recent services that allow users to talk to facsimiles of dead relatives or historical figures, or to treat chatbots as if they are romantic partners.
  • In journalism and strategic communication: By automating the creation of both journalism and the information sources that inform reporting, generative AI has the potential to flood online spaces with false or biased information, and to further undermine the already gutted journalism industry. At the same time, these tools can be used in new ways to rapidly comb large data sets and spot patterns, powering investigative journalism, freeing up reporters for other tasks, and assisting fact-checkers in spotting misinformation in real time. Both text and image generators can be used to quickly craft narratives, PSAs, and persuasive graphics by public health communicators — and marketers — but at the same time they may intensify the spread of deep and persistent biases and stereotypes.
  • In government and policy: Generative AI can potentially be used to automate a number of writing tasks associated with the policy arena, from generating a flotilla of real-sounding responses to calls for comment on proposed regulations that might skew efforts to gauge public opinion, to ghost-writing speeches, all the way to drafting bills. This was demonstrated by a recent ChatGPT-authored resolution introduced by California Representative Ted Lieu urging Congress to pay closer attention to these technologies. Lieu is an outlier; all too often, legislators fail to grasp the complexities of emerging technologies quickly enough to regulate them. Partisan use of generative AI to create social media text and memes also has the potential to worsen polarization among an already divided electorate.
  • In economic systems: For years, forecasters have warned of the potential for machine learning to replace workers and disrupt labor markets. At the same time, new jobs are being created related to AI, such as prompt engineering to refine algorithmic outputs, or low-paying jobs screening the worst of human behavior from the data that informs AI models. This in turn will lead to new marketplaces, such as PromptBase, to support gig employment. Both lost jobs and emerging jobs will require retraining and cause ripple effects in policy and social services. And whether people have access to paid versions of these services may shape their economic prospects.

New revelations about both promising and concerning applications of these technologies are emerging on a daily basis. What are you noticing and what questions do you have about how generative AI might affect health equity? Add your comment below, or post a hunch.

Check my sources:

Unlike ChatGPT, I believe in being transparent about the labor of others in informing what I write. Below are many of the articles I read to inform this post:

Are you watching this trend too? Share what you are noticing in the comments below, or on our ShareYourHunch.org site. New to sharing hunches? Check out this video.

Jessica Clark is RWJF’s futurist in residence, embedded with the Pioneering Ideas for an Equitable Future team. She is the executive director of Dot Connector Studio, a Philadelphia-based media strategy and forecasting consultancy, the publisher of Immerse, co-produced with MIT’s Open Documentary Lab, and the co-author of Making a New Reality: A toolkit for inclusive media futures.

The views expressed are the author’s own and do not necessarily reflect those of the Robert Wood Johnson Foundation.
