Reflections on ITU’s 2023 AI for Good Conference

Cooper/Smith · Aug 21, 2023

ChatGPT launched in November 2022 and immediately captured the world’s imagination, performing feats many thought were years or decades away. But GPT-3.5, the technology that made those feats possible, wasn’t released that month; it had already been available for eight months. ChatGPT’s secret was not just the groundbreaking AI model behind it, but the simple user interface that launched it into the public consciousness. The chatbot made the GPT-3.5 model accessible, and above all useful, to the average person.

Last month, I attended ITU’s AI for Good conference in Geneva on behalf of Cooper/Smith. The showcases were full of incredible robots engaging in conversations with passersby, dodging obstacles on four legs, and even pouring glasses of wine. But the hard work of bringing such ideas to life is still only a first step. As these technologies mature, the crucial next leap is finding applications that make them useful to people; otherwise they will fade away as mere novelties. The robot bartender, I was told, was designed to be a curiosity at events, not to staff an actual bar.

Many technologies on display could prove tremendous boons for public health. Image processors that look for signs of cancer seem a particularly promising tool, as they can give a radiologist a quick suggestion rather than forcing them to analyze a slide from scratch. Large language models should prove adept at combing through all of a patient’s qualitative statements in a way today’s medical record systems can’t. More speculatively, friendly and communicative robots could help ease the loneliness crisis already unfolding in the world’s aging populations.
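To make the second idea a little more concrete, here is a minimal sketch in Python of how a patient’s free-text statements might be pulled together and summarized for a clinician. It is illustrative only: the `ask_llm` callable is a hypothetical placeholder for whatever model or API a real project would use, not a reference to any specific product.

```python
# Hypothetical sketch: summarizing a patient's qualitative statements with a language model.
# `ask_llm` is a placeholder for whichever model call a project actually uses.

from typing import Callable, List


def summarize_patient_notes(notes: List[str], ask_llm: Callable[[str], str]) -> str:
    """Combine free-text statements and ask the model for a clinician-facing summary."""
    combined = "\n".join(f"- {note}" for note in notes)
    prompt = (
        "You are assisting a clinician. Summarize the patient's own statements below, "
        "flagging symptoms, timelines, and anything that may need follow-up. "
        "Do not add details that are not present.\n\n"
        f"Patient statements:\n{combined}"
    )
    return ask_llm(prompt)


if __name__ == "__main__":
    # Example usage with a stub standing in for a real model call.
    notes = [
        "The cough started about three weeks ago and gets worse at night.",
        "I've been more tired than usual and skipped my last clinic visit.",
    ]
    print(summarize_patient_notes(notes, ask_llm=lambda p: "[model summary would appear here]"))
```

The point of the sketch is the shape of the task, not the model: the hard part is deciding what the clinician actually needs surfaced and how the output fits into their existing workflow.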

To take advantage of these breakthroughs, we must do the hard, sometimes unglamorous work of bringing them into the real world. For a radiologist to get the most out of an AI-based image scanner, for example, the interface around it must be designed to make their life easier. If it is cumbersome or hard to learn, they will abandon it, and rightly so: their lives are busy enough already. If its outputs aren’t compatible with the reports they need to write for their clinic, hospital, or ministry, the extra work won’t be worth the faster diagnosis. No matter how impressive the underlying technology, it needs to fit the context in which it’s deployed and be user-friendly.

Bridging that gap requires an excellent understanding of both AI technology and the public health reality we should be targeting: not just in a clinical or medical sense, but in understanding how health workers go about their days, what requirements are placed on them, and how AI can help, rather than hinder, their work. There could be no greater failure for a technological solution than for a health worker to say that dealing with it takes time away from helping patients. AI solutions need to be designed with care, and we must always be open to admitting when they are not the right solution, not a helpful one, or not a cost-effective one. We’ll be talking through this in detail at our Directors’ retreat on AI & Tech in November.

The WHO forecasts a shortage of ten million health workers by 2030. I truly believe that the technologies we are seeing represent something new, and that they have immense potential to improve health worker productivity and thereby ease the burden of disease on the world’s most vulnerable people.

But to realize that potential, we need to make sure these technologies serve the people they are deployed for, and are not shoehorned in as shiny new toys or talking points for public relations. The technologies are already powerful and will only get more so; now the slow, hard, but necessary work of bringing them to the service of humanity, one step at a time, must begin.

Cooper/Smith

We use hard data to increase effectiveness and efficiency of health and development programs worldwide. www.coopersmith.org