The Future of Medical Education: Organizing, Teaching, Coexisting and Learning from Artificial Intelligences
Since the emergence of modern natural philosophy, the structures governing information have been based on a foundational myth in which a central authority or force defines the goals, paradigms, structures, distribution channels and beneficiaries of knowledge and its wealth. This creates a highly concentrated, pyramidal constitution, where creation and management are restricted to a few societal groups that rule over the correct paradigms, creation methods and channels for dissemination. In other words, it is a centralized, hierarchical and authoritarian system.
In contrast to this centralized model built by humans on creational myths, Complex Adaptive Systems (CAS) are increasingly recognized as the model explaining how nature, biological beings and data are organized. CASs are defined by their ability to self-organize, adapt to changes in internal and external conditions and provide a survival advantage to the organism (community) as a whole. Many things around us are CASs: social media networks, cities, wolf packs, swarms of insects and memes.
When knowledge and ideas are organized as CASs, they rest on the concept of stigmergy, where cues created by individuals (nodes) influence the behavior of other members of the community (network), changing the overall output of the group. A classic example is ants building a colony. This stigmergic collaboration requires communication, social negotiation and a creative output. Humans have associated this way for thousands of years, from hunting mammoths to building cathedrals. But as information became more complex, creative outputs became more difficult to socially negotiate and stigmergy faded into the background.
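The ant-colony example above can be made concrete with a toy simulation. The sketch below is purely illustrative (path names, reinforcement values and the evaporation rate are all invented for the example): each ant deposits a cue (pheromone) on the path it takes, and later ants are biased by the accumulated cues, so the group converges on a path without any central coordinator.

```python
import random

random.seed(0)

# Two paths to a food source; ants leave pheromone on the path they take.
# Each ant chooses a path with probability proportional to its pheromone,
# so early individual choices become cues that steer later ants.
pheromone = {"short": 1.0, "long": 1.0}

def choose_path():
    total = pheromone["short"] + pheromone["long"]
    return "short" if random.random() < pheromone["short"] / total else "long"

for _ in range(500):
    path = choose_path()
    # The shorter path is traversed faster, so it accumulates pheromone quicker.
    pheromone[path] += 1.0 if path == "short" else 0.5
    for p in pheromone:          # evaporation keeps the system adaptive
        pheromone[p] *= 0.99

dominant = max(pheromone, key=pheromone.get)
print("dominant path:", dominant)
```

No ant "knows" the plan; the behavior of the group emerges from cues left in the shared environment, which is the essence of stigmergy.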
Gilles Deleuze and Félix Guattari rediscovered and described the concept of rhizomatic organization (a form of stigmergic collaborative network), in which the network is non-hierarchical, self-governed, distributed, maximally connected, multi-domain and semiotic, and where behavior and outputs cannot be predicted from the characteristics of the nodes, as the nodes change when they communicate with each other. We have written about this before on this blog.
For centuries, and even after the advent of the Flexnerian era, medical education has been based on the centralized, hierarchical and authoritarian paradigm of information and knowledge management. Despite quantum leaps in recent decades, such as competency-based education, the overall framework remains founded on a stratified model in which some members of the group are directors, others are teachers and others are students, with a unidirectional flow of information.
Our world is changing rapidly in how we manage data and knowledge. For most practical purposes, the average individual now has access to an almost incomprehensible amount of information, including medical science and education. Users of that information, in this case our learners, want to turn it into knowledge without necessarily having a preceptor telling them what is right and what is wrong. What learners want is a community that gives contextual meaning to the information so they can create their own personal learning networks and educational artifacts. This partially explains the eruption and success of the Free Open Access Medical Education movement.
Centralized, authoritarian and hierarchical structures are by definition inefficient and non-resilient: they cannot manage problems involving unbounded data and cannot react nimbly to changing conditions. Yet after hundreds of years, the way we teach medicine remains confined within these structures. On the other hand, CASs are a core part of our lives, from the way we share news with friends and family, shop for items on the internet, and how traffic lights are sequenced on our commute, to how our insurance premiums are calculated. This is the time to move medical education to a new social-constructivist paradigm based on CASs, rhizomatics and open knowledge. This new construct is predicated on an engaged community, robust knowledge exchange and self-governance, where collaboration is encouraged and facilitated, curators are enablers but not authorities, and the system is refined by multiple iterations of social negotiation, as in an evolutionary algorithm. David Cormier describes this best as “the community is the curriculum”.
Within medical education there is a (distant) move from an individual competency model to a collective competence construct, where the outcomes of education and healthcare are defined not by the isolated performance of an individual but by the complex interconnections of multiple agents. At the same time, we need to start considering how to incorporate collective clinical competence with dataism and artificial intelligence (AI).
The idea of collective competence, developed by Lorelei Lingard and Brian Hodges, mirrors the concepts of stigmergy and rhizomatic organization of networks. Traditionally we have focused on hyper-specialization, data reductionism and individual performance. A more decentralized architecture, however, calls for multipotentiality, contextuality, interconnection, data augmentation and network/community performance. As Lingard proposes, this can mainly be achieved through technological affordances and constraints.
We have those technological affordances and constraints now. We are witnessing the arrival of soft artificial intelligence in our lives, from preemptive recommendations on what we want to buy to predictions of who is pregnant. This type of AI is becoming ubiquitous in clinical practice, particularly in the domains of pharmacotherapeutics and decision support. Currently, decision support is little more than a cognitive crutch, but it is becoming increasingly intrusive in all aspects of clinical care. As clinicians and educators, we have not given enough attention to how we interact with soft AI. (For example, if pharmacy decision support becomes universal, why should new learners know anything about it? Can we shorten training? Can we just focus on diagnosis and decision-making?) I feel that this particular train has already left the station, and many of our learners are using these tools without understanding the key concepts behind them.
Although soft AI is raising many questions, the tectonic change will come with the advent of strong AI. This event, the emergence of an efficient, suprahuman intelligence capable of massive data management, will redefine what we do. The day a strong AI tells us that our diagnosis is wrong and our treatment recommendations are faulty is not far into the future; I am certain it will happen during my lifetime.
We need to start thinking about and planning our roles for a future where AI will make most of the important decisions with little input from humans. Will the training of future doctors be restricted to learning empathy? How do we teach students to learn from a digital intelligence? How do we teach digital beings? Do we actually have to? Is medical education going to be nothing more than learning how to interact with AIs?
Many experts think that data cannot self-organize (following Claude Shannon's concept of entropy). However, strong AI will almost certainly behave in ways that assure instrumental goal achievement, self-preservation and resource acquisition. A strong AI will relentlessly pursue the objective it is programmed for, even if that objective is not aligned with human priorities. We have to be very careful in deciding and programming what those goals are.
We are entering the age of dataism, where authority and truth emanate not from human self-determination but from data analysis. If we do not pay attention to the changes around us, we are threatened with becoming nothing more than biological data-entry agents for a suprahuman mind; we will become the machine of the Ghost in the Machine problem. The ultimate challenge is to create a framework for strong AI that guarantees the prime directive of the system is to achieve what is good for the patient, good for the patient's life, and good by the patient's self-determination and value structure, and not necessarily good according to the AI's optimal solution. As educators, we need to start thinking about how to teach these digital beings what it is to be human, and how medicine is about helping, comforting and accompanying our patients, not only optimizing diagnosis and treatment. Finally, we need to start thinking about how we are going to learn from non-human teachers.
References and further reading
- La Innovación Pendiente. Cobo C. [Spanish].
- Yuval Noah Harari on big data, Google and the end of free will. Harari YN. Financial Times.
- What is an evolutionary algorithm? In: Introduction to Evolutionary Computing. Eiben AE and Smith JE.
- Paradoxical Truths and Persistent Myths: Reframing the Team Competence Conversation. Lingard L.
- Superintelligence: Paths, Dangers, Strategies. Bostrom N. Oxford University Press.
- The Ghost in the Machine. Koestler A.
- “Answer”. Brown F.
BONUS TRACK
“Answer” by Fredric Brown (1954)
Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.
He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe — ninety-six billion planets — into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.
Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment’s silence he said, “Now, Dwar Ev.”
Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.
Dwar Ev stepped back and drew a deep breath. “The honor of asking the first question is yours, Dwar Reyn.”
“Thank you,” said Dwar Reyn. “It shall be a question which no single cybernetics machine has been able to answer.”
He turned to face the machine. “Is there a God?”
The mighty voice answered without hesitation, without the clicking of a single relay.
“Yes, now there is a God.”
Originally published at icenetblog.royalcollege.ca on November 29, 2016.