Why A.I. Can’t Replace Humanities Professors Just Yet
(Will they ever?)
In a recent article on Substack, Hollis Robbins (Professor of English and former Dean of the College of Humanities at the University of Utah) warns that university professors are about to become obsolete. Describing the need for a complete transformation of the university in light of the development of artificial general intelligence (AGI), she states boldly that “The only defensible reason for universities to remain in operation is to offer students an opportunity to learn from faculty whose expertise surpasses current AI. Nothing else makes sense.” She consequently calls upon faculty members to write a memo justifying their employment by answering questions including “What specific knowledge do I possess that AGI does not?” and “What unique insights or capabilities can I offer that exceed AGI systems?” Anyone who cannot reply cogently to these questions, she writes, should have “no place in the institution.”
Strident and alarmist, but also provocative, her piece raises important questions for all humanities professors. For while Robbins presents a vision of the future, a spate of AI tools called “Deep Research” — already available from Google’s Gemini, OpenAI, and Perplexity — claims to be able to do our job for us.
Is the professor’s moment of obsolescence already nigh? Should we have no place in the institution starting today? I say not so fast, and you can consider this essay my memo, submitted on behalf of all humanists….
In my pedagogical philosophy, I differentiate between three core areas of humanities skills, which I shorthand as “the three Cs.” The first C is the consumption of knowledge. The majority of entry-level college courses in the humanities focus on teaching students how to be savvy consumers of knowledge. That is to say, we help them acquire the skills they need to effectively find, understand, and process information produced by others. Depending on the class, we may do this through a variety of means, including close listening, close reading, and note-taking skills. In order to consume information appropriately, students need to learn how to evaluate the outputs of different disciplines (how a clinical study differs from an ethnographic report, for example), how to acquire primary and secondary sources using advanced research tools (specialized databases, for example), and how to properly document, manage, and cite their sources.
Are LLMs capable of being savvy consumers of knowledge in these ways? At the time of writing, my verdict is “not yet.” In my own testing, I have found that the latest updates of the most popular AI tools can indeed retrieve a list of sources pertaining to a given topic. But are they able to tell me which sources are the most relevant, the most innovative, the most exciting, the most potentially compelling or useful? At present, I have seen no evidence that they can. In my experiments with these services, I often feel like I am working with an eager but naive undergraduate student — someone who is industrious yet indiscriminate in their efforts, unable to separate wheat from chaff, unable to articulate any intentionality or strategy behind their collecting. I know that with time and training, a human student will improve their sophistication and skills; whether AI will also be able to improve in these areas remains to be seen.
The next core area of humanities education, the second C, is curation of knowledge. This area involves cultivating the skills to critically analyze and synthesize the sources one has discovered. In the university setting, students tend to move into learning to curate knowledge in the advanced undergraduate courses they take for their major, and continue to hone these skills through master’s-level graduate work. Here, students learn to understand the shape of the conversation taking place between and among the sources they have uncovered, and to identify the key arguments, methodologies, and assumptions that shape that discussion. Additionally, students learn to explain all this to a specific audience, tailoring the presentation of information for maximum comprehension and impact while also investigating the most effective and persuasive means of conveying their conclusions.
Are LLMs capable of being effective curators of knowledge in these ways? The results of my experiments are a mixed bag. If fed good information and prompted well, the current generation of LLMs seems to be able to produce accurate and intellectually coherent summaries, and to target them toward specific types of readers. (This was not true just a year ago, when the analysis was riddled with hallucination to the point of being laughable.) However, the quality of the output is directly related to the quality of the input. If the AI is left to its own devices to do the collecting of sources (the first C, which it can’t do too well), then the results of this second C leave much to be desired. It is reasonable to expect that improvements in AI in the coming months and years will improve its performance in this area. That being said, at present AI is still dependent on a knowledgeable human being to properly feed and prompt it in order for it to be useful.
Finally, the third C is the creation of new knowledge, where, having fully mastered both the materials and the intellectual contours of a given field, students begin to be able to make concrete novel contributions of their own. In the humanities, this kind of learning typically becomes the primary focus only during doctoral-level training. This is when students learn to design and execute a research project that both engages critically with the prevailing understandings in the field and makes an effort to move those understandings forward.
Each area of the humanities has its own particular workflow for how its practitioners generate new knowledge. As a historian of medicine in ancient and medieval Asia, my own process involves reading highly specialized technical texts written in languages that have been dead for centuries. Many of the manuscripts we work with in history are not digitized, requiring us to laboriously decipher hand-written marks scratched on deteriorating pieces of paper, silk, stone, or other materials. Even if they are already transcribed, the contents of such texts are never straightforward, always requiring nuanced interpretation based on cultural and historical context. There are often grey areas that require judgment calls we won’t be able to explain rationally. We sometimes simply have a feeling that this is what the author must have meant by this particular turn of phrase, or that this must be the missing word in a tattered portion of the manuscript. These inchoate hunches are inexplicable: the product of decades of experience looking at similar documents plus a hint of spontaneous inspiration.
Could AI potentially do some of this work? I have no doubt. If anyone spent the money to train LLMs to scan historical manuscripts and to work with dead languages, these would become valuable tools that could assist us greatly. AI could potentially help us discover subtle patterns in vast archives of data that are too big for human brains to process. It might even be able to suggest missing pieces in our knowledge or new directions for our research. But that is a far cry from having the ability to develop its own genuinely new interpretations based on contextual intuition. Could AI someday develop that capability? Maybe, but I’m not holding my breath.
I have no doubt that AI will continue to improve, and it may one day be able to consume and curate knowledge much more effectively than it can now. New technological developments may also make AI more helpful for humanists in our work of knowledge creation. However, the role of the humanities professor includes a fourth C, cultivating, which is even more difficult to imagine being done by a computer.
At least here in North America, the job description of nearly all professors involves fostering the intellectual growth of our students as they learn how to participate in the above processes of knowledge consumption, curation, and creation. I think those of us who teach a range of students at various levels will agree that the further a student advances in these skills, the more nuanced and subtle their mentorship needs become. While an AI learning module could certainly teach students the basic skills of citing a source properly, could a computer ever meaningfully convey to an advanced student how to develop the kinds of intuitive scholarly judgments that come from trusting one’s gut? Could it ever compassionately coach them to tap into their own lived sensory experience when encountering an unfamiliar historical object for the first time? Could it ever replace the warmth of the face-to-face human touch, or give the student the perfect balance of critique and encouragement based on an empathic sense of their emotional state, or provide a role model that inspires them to stretch beyond their comfort zones into the unknown?
The answer is clearly no. And anyone who thinks these aspects of teaching are automatable, or superfluous to the learning process, doesn’t understand education. So, my colleagues, let’s not pack up our offices just yet. Our students — and the university and the world — need us to do what we are doing, and I have no doubt they will continue to need us for quite some time to come.
Thanks for reading! Please visit piercesalguero.com or subscribe to my newsletter for updates about my research, blogs, podcast episodes, and other work.