ChatGPT, artificial intelligence, and critical contemplation in education
There is something powerful about listening in silence. The new year brings with it a renewed energy to be something, to prove a concept, to illustrate urgency, and to challenge everything. We refuse to be silent, and for good reason. Yet somehow we find ourselves incapable of listening, because technology allows us to make noise. Too much noise. We are drowning out essential guidance that should shape our efforts at technological innovation in the classroom. Instead, there is a push to be first and loudest, to work with the barest bits of information, and to spin perspective at unprecedented speed. At what cost? We need reflection to avoid an impossible choice in education: Do we disrupt and dismantle without a strategy, leaving devastation in our wake, or do we remain complacent about the harm affecting vulnerable and impressionable learners?
The value of critical thinking has long been a part of computational science education. Kules (2016) offers a concise framing for synthesizing computational and critical thinking in ways that align with university goals. In this model, faculty learning and student engagement are not at odds with one another. The goal? Learning that increases understanding of each.
Learning takes time, and urgency remains.
There is considerable discourse on the role of natural language processing in higher education, most recently around ChatGPT. Yet we celebrate the same technology when it powers chatbots, triages student services, supports students with language learning, and cultivates community online. So where are those frameworks in our current discourse?
It is a mistake to rush to identify and label a particular product. Since Sidney Pressey’s intelligence-testing machine in 1924, the push and pull to innovate and automate has been a constant. This is not to say that automation is without flaws; political realities for the global majority demonstrably prove that fairness and equity are not mathematically attainable. Dismissing that element alone is disingenuous at best and, at worst, dangerous. In either case, time is not on our side: by the time a technology is widely accessible, it is already being reoptimized and monetized, rendering most “hot takes” obsolete. Remember how excited we were to send text-message nudges before we developed what can only be described as automated spam? Overstating benefits, especially in terms of equity, is equally dangerous.
The existence of public information is different from the inclusion of diverse perspectives. Those most vulnerable are already susceptible to the consequences of design and policies that ignore their personhood, most notably in risk-management outcomes such as surveillance and health care. Taking the time to understand a product’s history, how it works, and why it was built allows us to consider its utility, mitigate harm, and reimagine possibility in the customized context of our classrooms and institutional policies. Should there be a push to develop standards and best practices? Absolutely. But despite decades of research exploring ethics and AI, that isn’t happening. It’s January, and your discipline’s annual meeting hasn’t even happened yet.
Academic disciplines have varying appetites for thinking through technology. The land-grant university itself navigated its changing role and utility in the United States, balancing agricultural tradition and technological innovation. Today, we continue the rush to diagnose innovation as the solution to every ill or as a problem worthy of blame. Students are described as credential consumers. Faculty are cast as unable and unwilling to change their teaching methods and assessment models, eroding the nature of learning. Finally, oft-vilified tech companies are cautionary tales whose sole purpose is to destroy learning as we know it. Unless, of course, we are winning because of their use. Blame feels effortless and is often rushed and misguided. We have models and frameworks of understanding, and they tell us that critical thinking cannot operate at algorithmic speed.
The only true urgency is our responsibility to pause, reflect, and understand. Currently, syllabi are being furiously updated to require that students submit their own words and conceptualizations rather than the output of language models. Using Microsoft Word? The technology giant behind it is investing in this technology for the third time, and an updated version of the tool is projected to launch in the spring. So in the meantime, yes, a rule may catch academic misconduct. But to what end? And enforced by whom? If we are honest with ourselves, the commentary’s feverish pace represents the very breach of trust we hope to prevent. The unrelenting and opportunistic nature of academia forces higher education to choose a side, each camp hoping it will be proved correct.
Unfortunately, the reality is much more complex. No single answer speaks to the nuances of each discipline or the consequences of decades of broken trust held together by fancy marketing. There are real concerns and fears in each camp. We have seen this before: the pandemic-driven shift to online learning stretched us and exposed our many flaws. Recent discourse is not collaborative; most of it is rooted in scarcity of resources and threats to power and relevance. Consequences remain, despite our best intentions.
Natural language processing, machine learning, and other algorithmic designs are not new, and they are not going away. We see only the tip of the iceberg when it comes to their sophistication. Thanks to our universal curiosities, we have provided impressive amounts of data to support these innovations. But whom are those data representative of? Stratification places the best of the technology in the hands of those who can afford its price point. We will be found wanting if we fail to engage and listen in interdisciplinary, inclusive, and critical ways.
Please balance educational decision-making with the historical context of the topic. Reading the links embedded in this essay is a great place to start. After that, engage with studies that speak to your academic discipline, the identities of your student population, and the concepts or policies most important to your decision-making; that work is urgently needed. The name is new, but the ideas we wrestle with remain the same.
We can’t slow, stifle, or shape something we don’t understand. If we aren’t careful, we will alienate the very students we purport to protect. There is no academic integrity in feigning expertise. Our biases equate ease with equity, assume malicious intent, and seek ways to place gates around a landscape we don’t understand. Rather than seek collaboration, we are loud, and most of us are wrong. Listening could offer insight into what we have chosen to ignore for so long.
Authenticity is always needed to create. We are inauthentic when we do not pause for understanding or listen in silence; not because we are incapable, but because we are pressured and tired. What we risk by discounting change to suit our own ends will jeopardize what is possible, and those mistakes will remain on record. The algorithm always remembers.