Educators are troubled by Generative AI, but consider the prospects

Saqib Jan
Published in ILLUMINATION
4 min read · Apr 10, 2023

Generative AI has the potential to transform the way we create novel content from existing data, as ChatGPT has demonstrated.

Generative AI (image by Pixabay on Pexels.com)

ChatGPT, a generative AI application, has sparked an AI gold rush by demonstrating its ability to write working code, pass exams with flying colors, and generate convincing scientific content such as abstracts, among other things. This has caused concern among schools and educational institutions in the US and elsewhere, with some announcing bans over fears of unethical use.

While some concerns about the applications of generative AI cannot yet be addressed, it is important to move beyond speculation and explore its powerful capability to provide dynamic, adaptive content that cannot be found in traditional textbooks or courses.

AI can create more personalized learning experiences and is already changing the education technology market. Considering these technological advancements, it is imperative for educational institutions to re-evaluate the teaching and evaluation methodologies they employ to cater to the evolving needs of students.

ChatGPT-like applications can help students improve their skills, but they also offer a tempting opportunity for students to exhibit their technological prowess and attempt to gain an advantage over their professors in the cheating “arms race.” “Many cheating incidents are related to students’ boredom, and ChatGPT provides a way to make academic work more exciting,” says Jo Ann Oravec, a Professor of Information Technology at the University of Wisconsin-Whitewater.

Blocking these programs on university campuses will be nearly impossible, and many similar systems are already emerging. Professor Oravec notes that most of her students have already experimented with ChatGPT and related programs, and some have already explored the social media forums in which its uses and misuses are discussed (such as those on Reddit).

The initial skepticism towards a new technology or resource does not necessarily reflect its true value or potential. “Two decades ago, Wikipedia was treated with contempt by many faculty members in higher education, just as ChatGPT is treated today. However, research has shown that Wikipedia is comparably reliable to many major encyclopedias on some topics,” Oravec remarked during our call. “Wikipedia has since been construed as an acceptable educational resource in many higher education contexts.” ChatGPT now faces similar skepticism, and it is important to rely on research and evidence-based decision-making to assess its reliability and usefulness.

Ethical guidelines and policies are also necessary for the responsible use of advanced technologies in academic settings. Universities have already faced instances in which strong opinions about ChatGPT had to be taken into account. For example, Vanderbilt University administrators used ChatGPT to generate and send a condolence message to students after the Michigan State University shooting, and later apologized for doing so. Oravec mentioned, “In my class survey, many of the students expressed outrage that the Vanderbilt administrators felt that such an effort was acceptable when they as students would be punished for plagiarism!” Higher education institutions will be working out new rules for the use of these complex systems over time to maintain academic integrity.

“Faculty members will need to reconfigure their assignments so that ChatGPT usage is less of an issue,” Oravec pointed out. “This will take some time, however, since many faculty members rely on ‘canned’ textbook assignments. Some faculty members have already warned students in their syllabi not to use ChatGPT, which is rather counterproductive. Students who had not previously thought of cheating might be tempted to do so, taking this warning as a challenge. Students already have their academic work monitored for cheating in many ways, for example, with webcam surveillance during exams, so they may exploit AI-powered applications in order to gain a sense of autonomy.”

“An unfortunate by-product of the ChatGPT situation is that students might feel it is less necessary to learn and practice basic writing skills,” says Oravec. “Many students in my classes have already projected in my cheating-related survey that artificial intelligence will make it unnecessary for them to produce their own significant writing efforts in the near future. Robots will do their writing as well as deliver their packages and cook their fast food! Universities should introduce ‘future studies’ as a way to assist their students in understanding and interpreting paradigm-shifting trends such as Generative AI.”

“Concerns about AI-related issues have had a long history,” Oravec reflected. “In past decades, vigorous discussions of how automation would affect society often blossomed, triggered by the developers of technological initiatives themselves, as with Norbert Wiener’s 1954 The Human Use of Human Beings. Wiener was deeply fearful of the social and ethical implications of the ‘cybernetics’ that he pioneered. The inventor of the first chatbot, Joseph Weizenbaum, wrote Computer Power and Human Reason (1976), which outlined his reservations about the encroachments of artificial intelligence upon society. Donna Haraway’s (1987) ‘manifesto for cyborgs’ presented a pioneering perspective on how humans and robots would meld.”

The future has arrived, and we are witnessing the rapid emergence of new capabilities in ChatGPT, including support for visual inputs. It will be very difficult to control technology that is changing and expanding so quickly. However, Oravec affirmed that a renewal of passionate discourse, in which the needs of humanity are outlined and the impacts of AI projected, may indeed provoke intense controversies but could help ensure that important factors are not overlooked.
