How to Disrupt Your Own Lecture Using ChatGPT

Marek Galinski
Networks @ FIIT STU
6 min read · Jul 15, 2024

The past year has undeniably been marked by the dominance of artificial intelligence (AI) in broader public discourse. However, this time it was somewhat different from the previous waves. It was primarily driven by the extraordinary popularity of ChatGPT, which led us to engage in discussions that appeared to be about AI, while in reality we were almost exclusively talking about large language models (LLMs), of which GPT is a notable example.

Within academia, the unprecedented rise of LLM services and their use by students naturally could not go unnoticed. Prestigious universities worldwide, as well as smaller local institutions, have had to decide how to approach LLMs and their use by students. It is quite clear to us all that banning them is impossible, and many of us believe that it would also be highly counterproductive.

As a lecturer at the Faculty of Informatics and Information Technologies at STU, I have also contemplated how to directly confront students with LLM services during lectures and guide them towards critically evaluating their outputs. This consideration stemmed from my encounters where students, in various projects and assignments, rather uncritically accepted the advice provided by ChatGPT.

One day, while lecturing on mobile app development, the topic was supposed to be the appropriate design of a backend for a mobile application. I began the lecture with a controversial statement — I told the students that there was no point in me explaining anything because, after all, they had ChatGPT, which would solve all their problems for them. So, I gave them an assignment: “I want a mobile application for renting electric scooters in my city, similar to dozens of others out there, nothing revolutionary. Open ChatGPT and ask it what architecture and components to choose for the backend of such an application.”

Two remarkable things happened:

1. Those who had previous experience with backend technologies quickly noticed that some of GPT’s recommendations were nonsensical or poorly thought out.

2. Even those without backend experience noticed that GPT recommended a different technology stack to them than to their neighbor, despite the assignments being more or less identical.

When students began exchanging information about what GPT recommended, I started to intervene. I asked them questions like: Did I tell you how many users I want in my app? Did I mention the reliability of the service I need? Did I specify any nice-to-have features? Did I tell you the data model I want to work with? Did I mention the communication architecture of the overall system?

From general questions, I went further — confronting students not with the technology they suggested but with the technology recommended by ChatGPT. For instance, if GPT suggested a particular database, I asked: How scalable is it? Is it suitable for many read operations or writes? Is it suitable for cloud deployment?

What was the purpose of this disruption?

This small experiment led to a fruitful and interesting debate. The goal was not to show students that GPT is stupid, nor that it would indeed solve all their problems. I wanted to demonstrate that GPT exists, we all know it, it’s a fact, and that they have an incredibly powerful LLM at their fingertips. Simultaneously, I wanted to show them that the role of an LLM is not to think analytically or to question its own answers. Both of these tasks belong to the analyst, the software architect, or any tech professional. I wanted them to see that GPT is an excellent tool for quick research, for getting an overview, and even for offering solutions we might overlook. The professional’s role is not to disdain the LLM because “he knows better.” However, simply subscribing to ChatGPT Plus is far from sufficient to consider oneself a professional.

I am currently preparing an update for the introductory programming course for future freshmen, let’s call it Procedural Programming 101. Here, the existence of ChatGPT presents a beautiful challenge from the perspective of teaching programming, offering not only challenges but also opportunities.

In the pre-LLM era, learning programming basics was relatively straightforward and iterative. You may be familiar with it — basic language constructs, fundamental properties of algorithms, how to express an algorithm as a computer program, the syntax for loops and conditional statements, and how to declare data structures…

Today, a freshman’s natural reaction and question at a technical university might be, “Why should I learn to write code when GPT can write it for me?” Many teachers respond to this initially by banning GPT or downplaying its capabilities. Yet, it is incredibly useful that a student now holds a tool that can write the code for him — even for very simple exercises in an introductory course.

This can be leveraged in multiple ways to push learning further towards true understanding and critical thinking. For instance, you could ask a student to write the code himself and then have him solve the same task using an LLM. Then you could ask — which code runs faster and why? Which code is more efficient in terms of memory usage and why? What’s the difference, and how did you approach the task compared to the LLM?
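To make this concrete, here is a minimal sketch of how such a comparison could look in an introductory Python course. The task (counting numbers below one million divisible by 3 or 5), both implementations, and the function names are illustrative assumptions of mine, not actual ChatGPT output from a lecture; the point is only that students can measure runtime and peak memory themselves instead of taking either solution on faith.

```python
# A sketch of the "my code vs. the LLM's code" exercise.
# The task and both implementations are illustrative assumptions,
# not ChatGPT output from the lecture.
import timeit
import tracemalloc

N = 1_000_000

def student_version(n):
    # Explicit loop that stores every match before counting them.
    matches = []
    for i in range(n):
        if i % 3 == 0 or i % 5 == 0:
            matches.append(i)
    return len(matches)

def generator_version(n):
    # Counts matches on the fly without keeping them in memory.
    return sum(1 for i in range(n) if i % 3 == 0 or i % 5 == 0)

for fn in (student_version, generator_version):
    runtime = timeit.timeit(lambda: fn(N), number=3)
    tracemalloc.start()
    fn(N)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{fn.__name__}: {runtime:.2f} s total for 3 runs, "
          f"peak memory {peak / 1024:.0f} KiB")
```

The numbers themselves matter less than the follow-up discussion about why the two peaks differ; that discussion is where the real learning happens.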

This is how DALL-E imagines a futuristic software developer who uses AI in his work. Well…

It’s not just about comparing my code versus the LLM’s code. I completely agree with the freshman’s protest about why he should learn to write basic code when an LLM can do it for him. (To be clear, I disagree that he shouldn’t know it; I agree that he should protest against it.) The teacher’s task is to structure the course so that by the end, it’s not just a student who can write code (and hopefully understand it a bit), but a student who fully understands what in that code works, how, and why (and who thereby also learns to write the code, even if he thought he was learning something much more significant and valuable). This might sound trivial, but unfortunately, my observation is that the nearly limitless CPU and memory resources available to students today, combined with forums like Stack Overflow and now the existence of affordable and powerful LLMs, lead students to think less about code efficiency and about what happens in their program “under the hood.”
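As a hypothetical illustration of such an “under the hood” question, consider two functions that return exactly the same result, where one quietly performs a linear scan on every membership test. The scenario and the sizes are invented for the example; Python is assumed only because it is a common choice for an introductory course.

```python
# A hypothetical "under the hood" question: both functions return the
# same list, but one hides a linear scan inside every membership test.
import timeit

banned_ids = list(range(10_000))      # list: `in` scans element by element
banned_ids_set = set(banned_ids)      # set: `in` is a hash lookup
requests = list(range(5_000, 15_000))

def filter_with_list():
    return [r for r in requests if r not in banned_ids]

def filter_with_set():
    return [r for r in requests if r not in banned_ids_set]

print("list lookups:", round(timeit.timeit(filter_with_list, number=1), 3), "s")
print("set lookups: ", round(timeit.timeit(filter_with_set, number=1), 3), "s")
```

The code is trivial, but explaining the difference requires knowing what a list and a hash-based set actually do internally, which is exactly the kind of understanding the course should produce.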

Lessons learned?

From my personal perspective as a university teacher, what has AI (specifically LLMs) fundamentally changed? It disrupted another piece of the classical perception of education, where a teacher imparts information to students who receive it.

It places the teacher, willingly or unwillingly (and this is a good thing!), more into the role of a mentor and consultant. The task is no longer to explain how to write code, but to show the student how to navigate the flood of information available through his smartphone screen: which principles are truly essential to understand, where we can genuinely trust what AI recommends, and, above all, by which (mostly non-AI) methods he can critically validate whether AI tools are helping him or, in fact, harming him for lack of proper due diligence.

There is much talk about how LLMs have given students a fantastic tool. And I would add that this tool is just as, if not more, fantastic for us teachers. It can change the paradigm — for the first time, AI can actively contribute to maximizing the potential of our students’ natural intelligence at universities. It is up to each teacher individually how to approach this and how to utilize it in their work. We are learning too.

Today, we truly do not need to discuss whether LLMs are great tools that can save us time and increase the quality of our outputs if used correctly. After all, this personal comment is itself proof of that. No, I did not write it using GPT. But I wrote the draft in Slovak (my mother tongue, in which I can organize my thoughts and put them on paper the fastest) and then asked GPT-4 to translate the text into English in a form suitable for publication.

With new AI tools, we are looking forward to a lot of good fun in the near future, and I am convinced that the results will be worth it.


Marek Galinski (Member, IEEE) is an Associate Professor with FIIT STU, where he currently works as the head of the Automotive Innovation Lab and teaches several courses. The laboratory is focused on wireless communication, especially for connected and automated mobility applications. He is also the co-author of a publication focused on the technical and legal aspects of the cyber security of automated vehicles, which was created in cooperation with the Faculty of Law of the Comenius University in Bratislava. You can find more information about the laboratory here: https://ail.sk/
