Classroom Conversations: Coding, ChatGPT, and the Quest for Accuracy

Arieda Muço
3 min read · Jul 23, 2023


The arrival of ChatGPT into our lives and classrooms required some adaptations to my teaching methodology, particularly for a large portion of the courses I teach.

For the coding-related courses, I shifted my advice — to students — from recommending the use of Google and Stack Overflow to encouraging the use of ChatGPT instead.

Jokes aside, we incorporated both Stack Overflow and ChatGPT in the classroom.

DALL·E-generated image

As many are aware by now, ChatGPT is a language model that can offer reasonably accurate advice on coding-related matters. However, it occasionally stumbles on other tasks, such as fact retrieval, and it performs poorly at literature review.

Year after year, a popular topic of classroom discussion is the accuracy of certain machine-learning concepts. Despite their widespread adoption, many online resources and books contain errors that lead to misconceptions. [This has inspired me to plan a series of articles debunking the common misunderstandings my students bring to class.]

The students and I decided to switch up the format of an assignment. Instead of the typical Jupyter Notebook with code and findings, they submitted a Medium article. My students are incredibly open-minded, and they were immediately on board with using ChatGPT as a classroom tool and with writing for a broader audience rather than just their peers and instructors.

Their take-home assignment was the following:

My students didn’t disappoint. Some examples: Sherkhan found that the speeches of Senators Biden and Shelby are quite similar. Anna took a more methodological approach and clearly explained the methodology behind cosine similarity. Caroline and Seng Moon added some nice visualizations, an important aspect of understanding the data we’re working with. Malik compared different measures of similarity.
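For readers unfamiliar with the measure my students used, here is a minimal sketch of cosine similarity over bag-of-words counts. The example texts are hypothetical placeholders, not excerpts from the senators' speeches, and real pipelines would add preprocessing such as stop-word removal or TF-IDF weighting:

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product over the words the two texts share.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical snippets: two of three words overlap.
print(cosine_similarity("economic policy matters", "economic policy works"))
```

The score ranges from 0 (no shared vocabulary) to 1 (identical word distributions), which is what makes it convenient for comparing speeches of very different lengths.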

All of them tackled the exercise differently. That is the beauty of coding as a puzzle: you have the creative freedom to choose which piece to tackle first. Their submissions made me proud.

In class, we had an interesting discussion about the findings, the purpose of the exercise, and the main takeaways, which I share below:

  1. Anyone can publish material online and offline, including blog posts and books. However, peer-reviewed sources are the most reliable. Peer review can take the form of journal articles, comments on a Stack Overflow or blog post, a YouTube video, or GitHub repository issues. We should also make sure to read the comments carefully.
  2. With the democratization of science and the rise of large language models (LLMs), we bear greater responsibility for what we post online. LLMs such as ChatGPT rely on online material as training data: junk in means junk out.

As an educator, this exercise brought me immense satisfaction and I will incorporate such tasks in future iterations of the course.

I believe that the exercise also brought a sense of successful completion to my students. Having a piece of work see the light of day helps everyone feel more accomplished, and it’s a valuable addition to the portfolio to showcase to future employers.

Finally, I hope that in the future, more of us will not only embrace the use of AI, but also understand more deeply the concept of ‘information hygiene’ and our role in the larger digital ecosystem.
