Learning to Program Matters for Liberal Arts and Sciences Students in the Age of AI
ChatGPT is dramatically impacting CS education, but learning to program still matters in liberal arts and sciences. The reasons that liberal arts and sciences students should learn to program pre-date AI, and even pre-date the Computer Science major. Those reasons are even more critical today. Programming is a useful tool to think, learn, and create with — besides being a powerful tool for building application software.
Esther Shein wrote about “The Impact of AI on Computer Science Education” in Communications of the ACM (link). She argues that AI will make it less important for programmers to understand “the details of coding.” That might be true, but programming is more than just “details.” In 2009, Michael Mateas wrote an essay explaining why procedural literacy was important for game designers, and why learning to program was the best way to get there (publisher link here, Michael’s link here). I wrote a blog post in 2009, “There will always be friction,” inspired by his essay. I recently found his article again, and realized how relevant his essay is for thinking about the role of ChatGPT and other LLMs in computing education for liberal arts and sciences.
I understand why computing educators are pretty worried about ChatGPT and other LLMs. Papers about LLMs have taken over CS education. Much of CS education is about developing professional programmers who turn requirements into working, robust, and safe code. Those are language activities: going from imprecise natural language to formal programming language. This is exactly what a large language model is all about.
ChatGPT is likely going to do some programming for us. But there will still be a need for professional programmers — if nothing else, because we can’t trust the code coming out of an LLM. Someone who knows what they’re doing has to check the LLM output (which Shein notes in her article). But there are also reasons to learn programming that have nothing to do with going from requirements to applications.
In the Program in Computing for the Arts and Sciences (PCAS), we serve liberal arts and sciences students. I described in an earlier blog post how we think about computing in PCAS in three themes: Computing for Discovery, Computing for Expression, and Computing’s Impact on Justice. We developed these themes and our courses through a participatory design process with faculty from across the liberal arts and sciences. Even in the age of AI, computing education still matters in those three themes. I’ll use Michael’s essay to explain my point.
Michael’s essay is grounded in the Alan Perlis chapter of Martin Greenberger’s 1962 book “Computers and the World of the Future.” Greenberger documents a 1961 MIT symposium, transcribing all of the lectures and the discussion afterwards. Perlis, the first ACM Turing Award laureate, explicitly argued for all undergraduates to learn to program. (I’ve written about his lecture here.) Michael focuses more on the discussion, and so will I.
Do we still need to program?
Peter Elias, chair of Electrical Engineering at the time, pushed back against Perlis. Won’t the computers just become smart enough to understand us? Elias was foreshadowing the development of ChatGPT and the ability to program in natural language.
“Perhaps our most serious difference is in predicting the ultimate state of affairs when time-shared computers are available on every campus and good symbolic processing languages are in use. By that stage it sounds to me as though Perlis would have programming assume a large role in the curriculum, while I should hope that it would have disappeared from the curricula of all but a moderate group of specialists.”
“I have a feeling that if over the next ten years we train a third of our undergraduates at M.I.T. in programming, this will generate enough worthwhile languages for us to be able to stop, and that succeeding undergraduates will face the console with such a natural keyboard and such a natural language that there will be very little left, if anything, to the teaching of programming…
“I think that if we stop short of that, if it continues to demand as much effort to learn how to speak to machines as it costs us to teach students a course for a couple of semesters, then we have failed. I do not see anything built into the situation which requires as much as that.” (p. 203)
For Expression
Before Perlis can respond, J.C.R. Licklider (grandfather of the Internet) speaks up. I love what he says. His comment foreshadows Kay & Goldberg’s “Personal Dynamic Media” and the development of uniquely computational art.
“Pete, I think the first apes who tried to talk with one another decided that learning language was a dreadful bore. They hoped that a few apes would work the thing out so the rest could avoid the bother. But some people write poetry in the language we speak. Perhaps better poetry will be written in the language of digital computers of the future than has ever been written in English.” (p. 204)
The development of photography did not eliminate sketching, oil painting, and watercolors. These are expressive media that can be used to create “poetry” that is different than a photograph. Of course, photography is accessible to more people than these other expressive media. Painting requires skill developed through practice. The investment in developing the skill increases the ability to express with the medium.
Yes, ChatGPT and other LLMs can be used to generate computational art, maybe even video games and other interactive computational media. Getting an LLM to produce the media you want, to say what you wanted to express, requires learning to get the prompt just right. But getting the prompt just right for what you wanted to express may not be much simpler (and is more of a moving target) than learning to express in Processing or Snap!. ChatGPT provides a new way to express in computational media, but it does not replace what a skilled computational artist can do. Being a skilled computational artist involves understanding and using code.
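To make the contrast concrete, here is a minimal sketch (in Python, standing in for Processing or Snap!) of the kind of generative piece a computational artist writes and tunes by hand. Every parameter and name below is an illustrative choice of mine, not anything from the essay; the point is only that the artist controls the rule directly.

```python
import math

def wave_pattern(rows=12, cols=40, freq=0.35):
    """Render a small generative pattern: each cell is shaded by a
    sine wave. The artist expresses by tuning the rule itself."""
    shades = " .:-=+*#"   # darker characters for higher wave values
    lines = []
    for r in range(rows):
        line = ""
        for c in range(cols):
            v = math.sin(freq * c + 0.5 * r)            # value in [-1, 1]
            idx = int((v + 1) / 2 * (len(shades) - 1))  # map to a shade
            line += shades[idx]
        lines.append(line)
    return "\n".join(lines)

print(wave_pattern())
```

Changing `freq` or the shading rule changes the piece in a way the artist can predict and reason about, which is exactly the kind of control a prompt gives you only indirectly.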
For Discovery
Perlis takes a different approach in his response to Elias. He argues that programming is not the point. The point is the ability to learn how to build models from processes.
“Perhaps I may have been misunderstood as to the purpose of my proposed first course in programming. It is not to teach people how to program a specific computer, nor is it to teach some new languages. The purpose of a course in programming is to teach people how to construct and analyze processes. I know of no course that the student gets in his first year in a university that has this as its sole purpose.”
“This, to me, is the whole importance of a course in programming. It is a simulation. The point is not to teach the students how to use Algol, or how to program the 704. These are of little direct value. The point is to make the students construct complex processes out of simple ones (and this is always present in programming), in the hope that the basic concepts and abilities will rub off. A properly designed programming course will develop these abilities better than any other course.” (p. 206) (Emphases added.)
The key part here is the line: “It is a simulation.” The computer can be used to simulate the world. In my work in PCAS, this is what I see computational scientists doing. They’re building models. The point of science is understanding. We understand the basic processes of computers. If we can assemble those computational processes to accurately model something in the real world (from galaxy clustering to human cognition), we have a working theory of how that something works.
ChatGPT doesn’t replace the use of programming for model-building. Sure, you can tell ChatGPT your theory and ask for code in Python that implements that theory — but did it get it right? In computational science, it’s common to show your code, and to argue that your program is an accurate implementation of your model. You, the scientist, have to defend the code. To do that with ChatGPT-generated code, you’re going to have to understand the code. In the end, scientists are going to need computing education. ChatGPT doesn’t eliminate that need. Perlis wins.
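Here is a toy example of what “expressing a theory in code” looks like: a discrete logistic growth model of a population. The model, names, and parameter values are my illustration, not from the essay; the point is that the theory (grow in proportion to the population, limited by a carrying capacity) is visible in the code, so the scientist can read it, defend it, and check whether generated code matches it.

```python
def logistic_growth(p0, r, k, steps):
    """Simulate discrete logistic growth.

    The theory is one line: each step, the population grows at rate r,
    damped by how close it is to the carrying capacity k. A scientist
    can point at this line and argue it implements the model.
    """
    pops = [p0]
    for _ in range(steps):
        p = pops[-1]
        pops.append(p + r * p * (1 - p / k))
    return pops

# Starting small, the population should climb toward the carrying capacity.
trajectory = logistic_growth(p0=10, r=0.4, k=1000, steps=50)
print(round(trajectory[-1], 2))
```

If ChatGPT handed you this function, checking it against your theory means doing exactly this kind of reading: does each term in the update rule correspond to a claim in the model?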
For Social Justice: Critical Computing and Software Studies
The last point I want to build on in Michael’s essay comes after his discussion of Perlis’s lecture. Michael is describing a class to teach procedural literacy:
Another goal of the readings is to introduce students to the styles of writing found in technical, critical theoretic and art discourse. Since being procedurally literate includes being able to unpack social and cultural assumptions of code (deep readings of code), to understand the relationship between creative expression and code, as well as being able to program, students must be comfortable participating in a variety of discourses. (Emphasis added.)
Michael combines two ideas here that are separate for the liberal arts and sciences faculty that I work with in PCAS. To unpack social and cultural assumptions, a student has to be able to think about code, to understand what it does, and to imagine how it might be better. Can you do that without learning to program? Maybe. Michael doesn’t think so. Certainly, being able to reason about code gives critical computing scholars a new way to understand computational contexts.
Could you use ChatGPT for critical computing? “ChatGPT, are there societal implications for this piece of code?” Whether or not ChatGPT gives you a reasonable answer, should you trust it? Should you trust an AI to tell you about the dangerous implications of AI? To answer that question requires deep understanding of code. The famous paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (Link) is written by authors who understand computing deeply, and they use their understanding of code to ask critical questions of what AI researchers are doing with code.
Michael’s “deep readings of code” connects to a growing area of humanities research called software studies. Humanities scholars have long studied written manuscripts, recordings, television, and movies as artifacts, considering their impact on society. Software is having a huge impact on society. Software studies scholars consider the code, how it’s written, who writes it, and the implications of all of that. To be a software studies scholar, you have to be able to read software. Understanding code is necessary to do this kind of work.
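A tiny, entirely hypothetical snippet can show what a “deep reading of code” surfaces. The functions, constants, and names below are my invention for illustration; the code itself is ordinary form validation, and the critical reading asks what its choices assume about people.

```python
# A hypothetical form-validation snippet of the kind a software studies
# scholar might read closely. Each constant encodes a social assumption.
VALID_GENDERS = {"M", "F"}   # assumes gender is binary
MAX_NAME_LENGTH = 20         # assumes names are short

def validate_profile(name, gender):
    """Accept a profile only if it fits the form's built-in assumptions."""
    errors = []
    if gender not in VALID_GENDERS:
        errors.append("unrecognized gender")
    if len(name) > MAX_NAME_LENGTH:
        errors.append("name too long")
    return errors

# Reading the code critically: who does this validation quietly exclude?
print(validate_profile("Thandiwe", "X"))
```

The critique here doesn’t come from running the code; it comes from reading it and recognizing that design decisions about data are decisions about people. That reading requires knowing what the code does.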
Learning to Program Still Matters for the Liberal Arts and Sciences
In the Age of AI, computing education still matters for students of Expression, Discovery, and Social Justice.
- Computational artists will need to know code to use the medium skillfully. Image-generating AI is another medium, but it doesn’t replace code as a medium.
- Computational scientists express their models in code. ChatGPT might generate the code, but they still have to understand it and argue that it implements the theory.
- Critical computing scholars need to know code to critique code, and software studies scholars need to know code to understand software. Both use their understanding to critique the role of computing in our lives. We can ask the AI those questions, but we have to be able to reason about its answers.
ChatGPT doesn’t replace the human need to express, to understand the world, and to question the assumptions in the world. Programming is a powerful medium for all three goals.