How undergraduate AI should be taught
Artificial Intelligence is a very important subfield of computer science, and it is taught quite differently across universities. I have just taught an undergraduate AI class at Florida International University (FIU). The class website is here: http://www.ultimateaiclass.com/. Of course I realize that I am somewhat biased, but I believe I taught the class in the optimal way. I have looked at the undergraduate AI classes taught at several other major universities, most of which differ from mine. I will describe why I chose to teach my class the way I did and how it differs from the others, and then argue for the benefits of my approach.
There is some precedent for blogging about what should (and shouldn't) be taught in an undergraduate AI class; see this 2010 blog post by my graduate-school colleague Abraham Othman: http://aiecon.tumblr.com/post/663984657/undergrad-ai-teach-this-not-that. That post argued that certain "outdated" topics that no longer play a major role in industrial AI applications should be removed from the curriculum and replaced by more relevant ones. Specifically, Othman argued that logic, LISP, and genetic algorithms should get the axe, while machine learning, mixed-integer programming, and local search should be welcomed.
I agree with him to the extent that recent relevant material (I'm sure "deep learning"/neural networks would have made his list had he written it today) should certainly be included even if it has not played a major role historically in the development of AI. However, I feel that fundamental topics of historical importance to the field still merit inclusion, even if they are not the most useful in industrial applications at the moment. AI tends to move in fads and hot "buzzwords." Even though "deep learning" is extremely hot right now, there were several decades during which many researchers thought the impact of neural networks had "capped out." Similarly, who knows if or when game theory, genetic algorithms, or even logic may make a big "comeback," on their own or through integration with other topics (for example, logic is combined with probability and machine learning in several ways). So I still think we owe it to students to pay homage to core approaches that were very relevant at some point in the past and played a large role in shaping the field, even if they happen to be out of fashion at this particular moment.
In relation to Othman's list, the curriculum for my class included all three of the "favored" topics (machine learning, mixed-integer programming, and local search), as well as several of the "outdated" ones (logic and genetic algorithms). I did, however, agree with removing LISP, as Python appears to have taken over as the primary language for many AI applications, particularly due to the availability of convenient libraries for many of the important algorithms.
The full syllabus for my class is on the course website. The class had 7 modules: search, logic, optimization, planning, probability, decision making, and machine learning. Each module included lectures on several topics. Search — uninformed search, informed search, local search, adversarial search, constraint satisfaction; Logic — propositional logic, first-order logic, logical inference; Optimization — integer optimization, linear optimization, nonlinear optimization; Planning — classical planning, spatial planning; Probability — Bayesian networks, hidden Markov models; Decision making — Markov decision processes, multiagent systems, reinforcement learning; Machine learning — classification, regression, clustering, deep learning. Admittedly, I was not able to cover each of these subtopics as fully as I had anticipated. In particular, first-order logic, logical inference, spatial planning, hidden Markov models, Markov decision processes, and reinforcement learning received relatively little coverage compared to the other topics. In addition, my class featured a novel project in which all students created an agent for playing 3-player Kuhn poker, a simplified poker variant that has been used as a testbed in the AAAI Annual Computer Poker Competition http://www.computerpokercompetition.org/index.php/competitions/rules/75-limit-games. My class had 25 lectures (with a bit of interruption at the beginning due to Hurricane Irma), followed by a class in which students presented the approaches they used for their project.
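To give a flavor of the first module, the uninformed-search lectures cover algorithms such as breadth-first search. Here is a minimal illustrative sketch in Python (the toy graph is hypothetical, chosen only for illustration, and is not from the actual assignments):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first (uninformed) search: returns a path from start to
    goal with the fewest edges, or None if the goal is unreachable."""
    frontier = deque([[start]])  # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Hypothetical toy state graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Informed search (e.g., A*) follows the same frontier-based structure but orders the frontier by a heuristic estimate of remaining cost rather than first-in-first-out.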
Here are some thoughts on undergraduate AI classes taught at several other universities.
First, there is a class at CMU taught by Professor Tuomas Sandholm and Professor Tai Sing Lee that was also offered this fall http://www.cs.cmu.edu/~15381-f17/. One observation, which in my experience is very common in AI classes, is that based on the schedule provided on the website there appear to be a large number of lectures on specific topics that fall within the expertise of the instructors. For example, the CMU class has 5 lectures related to game theory, Professor Sandholm's major area of expertise: 3 on imperfect-information games/poker, one on perfect-information games, and one on social choice and mechanism design. That is 5 out of the 27 lectures (ignoring the introduction), or around 18.5%. This strikes me as quite high relative to the significance of game theory within AI; many would argue that it is just one niche subarea, and that it isn't worth taking up nearly 20% of the lectures. By contrast, game theory took up only 2 of the 25 lectures in my class, despite also being my major area of expertise. I think the role of an introductory undergraduate class should be to give a broad introduction to a variety of important topics that properly cover the core areas of the field, rather than devoting disproportionate class time to the instructors' niche subspecialties. That can be done in a follow-up advanced undergraduate or graduate class. (For example, last spring I taught a graduate class just on game theory http://www.bestgametheoryclass.com/.) Note also that the CMU class concluded with two lectures on "AI & the brain," which is Professor Lee's specialty. While certainly an interesting application of AI, I don't think this is a core AI topic that merits more than one lecture in an undergraduate class, though there could of course be a full graduate class devoted to it.
In general, I think for core introductory undergraduate classes, professors should make sure that their own subjective biases towards their research fields don’t cause them to overplay topics they are expert on at the expense of more broadly relevant topics. I am as staunch a supporter of the computer poker research field as any, but I think it is a mistake to cover 5 lectures on game theory and 0 on topics such as logic and planning.
A second class, entitled "Machine Learning and Artificial Intelligence," was taught at Princeton University by Professors Sanjeev Arora and Elad Hazan in fall 2017 https://www.cs.princeton.edu/courses/archive/fall16/cos402/. One immediate observation is that both instructors — https://www.cs.princeton.edu/%7Earora/ and http://www.cs.princeton.edu/%7Eehazan/ — seem to have published primarily in theory rather than AI conferences. The vast majority of Professor Arora's papers have appeared in core theory conferences such as STOC, FOCS, and SODA. Professor Hazan has published regularly in NIPS and ICML, though many of his publications are in more theoretical venues such as COLT, SODA, and FOCS. It does not look like either has published in core general AI conferences such as AAAI or IJCAI. This is not a major problem per se. However, as AI is fundamentally about engineering real agents for large-scale problems, I think something important is likely missing from the perspective of researchers who have worked primarily on theoretical questions and do not have a track record of building significant AI systems. This comes out readily in the curriculum, which looks to be a pure machine learning/learning theory class. While understanding the theory behind algorithms is of course very important, practical engineering issues, as well as important heuristics that may have limited theoretical guarantees, are crucial to building successful AI systems and should certainly be addressed. Logic and search are each allocated a single lecture, while topics such as planning, robotics, Markov decision processes, game theory, local search, constraint satisfaction, and integer programming seem to be omitted completely. I think this class is probably great for students looking to learn the theory of machine learning, but it does a poor job as a general introductory AI class.
That said, it appears that some homework problems require programming in Python, so the class does have an implementation component.
In another undergraduate AI class at CMU, taught by Professors Emma Brunskill and Ariel Procaccia in fall 2014 http://www.cs.cmu.edu/~arielpro/15381f14/, there were 7 full lectures on game theory (Professor Procaccia's expertise) and 6 lectures on MDPs and reinforcement learning (Professor Brunskill's expertise), according to the lecture schedule on the class website. These two topics comprised nearly half of all the lectures. Important topics such as uninformed search, local search, logic, and optimization appear to have been omitted entirely. This is another clear example of professors heavily biasing the curriculum toward their areas of expertise at the expense of a broader, more representative presentation of the entire field.
A final class I consider is one taught by Professor Vincent Conitzer at Duke in fall 2017 http://www.cs.duke.edu/courses/fall17/compsci570/. It appears to give fair coverage to the core topics of search, logic, optimization, planning, probability, and decision making. However, it completely omits machine learning (in contrast to the Princeton class, which focuses on machine learning at the expense of the other topics). I realize that there may be entire separate classes on machine learning. For example, if there is a two-semester undergraduate AI sequence, it could make sense for the first semester to cover all major aspects of AI excluding machine learning, and for the second to focus on machine learning. I am not sure whether this is the case at Duke, but assuming it is not (since there is no reference to a follow-up class), this class does not give adequate coverage of machine learning, which in recent years has been one of the most important fields of AI in terms of practical applications. It looks like two of the five assignments for that class involve an implementation component. (For my class, all assignments included an implementation component, though there were 4, not 5.) The Duke class also does not include a final project.
In terms of final projects, my class included a new open-ended project in which all students worked in teams of up to 3 to create an agent for 3-player Kuhn poker and compete in a class-wide competition. This is an important research question as well: very little is understood strategically (or computationally) about how agents should behave in strategic environments with 3 or more agents, while a lot is known for 2-player zero-sum environments. Thus, a new approach could actually make a significant advance on a challenging open problem in AI. The project is sufficiently open-ended that teams are free to use a wide variety of approaches and techniques. The approaches my students used included counterfactual regret minimization, Q-learning, logic, fictitious play, and neural networks, among others. Slides from the students' project presentations are given at the bottom of the class website http://www.ultimateaiclass.com/.
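For readers unfamiliar with counterfactual regret minimization, its core building block is regret matching, which converts the cumulative regrets accumulated at an information set into a mixed strategy over actions. Here is a minimal sketch of that step (my own illustration, not any student's actual code):

```python
def regret_matching(regrets):
    """Convert a list of cumulative regrets (one per action) into a
    mixed strategy: play each action in proportion to its positive
    regret; if no action has positive regret, play uniformly."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    n = len(regrets)
    if total <= 0:
        return [1.0 / n] * n  # no positive regret: uniform strategy
    return [p / total for p in positives]

# Example: regrets for (fold, call, raise) at some information set
print(regret_matching([3.0, 1.0, -2.0]))  # [0.75, 0.25, 0.0]
```

In full CFR, this update is applied at every information set on every iteration, and the strategy averaged over iterations converges to an equilibrium in 2-player zero-sum games; what such averaged strategies guarantee in the 3-player setting is exactly the kind of open question the project probes.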
The 2017 CMU course did include a final project for the enrolled master's students (though the class seems to be primarily aimed at undergraduates). The project is very open-ended; unlike mine on Kuhn poker, for the CMU class the project could be on anything. While there is a possibility that some projects may be revolutionary, I would expect that most involve a literature review on a topic, or implementing an existing algorithm on a new problem or dataset (at least this was my experience with classes that had similar projects). I think this format makes more sense for a PhD-level class, where students can apply the techniques to their own research. For an undergrad class, I think it is better to assign a specific, yet open-ended, project where all students work on an interesting open problem and create agents that can be played against each other in a competition. It looks like none of the Duke, Princeton, or 2014 CMU classes had a class project at all.
I will conclude this post by comparing how the different classes define "AI" at the beginning of the class (according to the slides from their introductory lectures). The Princeton class defines "intelligence" and "AI" by example; in other words, by listing a number of problems that are claimed to fall within AI, without providing an actual definition. These examples include crossword-puzzle solving, web search, and house-cleaning robots. I think this exemplifies the problem with Princeton's approach to the class: they view it primarily as a machine learning theory class, and trivialize the definition of AI to a list of several arbitrary problems. Conitzer appeals to the classic table from the Russell/Norvig textbook listing the four perspectives on AI (systems that act/think like humans/rationally), and states that he will follow the "act rationally" approach. This too somewhat sidesteps providing an actual definition of AI. While Princeton sidesteps by enumerating a bunch of problems claimed to be representative of AI, Duke sidesteps by selecting one of a pre-created set of 4 broad (and somewhat vague) perspectives. Procaccia from CMU sidesteps the question in perhaps the most extreme way: "Simplest (but self-referential) answer: look at the call for papers of the International Joint Conference on Artificial Intelligence" http://www.cs.cmu.edu/~arielpro/15381/381-1.pdf. You would hope that at the #1 CS school, and more specifically a top-ranked school within the AI specialty (#2), the professors would at least attempt to provide a real definition of the field rather than timidly asserting a circular one: that AI is whatever the conferences that publish "AI" think it is. So among these examples from Princeton, Duke, and CMU, no one takes a definitive stance on what AI actually is.
While I may not have it nailed down, at least in my class I had the guts to provide a clear definition and go with it. First I give Wikipedia's definition: "In computer science, the field of AI research defines itself as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of success at some goal." While reasonable, I tighten Wikipedia's definition a bit by specifying the scale and goals of the agents: "AI is about creating real agents for solving interesting/important (large-scale) problems." I qualify this definition with two key disclaimers: 1) Pragmatic programming and implementation issues are very important in building real agents. 2) However, ideally the agents are not just based on "hacks" or random engineering heuristics; there should also be some deeper fundamental theory that justifies their performance. But producing strong "theory" is not the end goal.
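The "intelligent agent" definition above can be made concrete with a minimal perceive-act interface. This thermostat example is purely hypothetical, chosen only to illustrate the perceive-act loop, not to suggest it is how any of the classes present the idea:

```python
class Agent:
    """An agent maps a percept of its environment to an action
    chosen in pursuit of some goal."""
    def act(self, percept):
        raise NotImplementedError

class ThermostatAgent(Agent):
    """Hypothetical example: keep the temperature near a target."""
    def __init__(self, target):
        self.target = target

    def act(self, percept):
        # percept is the current temperature reading
        if percept < self.target - 1:
            return "heat"
        if percept > self.target + 1:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=20)
print(agent.act(15))  # heat
```

The distance between this toy loop and an agent for a large-scale problem is exactly where the pragmatic engineering issues in disclaimer 1 arise.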
While I'm sure many may disagree with my definition, at least among the classes considered here from FIU, Princeton, Duke, and CMU, mine is the only one that actually proposes a definition rather than sidestepping the question or answering it circularly.
In conclusion, artificial intelligence is an important subfield of computer science that has gained significant recent interest, in part due to attention in the media. It is essential that major universities develop rigorous, comprehensive curricula that introduce students to all the major topics, which can lead to further advanced academic study, research directions, and numerous industrial applications.
Many of the top universities, including CMU, Duke, and Princeton, focus their curricula excessively on topics that fall within the narrow expertise of the instructors, while completely omitting several topics of fundamental importance to the field. These classes generally give implementation too limited a role within AI, and use circular characterizations of AI that spare them from presenting an actual definition. They also generally do not have a class project in which students implement a real AI system (even a small one) and experience the important issues that arise in doing so. By contrast, my class at FIU gives fair coverage to all major AI topics, including many far outside my own area of expertise, presents a clear definition of the field, and includes implementation (in addition to analytical and theoretical components) in every assignment and in the final project.