Today’s Homework is brought to you by the Teacher’s Assistant Alex

Go Ogata
Bucknell AI & CogSci
8 min read · May 18, 2021

Team Pokemon Go to the Poll

by Swarup Dhar, Jack Goldberg, Go Ogata, and Hannah Shin

You are a middle school student waiting for the end of the day. Your focus is on the clock on the wall, and your thoughts are preoccupied with what you will do after class rather than with your teacher Mrs. Watson's lesson. As the end of class rapidly approaches, Mrs. Watson assigns you homework. However, it is not ordinary homework: this assignment was generated by an AI from a short story you were interested in.

Traditionally, students use workbooks to develop critical-thinking skills. However, there is no guarantee that every single student will be engaged by the topics. An AI that works from a story is an alternative to this traditional situation, providing a more personalized set of questions intended to engage students. But while the AI can supplement the teacher and help engage students, are there caveats to this approach?

Interaction between consumers and AI appears to be a recent development, but the field of AI dates back to 1956. The first AI implementation was the Logic Theorist, designed by Allen Newell, Herbert Simon, and Cliff Shaw to prove mathematical theorems (Newell, Shaw, & Simon 1957). Soon after, the field of AI in education began with Jaime Carbonell's program SCHOLAR in 1970 (Carbonell 1970). Early studies in the field focused on tutoring individuals in subjects such as geometry, and gradually worked out how the architecture of such systems should be handled (Beck et al. 1996).

Early work on AI in education focused on developing effective teaching methods. From this grew cognitive science's effort to model the function of the brain, along with architectures like ACT-R for modeling those systems effectively (Anderson et al. 1997). Each improvement brought real implementations of AI in education, ranging from tutoring math to practicing critical thinking. These implementations create situations that highlight AI's strengths in everyday life while also showing the dangers of relying on it.

The Benefit of Teacher’s Assistant AI

Reduction of Funding after the Great Recession of 2008

After the 2008 Great Recession, many states disinvested in K-12 education funding. These funding cuts directly affected the resources provided to students and teachers, preventing improvements in teacher quality, expanded learning time, and high-quality early education (Leachman et al. 2017). An earnest teacher must counteract all of these difficulties with the resources available in order to maintain their quality of education. The AI we plan to implement may not improve every quality affected by funding cuts, but a focused improvement in providing high-quality education is still possible. Reduced funding forces teachers to choose which portions of education to sacrifice, at a cost to their students. The Teacher's Assistant AI aims to relieve the pressure of creating new and engaging material, allowing a teacher to spend their time and resources on other aspects of education. The tool could also be extended to homeschooled students, possibly helping to unify how specific material is learned.

Design

Example text with topics circled and the main character highlighted

The AI designed by Team Pokemon Go to the Poll focuses on feeding symbolic information into a neural network. It was trained on a dataset called NarrativeQA, which contains a list of documents with summaries from Wikipedia, a way to download the stories, and questions paired with answers. With this design and dataset in mind, we needed a text-generation method capable of capturing the crucial meanings in long passages of text, so we employed a Long Short-Term Memory (LSTM) network as the core of our model.
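The post does not spell out how the symbolic representation is extracted, so the sketch below uses simple frequency heuristics as stand-ins: topics as the most common content words, and the main character as the most frequent capitalized word appearing mid-sentence. The function names and stopword list are our own illustrative choices, not the project's actual code.

```python
import re
from collections import Counter

# Common function words to ignore when ranking candidate topics.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "was",
             "their", "that", "it", "on", "for", "with", "as", "at", "by"}

def extract_topics(text, k=3):
    """Return the k most frequent non-stopword tokens as candidate topics."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [word for word, _ in counts.most_common(k)]

def extract_main_character(text):
    """Guess the main character: the most frequent capitalized word that
    appears mid-sentence (a crude proper-noun heuristic)."""
    names = Counter(re.findall(r"(?<=[a-z,;] )[A-Z][a-z]+", text))
    return names.most_common(1)[0][0] if names else None
```

A real system would replace these heuristics with proper topic modeling and named-entity recognition, but the symbolic output they produce, a topic list plus a character name, is what the LSTM stage consumes.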

High-level design of the Teaching Assistant AI Alex

The design works as follows: the teacher places a text, be it a story or a news article, into the AI. The AI generates a symbolic representation of the topics and main characters in the article and feeds those topics into the Long Short-Term Memory network. The final product is a question generated from the article's topics and main character.
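The final assembly step can be pictured as turning each (topic, main character) pair into a question. Since the post notes the generated questions are still simple, a fixed template is used below as a stand-in for the LSTM generator; the function names and wording are illustrative assumptions, not the project's actual output format.

```python
def generate_question(topic: str, main_character: str) -> str:
    """Turn one symbolic (topic, character) pair into a comprehension
    question. A template stands in here for the trained LSTM generator."""
    return f"How does {main_character} interact with the {topic}?"

def questions_for(topics: list[str], main_character: str) -> list[str]:
    """One question per extracted topic, mirroring the pipeline's output."""
    return [generate_question(t, main_character) for t in topics]

# Example with hand-extracted symbols for a short passage:
questions = questions_for(["radio", "students"], "Mark")
```

The key design point is the separation of concerns: extraction produces symbols, and generation consumes them, so either stage can be upgraded independently.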

Results

Example of an actual question for the seed sequence “mark, students, radio, their, stat” found in the training set

The topic generation functions the way we expected, and we were able to obtain coherent questions relevant to the text. The question generation is still simple in its current state; producing more complex questions is a goal for future work.
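Generation from a seed sequence, as in the example above, typically works by repeatedly asking the model for the most likely next word. The loop below sketches that greedy decoding; the lookup table is a toy stand-in for the trained LSTM's next-word predictions, with invented entries purely for illustration.

```python
# Toy next-word table standing in for the trained LSTM's predictions;
# every entry here is invented for illustration.
NEXT_WORD = {
    "what": "did",
    "did": "the",
    "the": "students",
    "students": "learn",
}

def generate_from_seed(seed: list[str], max_len: int = 8) -> str:
    """Greedy decoding: repeatedly append the model's most likely next
    word until no continuation exists or max_len words are reached."""
    words = list(seed)
    while len(words) < max_len:
        nxt = NEXT_WORD.get(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words) + "?"

print(generate_from_seed(["what"]))  # what did the students learn?
```

A real LSTM would produce a probability distribution over the whole vocabulary at each step, and sampling from it (rather than always taking the top word) is one route to the more varied questions mentioned as future work.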

Ethics in AI

“Consciously or not, deliberately or inadvertently, societies choose structures for technologies that influence how people are going to work, communicate, travel, consume, and so forth over a very long time.”

- Langdon Winner, Do Artifacts Have Politics?

During the planning stage of the AI, we had to anticipate possible interactions between the AI and society. We considered these interactions a significant concern, as we expect them to involve many ethical dilemmas. The ACM Code of Ethics guided our design discussions as we analyzed the different aspects of the problem. The clauses that most directed our decisions were "Contribute to society and human well-being, acknowledging that all people are stakeholders in computing" and "Be fair and take action not to discriminate" (Anderson 1992). We recognize that if the AI is implemented, it will be interacting with developing children, and its long-term influence could be detrimental; it is therefore vital to discuss and eliminate problems before they take effect. The following paragraphs address these interactions more closely.

The first significant ethical situation we discussed was the interaction between the student and the AI. While the AI is created for teachers, a large portion of the fruitful interaction will be with the students learning the material. In an ideal world with no bias or stereotypes, the AI would have only a pure dedication to training students' critical-thinking and reading skills. However, given that such biases can sneak into an AI, a haphazard approach could expose developing students to them and possibly ingrain them in the students' critical-thinking process. It is vital to make sure that our dataset does not promote these kinds of biases and to ensure that the content it trains on comes from a well-defined and diverse set of origins.

Another ethical dilemma concerns the interaction between the AI and the teacher. This AI was strictly designed to assist teachers, and teachers should never be taken out of the formula. While the AI takes text as input and generates questions, we believe that teacher intervention and judgment are vital to the process. Suppose the AI advanced to the point where it could generate questions pinpointing individual students' weaknesses. In that case, it could become an argument for those pushing to cut K-12 education funding or even to replace teachers. With this risk in mind, the AI was designed to function as an assistant focused on generating good, diverse questions. Another reason an AI that generates a "perfect" question would be a flaw is that teachers are far more familiar with their individual students' strengths and weaknesses. We recognize that the ideal functionality of the AI relies on interaction with the teacher to more easily create curated questions for students.

A third situation we considered was the AI's interaction with the broader educational system. Given the uneven distribution of funding cuts since 2008, we can presume that different districts have different funding levels. This disparity points to potential differences in access to technology. While we are designing this tool for classrooms and teachers under funding pressure, its use is restricted to those with access to a computer. This restriction is counterproductive to the cause and, to some degree, discriminatory. Further discussion is needed to prevent unequal access to the AI.

Looking Back

Looking back at our development decisions, we held ample discussion on minimizing the biases that could propagate through the creation of our AI. We successfully implemented the rule-based design we initially laid out and captured the topics contained in different texts. While the questions were basic, they covered the topics available in the text and were newly generated. Future improvements will focus on increasing the complexity of the generated questions and adding a more user-friendly interface. We hope this AI can help alleviate the pressure some teachers face under these restrictions.

References

Anderson, J. R., Matessa, M., & Lebiere, C. (1997). ACT-R: A theory of higher level cognition and its relation to visual attention. Human–Computer Interaction, 12(4), 439–462.

Anderson, R. E. (Ed.). (1992). ACM code of ethics and professional conduct. Communications of the ACM, 35(5), 94–99.

Beck, J., Stern, M., & Haugsjaa, E. (1996). Applications of AI in Education. XRDS: Crossroads, The ACM Magazine for Students, 3(1), 11–15.

Carbonell, J. R. (1970). Mixed-initiative man-computer instructional dialogues (Doctoral dissertation, Massachusetts Institute of Technology).

Leachman, M., Masterson, K., & Figueroa, E. (2017). A punishing decade for school funding. Center on Budget and Policy Priorities, 29.

Newell, A., Shaw, J. C., & Simon, H. A. (1957). Empirical explorations of the logic theory machine: A case study in heuristic. Paper presented at the Papers Presented at the February 26–28, 1957, Western Joint Computer Conference: Techniques for Reliability, pp. 218–230.

Taatgen, N. A., Lebiere, C., & Anderson, J. R. (2006). Modeling paradigms in ACT-R. Cognition and multi-agent interaction: From cognitive modeling to social simulation, 29–52.
