170+ ICER attendees looking toward the end of a dance hall, sitting at round tables.
Ida Noyes Hall from the back, a former women’s gymnasium on the University of Chicago campus.

ICER 2023 trip report

Amy J. Ko
Published in Bits and Behavior
23 min read · Aug 10, 2023


I’ve spent the year traveling, all over the world and country, having wonderful conversations with colleagues about learning, design, and computing. It’s been exhausting, riveting, restorative, and revealing. It’s been one of the freest years of my life, and one of the greatest privileges I’ve ever received: to be able to think, create, share, and wander, all while being paid with benefits.

And so this summer, I’ve been keen to just stay home. So when August came, and it was time to start thinking about yet another trip, I have to admit that I wasn’t that excited. Another plane, and yet more networking, really wasn’t what I needed. I felt more like staying home for a few last Wordplay marathons before our quarter starts in September. I arrived in Chicago with a bit of reluctance.

That changed relatively quickly though. The things I usually love about conferences, especially computing education conferences, were all there: friendly colleagues, interesting ideas, an abundance of serendipity, and new neighborhoods and food to enjoy. I quickly warmed up to another three days of connecting and learning, and even to my usual practice of listening by writing.

That said, I didn’t have capacity this week for deep thoughts. So this trip report is mainly going to be a chronicle of the talks, the conversations, and the venues. Hopefully anyone who missed ICER will get a glimpse of what it was like, and anyone who hasn’t been will feel just enough FOMO to find a way to Australia next year.

Brown, ornate curtains, and a projector screen showing “Welcome to ICER 2023”.
The organizing team kicks off the conference.


The conference was located in Ida Noyes Hall on the University of Chicago campus. It had a dim monastic vibe, which apparently was by design, as the room we gathered in was called the “Cloister Club”. The room was long, and the screen wasn’t nearly large enough for people in the back, but the amplification was good and the dim lighting helped with projector contrast.

We started the day with a few intros from the organizers, where we talked about the record number of papers (35) and a record number of attendees (172 in person, 21 virtual), including many online on Discord. There were 229 submitting authors, 95 PC members, and 20 Senior PC members. This was an amazing place to be relative to ten years ago, when ICER was still a small 70-person workshop, just making the transition to a conference.

After a few explanations of the unique presentation format involving group discussion, we immediately jumped into paper sessions.


Maria Kallia (University of Glasgow) talked about the role of inferential strategic reading that students do when making sense of code. She took a unique psycholinguistics approach, building upon general theories of text comprehension that view it as an inferential activity that builds associations between aspects of text. She did a case study comparing two classes, observing students working through a solved programming problem with a description and solution. Successful students used many explicit inferential strategies to build a dependency graph between sections of a program, whereas less successful students did not attend to dependencies. This suggests that more work is needed on how to teach these inferential strategies.

Colleen Lewis at the podium smiling.
Colleen smiles for my smartphone.

Colleen Lewis (UIUC) presented Examples of Unsuccessful Use of Code Comprehension Strategies, which examined ways to develop comprehension pedagogy. She talked about the interaction between strategies and content knowledge in producing comprehension performance, and the severe lack of recognition of the complexity of teaching and learning program comprehension strategies. She proposed that we teach expert strategies that are well scaffolded for novices and that build on students’ existing funds of knowledge. She did a microgenetic analysis (n=1) of one student, studying their program comprehension strategies in detail. She found five strategies that her student used: 1) identifying familiar elements, 2) connecting code to problem context, 3) monitoring current understanding, 4) focusing on important parts, and 5) tracing with example input.

Michael tells us about materiality.

Michael J. Johnson (Georgia Tech) talked about Examining the Materiality of Computational Artifacts. He spoke about multiple generations of constructionist toolkits and materiality and asked how the materiality itself facilitated personal and epistemic connections in learning. He studied a program at the intersection of poetry, visual arts, and physical computing, which explicitly engaged imaginative world building through digital physical artifacts. They used the BBC micro:bit and MakeCode as their platform. They found that meaning partly came from using found materials that already had meaning to the youth, and that youth’s social worlds were a key asset in their artistic processes, both for feedback and support. These findings suggest that personal meaning can come both from building upon existing emotional attachments and from creating new meaning through a creative process.

Hayden at the podium looking at the slide defining anonymity rate.
Hayden accounts for anonymity rates.

Learning, engagement, lightning talks

This short session had two papers, followed by doctoral consortium lightning talks.

Mrinal Sharma and Hayden MacTavish (UCSD) presented Engagement and Anonymity in Online Computer Science Course Forums, examining the culture of discussion forums in online CS courses. The team specifically focused on gender disparities in experiences. They investigated the anonymity rate: the percent of a student’s posts made anonymously. They then used a beta regression to analyze race, ethnicity, and gender intersectionally, relating them to the anonymity rate. Non-binary students were much less likely to post anonymously, but Asian women and Hispanic men were more likely. Women were much more likely to post questions anonymously, but the dominant factor was the course: one intro course in particular had a very low rate, unlike all other courses. As a supplementary analysis, they also found that name-based gender imputation was highly erroneous: it was (unsurprisingly) 100% wrong for non-binary students, and 25% wrong for other genders. Error rates were also much higher for Black and Asian students. All of this suggests to me that our methods for analyzing gender have long been flawed, ignoring gender diversity and intersections with racial identity.
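The anonymity rate at the heart of their analysis is simple to compute before any regression is run. Here's a minimal sketch; the function name, group labels, and data are my own invention for illustration, not the paper's:

```python
# Sketch of computing a per-group anonymity rate: the fraction of posts
# made anonymously. Groups could be students, courses, or identity groups.
from collections import defaultdict

def anonymity_rates(posts):
    """posts: iterable of (group, is_anonymous) pairs.
    Returns {group: anonymous_posts / total_posts}."""
    totals = defaultdict(int)
    anon = defaultdict(int)
    for group, is_anon in posts:
        totals[group] += 1
        anon[group] += int(is_anon)
    return {g: anon[g] / totals[g] for g in totals}

# Invented example data: course_A has 1 of 3 posts anonymous (rate 1/3),
# course_B has 2 of 2 (rate 1.0).
sample = [
    ("course_A", True), ("course_A", False), ("course_A", False),
    ("course_B", True), ("course_B", True),
]
print(anonymity_rates(sample))
```

Because the rate is a proportion bounded in [0, 1], a beta regression (rather than ordinary least squares) is a natural model for relating it to identity factors, which is presumably why the authors chose it.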

Kendall Nakai (UCSD) presented Uncovering the Hidden Curriculum of University Computing Majors. She defined hidden curriculum as the unspoken rules, social norms, and field-specific knowledge that are essential for student success but often not taught. She wrote a Google Docs hidden curriculum guide for UCSD Cognitive Science and Design, crowdsourcing its content from the student community and hypothesizing that it would emulate what a peer mentor does in teaching hidden curriculum. Many of the themes they gathered were about timing of action, fear, career planning, curriculum confusion, and managing life circumstances. She then conducted surveys and interviews with students, which revealed that the guide helped with imposter syndrome and isolation and offered encouragement, especially by using student-centered, relatable language.

Lightning talks!

The lightning talks featured the first group of doctoral students. The group was investigating theories of learning, non-computing pathways for women, neurodiversity, Parsons problems, query programming, emotionally charged student reflections, help seeking, contextualized computing, ethics, abstraction, and security education. Every student did a great job engaging the audience, framing their research questions, and identifying gaps in our field’s emerging knowledge.

Large language models

After a quiet lunch in the shade outside, and a much needed session with my therapist, I settled in for the last talk of the session on teaching in the presence of Silicon Valley’s latest fad: stochastic parrots.

Sam at the podium with a slide showing the first page of their PDF.
Sam poses a research question.

Sam Lau (UCSD) talked about instructor reactions to large language models, based on 20 interviews. They used a speculative design approach, asking instructors to imagine a world in which all students had access to a “perfect” AI. The interviews revealed many short term plans, but there was a clear divergence between instructors who sought to resist AI long term and those who sought to embrace it. All participants talked about cheating, and cheating ineffectively. Some felt they wanted to ban it until they understood it. Some of the long term resistance approaches involved designing assignments that were resilient to code generation. Some had an inevitability mindset, viewing AI as a new essential skill for students to learn. Some felt that embracing AI tools meant shifting to program comprehension and verification skills.

Spatial reasoning, lightning talks

Jim points to a spatial reasoning task.
Jim explains spatial rotation diagrams.

Jim Williams (UW Madison) presented Exploring Models and Theories of Spatial Skills in CS through a Multi-National Study. He and his team measured spatial reasoning skills before and after courses and measured module grades. There was a moderate correlation between spatial reasoning and grades, confirming prior results across multiple institutions. They also found that spatial skills do change in CS, replicating similar effects in physics, but mediated by different courses and pedagogies. Finally, they found that prior programming fluency appeared to reduce reliance on spatial reasoning, which mediated the relationship between spatial skills and grades.

Jack at the podium smiling.
Jack passionately elaborates on spatial reasoning.

Jack Parkinson (University of Glasgow) also talked about spatial reasoning, investigating its role in student problem solving, trying to explain why there’s a relationship. They conducted a think aloud paper programming test and tried to relate the think aloud to spatial reasoning scores. They found that students with high spatial reasoning scores were: 1) more likely to adjust their mental models and solutions, 2) more likely to make connections to previous work, and 3) more likely to use external memory aids, including gestures and verbalizations.

Quick talks by the second group of doctoral students.

There was then a second batch of doctoral consortium lightning talks, including my wonderful doctoral student, Jayne Everson. Topics included pair programming, CS teacher efficacy, working memory, cognitive apprenticeship, implicit power in schools, prompt engineering, “troublesome knowledge”, middle school teachers, Parsons problems, collaborative learning, and legacy code.

Learning amidst AI

Lena at the podium in front of a slide that says “Navigating a Black Box”.
Lena prepares to speak.

The last session of the day started with my wonderful undergraduate Lena Armstrong (UPenn → Harvard) talking about students’ experiences searching for their first jobs amidst the rise of automated hiring algorithms. She interviewed 15 current and recently graduated students about their experiences, and found many troubling new forms of gatekeeping and hidden curriculum. First, applicants had developed many strategies to get through automated hiring systems. Students also had widely varying knowledge of how these systems worked, or whether they were even aware of them being used, due to the systems’ lack of transparency and feedback. Students also had different perceptions of power: many felt they had no paths to advocacy, or power to advocate, and no way to assess the company in the process of applying. Students had divided opinions on whether such systems should be used: some felt they were a necessary evil, others felt they should not be used at all.

The second talk was by Tiffany Li and Silas Hsu (UIUC), who talked about Effects of AI Grading Mistakes on Learning. They started with the benefits of automated grading, but then pondered what harms might come from such systems, especially AI-based autograders. They deployed a system that offered some AI-based explain-in-plain-English problems and conducted surveys and interviews, with a pre-test and post-test. They then did some Bayesian regression for causal inference. They found that the harm from false positives (wrong answers described as correct) was worst, because students didn’t know that they were wrong. Higher performers were actually worse at detecting autograder errors. False negatives (right answer marked wrong) were also harmful: they increased engagement time, though they also promoted reflection.
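The false positive / false negative framing maps onto a simple classification of autograder judgments. A hypothetical sketch of that framing (the function name and labels are mine, not the authors'):

```python
# Sketch of the four autograder outcome categories discussed in the talk.
def grading_outcome(student_correct: bool, ai_marked_correct: bool) -> str:
    """Classify one AI autograder judgment of a student answer."""
    if ai_marked_correct:
        # False positive: a wrong answer marked correct. The talk found this
        # most harmful, because students never learn they were wrong.
        return "true positive" if student_correct else "false positive"
    # False negative: a right answer marked wrong. Harmful too, though it
    # increased engagement time and prompted reflection.
    return "false negative" if student_correct else "true negative"

print(grading_outcome(False, True))  # → false positive
```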

Briana, Barb, Kristin, Andrew, Leigh Ann, and Amy smiling at a round wooden table.
A subset of the ACM TOCE editorial board spends some time together over food.


After the last session, I organized an informal dinner with ACM TOCE editorial board members, and a few others. There was no agenda, just food and conversation. We found a table at the casual and cozy Medici and talked about schools, leadership, climate change, and horses. (And yes, maybe a bit of peer review gossip). It’s wonderful to work with such dedicated and insightful folks to create communities of scholarship!

An ivy laden building with three AC units plugged into the windows.
Ida Noyes and its three air conditioners, sustaining the life inside.


After a quirky decaf drip at Philz, then a much tastier decaf cortado at True North, I walked to day two at Ida Noyes, which was full of more papers, lightning talks, and posters.

A projector screen showing discord and Noelle smiling.
Noelle answers questions on Discord.

Fairness and ethics

Noelle Browne presented the first paper, Designing Ethically-Integrated Assignments: It’s Harder Than it Looks. She recorded her presentation since she wasn’t available to attend in person. She focused on “professional computing ethics”: technical decisions that are inherently sociotechnical. She started from the premise that this would not require adding additional content, but changing it, by integrating the ethical consequences of content. She described a research-through-design process to develop assignments. Her key insight was that it was really hard. Challenges included: 1) finding an ethical context suitable for the assignment, 2) maintaining technical focus, 3) eliciting students’ integrated thinking, and 4) making the assignment practical for students to actually do in a learning context. The most interesting point to me was that the AI examples they found in the news were just too technically complex to be addressed at a level appropriate for students’ knowledge. My interpretation is that it’s because the algorithms are inherently sociotechnically complex; they can’t be reduced without significant scaffolding, and scaffolding risks erasing the important nuances of the sociotechnical context.

Jean gestures on her title slide
Jean warns the audience of pending participation.

The next paper was Jean Salac’s, who’s been in my lab for the past while, joyously collaborating with undergraduates and doctoral students. She presented Funds of Knowledge used by Adolescents of Color in Scaffolded Sensemaking around Algorithmic Fairness, which sought to understand what kinds of knowledge youth of color bring to conversations about equity and justice in algorithm design. Through a series of classroom engagements with adolescents that slowly revealed algorithmic and sociotechnical complexities, she found that youth brought many different lenses to reasoning about fairness, including human and technical lenses, grounded in their lived experience and identity.

Jane on camera on Discord, smiling.
Jane answers questions.

Jane Waite and team presented a recorded presentation on Using a sociological lens to investigate computing teachers’ culturally responsive classroom practices. They examined culturally responsive teaching across 2 workshops, 9 schools, and 19 teachers in England, where computing has been mandatory for a decade, with a unique focus on slavery and colonialism in the British empire, and on teacher perspectives in particular. They observed teachers and talked to them in the context of their school, funding release time for a 2-hour workshop. Their analysis drew on Bourdieusian and Freirean theories of habitus and critical pedagogy. They found that 1) teachers adapt lessons to be relevant to learners’ experience, 2) teachers focus on rapport with learners, 3) teachers integrate social justice and see computing as a tool to challenge the status quo, and 4) teachers reflect critically on their own teaching at school, department, and individual levels. They interpreted these results as consistent with other work on teacher professional development showing that teaching in responsive ways is a journey of exploration and reflective practice.


Diana at the podium with a slide showing causal model predictions.
Diana describes the decomposition trends.

Diana Franklin presented How are Elementary Students Demonstrating Understanding of Decomposition within Elementary Mathematics?, on behalf of Maya Israel, who led the work. Their focus was on primary-grade integrations of CS and math. They focused on action fraction digital manipulatives, used basic coding to write scripts doing fraction arithmetic, and designed TIPP&SEE-based scaffolding and assessment. They found that students struggled to varying degrees with decomposition word problems, but improved over time. The main types of errors students made were pairings of incorrect math and limited evidence of decomposition. These suggest that there are tensions in trying to isolate decomposition and math skills; in 3rd grade, there may not be enough decomposition in math to make it viable to teach.

Effective feedback, achieving equity, English-centric.
Jennifer talks about the goals and problems.

Jennifer Tsan presented An Analysis of Gallery Walk Peer Feedback on Scratch Projects from Bilingual/Non-Bilingual Fourth Grade Students. Jen considered the feedback that youth give to each other in gallery walk pedagogies on the creative aspects of Scratch projects, especially accounting for pervasive monolingualism in classrooms. They wanted to understand the kind of feedback that students give, what qualities it had, and how language mediated those qualities. They qualitatively examined feedback provided in a scaffolded feedback worksheet. They found that students did follow the “sandwich” structure, giving positive feedback at the beginning and end; that they gave suggestions for project additions, especially aesthetics; and that whether students were in a bilingual class did not appear to be related to the prevalence of compliments. But bilingual classes were more likely to give “nothing” (blank) feedback.

11 animated snapshots of lightning talk slides.
Lightning talks, round 3

After the second talk was a group of lightning talks. Speakers discussed data science education, code generation, persistence, academic achievement, negative self-assessments, motivation, code editors, community organizations, large language models, and the future of computing education research.


After lunch and a quick grant sync meeting online, and some water damage repair coordination with contractors, I joined for the tail end of the career session.

Dan at the podium, a slide on leadership interviews, and several attendees looking on.
Dan carefully describes their design process.

When I entered, Dan Murphy had just started talking about their measurement approach to examining career intentions, in The Development and Validation of a Survey to Predict Computing Career Intentions. He walked through the survey design and rationale, and some field tests evaluating the feasibility and psychometric performance of its items. They checked for bias with differential item functioning (DIF) analysis and did some exploratory factor analysis. In the final analysis, self-efficacy and social support were highly related; sense of belonging and intent were very related; interest was separate; and practical support was a fourth factor.

A light fixture, and behind it, a slide that cannot be read.
Marisa describes their mixed method.

The last paper of the session was Dispositions Computing Professionals Value in the Workplace: Systematic Literature Review and Interviews with Professionals, presented by a group of presenters: Deepti Tagare, Shamila Janakiraman, and Marisa Exter. They examined the attitudes that computing professionals value in the workplace, through a literature review and interviews. One disposition was taking feedback well; others were about resilience. They then discussed the importance of dispositions in skill development.

Productive failure; tracing

Exploring productive failure.
Phil starts his talk.

After a lively poster session and break, we had two papers on learning from failure. The first was by Phil Steinhorst (University of Münster), who talked about barriers to productive failure. He began with a basic overview of productive failure and its relationship to direct instruction, and the evidence for its success. They wanted to know whether the same successes seen in math and other disciplines also occur in CS. They examined a fairly scaffolded sequence of warm up, problem solving, survey, and instruction, varying the sequence to compare productive failure to a lack of failure reflection. They deployed this in a post-secondary course, testing whether productive failure pedagogies led to better transfer, whether problem solving patterns differed as predicted, and whether students searched the solution space more broadly. They confirmed the last hypothesis, but not the first two.

Developing Novice Prog / Self-Regulation / with Code Replays
Benji, Jared, and Paul prepare.

The second was by my former PhD student Benji Xie, and former undergrads Jared Lim and Paul Pham. We explored the feasibility of using replays of programming sessions as a way of fostering self-regulation skills. We built a practice tool for keystroke logging and then evaluated students’ experiences with watching replays of themselves problem solving. We found that replays fostered self-regulation skills, but that many students struggled to watch themselves fail. There was a general sentiment that it could be a valuable tool with the guidance and scaffolding of teachers about how to use it successfully.

A slide showing two diagrams of the two studies.
Veronica and Nadia overview the two studies.

Veronica Chiarelli and Nadia Markova presented Evaluating the Utility of Notional Machine Representations to Help Novices Learn to Code Trace, which examined a form of code tracing scaffolding built upon a representation of important features of program execution (a notional machine). They wanted to compare different types of notional machine to understand the relative merits of their design features. The key representations they compared were a more hardware-based one and a more namespace-centered one. They explored two dimensions of design: the importance of context (machine materiality) versus the importance of salience (focusing on a table), and the importance of concreteness fading (a scaffolding technique previously used to teach equivalence in mathematics). There was no significant difference in learning gains between the two notations, nor between the abstract and concreteness-faded conditions.

The last talk of the big session was by Mohammed Hassan, who presented Evaluating Beacons, the Role of Variables, Tracing, and Abstract Tracing for Teaching Novices to Understand Program Intent. (This brought me back to the 1990’s era of psychology of programming lab studies!) Their work was fundamentally about the skill of identifying the purpose of code. They had an interview protocol centered on beacons and variable roles, and found that these ideas, which mostly emerged from studies of expert program comprehension, actually served as effective scaffolding for code tracing, especially for resolving false assumptions about syntax made from quick glances.

The Chicago skyline with sunset and clouds.
Sunset in front of the downtown Chicago skyline.


After the talk, we all immediately got onto school buses and headed downtown to the Chicago river to get on a boat. The skylines and sunset were beautiful and the conversation was joyful! As usual, some people drank too much; conferences should not have open bars.

The Chicago skyline with dozens of buildings, each floor speckled with lights.
The night skyline, alit with skyscraper lights.


I woke up early to walk to Plein Air, a cafe on the University of Chicago campus. I had a tasty cortado, a simple breakfast burrito and potatoes, and enjoyed the quiet outdoor seating as the sun slowly rose. I coordinated a few minor crises over Slack and email, made a few animated gifs for this trip report, and then walked over for a last (shorter) day of talks and networking, but with a bit of restraint to protect my quickly fading voice.


Ryan at the podium and a slide with the paper title.
Ryan begins slowly, intentionally, given the late night on the boat.

Ryan Torbey gave the first talk of the day on Inequities of Enrollment: A Quantitative Analysis of Participation in High School Computer Science Coursework Across a 4-Year Period. This was from his dissertation, and really came from his K-8 teaching in Austin, Texas and his leadership on CS for Texas. He started from the premise that academic performance and participation are socially constructed by inequitable systems. He asked what factors predict who enrolls in CS in high school across a 4-year period, using multi-level logistic regression to model the participation of 108,037 students across 350 Texas high schools. He found that 1) wealthy schools were much more likely to offer CS, 2) being a girl was associated with a 78% decrease in the odds of enrolling in CS, 3) students who qualified for free and reduced lunch were somewhat less likely to enroll, 4) Black and Hispanic students were somewhat less likely to enroll, and 5) higher Algebra I exam scores increased the odds of enrolling in CS. After enrolling in one class, many of the socioeconomic differences disappeared. Based on these results, he recommended that measuring the demographics of participation in districts would be a valuable way of guiding broadening participation efforts.
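For readers less familiar with logistic regression, a coefficient converts to a percent change in odds via exponentiation: exp(coef) is the odds ratio, so a "78% decrease in odds" corresponds to an odds ratio of about 0.22. A quick sketch (the coefficient value here is back-computed for illustration, not taken from the paper):

```python
import math

def odds_change(coef):
    """Convert a logistic regression coefficient into a percent change
    in odds: exp(coef) is the odds ratio, so the % change is
    (exp(coef) - 1) * 100."""
    return (math.exp(coef) - 1) * 100

# A coefficient near -1.51 yields exp(-1.51) ≈ 0.22, i.e. roughly the
# reported 78% decrease in odds of enrolling.
print(round(odds_change(-1.514), 1))  # → -78.0
```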

Leah at the podium with the title slide and our headshots.
Leah begins the talk with a warm welcome from the audience.

Next, my recently graduated co-advisee Leah Perlmutter presented her culminating dissertation work, “A field where you will be accepted”: Belonging in student and TA interactions in post-secondary CS education. She defined belonging using the literature on self-determination theory, then talked about her approach to studying the role of TAs in shaping belonging. The focus on TAs was primarily because of scale: TAs interact far more with students in large classes than instructors do. She found that feeling confident about course material promoted belonging, and that TAs were a crucial part of fostering competence, mediated by identity. She also found that TAs supported relatedness by creating community, and autonomy by engaging with students on their terms. She suggested that belonging requires more than caring: it requires concrete actions by TAs and instructors, and decisions about which identities are in the room.

A projector screen with Aleata and a WestEd background.
Aleata kicks off a recorded talk on discourse analysis.

The final talk was CS Teaching and Racial Identities in Interaction: A Case for Discourse Analytic Methods, by Aleata Hubbard. Her general argument was that we should be using discourse analysis methods in our field; she described an application of this to studying teacher identity. She described the empirical approaches in discourse analysis (identifying and transcribing conversation data, conducting a preliminary reading, analyzing discursive strategies, selecting illustrative examples, and writing analysis). Her study on teacher identity examined how teachers talked about racial identity. She used Bucholtz and Hall’s framework for analyzing narrative strategies, finding that many white teachers evaded questions about their identity, often redirecting their answers toward other discourses related to race that were not about them.

Gazing upon navels

The next session examined our community’s work (which I find valuable and necessary for a maturing field, but we do it a lot, hence the gentle ribbing in the header above).

Robert gesturing at the very large projector screen.
Robert kicks off the talk.

Robert McCartney presented How Do Computing Education Researchers Talk About Threats and Limitations? Robert and team analyzed how researchers did research and reported it, focusing on ICER, CS, TOCE, and a math education journal. Most of what people reported were settings, researchers, participants, tasks, study design, data, measures, and statistical analysis, but not the classic things like internal, external, and construct validity. The most common responses to these challenges were future work, some mitigation strategy, or dismissal of the problem. Some papers just mentioned a problem and then said nothing about it.

The second talk in the short session was from Murtaza Ali, Taking Stock of Concept Inventories in Computing Education: A Systematic Literature Review. He and his team sought to assess our progress on concept inventories in CS, especially relative to the 2014 Taylor paper that also reviewed the literature. He found that in 2014, there were 16 concept inventories, 2 of them validated; there are now 33, of which 12 are validated. There was an almost exclusive focus on post-secondary education. One challenge that surfaced was the field’s shifting focus across programming languages; the most creative remedies focused on pseudocode and porting concept inventories to multiple languages.

Animated gif of 11 title slides.
So fast!

The session ended with one last lightning talk session. Talks focused on CS teacher capacity, problem solving, science integration, program understanding, perceptions of authenticity, social emotional learning, generative AI, program tracing, and multilingual learning. This particular batch had a lot of great ideas about learning, vulnerability, and growth amongst students and teachers.


A bright pink title slide with a meme.
Briana kicks off her talk, interrupted twice by a fire alarm.

After lunch was a medley of paper topics. Briana Bettin presented Say What You Meme: Exploring Memetic Comprehension Among Students and Potential Value of Memes for CS Education Contexts, which examined opportunities to use memes in CS education. They recruited 30 participants for interviews on a set of memes and analyzed their discussions. They found that participants could see the analogical structure of the memes and applied it to reason about them, and that students used multiple pathways to reason about CS memes. There was also a lot of pedagogical value at all levels of understanding.

A slide titled survey. SIGCSE-members mailing list: 26 respondents • Mostly opened ended questions (one select all that apply) • Q1–4: OERs and barriers to adoption • Q5: Eponymity • Q6-Q10: Incentives and compensation (monetary or otherwise)
Max describes the survey design.

Max Fowler then talked about “I Don’t Gamble To Make My Livelihood”: Understanding the Incentives For, Needs Of, and Motivations Surrounding Open Educational Resources in Computing. OERs are educational materials that are public domain, have no cost, and can be adapted and distributed. They set out to identify motivations for and barriers to post-secondary faculty using OERs, and motivations for contributing to them. Textbooks were the most common OER; adaptation was hard, and often viewed as harder than creating from scratch. Maintenance and support were a risk of adoption, and there was a lot of reluctance about sharing, due to reputation risk. Some saw benefits for promotion and tenure in teaching roles, as a way to demonstrate contributions and impact.

The last talk in the session was An eye tracking study assessing the impact of background styling in code editors on novice programmers’ code understanding, presented by Pierre Weill-Tessier and Neil Brown. They wanted to examine the impact of code editor background styling that reveals lexical scoping and AST structure. Using three novice Java projects and manipulating three types of scope highlighting, they recorded video and audio from 62 participants and classified submissions into different levels of correctness. They found that participants definitely scanned programs differently, but this didn’t translate into differences in success on program comprehension tasks.


I was unfortunately unable to stay until the very end of the conference, and so I missed the last poster session, the last paper session, and the awards session (where the wonderful Jean Salac and her team won a Best Paper Award. Congratulations Jean!). I watched from afar while I took two trains to the airport and found a little corner to get a quick dinner before boarding. But that transit gave me some time to reflect on where the community is at, and where I think it should be in the future.

So where is computing ed research? After SIGCSE last winter, RESPECT this spring, and ICER this summer, I think it’s at a crossroads. There is certainly a critical mass of post-secondary faculty still focused on refining CS1, but I think there’s an increasing recognition amongst that group that the notion of “CS1” has always been ill-defined, and that there are (and have always been) important questions the community has overlooked about computing, society, and literacy. I see many starting to think about ethics and fairness topics, many questioning assessments in light of large language models, and many starting to think more creatively about pedagogy and belonging. It’s breaking open, thanks partly to the bold work of many of our junior scholars, and I like it.

There’s also an increasing presence of researchers focused on primary and secondary education, much of it building upon robust literatures from education research more broadly. I see this leading to really interesting differences in rigor, where education-research-informed studies bring rich qualitative methods to bear on complex social phenomena, juxtaposed with positivist, quantitative studies by higher ed CS faculty. And amidst all of this is a much smaller but vocal group talking explicitly about equity, justice, identity, and systems, mostly at RESPECT and to an extent at SIGCSE TS. Alas, few of those voices show up at ICER, except for a key few (myself and my students included). I really wish we were all together, moving forward together, but the overwhelming number of computing ed venues means that our discourses are fragmented. But I get why they don’t come: ICER has long been overwhelmingly white and ableist. That is changing, but slowly.

I’m eager for our community to catch up to where education researchers and learning scientists have been for a long time on race and gender. Those discourses are far ahead of our own. But I also see elements of our community that are unique and powerful: our interest in design, our ability to create infrastructure and tools, our deep engagement with new technology. Even our occasional focus on accessibility is better than what I’ve seen in education research. I’d love to see our community evolve to an uncompromising centering of equity and justice, while using our rich methodological and epistemological toolbox. I think we could be a community that really could make substantial change in the world through computing education, if only we were to think bigger about social change, and computing education as a site for transgression, in the spirit of hooks.

I know how some will react to this: many will say “not all of us can do that.” Unfortunately, I tend to hear this as another way of saying “I can’t talk about race and class” or “I don’t want to talk about race and class”. But we have to talk about those things. And the patriarchy, and Christian nationalism, and the broad systems in our world, including white supremacy, capitalism, and our impending climate collapse. These are the forces that shape what questions we ask, what we value, and ultimately what we teach and learn. And so a field that ignores them ignores the most fundamental forces at play in computing education. We do that at our field’s peril and irrelevance.

On that heavy note, I can’t wait to do it again next year in Australia, and hope we move ever closer to this larger focus! It was wonderful to see everyone, especially the newcomers. Safe travels, stay well!
