A panoramic photograph of the 152 attendees at ICER.
The ballroom at the DoubleTree Toronto during the opening plenary.

ACM ICER 2019 trip report: leveling up on theory, statistics, and significance

Amy J. Ko
Bits and Behavior

--

I’ve been attending the ACM International Computing Education Research conference for about 8 years now, and have a pretty good sense of the community, its trajectory, and its values. I’ve written about previous ICER conferences (2016, 2017, 2018). And if there’s a summary I can give of my impressions of its evolution in a few words, it would be: inclusive, growing, and maturing.

And this is reflected in the numbers. As usual, ICER 2019 was the biggest conference yet. There were 152 attendees, 137 submissions, and 28 papers accepted, and a large number of these papers either explicitly provided guidance on how to deepen the rigor of our community’s scholarship or were exemplars of how to do this. This growth and maturation is putting a strain on the single-track nature of the conference, as the acceptance rate was lower than ever and the standards were higher than ever. In my role as program co-chair in 2020 and 2021, I’ll be looking for solutions to ensure that time to present isn’t an artificial constraint on progress in the field, but also to help authors meet these higher standards.

This year my trip report is going to be pretty spare: I’ll focus on the doctoral consortium I led and the 28 papers presented. I was too busy with conference planning to think about much else!

ICER 2019 doctoral consortium

The doctoral consortium students, organizers, and mentors.

This was the second year in a row that I co-chaired the doctoral consortium, sponsored by the SIGCSE board. This year, I was the senior organizer, and my junior chair was the wonderful Katrina Falkner. We had 20 students (30 applied), and a packed schedule of elevator pitches, poster sessions, small-group feedback with four additional faculty (Elizabeth Patitsas, Quintin Cutts, Lauren Margulieux, and Colleen Lewis), and then a career panel session.

I found the day to be riveting. The students were incredibly diverse in interests, nationality, discipline, seniority, and gender (by design, through our selection process), and so much of the day was simply students learning about that diversity of perspectives. Diversity itself was also a pervasive subject of the day, appearing in many students’ research plans, in their questions about academic workplaces, and in their reflections on what they need from advisors and their institutions.

One aspect of the day that really amplified this was our attempt to include one student from the UK who was not sent her Canadian travel visa in time, and so we tried to serve her remotely. Spending much of the day ensuring she could engage as much as possible through a Zoom video chat on my iPad really helped make everyone aware of equity as a core theme of participation in learning.

Session: What Students Think

Sebastian presents his case study.

Sebastian Dziallas and Sally Fincher presented a paper about how students feel about accountable disciplinary knowledge (ADK), which might otherwise be described as the learning objectives in required courses (paper). They interviewed students about their life stories and how they made sense of these required learning experiences. The students they interviewed, upon reflecting on their required learning, often found it irrelevant to real-world computing. This work extended the context of ADK to a lifespan, but also extended the concept itself, by showing that ADK is externally constructed by the social contexts that people enter in their lives. This also means that beliefs about ADK evolve and are reconstructed over time. Whether the results generalize to other settings is questionable given the methods, but the methods themselves might be a powerful tool for understanding ADK more deeply.

A photograph of Julia and Jonathan presenting.
Julia and Jonathan present.

Two of Colleen Lewis’s students, Julia Wang and Jonathan Raygoza, gave the next talk on the relationship between students’ goals and perceptions of computing and their sense of belonging in computing (paper). They investigated two types of goals: communal ones, like giving back to the community, and agentic ones, like making a lot of money or having agency at work. Prior work has shown that women, students of color, and first-generation students all tend to have more communal goals. Goal-congruity theory (Diekman 2017) shows that congruence between students’ goals and their STEM learning experiences tends to predict interest, belonging, and retention. Building upon this work, they focused on a sample of over 5,000 survey responses to the CRA CERP survey. Their analysis replicated the findings in STEM, showing that communal goals were more prevalent among students from underrepresented groups. They also found support for Diekman’s goal-congruity theory, finding that communal goal orientation explained much of the variance in students’ sense of belonging. It’s not clear which direction this association runs (communal affordances causing a sense of belonging, or a sense of belonging improving perceptions of communal affordances).

A panoramic photo of Jamie presenting her talk.
Jamie opens her talk.

Jamie Gorson (DC participant), working with Nell O’Rourke, presented a paper on students’ conceptions of intelligence in computer science (paper). Intelligence mindset is well-known to explain many outcomes in learning, and in the learning of computing in particular. But how do CS students form these beliefs about intelligence in CS classrooms? They developed a code book of growth and fixed mindset statements and growth and fixed mindset behaviors. They clustered their findings into three groups:

  1. Students who had a growth mindset and exhibited growth mindset behaviors, like pursuing resources for learning.
  2. Students who had a growth mindset but exhibited fixed mindset behaviors, like being proud of assignments they did with little effort.
  3. Students who had both growth and fixed beliefs, believing that some people were “meant” for computing while others had to learn.

The surprising result here is the disconnect between mindset and mindset-associated behaviors, as prior work on mindset assumes the two are aligned. Jamie tried to investigate this more deeply in a second survey study that examined the specific criteria students use to self-assess their ability. Some of the surprising criteria included how fast they typed, whether their program executed the first time without errors, and how much time they spent planning (sometimes viewed as a good thing, sometimes bad).

Session: Teaching Assistants

A panoramic photo of Diba presenting.
Diba opens her talk after our coffee break.

Diba Mirza presented a systematic literature review of studies of undergraduate TAs (paper). The goal of the review was to synthesize the literature to help inform policy and practice. The paper organized the analysis around the design questions that undergraduate TA programs face and the evidence-based practices that address them. They found about 40 papers presenting evidence on such programs, and analyzed and organized the practices they described. Some of the more notable discoveries concerned the claimed benefits of undergraduate TA programs: the papers claim that teachers and students benefit in learning, motivation, and satisfaction, spanning 28 unique benefits. However, only a few of these benefits were actually evidence-based. So basically, we know very little, other than practical anecdotes, about the reality of these potential benefits. The paper does, however, provide a guide for what design choices and benefits we might investigate in future work.

Yanyan opens her talk.

Yanyan Ren (Brown) presented a talk on the questions that students ask TAs in office hours (paper). They added a form that gathered students’ self-reports of what they wanted help with, and TAs filled out a corresponding form of their own. Interestingly, they used the TA form as ground truth for the question the student was actually asking. The data suggested that students’ questions primarily focused on how to follow the design recipe being taught in the class, and that they struggled most with the function definition and testing steps of the recipe. However, there were also a lot of questions about interpreting the problem they were supposed to solve. Also interesting was that student and TA perceptions of the help needed converged over time, suggesting that students had increasing awareness of what knowledge they were missing.

Session: Evaluating Interventions

Thomas Price giving his talk.

Thomas Price presented work on evaluating the effectiveness of Parsons problems in block-based programming, led by his student Rui Zhi (paper). Prior work has found that Parsons problems are engaging, faster than tutorials, and more efficient than writing code during learning. Their work investigated whether all of these effects would hold in block languages, which have interface affordances similar to Parsons problem interfaces. They had to build a Parsons interface for a block-based editor, which involved a solution palette and visual indicators of holes. In a quasi-experiment comparing the normal block-based editor to the editor with a Parsons interface, Parsons problems were completed faster than writing code (replicating prior work), didn’t lead to different grade outcomes, and didn’t really change students’ problem-solving behavior. Therefore, the evidence suggests that Parsons problems are simply faster for learning but aren’t otherwise different.
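To make the format concrete, here is a minimal sketch (my own illustration in Python, not the authors’ block-based tool) of what a Parsons problem asks of a learner: instead of writing code from scratch, the learner receives a shuffled palette of correct solution lines and must arrange them into working order.

```python
# A minimal, hypothetical Parsons problem: the palette below holds the lines
# of a correct solution in shuffled order, and the learner's task is to
# reorder them, not to write them.

palette = [
    "    total += n",       # 0
    "total = 0",            # 1
    "print(total)",         # 2
    "for n in numbers:",    # 3
]

correct_order = [1, 3, 0, 2]  # indices of the palette lines in a working arrangement


def check(student_order):
    """Return True if the learner's arrangement matches the working solution."""
    return student_order == correct_order


print(check([1, 3, 0, 2]))  # True: initialize, loop, accumulate, print
print(check([3, 1, 0, 2]))  # False: the loop is placed before the initialization
```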

A photo of Joseph Jay Williams speaking, along with Thomas Price.
Joseph Jay Williams opening the talk with Thomas Price.

Thomas Price also gave the next talk on behalf of his student Samiha Marwan, which was about the effectiveness of automated hint generation on learning (paper). The specific hints they investigated suggested simple changes that would bring learners closer to a known correct solution. They ran an experiment comparing four types of hints: 1) hints only, 2) hints plus elaboration, 3) hints plus self-explanation prompts, and 4) hints plus self-explanation prompts plus elaboration. They engaged 201 rank novices. On performance, getting no hints was worst, with hints and hints plus self-explanation doing incrementally better (when you give part of the answer, it helps). But when looking at learning, students who got hints and were asked to reflect on them did significantly better than those who got no hints or hints without reflection. Bottom line: hints only help if they’re presented in a way that promotes reflection.

Barbara Ericson standing in front of her slide.
Barbara Ericson prepares to speak.

Barbara Ericson gave a talk about student Iman Yechehzaare’s work on spaced practice as a remedy for procrastination. I missed the talk because of a conference call about a panel I’m on in September, but I’m sure the talk was great! Check out the full paper for details.

Session: Theory and Cognition

A panorama of Lauren speaking.
Lauren Margulieux begins her talk.

Lauren Margulieux (Georgia State) presented a talk contributing spatial encoding strategy theory, which explains the relationship between spatial skill and STEM achievement (paper). Interestingly, training spatial skills actually causes improved performance. Why? How? Lauren’s theory argues that spatial skills help people develop strategies for encoding mental representations and identifying landmarks for orienting those strategies. This is distinct from working memory capacity, so it appears to be a different but also important explanatory factor in STEM achievement. Lauren appealed to recent neuroscience work on spatial reasoning in the hippocampus, which shows that many of our core cognitive abilities are processed via these spatial reasoning mechanisms, as long as the information maps onto a two-dimensional space. And there are many things that require spatial reasoning in CS: software visualization, blocks, IDEs, source files, etc. What does this mean for CS education? We have a lot of work to do to bridge between Lauren’s theoretical constructs and the specific 2D representations in CS, and then test them.

A panoramic photo of Brian wrapping up.
Brian wraps up his talk.

Brian Danielak (Fullstack Academy) made an argument that “misconceptions” are not a helpful way of modeling student reasoning about code, because expert knowledge is so messy (paper). To define messy, Brian distinguished between the structure of code (syntax) and the function of code conceptually (what the code intends to do), arguing that the conceptual function of code shapes how we interact with and reason about it. Our table struggled to understand Brian’s notion of a misconception; we suspected he was arguing that it’s not sufficient to think about misconceptions in terms of syntax and semantics. Rather, we also need to reason about the intents built into a program through its identifiers and larger purpose.

Yasmin Kafai begins the presentation.

Yasmin Kafai and Chris Proctor presented the last paper of the day, and it’s the one I looked forward to the most (paper). It was a rebuttal to the paper that Greg Nelson and I wrote last year about theory bias, but also an extension of the discourse we started about how our community uses theory. The essence of their argument was that 1) theory is always relevant to our scholarly discourse and 2) we should be pluralistic in how we leverage it to plan research and interpret our discoveries. I couldn’t agree more (and I suspect Greg feels the same)—there is no theory-free design. The debate we were trying to start is whether theory should gate-keep discovery, whether it should be the only priority we have as a community, and whether, in considering designs, we should constrain ourselves to those predicted to be good by theories. If we find a surprising correlation or an unexpectedly effective intervention that we can’t explain theoretically, we should publish it, not hide it for being unexpected. Of course, Yasmin agrees with this too, so everyone is in violent agreement!

Yasmin and Chris extended the discussion to talk about the need for multiple theoretical perspectives, in design and in science. They specifically discussed cognitivist (in the brain), situated (in a social context), and critical (in a sociopolitical context) framings of computational thinking. They showed the value of these differing perspectives and recommended that we talk across them. They also recommended, compellingly, that we focus specifically on literacies in computing, because they help unify cognitivist, situated, and critical theories of learning.

Monday evening I ended up going to dinner with Chris and Yasmin, along with Mark Guzdial, Barbara Ericson, and Kayla DesPortes, and we had a lovely wide-ranging conversation about theory, about computational thinking, about CS education policy, and of course, politics, academia, and work-life balance.

Session: Assessment

After a nice Tuesday morning coffee talking to Miranda Parker about academic job searches, the community dove back into paper presentations.

A photograph of Leo Porter about to speak.
Leo launches his talk.

Leo Porter presented a talk on a concept inventory for basic data structures (paper). A concept inventory tries to measure a student’s understanding of a topic; there are a few concept inventories in CS (the SCS1, and a digital logic one). Instructors can use them to evaluate courses, compare results across classes, and check alignment between courses. They had a huge project team and used a process from science education for developing and validating instruments. What they ultimately created was an instrument (called the “BDSI”) with 13 multiple-choice and select-all-that-apply questions using pseudocode, covering lists, arrays, linked lists, array lists, binary trees, and binary search trees. They made and supported five claims: that it addresses content that matters to instructors, that it is meaningful to instructors, that it addresses key student difficulties, that students interpret the questions as intended, and that the instrument is internally consistent and discriminates properly. The inventory had nice properties across all of these claims, and it is available to use for research and practice.

A photograph of Rodrigo presenting.
Rodrigo Duran motivates his work.

Rodrigo Duran (Aalto) presented a paper on student self-evaluation of programming skills (paper). Rodrigo was interested in identifying a way to assess students more rapidly than heavyweight concept inventories allow, and explored the idea of students assessing themselves. They wanted the instrument to be easy to administer, easy to customize, and to not feel like a test. The items focus on self-assessing concept recognition, syntactic familiarity, semantic familiarity, and the ability to use concepts in writing programs. Through a series of psychometric analyses of more than 4,000 responses from a MOOC, they found that students were more likely to complete the self-assessment than the SCS1, that it took comparatively little time to complete, that it had high internal consistency, and that it correlated with an exam on program writing.

A photograph of John Wrenn starting his presentation.
John Wrenn begins his presentation.

Jack Wrenn (Brown) presented a paper on how to support students in testing their understanding with executable examples (paper). He specifically focused on my student Dastyni Loksa’s notion of problem interpretation, and investigated the idea of using input and output examples to facilitate it. Prior work showed that students using How to Design Programs, even when trained, skip problem reinterpretation. To try to resolve this, Jack tried to make reinterpretation “live,” showing students that they might not understand the problem they were solving. His approach was to provide a consistent representation of input-output test case examples in an IDE, and to give live feedback about whether examples were valid and “interesting” (good at revealing gaps in understanding by rejecting buggy implementations). The design conjecture was that by making it easy to write test case examples, get feedback from them, and make them useful in the process of programming, students would be motivated to evaluate their understanding of the problem. The interface they made essentially provides actionable feedback about what kinds of test cases are missing, gamifying test case generation. Their evaluation found that test cases are compelling to learners, that they wrote more valid tests, and that they wrote more tests, though there are many open questions about the utility of the approach. I later talked to Jack about his work, and I think we came to the conclusion that a software engineering-centric way of describing it is that he was helping students evaluate the quality and coverage of their test suite as a mechanism for encouraging reflection on their interpretation of the problem.
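As a rough illustration of the idea (my own sketch, not Jack’s tool or its notation), imagine a student writing input-output examples for a “median of a list” problem before implementing it; the environment can then check whether each example is valid (consistent with a correct solution) and interesting (able to reject a buggy one), surfacing gaps in the student’s interpretation of the problem.

```python
# Hypothetical sketch: student-authored input-output examples for "median",
# checked against a known-correct solution and a deliberately buggy one.

examples = [
    ([1, 2, 3], 2),
    ([7], 7),
    ([1, 2, 3, 4], 2.5),  # catches implementations that forget the even-length case
]


def reference_median(xs):
    """A known-correct solution (hidden from the student in a real tool)."""
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2


def buggy_median(xs):
    """A deliberately buggy implementation that ignores even-length lists."""
    return sorted(xs)[len(xs) // 2]


# An example is *valid* if the correct solution satisfies it, and *interesting*
# if it is valid and also rejects at least one buggy implementation.
for inputs, expected in examples:
    valid = reference_median(inputs) == expected
    interesting = valid and buggy_median(inputs) != expected
    print(inputs, expected,
          "valid" if valid else "invalid",
          "interesting" if interesting else "not interesting")
```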

Session: Best papers from the first five ICERs

Robert opens the most influential paper session.

This is the 15th ICER, and the senior members of the community decided to start reflecting on some of the best work in the history of the conference (much like other communities’ most influential paper awards). Robert McCartney talked about the principles behind selecting the awarded papers, focusing on a combination of highly cited but also highly rigorous papers. A committee of senior members read a subset of the most highly cited papers and proposed papers for a shortlist, which all committee members read.

They chose six papers:

After discussing these awarded papers, we engaged in a fun activity to imagine what most influential papers we might write as a community in the next five years.

A photograph of people at a table creating our impromptu poster.
Hard at work on our idea for an A/B testing infrastructure to support instructional design.

Session: Primary and Secondary Education

Rebecca speaking in front of her slide.
Rebecca talks about TPACK.

Rebecca Vivian spoke about her work with Katrina Falkner on teachers’ technological pedagogical content knowledge (TPACK) for primary school (paper). They operate a MOOC for teacher prep and have been using it to learn about teacher preparation. This paper specifically focused on TPACK: the knowledge teachers have about using technology to address the specific challenges that learners face in learning computing. They analyzed hundreds of posts from 98 teachers in their MOOC. They found many specific examples of CS-specific technologies, general digital technologies, and non-digital technologies. Their framework might support more targeted investigations of TPACK.

Tom presenting a slide showing EarSketch.
Tom demonstrating EarSketch.

Tom McKlin presented work on accounting for PCK in a theory of change model (paper). They investigated this in the context of EarSketch, a digital audio environment for programming music, and the month-long professional development that used EarSketch. The key question was how to integrate PCK into their model of how the PD would change teachers. The kinds of changes they expected were in content knowledge and intention to persist in teaching, via confidence, enjoyment, usefulness, motivation, and identity. The question was how to interleave PCK into these other mechanisms of change. Tom viewed PCK as knowledge of our most powerful metaphors for teaching, and developed an assessment to try to measure knowledge of these metaphors. What he found with this measure was that better PCK predicted better student attitudes and better content knowledge.

Bobby Whyte presenting.
Bobby talks about the UK context.

The last paper in the primary and secondary session presented an idea for integrating storytelling into K-5 computing education (paper). Bobby Whyte (University of Nottingham) presented the design study, motivating the work by the need to integrate computing into primary school in the UK, where it’s required. The approach built upon language literacy, trying to connect computing to storytelling through both narrative structure and choice of media. When they asked learners to work on narrative and media simultaneously, they found many fascinating ways that programming constructs were used to enable particular types of stories and story elements, and ways that certain hard concepts shaped the stories learners told.

Session: Literature reviews

Kate talks about the motivation for her literature review.

Kate Sanders and her collaborators did a literature review on how inferential statistics have been used in ICER papers (paper). They found that 50% of ICER papers have used some form of inferential statistics or machine learning analysis (mostly correlation, hypothesis tests, Cronbach’s alpha, factor analysis, and regression), and that many papers do not follow the APA statistics reporting guidelines. Kate and I talked about ways to improve our authoring and reviewing guidelines to address these issues, and our table discussion turned to the challenges of learning statistical methods; collaborations, guidelines, and reading all arose as ideas.

Lauri begins his talk.

Lauri Malmi and his collaborators did a similar review, but of computing education research papers’ use of theory (paper). The particular concern was the extent to which our field is developing its own theories, rather than just relying on other fields’ broader theories. In prior work, their group had found that 57% of ICER papers used some theory, model, or framework, but only 23 of the hundreds of theories they found originated within our own community. In this paper, they tried to extract the specific theories that our community has developed and the extent to which they’ve since been used, considering ICER, CSE, and TOCE, including 540 papers overall. They read the papers, discussed the theoretical constructs they found as a group, and divided them into computing education theories and others. They found 65 new theoretical constructs, which primarily focused on theories of learning and were mostly derived from qualitative methods; 91% of papers that cited the theories just briefly mentioned them, and most papers that did use them deeply were by the authors who developed the theories originally. This suggests that our use of theory continues to be quite shallow, despite some progress on domain-specific theory in the field.

Tom beginning his talk.

Tom McKlin presented another paper with Monica McGill, bringing the gospel of the replication crisis and the case against null hypothesis testing to computing education (paper). He talked about the inadequacies of p-values, the need for effect sizes, and the goal of targeting effect sizes of about 0.4. He also talked about meta-analyses that consolidate results across homogeneous study designs as a way to exploit replication and effect sizes. They conducted a meta-analysis of studies with pre-post measures of interest to assess the feasibility of meta-analyses in computing education and to explore the need for tools for conducting them.
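For readers who haven’t computed one, here is a minimal sketch (with made-up numbers, not data from the paper) of Cohen’s d for a hypothetical pre/post interest survey, the kind of standardized effect size Tom argued we should be reporting alongside p-values.

```python
# Hypothetical pre/post interest scores on a 1-5 scale (invented for illustration).
import statistics

pre = [2.0, 3.5, 4.0, 2.5, 3.0, 4.5]
post = [3.0, 3.5, 4.5, 2.5, 3.5, 4.0]

mean_diff = statistics.mean(post) - statistics.mean(pre)

# Pooled standard deviation (equal group sizes), the denominator of Cohen's d.
pooled_sd = ((statistics.stdev(pre) ** 2 + statistics.stdev(post) ** 2) / 2) ** 0.5

cohens_d = mean_diff / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")  # ~0.30 here; Tom's suggested target was about 0.4
```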

Session: Learning programming

A photograph of Adrienne’s first slide, showing the title of the paper.
Adrienne’s first slide.

Adrienne Decker presented work with Lauren Margulieux and Briana Morrison on subgoal-labeled worked examples, specifically leveraging the SOLO taxonomy (paper). Prior work has shown that subgoal labels can improve learning, but this had only been investigated in lab studies. This paper brought subgoals into a real introductory programming course. They compared two classes, one with traditional worked examples and another with subgoal-labeled worked examples, keeping everything else fixed. In SOLO taxonomy terms, the subgoal group had a larger proportion of students at the relational and extended abstract levels, and fewer at the prestructural and unistructural levels. In essence, students’ answers to questions were more substantially connected to the programming language concepts being taught. Obviously, the study had the usual confounds and validity issues, but it’s pretty reasonable to conclude that in some conditions, subgoal labels can increase the depth of thought.

A photo of a slide showing the code editor and Arduino board.
The tools used in the study.

Kayla DesPortes presented a paper on novice students working with Arduino (paper). She motivated the work by discussing the rich possibilities of electronics and computing, but also the immense apparent barriers to making with Arduino. She did a 2-hour lab study with 31 novice college students, taking pre and post measures of knowledge and self-efficacy, and asking students to make LEDs blink. They analyzed obstacles, breakdowns, and bugs, and found that many errors were related to usability: wrong Arduino pins, confusion about pin signals, and blocks not being connected. The most prevalent conceptual problems were that students didn’t understand the Arduino’s execution semantics, especially how sequence and concurrency interact, and that they struggled to translate circuit representations into breadboard implementations. These findings mirror a lot of prior work on supporting hobbyist electronics software development, especially that by Bjoern Hartmann and Simone Stumpf.

A diagram of the courses investigated.
The operating systems courses investigated.

Filip Strömback (Linköping University) presented the last talk in the session, deepening prior work on concurrency concepts (paper). There’s a lot of prior work on concurrency challenges in software engineering; Filip’s work investigated these in formal education contexts, specifically in operating systems courses with 216 students. They asked students to analyze concurrent code, identify failure scenarios, and then resolve them, and then analyzed the students’ work for misconceptions. There were many common problems related to the runtime behavior of synchronization, but it wasn’t clear how much this was due to inherent difficulties in the concepts or to failures of instructional design in the courses. It also wasn’t clear how the findings relate to the large body of work studying concurrent programming difficulties in the software engineering literature.
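To give non-systems readers a flavor of the kind of failure scenario students were asked to find and fix, here is a minimal sketch of my own (in Python, not the tasks or language from the study): two threads updating a shared counter, where removing the lock allows the read-modify-write to interleave and lose updates.

```python
# Hypothetical illustration: a shared counter updated by two threads.
# With the lock, updates are serialized; remove it and `counter += 1` can
# interleave (read, add, write) across threads, losing increments.
import threading

counter = 0
lock = threading.Lock()


def increment(times):
    global counter
    for _ in range(times):
        with lock:  # the "fix": serialize the read-modify-write
            counter += 1


threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; possibly less without it
```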

Session: Potpourri

A panoramic photograph of Stefik giving his first title slide.
Andreas giving his opening slide.

Andreas Stefik presented a randomized controlled lab experiment with scientific computing learners (paper). They specifically investigated scientists who were experts in their science but inexpert in programming, comparing base R, tilde-style, and Tidyverse styles of R programming. They wanted to know what errors are made in these styles (by both scientists and experienced computer scientists). Variation in programming style and experience accounted for very little of the variation in task completion time and errors.

Kyle giving his opening slide.

Kyle Reestman (with Brian Dorn) presented a paper on experiences with compiler errors in non-English-speaking contexts (paper). He was particularly curious about the effects of English on learning, especially given that only about 5% of the world’s population are native English speakers. Prior work has surfaced documentation exploration barriers, error message barriers, community participation barriers, and barriers to accessing learning materials. Kyle built upon this work to investigate compiler errors specifically, comparing distributions of compiler errors by students’ native language. They primarily considered Asian languages and Spanish. Their analysis showed that language didn’t explain much of the variance in errors; there was an effect, but the effect size was very small.

Greg Wilson opening his talk.

Greg Wilson gave the last paper, which studied what questions educators wish computing education researchers would answer (paper). His work was inspired by a similar study in software engineering by Begel and Zimmermann. They recruited about 350 educators and asked them to propose up to 5 questions. Some of the most popular questions were:

  1. What concepts are most challenging?
  2. Why is problem solving so hard?
  3. What teaching methods are most effective for teaching different skills?
  4. What programming exercises are most effective?
  5. How do we give feedback on code?
  6. How do we help students transfer knowledge?
  7. What are the relative merits of active learning?
  8. What problems are found engaging?
  9. How do we teach problem solving?

There was nothing really about languages, tools, curriculum, or inclusion (sad pandas). There was also essentially no overlap between what researchers and educators considered to be the top questions.

Session: Teaching at scale

Christine talks about the benefits of research.

Christine Alvarado presented a paper about scaling undergraduate CS research (paper). She talked about how undergraduate research can support broadening participation in computing, but it doesn’t scale. Her goals were to scale it, to reach students who wouldn’t normally participate, and to be compatible with students’ schedules. They designed a team-based model with 4 students matched with a faculty advisor, plus a central mentoring team of a faculty member and a graduate student that provides support on top of the general mentoring the advisor offers. They achieved great gender balance, but the number of racially underrepresented students has declined. Their evaluation found that participating correlated with higher achievement, higher confidence in research, and higher GPAs over several years of data.

All of the attendees sitting at their round tables.
The final talk.

I missed most of Juho Leinonen’s final talk on a longitudinal study of a programming MOOC (paper) due to a conflicting conference call, but attendees caught me up when I returned. The interesting thing about this work is the way they used a MOOC as an alternative pathway for students to “prove” their motivation, and then used that as a way to evaluate and approve students’ participation in later coursework. There were strong correlations between participation and higher grades, but it’s not clear that these were causal.

Awards

ICER gave out three awards this year:

  • Lauren Margulieux’s paper on a theory of spatial skills won the John Henry Award for its bold attempt to explain the relationship between spatial skills and programming.
  • Yasmin Kafai’s paper on theory won one of the chair’s awards.
  • Lauri Malmi’s paper on theory use won the other chair’s award.

Congratulations to the awardees!

Reflection

Reflecting on this year’s ICER conference is particularly important for me because of my role as program chair in the next two years. Here are some trends and priorities that have emerged for me:

  • Our community has a lot to do to improve its use of theory, statistics, and related work. I attribute the high variation in quality on these dimensions to the lack of contexts for authors to learn these skills; I plan to help develop some learning resources for these in the form of an author guide. But overall, researchers and advisors of doctoral students probably need to spend more time reading and thinking deeply about prior work (including theories) before they finalize their research designs.
  • Our community is increasingly interested in learning contexts outside of higher education (informal learning, K-12, data science). We need to get better at being inclusive in the way we report our findings in these spaces, discuss their generalizability, and present them in ways that are broadly relevant to our diverse audience. And most importantly, we need to encourage more people doing research on the broad diversity of CS learning and teaching contexts to publish at ICER. That diversity is key to its growth and survival as a research community.

In my role as junior ICER program chair in 2020 and senior program chair in 2021, I look forward to working on these longer-term goals, leveling up the community’s discourse and discoveries.
