A projection in the Hyatt lobby

SIGCSE 2019 trip report

Amy J. Ko
Published in Bits and Behavior
22 min read · Mar 3, 2019


The day I travel to a conference is usually one of eager anticipation. I love the rising energy as we approach the first day of a conference and the sense of connection on the way to a destination. And when I arrive, the payoff: I get to reconnect with many old friends, meet the next generation of scholars who will shape our field, and develop the ideas I might work on in the coming years! What could be more fun for a scholar? Especially for a conference celebrating its 50th anniversary, SIGCSE was likely to be quite the party, reminiscing on big ideas and important people, and forging a new foundation for the next 50 years.

Unfortunately, this conference was a bit different. The day I was headed to the airport, I started coming down with a cold. And losing my voice. And feeling exhausted. By the time I arrived, all I wanted to do was sleep, when usually I'd head straight down to the hotel lobby to find people to catch dinner with. I'd never been sick at a conference in my past twenty years of attending them. How could I possibly do everything I love about conferences while being sick?

Of course, I couldn’t. I slept a lot. I didn’t see most of the people I wanted to see or have most of the conversations I wanted to have. Instead, my approach was to focus on the few things I desperately wanted to do — give my talk, run my workshops, support my students — and beyond that, just find opportunities to listen. And what better way to write a trip report than to spend most of my time observing rather than sharing?

Of course, this means I'll have a bit less to share than usual. But how could any one trip report summarize four days of 16 parallel tracks anyway? I'll cover the highlights, then conclude with some reflections.

The numerous attendees of the teaching accessibility workshop that Richard Ladner and I organized

Wednesday

Pre-conference workshop on teaching accessibility

The first event I attended was a half-day workshop I co-organized with Richard Ladner on how to integrate accessibility topics into higher education computing courses. We had a surprise boom in late registration, with nearly 50 people attending. About half the attendees already teach some aspect of accessibility in their courses, and the other half came to learn. Some were deeply expert in accessibility but not at all experienced with teaching it. Others were the opposite, struggling to teach accessibility without much expertise in it.

We organized a series of speakers with experience teaching accessibility, and they shared an incredible diversity of integration points. Some talked about introductory CS courses with topical sections, some of which were explicitly about accessibility (among other topics like social media, games, security, etc.). Some had elaborate integrations of accessibility into web development courses. Others focused on accessibility capstone projects, where there are compelling applications of many ideas in computing to solving accessibility problems. I had planned on speaking about how we've integrated accessibility throughout our Informatics curriculum (in our intro course, our ethics course, our web development course, our databases course, and our numerous electives that touch on accessibility), but we ran out of time, and I had lost my voice at that point, so we decided to cut my talk.

Interestingly, most people who were teaching accessibility had the autonomy to integrate it. It was very rarely mandated from the top down, nor did leadership prevent anyone from teaching about it. In fact, some people had entire courses dedicated to accessibility! And at the core of most of these stories was a teacher personally passionate about integrating accessibility topics into their course.

I think the biggest impact of the workshop was giving everyone a sense of not being alone in their efforts to teach accessibility. The breaks were incredibly vibrant and full of rich exchange about pedagogical approaches to teaching accessibility. Another big impact was giving people a sense of growth: if we're not alone, then we can do bigger things together. And we shall!

You can see everyone's slides here, in an epic Google Slides deck we had everyone use.

Jan Cuny shared a new solicitation, CUE, which brings universities together to transform computing curriculum

ACM Education Advisory Council meeting

Just after the pre-conference workshop, I attended a 2-hour meeting of the ACM Education Advisory Council (of which I am a member), which, among many other things, produces curricular recommendations for computing and related subjects. I was beginning to fatigue quite a bit, so I stayed pretty quiet, and left promptly for bed at the end of the meeting.

The meeting was rich with updates:

  • New curricular standards about data science. These are tricky to do for many reasons, including many other disciplines creating their own standards and CS trying to catch up. This mirrors the same interdisciplinary messiness happening at individual colleges and universities.
  • New survey initiatives to track enrollment, retention, graduation, and changes in teaching track faculty. This should be useful infrastructure for both supporting and understanding the reshaping of CS department faculty, which have, to date, mostly been tenure-track research faculty.
  • Efforts to more aggressively advocate for integrating ethics into computing curricula. I'm on this committee, and we've just started, but our goals will likely be to use ACM's platform to advocate for and disseminate all of the wonderful efforts happening worldwide (as opposed to duplicating them).

As usual, the committee is full of busy higher education faculty and industry representatives. We’re all volunteering our time. That means there’s a lot of passion, and not a lot of resources, and so slow but steady progress.

The conference opened by celebrating its 50th anniversary

Thursday

Conference opening

Before the first keynote, the chairs of the conference and the chair of SIGCSE (the special interest group that oversees the conference, not the conference itself) discussed many of the achievements of the conference and community over the past 50 years since it was founded. They talked about the four SIGCSE conferences: the technical symposium itself, the (new) Global Computing Education Conference (to be held in China), ITiCSE (which I still have never attended), and ICER (which I regularly attend). The SIGCSE symposium is now quite large, with over 1,800 attendees, 16 parallel tracks, research papers, experience reports, tutorials, and a dozen other categories of contributions. In this way, it mirrors the other large conferences I attend regularly, CHI and ICSE.

SIGCSE received more papers than ever this year across its three paper tracks (of 526 submissions, 169 were accepted). Some were selected as best papers, including one for which my student Dastyni Loksa was a co-author (it built upon his theoretical work on programming problem solving).

Marie desJardins dedicates her talk to Freeman Hrabowski

The first keynote: Marie desJardins

Marie wasn’t originally going to speak. The original speaker was Freeman Hrabowski, the longstanding president of UMBC. Freeman, however, caught the flu, and so Marie stepped in to give a keynote. Marie opened her talk by describing just how key a mentor Freeman had been to her and how powerful his visions have been at transforming UMBC. She shared a wonderful TED talk Freeman had given about his early days fighting for education.

After celebrating Freeman, Marie pivoted to reflecting on the massive growth in demand for computing and information. She talked about many challenges that this rapid change has created:

  • Our curriculum isn’t changing fast enough to meet demands of industry
  • We aren’t preparing enough CS teachers, nor are we preparing them well
  • We aren’t successfully teaching all students who come to us, especially women and racial minorities
  • We’re not attracting enough gender or racial diversity to computing. As Marie described her experience as part of the 3% of women in the field of artificial intelligence, “It’s tiring to be different all the time.”

She challenged everyone in the community to recognize that people from underrepresented groups in CS already carry a high burden as minorities in society, and that being minorities in computing adds to it. She challenged those in the majority not to add to that burden by singling out groups, denigrating groups, or just making assumptions about what people need or what they can do.

She had some specific recommendations for CS educators:

  • Don’t assume a student’s confidence reflects their ability; these two rarely correlate (as we know, ability is not the same as self-efficacy).
  • Design “launch” courses not “weeder” courses. (Of course, this is just good teaching).
  • Learn how to diagnose student learning in class; don’t say things like “Any questions? No? Good.” (Again, just effective teaching).
  • Encourage students to collaborate in their learning, expressing agency when it’s not going well (another clear principle of good teaching).
  • Teach to students who are least like you, not those who are most like you (most of whom already want to become CS educators).

Me explaining strategies. Note my wild, unkempt, sickly hair. I think I was about to fall asleep. Credit: Benji Xie.

Teaching programming

Alas, I spent much of the rest of the day resting and preserving my voice, because I had a talk to give later in the afternoon. And it’s a good thing I did, because after 18 minutes of speaking, my voice was mostly gone!

I presented our work on teaching explicit programming strategies, which is the idea of providing step by step procedures for solving various programming problems like debugging, reuse, and testing. We tried teaching these in a summer CS class of 17 adolescents. Our results showed that students found them useful, but that most just couldn’t force themselves to slow down and follow them. Those that did, however, were much more successful than those that didn’t at independently writing programs. I speculated that one likely challenge in using strategies is that most adolescents don’t have sufficiently developed executive functioning to regulate their process so strictly. One question during Q&A was particularly interesting: how much does confidence interact with students’ use of the strategies? I shared some anecdotes that would suggest that not using the strategies was related to overconfidence. Students who lacked confidence tended to rely more on the strategies, because they were less sure about how to succeed. I wonder if they were more successful as a result!
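To give a concrete sense of what an explicit strategy looks like, here's a hypothetical sketch of a debugging strategy in the spirit of the ones we taught (paraphrased for illustration, not our exact study materials), rendered as a simple Python structure a tool might display step by step:

```python
# A hypothetical debugging strategy, in the spirit of the explicit
# strategies we taught (not the verbatim text from our study materials).
DEBUGGING_STRATEGY = [
    "Reproduce the failure and write down the exact incorrect output.",
    "State precisely what the correct output should have been.",
    "Find the last point in the program that you know behaved correctly.",
    "Form a hypothesis about which line caused the divergence.",
    "Check the hypothesis by printing or inspecting state at that line.",
    "Edit the code, then repeat from step 1 to confirm the fix.",
]

# Present the strategy one step at a time, as a learner would follow it.
for number, step in enumerate(DEBUGGING_STRATEGY, start=1):
    print(f"{number}. {step}")
```

Making each step this explicit is exactly what made the strategies hard for adolescents to follow: they demand deliberate self-regulation at every step.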

The talk that followed mine was on a practical pedagogical tool called PRIMM, presented by Sue Sentance. The basic idea is to structure learning by asking students to predict the output of a program, then run the program, then investigate its behavior, then modify its behavior, then make something with the behavior. The intent of this pedagogical model was to give teachers tools that help them be more effective pedagogically. The most exciting thing about the work is that they’ve deployed it to classrooms spanning 500 students in the UK. Many teachers found it to be good scaffolding, but also constraining.
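To make the model concrete, here's a hypothetical “Predict” starter of the kind a PRIMM lesson might open with (my own example, not from Sue's materials); students write down a prediction before running the program, then move through the remaining phases:

```python
# Predict: before running, write down exactly what this program prints.
total = 0
for n in [2, 4, 6]:
    total = total + n
    print("running total:", total)

# Run: execute the program and compare the output to your prediction.
# Investigate: why is the final line "running total: 12"?
# Modify: change the list so the final total is 20.
# Make: write your own loop that sums numbers of your choosing.
```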

Christopher Hovey from CU Boulder and NCWIT gave the third and final talk in my session, on a survey of why CS faculty do (or do not) adopt new teaching practices. He motivated the survey by pointing out that there are many innovative teaching practices that can help retain students in CS, but that faculty don’t adopt them. They got a very representative sample with a high response rate. Nearly three quarters of faculty respondents said they had adopted a new method. Most had learned about new practices from colleagues, presentations at conferences, and presentations at their institution. A smaller proportion mentioned learning about methods in the popular press or blogs. Faculty were primarily motivated by helping their students; they felt limited mainly by the amount of time they had to try new methods.

Competing Goals in Computing Education Research

At the same time as my session, my student Greg Nelson was on a panel triggered by our paper on the use of theory in computing education research. The panel also included Mark Guzdial, Lauren Margulieux, Colleen Lewis, and Leo Porter. I couldn’t attend, but from what I heard through the grapevine, there were several notable trends in the panel and the Q&A.

  • CS educators at the conference struggled to understand the difference between experience reports and research papers. Some attendees wondered whether it mattered — if it’s useful, who cares? (Of course, as a researcher, I view research as how we know whether things are useful, so…)
  • There are some notable disagreements about the role of invention in computing education research. Some view inventions as not being research, others view them as a key kind of research. (These epistemological debates are common in all fields; I think it’s a healthy argument as long as we converge toward pluralism, accepting the validity of many forms of knowledge.)
  • There’s concern about how haphazardly we all seem to choose research questions, and an opportunity to be more collectively systematic. (This is in interesting tension with intellectual freedom, but I see the concern: teachers desperately need guidance now, and researchers following their curiosities isn’t always helpful to teachers now.)

Reception

There was a reception? All I remember was getting a Pierre, waiting in line for a bland meatloaf slider, randomly reconnecting with Ron Baecker (my academic great-grandfather), and hearing about his epic new book Computers and Society: Modern Perspectives (Oxford University Press). Then some sleeping, some more sleeping, and if I remember correctly, some sleeping.

Mark receives the award

Friday

Mark Guzdial’s keynote and award

After a night of headaches, chills, and fever, I wasn’t sure I could do another day. But Mark Guzdial was giving his keynote! I’d given some early feedback on it and couldn’t wait to see how it turned out. I also wanted to be there to celebrate the Outstanding Contribution to Computing Education award that the community had given him this year.

He began the keynote by talking about C.P. Snow’s “The Two Cultures and the Scientific Revolution,” which split STEM and the liberal arts, as a way of framing the interdisciplinary nature of computing education research and practice. He talked about how computers were viewed early on as a tool for learning everything, invoking Alan Perlis, who argued that everyone should take a class on computing because computer science is the study of process. He talked about Papert, who also viewed computing as a tool for learning, and about Kay and Goldberg’s “Personal Dynamic Media,” which viewed computing as a tool for thinking about anything. He then moved to Andy diSessa’s Boxer, which positioned computational literacy as a way of revealing the causal relationships between phenomena (e.g., velocity is about moving a position by a certain amount, over and over). Mark’s point in sharing all of this history was that computers are the “master simulator,” because they can be anything, and affect how we learn everything.

After this history, Mark argued that this vision of computational literacy for all has not happened. Most schools do not have CS teachers. Even in the UK, where schools are required to offer CS, not all schools offer a CS exam, and few students take the classes. The U.S. is the same: only a tiny fraction (<1%) of students take CS classes, even in states where a majority of schools offer them. And AP CS A is the most male-dominated of all AP exams. AP CS A had 66K test-takers, whereas AP English had 580K. Overall, more than 90% of students in the U.S. never learn any CS. Bottom line: we’re really far from computing for all.

Mark was most optimistic about approaches to computing for all that integrate into people’s existing learning and work. Bootstrap, for example, integrates CS into algebra, data science, and physics. He is also optimistic about end-user programming, where people learn computing to apply it to their work. Mark saw other opportunities for integration. Definitions of engineering thinking overlap considerably with definitions of computational thinking. The scientific practices in the Next Generation Science Standards also look similar. He even mentioned “historical thinking,” which has similar parallels. To Mark, this is a strong sign that computing can be anything for any subject.

The next section of his talk was about where to start with literacy. He argued that there really is a pretty short list of things about programming that have a lot of expressive power, and a pretty short list of debugging skills (which he admitted we don’t yet teach) that students need in the first 30 minutes of learning to program. More generally, Mark argued that even a small subset of computing can offer a lot of power for learning other subjects.

He illustrated this point by giving a demo in GP Blocks of a program that visualized sound, demonstrating some compelling ideas about human speech. His argument was that the interactivity of computing allows one to get a very different understanding of all kinds of ideas, because it allows one to tinker, experiment, and immediately see the effects of one’s choices. All of this, of course, expressed the same vision as the history Mark had summarized. The difference now, he argued, is that these environments are just one URL away and support so many different domains.

One of Mark’s calls to action was finding allies across academia to deepen our own understanding of CS education. We need people in education research, math education research, physics education research, for example. It’s not enough to just focus on CS teachers. He also argued that we need many more computing education researchers in Colleges of Education (which I heartily agree with and am working hard to achieve at my own institution).

His second call to action was for us to innovate significantly more on programming languages and tools that work for the broad range of domains that we have yet to explore. We’ve only been exploring a very narrow range of media for a narrow range of programs. He suggested that perhaps we’re just at the beginning of understanding the many ways we might express computation. (I couldn’t agree more; as someone who’s worked in end-user programming and built novel languages for 20 years, I’ve long felt that our programming languages have only explored a tiny part of a massive design space).

Some of the audience questions expressed excitement about computing in other disciplines, but noted the challenges of overcoming the tall silos in public education that separate disciplines. Mark shared that many of his peers suggested that such integration is far more feasible in primary school, where integration is already common. Mark’s own opinion was that even within disciplines, there’s a lot of room to apply computing; he shared the example of argumentation, which is common in both the sciences and the humanities, and how computing might support learning about argumentation in these disciplines.

It was so heartening to see a community come together to celebrate one of its great leaders, for his vision, devotion, and scholarship. It’s a much deserved award and perfectly timed at this point in the conference’s history! I can only aspire to be recognized in the same way some day for my work.

James Prather starts presenting his award-winning work

Mistakes and Errors

After some more napping, I made it to this much-anticipated session, both because of my prior interest in errors and because of the connections that errors have to mental models and debugging.

The first talk, by Tobias Kohn, investigated the effects of Python errors on behavior. Tobias built a different parser that gives better errors, both by localizing the defect more precisely and by improving the explanation of the error. His question was whether these improved error messages help students better learn from errors. He gathered several thousand programs that students compiled with his parser. The big question was whether students’ reactions were appropriate with respect to the root cause of the error. He discovered four things:

  1. About a third of errors in the sample were superficial typing errors.
  2. Despite his parser’s improvement, it was still really bad at correctly classifying the root cause of errors.
  3. Students’ edits after receiving error messages closely aligned with the root cause of the error reported, even when their edits were incorrect. Some error messages even led students to form new misconceptions about Python syntax.
  4. Students did not reliably fix errors correctly; they just took the feedback of the compiler and tried to make an edit consistent with the recommendation that would make the error go away. This did not lead to students achieving the learning objectives.

To me, these results suggest that positioning compilers as teaching syntax is highly fraught, but also unavoidable, since this is the interface with which learners must interact. I’d like to see a reinvention of parsers that do a much better job of anticipating the range of misconceptions that a student might have, rather than just making smaller tweaks to error messages.
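As a hypothetical illustration of the gap Tobias described, here's a minimal sketch (my own, not his parser) of wrapping Python's built-in compiler to rephrase one common error in misconception-aware terms; names like student_code are mine:

```python
# A minimal sketch of rephrasing a common Python syntax error in terms of
# the likely misconception (assignment '=' vs. comparison '==').
# Illustrative only; Tobias's parser is far more sophisticated.
student_code = "if x = 5:\n    print(x)\n"

try:
    compile(student_code, "<student>", "exec")
except SyntaxError as error:
    offending_line = error.text or ""
    if "=" in offending_line and "==" not in offending_line:
        print(f"Line {error.lineno}: it looks like you used '=' (assignment) "
              "where you may have meant '==' (comparison).")
    else:
        # Fall back to the compiler's own message.
        print(f"Line {error.lineno}: {error.msg}")
```

Even a rewrite like this names only one misconception; the hard part, as the results above suggest, is anticipating the full range of misconceptions students actually hold.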

The second talk, by James Prather, won one of the research best paper awards. I was really excited to see this because my student Dastyni Loksa helped co-author it, and it was based on Dastyni’s ideas about programming problem solving. The paper concerned students’ general lack of metacognitive awareness about where they are in their problem solving process, specifically at the beginning of solving a problem. It investigated whether asking students to solve a test case immediately after reading a prompt would help learners develop some metacognitive awareness of their understanding of the problem. The underlying idea was to compel students to verify their understanding before proceeding with trying to solve the problem. They ran an experiment testing the effect of this on solution correctness and learners’ theory of intelligence. The treatment group was more correct and faster at solving the problem (though not statistically significantly so). The change in growth mindset was similarly inconclusive, even though the treatment group showed a stronger growth mindset. While the quantitative results were inconclusive, the qualitative data revealed a higher degree of metacognition and a better understanding of the problem. Half of the control group didn’t read the prompt more than once, whereas others re-read it multiple times, but most of them did not solve the problem correctly. The results suggest (though not strongly) that forcing reflection on understanding can help a lot, but that simply re-reading a problem prompt does not. Clearly there’s a need for some replication here with a larger sample; James said they now have a sample of more than 1,000 that they’ll be reporting on in the future.
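As a hypothetical illustration of the intervention's flavor (my own toy example, not the study's actual tool), the idea is that before writing any code, a learner must first answer a concrete test case about the prompt:

```python
# Prompt: "Write a function that returns the number of vowels in a string."
# Before coding, the learner must first answer a concrete test case,
# verifying that they understood the problem. (Illustrative sketch only.)
test_input = "banana"
learner_answer = 3  # the learner supplies this answer themselves

# The system checks the learner's answer against the true result.
expected = sum(c in "aeiou" for c in test_input)
assert learner_answer == expected, "Re-read the prompt before coding!"
print("Great: you understand the problem. Now write the code.")
```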

The third talk, by Rebecca Smith, was on learners’ trajectories of errors throughout a programming session. The paper was a broad quantitative descriptive study. To identify errors, they generated a bunch of test cases for a set of specific problems that students had worked on in the context of a Python MOOC. The most common errors by far were syntax, name, and type errors, and students were pretty similar in their distributions of errors. Most students only revised their program once or twice, but there were definitely outliers who had to revise their programs dozens of times, particularly for key, index, and divide-by-zero errors. Like most “fishing expeditions” in large data sets, it’s a little unclear what to take away from such studies. What do we do with this model of Python errors? Rebecca argued that we might use this data to prioritize instructional goals in Python courses.
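Here's a minimal sketch (mine, not the paper's actual infrastructure) of how running generated test cases against a submission can label failures by Python error type, which is the kind of classification such error counts rest on:

```python
# Classify a student function's failure on one test case by error type.
def classify_error(student_fn, test_input):
    try:
        student_fn(test_input)
        return None  # no error on this input
    except Exception as error:
        return type(error).__name__  # e.g., 'TypeError', 'ZeroDivisionError'

# A buggy student solution: fails when the input list is empty.
def average(numbers):
    return sum(numbers) / len(numbers)

print(classify_error(average, [1, 2, 3]))  # None
print(classify_error(average, []))         # 'ZeroDivisionError'
```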

Benji Xie and Matt Davidson prepare to present

Assessment and instruments

After sneaking in some more rest, the next session I attended was on assessment, and it began with one of my lab’s papers, an Item Response Theory (IRT) analysis of the SCS1 assessment. Benji did a great job summarizing the work in a Medium post. The gist of the paper’s discoveries is that the SCS1 is really hard, and that there are several items in particular that may deviate significantly from the rest of the CS1 scope covered by the assessment. More broadly, the paper also attempted to give an exemplar for applying IRT to an assessment. We didn’t really know how the audience would react, but the Q&A revealed a lot of eager questions about tools for doing IRT analysis, IRT models for different types of questions, and how much data is necessary for IRT (the answer is 500+). There were also concerns about how to fit this into busy teaching schedules.
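For readers new to IRT, the heart of the approach can be expressed with the standard two-parameter logistic (2PL) model (shown here as general background; see the paper for the exact model we fit). The probability that learner $j$ answers item $i$ correctly is modeled as:

$$P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}}$$

where $\theta_j$ is the learner's latent ability, $b_i$ is the item's difficulty (the ability at which a learner has a 50% chance of answering correctly), and $a_i$ is the item's discrimination (how sharply the item separates learners above and below that threshold). Fitting these parameters per item is what lets an analysis flag items that are unusually hard or that discriminate poorly.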

Monica McGill and Tom McKlin gave the second talk in the session, on noncognitive constructs in evaluation instruments. These are factors like self-efficacy, study skills, support of family and friends, sense of belonging, and other factors that affect behavior. In most domains, these are understood to be interdependent, and sometimes all correlate with each other. The big question the speakers asked was which constructs our community is using, and which ones we are not. They began with a list of noncognitive factors. They then (cleverly) used the csedresearch.org website, which has metadata on evaluation instruments from studies, along with other snowball-sampled instruments. They basically found that there are a bunch of factors we’re not measuring, particularly around metacognition and social and familial relationships. This suggests we need to start building some instruments, but also that we need to find a way of disseminating them to all researchers.

Blair talking about cyber threats

Saturday

At this point, I was feeling a tiny bit better. My voice was coming back, my chills were gone. I was still quite fatigued, but I needed to pull through: a keynote, one more student talk to attend, a session I needed to chair, a closing lunch, and then travel home. I could do it!

Blair Taylor’s keynote on cybersecurity education

I wasn’t sure what to expect from this talk. Blair Taylor has been doing work on cybersecurity education for many years and came to speak on how cybersecurity education can transform all of CS education. She began by telling her own story, which was quite circuitous: majoring in mathematical sciences, working in industry on horse race wagering IT, then teaching for 30 years at a community college. She eventually started a doctorate at UMBC, then joined Towson as a visiting professor. She eventually finished her Ph.D. and started doing work on cybersecurity education.

Her work began with integrating secure coding practices into a CS1 class, calling the program “secureinjections.” She also started a program called SPLASH (“Secure Programming Logic Aimed at Students in High School”), which focused on secure coding for girls. Her third big project has been a national cybersecurity curriculum (www.clark.center), which has a ton of learning materials that can be composed into small modules of a course or an entire curriculum.

All that work was fine, but for one damning problem with her delivery: in what I think was an attempt to be funny, she assaulted the audience with a stream of terrible, insulting, offensive, and divisive stereotypes. Girls like boys. We’re all geeks. Security isn’t political. Girls are more sensitive. Every minute there was some new sexist statement, coming from one of the few women who does cybersecurity. Attendees were walking out in the middle of the talk, not wanting to watch the train wreck. (More concerning to me were the people in the audience actually laughing at the jokes, because they all came at the expense of underrepresented groups in computing.)

The conference organizers sent an apology email about the keynote immediately after. They plan on writing a more detailed note later, and on thinking about how to prevent this from happening again in the future.

Saba Kawas presenting our work on professional development for teaching accessibility

Teacher professional development

After darting out of the terrible final keynote, I went to a short session on teacher professional development. It had just two papers: one on in-service teachers’ computational thinking practices, and another from my lab (our third research paper of the conference), assessing the feasibility of training higher education faculty on how to teach accessibility.

Kong Siu Cheung from the Education University of Hong Kong presented work from a 4-year project on training teachers on how to teach programming. He has a cohort of 80 teachers in their third year and spoke on trying to improve their computational thinking practices. They used Brennan and Resnick’s computational thinking framework, defining practices around testing, debugging, reusing, remixing, abstracting, and algorithmic thinking. They defined each of these practices and designed instruments that asked a series of questions about each skill, but the example questions he showed all seemed like assessments of content knowledge about programming, not pedagogy. The courses the teachers took were aimed at primary schools and focused on content expertise for Scratch and App Inventor, and on pedagogy for solving problems that occur in the classroom. They tested out the instruments and showed improvements in most dimensions across three courses, but it wasn’t clear that the instruments were reliable or that the courses were good preparation for what was assessed.

Ours was the second talk. Saba Kawas, the lead author of the work, shared our effort to explore how to educate higher education faculty on teaching about accessibility in their CS courses. We explored how to provide “micro” professional development by creating a mapping between CS learning objectives and accessibility learning objectives, then linking examples of learning materials for each of these mappings. We made a tool that lets CS teachers find the objectives they teach, quickly connecting them to relevant material. However, our goal was not to build the tool, but to build enough of a prototype to assess whether such a tool would actually help higher education faculty do some integration. Our results showed that some faculty felt they could use the content immediately; others still faced personal, organizational, or structural barriers to incorporating accessibility topics into their courses. Nevertheless, our evidence suggested that investing in creating such online materials could have a lot of impact if adequately marketed.
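As a hypothetical sketch of the kind of mapping the prototype exposed (illustrative data and placeholder links, not the tool's actual content):

```python
# An illustrative mapping from CS learning objectives to accessibility
# objectives and materials. All entries here are made up for illustration.
MAPPING = {
    "CS1: handle user input with conditionals": {
        "accessibility_objective": "Validate input in ways that work with assistive technology",
        "materials": ["https://example.edu/a11y-input-lesson"],  # placeholder URL
    },
    "Web development: structure pages with HTML": {
        "accessibility_objective": "Use semantic elements so screen readers can navigate",
        "materials": ["https://example.edu/semantic-html-lesson"],  # placeholder URL
    },
}

# A teacher looks up an objective they already teach and gets a pointer
# to relevant accessibility material.
for cs_objective, entry in MAPPING.items():
    print(cs_objective, "->", entry["accessibility_objective"])
```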

Reflections on the state of the field

During the closing lunch, and on my way to the airport, I ran into at least a dozen people I’d wanted to reconnect with. “I didn’t know you were here!” I basically wasn’t. But it reminded me how wonderfully supportive and well connected this community is, both to long-term contributors and to its newcomers. I think it’s these relational bonds that are going to carry it far into the future.

That said, the conference has all of the tensions I believe it’s always had. For a teacher-researcher like myself, it’s hard to come to a conference that is 80% teachers without research experience, because very few of the conversations begin from the premise of skepticism. Rather, the community inherits a lot of the culture of academic CS, bringing overconfidence and feigned objectivity to many critical and challenging open questions about CS learning and teaching. I always have to warn my students before they attend SIGCSE that it’s not a place for deep and nuanced discussions about learning, nor is it a place to get critical feedback about their ideas.

It is, however, a wonderful place to be immersed in the concerns of CS teachers and their perceptions of evidence. It’s a great place for researchers to practice communicating about research to non-researchers, and really sharpen one’s skills in trying to apply basic discoveries to practice, even when there’s a dire lack of evidence to do so. After all, CS teachers need our discoveries now, even if they aren’t ready.

Of course, one final tension is that many CS teachers don’t think they need research discoveries. They don’t see the value of basic research, they don’t think we understand the complexities of their work, and they often don’t trust our form of knowledge. Some have admitted to me that they’re much more likely to trust a teacher’s experience report than a researcher’s research paper, because it comes from a teacher. This is despite the fact that most computing education researchers, like myself, are also CS teachers, just like them! Perhaps there’s a sense of intellectual superiority that inevitably comes from scientific skepticism and the tenure-track/teaching-track divide in academia.

To me, these tensions are the fundamental work of this community in the next 50 years. We have to find ways of teaching computing education researchers how to communicate effectively with CS teachers, and we have to find ways to educate CS teachers to appreciate, interpret, and apply research. Only when there’s a seamless dialogue between these two practices will CS teaching improve. And if there’s anywhere we can figure out how to do this, it’s the SIGCSE technical symposium.

See everyone I missed next year in my home town of Portland, Oregon!
