Dagstuhl trip report: Educational programming languages
I don’t know about you, but I think I go to too many Dagstuhl workshops. This is my third this calendar year, and tenth ever in 20 years of research, which feels egregious (and my wife agrees). It also makes me feel guilty: there are so many outstanding scholars who should be able to attend these, especially doctoral students. This is partly why I try to write trip reports: I don’t like that so many important voices cannot participate in these small group events, and sharing what happened publicly is one way to provide at least some access to the invaluable insights I gain at them (even at the risk of creating some painful fear of missing out).
But I’ve always wanted to design another programming language, and this time, release it fully to the world to use. And so during this sabbatical, that’s what I’m doing. I’m being relatively secretive about it, not out of fear, but for a fun kind of marketing mystique. I’ve looked forward to this Dagstuhl all year as a week to geek out on what it means to make languages for learning, and to learn everything that the much more experienced language designers here have discovered in making, maintaining, and teaching with this unique class of computational media.
Sunday was mostly a travel day. I arrived around noon, grabbed a quick lunch, and then completely failed to catch my train because it was so full. Fortunately, I caught up with Michael Ball and his travel partner Lauren and we strategized our way to a later and less full train and eventually found our way to Dagstuhl. I had a lovely relaxing buffet dinner and conversation with several of the attendees, then slowly unpacked, forcing myself to stay awake until about 9:30 pm.
Monday: Tutorials, Games, Scaffolding, Design
I started off Monday with a lovely conversation with Jens Mönig about the origins of Snap, the many challenges in doing programming language design work in the open, and the inevitable ways that people find unexpected uses for a language. We talked about some of my ideas about using storytelling lore to bring joy to learning a language.
Shriram Krishnamurthi, Mark Guzdial, Neil Brown, and Jens Mönig were the workshop’s organizers. They kicked off the workshop by reviewing the COVID policy, the code of conduct, and the Dagstuhl norms, such as the open breakfast seating, the assigned lunch and dinner seating, the breaks and snacks sessions, and some coordination on our excursion to a recreation of a Roman villa.
The first day was 10 minute presentations from attendees about their experiences building educational programming languages and systems. First up was Tobias Kohn, who wrote an environment called TigerJython, which uses Java and Python underneath to require very little configuration. Tobias had some nice critiques of pedagogical use of error messages and consoles, observing the ways that teachers often need to use IDE output to facilitate learning. He also talked about the challenges of using Java on iPads, which are increasingly ubiquitous in schools.
Felienne Hermans spoke next about Hedy, a “gradual” programming language, in that language features are slowly revealed over a sequence of instruction. She started by talking about cognitive load, error messages as teachers, and the way many people without CS teachers used compilers as their teachers. She observed that none of this really works, because there’s just so much complexity in syntax that forces instruction to be done in a particular sequence. Hedy avoids this by gradually adding syntactic rules over time, rather than all at once. She also talked about clever pedagogical design choices, like making text console output more fun by feeding it into text to speech or embroidery machines, and supporting multiple languages, especially in error messages.
Next my former PhD student Michael Lee presented on Gidget, a programming game that teaches programming concepts through debugging (Mike started this work while he was a student in my lab). He talked about some of the key principles behind the game’s design, such as positioning the compiler as fallible and providing granular access to program execution. Gidget has a whole range of interesting unique features, like frustration detection hints, embedded formative assessments, avoiding terms of violence (e.g., “remove” instead of “destroy”), choosing pro-social game motives (e.g., saving animals), and automatically generated levels to improve learning.
There was a general discussion afterwards. One interesting debate that arose was about perceptions of block-based editors as being for children, creating some resistance in classrooms. This was particularly problematic in The Netherlands, where most children use blocks in primary school. There was also an interesting conversation about being able to customize or “skin” environments to tailor them to one’s own language and visual aesthetic preferences.
During the break I had a great conversation with Michael Ball and Felienne about the many low-level implications of language grammar design for supporting multiple languages and cultures. Snap, Hedy, and my own emerging language Wordplay have all dealt with this in different ways, and I tried to get a sense from Michael and Felienne of the choices that ended up excluding some users, such as building particular grammatical assumptions into a syntax.
When we resumed, Diana Franklin kicked us off by talking about learning strategies. She talked about “THIEVES”, a reading comprehension strategy, as an example of a way of scaffolding learning, and how that and other research inspired programming learning strategies like TIPP&SEE for programming in Scratch. The idea behind TIPP&SEE was to pull information from a project page and then systematically examine sprites, events, and program behavior. She talked about some of the tensions between producing products and deep comprehension (e.g., people creating very simple things with code, but then struggling when they try to go deeper).
Eva Marinus (Schwyz University of Teacher Education) spoke next about cognitive skills and programming, particularly the cognitive benefits of programming. She surveyed prior work on these benefits, especially the Scherer and Siddiq paper from 2019, which had a surprisingly high effect size for transfer of “creative thinking”, but also some suspect analyses around sample sizes and outlier exclusion, and was correlational, not causal. Bottom line: we still don’t have strong evidence of transfer of any kind.
After, Ethel Tshukudu (University of Botswana) spoke about conceptual transfer in learning programming languages, her dissertation topic. She presented numerous studies and found that syntactic similarities can play a big role and that teacher transfer interventions can be quite important. The most disruptive thing for transfer was similar syntax with different semantics; different syntax with similar semantics led to no transfer. I found her work on instructional strategies for transfer particularly interesting; there’s so much we don’t yet understand about how to make language transfer successful, but Ethel’s work provides a great foundation.
The last speaker was Barb Ericson (University of Michigan), who spoke about Parsons problems and the many benefits of scaffolding, especially with adaptive problems that helped learners and teachers learn and teach more effectively. The adaptation was largely a hinting system that removed distractors, provided indentation, or combined blocks that needed to be adjacent, but there was also between-problem adaptation, which changed subsequent problems based on hint requests in previous problems.
At lunch, we had a lively conversation about the cultures of art and intuition that often arise in programming communities. We talked about the many surprising ways that this gets interwoven with formal reasoning and how art and logic end up interacting with each other.
After lunch, Barb spoke again about interactive eBooks for learning to code. She talked about Runestone, a way of democratizing books for the 21st century. She toured the ebooks’ many features, including multiple choice questions, Parsons problems, unit tests, and analytics, all forming the basis of deliberate programming practice.
Bastiaan Heeren (Open Universiteit) talked about Ask-Elle, a programming tutoring system that tries to support a mix of top-down and bottom-up program construction, offering aid with syntax, semantic, and logic problems, as well as refactoring support. The grand challenge they are tackling is inferring what learners are trying to do and offering supportive hints in context.
I spoke next about Greg Nelson’s work on PLTutor, which was a (successful!) attempt to teach programming language semantics at a very low level of granularity, to build more robust knowledge about programming languages. I gave a demo, talked about our evaluation, and shared some of Greg’s unreported experiences using it in teaching and tutoring.
Michael Ball (UC Berkeley) gave a talk on Snap, talking about building blocks with higher-order functions: blocks that take other blocks as arguments, but also new blocks that are effectively higher-order functions. He also demonstrated ways of restricting the blocks palette to create micro-worlds, focusing student attention on particular features. A lot of the features focused on giving teachers customizable control over the environment.
Rather than going to the coffee break next, we squeezed in the last two talks so that we would have the afternoon free. First up was Mark Guzdial (University of Michigan), who talked about his experiences with participatory design. He gave examples of participatory design in online degree planning, social studies data visualization, and computation for expression and computing for justice. The key insight of his experience was going to domain experts and learning from them, in partnership, and the need for careful design of materials for partners to comment on.
To wrap up, Felienne gave one more talk about multiple language support in Hedy, covering variable names, keywords, numerals, and punctuation. She started by asking attendees why they had localized their own tools: some mentioned EU requirements, some mentioned building on prior knowledge, some mentioned students skipping English. There are also practical issues, like having to switch keyboards to access characters. One consideration is tokenization, which has to be carefully constructed to allow for all languages. Keywords are another consideration, which is relatively straightforward: just allow a keyword to be any one of a set of localized forms. Another consideration is right-to-left languages, which is also relatively easy, as it’s always just a rendering issue, not a parsing issue. Punctuation is more challenging; in some scripts, for example, the comma sits on the left side of words rather than the right, and is an inverted form. Numerals are another challenge, as different languages have different numeral systems; these have to work in tokenization but also in program output. (And some languages write numbers left to right, even though the rest of the text is right to left.) Her general takeaway was to assume nothing :)
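To make the “keywords as a set” idea concrete, here is a minimal sketch of a tokenizer that accepts any localized spelling of a keyword and normalizes it to a canonical form. The keyword tables and function names here are my own hypothetical illustration, not Hedy’s actual implementation:

```python
# Hypothetical keyword table: canonical keyword -> accepted localized forms.
# (Illustrative only; real localization tables are far larger.)
KEYWORDS = {
    "print": {"print", "imprimir", "affiche"},
    "ask": {"ask", "preguntar", "demande"},
}

# Invert the table for lookup: localized form -> canonical keyword.
LOOKUP = {form: canon for canon, forms in KEYWORDS.items() for form in forms}

def tokenize(line):
    """Split a line into tokens, normalizing any localized keyword."""
    return [("KEYWORD", LOOKUP[word]) if word in LOOKUP else ("WORD", word)
            for word in line.split()]
```

With this approach, `tokenize("imprimir hola")` and `tokenize("print hola")` produce the same keyword token, which is one way a parser can stay language-agnostic while the surface syntax is localized.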
We ended the session with a rich discussion of the opportunities and limitations of participatory design, in general, and in the specific case of programming language and system design. Much of where we landed was on the intricate difficulties of simply being participatory.
After a lovely dinner, we had an informal panel about learning at scale, talking about the intersection between educational programming languages and teaching large groups. Michael Ball, Barb Ericson, and Kathi Fisler all spoke and answered questions, talking about the many mundane things that can interfere with learning, including tool configuration and installation, the limits of automation, the many misalignments between requirements and tools, and the endless tensions between different notions of plagiarism. At one point, we went around the group and everyone shared their experiences, and many reported the same challenges for technical courses. Interestingly, many also reported teaching large non-technical courses and having far fewer issues.
Tuesday: Representations, Planning, Feedback
At breakfast, we had an interesting discussion about finding resources to sustain educational programming languages, and discussed the many surprising sources of funding, such as corporate jobs with education branches, research foundations, and traditional academic positions with increasingly unstable futures.
We started Tuesday morning with more talks about tools and platforms. Kenichi Asai (Ochanomizu University) started, talking about a tool called OCaml Blockly, a block-based editor for OCaml code. He built it because students in his 2nd year CS course were struggling with OCaml syntax. It prevents syntax errors, type errors, and scoping errors by only allowing valid edits, following the now conventional drag-and-drop interaction for editing abstract syntax trees. He also added simple refactorings such as renaming. He reflected on how students use it, observing that many were able to transition to editing text programs, though the transition wasn’t always smooth. He also talked about maintenance: it was developed by a student who has since graduated :)
Youyou Cong (Tokyo Institute of Technology) spoke next about an environment called Mio, a block-based editor for Scala that scaffolds use of the How to Design Programs design recipe. She observed that many students give up on the recipe and wanted to find a way of encouraging them to persist with it. It essentially provides a way of constructing program plans and gives feedback on the four design steps in the recipe. Informal student feedback was that it was fun and convenient, but more experienced students found the scaffolding too much.
Neil Brown (King’s College London) spoke next about “structural expressions” in block editing. He tackled the particular problem of the laborious process of dragging blocks in top-down tree order. He demoed frame-based editors, which have a cursor and keyboard shortcuts, plus a plain text field for expressions that get translated into blocks after entry. It’s essentially a keyboard interaction for structured editing. He talked about a downside of structured editing: the shortest path between two valid states is often an invalid state, which is disallowed. This reminded me of my 2005 study on text editing, which showed exactly that.
Jens Mönig (SAP) took the stage next and talked about his motivations for block-oriented programming. He talked about stories and bringing meaning to what people create, and the people who have inspired him to change how we compute, such as Mitch Resnick, Seymour Papert, and others. He talked about what he dislikes about “adult” programming languages, how they don’t really add anything other than difficulty, and how programming has to be about more than just understanding computer science: it should be about using computing to understand something about the world around us.
During the break I had a great chat with Neil about all of the subtle difficulties of trying to design seamless structured editors. There are just so many conventions to compete with and also so many complex rules that programming languages invent that the design space ends up being very nuanced.
After the break, we returned to more short talks. Kathi Fisler started with a talk on planning (programs). Kathi talked about the many challenges that students face in trying to write programs successfully. She pondered why planning is interesting from a teaching and learning perspective and worked through an example problem with multiple different solutions. She talked about listing tasks (or “subgoals”) and trying to map them onto the different parts of different solutions. We had an interesting discussion afterwards about the many self-regulation problems that students have in trying to follow planning processes and strategies.
Shriram spoke next, talking about how to get students to provide plans. He pointed out some interesting things about Snap, which lets people write all kinds of interesting descriptions of their solutions in code, even when it’s not valid; there’s a pedagogical move in using Snap to solicit plans. On a separate topic, Shriram talked about student confusion about stacks and frame pointers; they asked students to try to describe what was happening during program execution, and their descriptions and diagrams were largely inaccurate.
After a great non-public demo by Ben Shapiro, Johan Jeuring talked about giving timely feedback to novice programmers. The big question was when and how to give feedback. He talked about modeling expert behavior, shaping hints based on specific expert milestones, and generating hints at particular points in problem solving. He also noted how frequently expert annotators didn’t agree on expert steps.
After lunch, Kathi talked about higher order functions, such as maps, filters, and other functions that take functions to operate on data structures. She gave an example of two functions that basically have the same structure and how she uses this repetitive example to explain the value of higher order functions. The strategy she illustrated was basically one of promoting reuse. As an alternative, she illustrated a different idea, which teaches higher order functions as input/output pairs that illustrate different higher order function behaviors. They even abstracted this away to just show colored boxes, and not code.
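The repetitive example Kathi described might look something like this (the specific functions here are my own illustration, not hers): two functions with identical structure, differing only in the operation applied to each element, which motivates factoring that structure into a higher-order function.

```python
# Two functions with the same shape: build a list by applying an
# operation to each element. Only the operation differs.
def double_all(numbers):
    result = []
    for n in numbers:
        result.append(n * 2)
    return result

def square_all(numbers):
    result = []
    for n in numbers:
        result.append(n * n)
    return result

# The shared structure, factored out as a higher-order function that
# takes the per-element operation as an argument (i.e., map).
def apply_all(operation, numbers):
    return [operation(n) for n in numbers]
```

Here `apply_all(lambda n: n * 2, nums)` reproduces `double_all(nums)`, making the value of the abstraction visible: one general function replaces a family of near-duplicates.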
Jadga Hügle also talked about higher order functions in Snap. There were many interesting interactive features of this, such as representing lambdas with a grey wrapper and some interestingly descriptive syntactic features to describe the behaviors of the higher order functions (e.g., map ___ over ___). She also demonstrated this on some really interesting table data manipulations, which processed rows of tabular data.
Shriram then talked about reactive programming in Racket and Pyret. He started by setting context about what reactive programming is. The key idea he shared was that normal programs begin and end; a reactive program doesn’t end, but re-runs every time it receives a new input from the outside world. He demoed some examples of this in DrRacket, and noted that some teachers react to it positively, but it requires a radical change to the language to make work. The other version of reactivity he presented was slightly more complex, using the analogy of movies as moving pictures to describe reactive programming as a series of transformations from one model to another. The group then had a bigger discussion about the many ways that programming language abstractions can make some things easier and some things harder, and how what to choose pedagogically often depends on what’s being taught and what kind of abstractions it may benefit from.
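A minimal way to sketch that “series of transformations from one model to another” framing is below. This is my own illustration in Python, not Pyret’s or Racket’s actual reactive API: the program is a pure update function, and a tiny runtime re-runs it for every input event.

```python
# A toy reactive model: the "program" is a pure transformation from
# the current model and an input event to the next model.
def update(model, event):
    """Compute the next model; here the model is just a tick counter."""
    if event == "tick":
        return model + 1
    return model  # ignore events we don't handle

def run(events, initial=0):
    """A tiny 'runtime': replay the update function over an event stream.
    A real reactive system would loop forever on live input instead."""
    model = initial
    for event in events:
        model = update(model, event)
    return model
```

The pedagogical point survives even in this sketch: nothing in `update` loops or waits; all the “never ending” behavior lives in the runtime that keeps feeding it events.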
In the second to last talk of the day, Barb Ericson took the stage again to talk about peer instruction, which has been well demonstrated to be highly effective at improving learning gains. She talked about all of the practical pain points of using peer instruction, such as finding content, getting clickers, etc. To address these, she built peer instruction questions into her Runestone ebook, with an integrated chat.
Lastly, Diana Franklin talked about learning trajectories, which are paths from existing knowledge to some learning goal. She described the notion of thinking about the independent paths that students might take to find their way with a teacher’s help to a learning goal and reminded us that learning trajectories are all about content knowledge, not pedagogy. She talked about some of the benefits of learning trajectories, such as slowing down pace, allowing for more gradual practice, and making the purpose of teaching more explicit. She described a vision of teaching CS that was much more stretched out across primary and secondary, rather than cramming everything into these really tight timelines.
After a short hike and tasty dinner of fish and ratatouille, we had an informal outdoor panel about evaluating educational programming languages and systems. The conversation was wide ranging, but converged around a few core issues:
- The field really needs to stop asking largely useless “is X better than Y” questions and focus more on understanding why, when, and for whom something is useful.
- The field needs to embrace a multiplicity of methods and epistemologies as well as mixed and negative results, productive student transgression, and artifact evaluations.
- The field needs to center the questions that teachers and students have about systems in its work, potentially engaging in more research practice partnerships.
- The field has to recognize the limits of evidence and acknowledge the power of (pop) culture in shaping what people use and why.
- While there are many systems level issues in academia that warp incentives for all of the above, there are some things we can control, like our journal and conference policies.
After the panel, I had a lively conversation with several about how we might restructure and partner ICER and TOCE to realize some of the ideas above.
Wednesday: Notional Machines, Tables, APIs, AI
We started the morning with talks about debugging and runtime. Tobias Kohn presented a Python debugger, positioned more as a tool for program comprehension than debugging. His approach was to bring a state visualization, much like (and inspired by) Python Tutor, but embedded in an editor. He tried to improve upon the visualization from a layout perspective, but also made color a first-class value in the view, allowing a bit more expressiveness for pedagogy about interfaces. He talked about the particular challenge of object pointers, following a principle of having a single representation of an object in a view, with arrows that point to it.
Kenichi spoke next about his OCaml stepper, which allows for subexpression stepping, showing each expression’s value in place. The stepper is incremental in that the next step is computed on demand; nothing is precomputed. This required a lot of careful attention to handling infinite loops, since detecting them is undecidable.
After Kenichi spoke, Shriram argued that there are many different ways of depicting runtime behavior; there can be more than one representation, each visualizing computation differently. He demonstrated many different tools for doing this and talked about pedagogy for using different representations to help someone understand semantics. We had a discussion about the potential benefits of depicting computation over time as a flat, static visualization rather than as an animation.
After a short break, Kathi talked about tabular data. She’d been teaching an introductory data science course. She made an argument for using tables as a first data structure, especially as an opportunity for talking about the social impacts of analyzing data. Tables offer rich structured data in a familiar format, interesting questions, data engineering, and even motivation for program reuse. She presented two different representations, one more like SQL and one more functional, and we had a fun debate about who might benefit more from each.
Neil also talked about tables, and a tool for processing tables of data. In his system, all tables are immutable, so every operation creates a new table. The system had many nice features, like extensive autocomplete on expressions for transforming tables, examples that served as table-level unit tests (which he simply called “tests”), and many helpful dialogs for each type of transformation.
Elena Glassman gave an impromptu talk about a paper to appear on annotating code examples with concepts. She framed it as closely related to planning, in that it helps people compare different libraries that have similar purposes. They created concept annotations for library examples, labeling the concepts that each library invoked, and explored how this would affect people’s sensemaking about libraries. They found that people saw value in doing feasibility comparisons, even with the immense volume of information presented.
The last talk of the session was from Mark, who talked about teaspoon languages (so named because they offer just a “teaspoon” of computation in other domains). His primary goal is bringing computing to the vast majority of people who will never take a CS class. His core requirements for a teaspoon language are that it be task specific, useful for that task, and learnable in 10 minutes. He gave an example in history, showing just how different history teachers’ requirements are from the conventional requirements built into CS contexts. He showed another example for math, where they worked on image transformation through equations, and a third teaching combinatorics by generating all the outcomes. He’s generally found that adoption isn’t so much bound to usefulness or usability as to convention and popularity.
After a buffet lunch, we had a few more talks, cutely described with a Britishism as “allsorts”. Mark talked about HyperCard, which most of the group had not seen or heard of. He shared a few interesting features from it, including different privilege levels (which changed how much support and capability a user gets), event handlers (for responding to button events), and the quirky language design choices of HyperTalk. We had some fun with oink sounds, and then Mark showed a more modern descendant of it called LiveCode.
Janet Siegmund spoke next about her fMRI work trying to understand program comprehension. Some of her work found that much of program comprehension involves language processing, working memory, and divided attention. They tinkered with source code with varying degrees of meaningful identifiers, forcing different types of information processing, revealing that taking identifier information away forced a kind of dependency analysis instead of linguistic analysis. She also talked about a fun study with unplugged Scratch Jr.
Elena then introduced herself and talked about her “Engineering Usable Interactive Systems” course, which taught structured design arguments and uncertainty minimization. She’s adding programming back into this HCI course and had many questions about how to do it. She then talked about Daniel Jackson’s The Essence of Software, as a potential way of trying to introduce software design, and some of the tensions between it and actually engineering designs.
Shriram gave one more talk, this time briefly about a new interface for Pyret, which offers a bit of a REPL with history, much like a chat metaphor, but allows previous expressions to be edited to fix errors. Then he did a sharp pivot to teaching programming language courses through a pedagogy he called mystery languages.
After dinner, there was a panel on AI in Computing Education. I was only able to make the first portion where the panelists (Ben Shapiro, Michael Ball, and Bastiaan Heeren) gave their positions, but the general sentiment was that:
- We can’t ignore AI; it’s here, and so we have to figure out how to teach its good and bad parts.
- There are significant questions about whether it’s an appropriate technology for learning, especially data-driven AI, which has a tendency to focus on aggregate, normative behavior at the expense of those facing inequities or those with idiosyncratic but creative insights and solutions.
- There are so many ways in which CS educators should not be teaching AI alone; social science has much to say that students need to hear, and CS educators shouldn’t pretend to be experts on the social impacts of AI.
After my grant meeting, I came back and had a great discussion with Elena, Ben, and Michael Ball about GitHub Copilot and the many caveats in teaching front end web frameworks in HCI courses.
Thursday: Breakouts, Writing, Excursion
On Thursday, we were done with presentations and began interacting in small groups about topics that arose in the first half of the week. I joined a group talking about “program representations” with Kathi, Ethel, Michael, Tobias, and Janet. Our group discussion was wide ranging, but made several interesting observations:
- There are many questions about why we need representations; are they a means to greater understanding or an essential part of programming and learning?
- There are also many questions about whether automated computational tools for generating representations of program behavior are essential, or just a bias we have as a discipline; is it possible that whiteboarding and sketching skills should get our attention instead?
- There are some ways that representations may benefit from social settings, using communication as a vehicle to generate representations as needed.
- Representations may also need to leverage prior knowledge. One example of this is having domain knowledge about the data passing through algorithms, especially personal data. This can promote greater comprehension by leveraging learners’ assets.
- There may be some value in trying to name some of the many representations that we’ve invented and teach people how to use them flexibly to reason about algorithms and program behavior. Maybe they will be compelling to the extent they’re situated in relevant domains and personal data.
In the session after, we crowdsourced a few ideas for some brief shared writing and settled on “What studies should we do together?” and “What have we learned from building, studying, deploying, funding, and/or maintaining tools?” We collaboratively wrote and brainstormed about both of these for about 45 minutes, coming up with dozens of interesting ideas for studies and grants, as well as many reflections on the challenges of building robust platforms in the context of small education-focused teams.
In the afternoon, I joined Mark, Barb, Diana, Michael, Tobias, Eva, and Johan for an outing to Villa Borg, the reconstructed former site of a Roman villa. We had a nice tour, I had a nice nap in the sun on a beautiful wooden bench, and then we had an immense dinner at the villa’s restaurant.
We ended the workshop with two additional breakout sessions. I attended one on project sustainability, exploring how teams can fund and maintain educational programming systems over time, especially given the lack of dedicated revenue streams. We identified several funding strategies:
- Whatever the source of funding, it’s key to align a project’s fundraising strategy with the incentives, constraints, and expectations of the funding climate. For some, that might mean particular kinds of scholarship to justify sustained funding; for others, adoption numbers; and for others still, demonstrated impact on education systems. Thinking carefully about these incentives is important, as they can often have novelty biases, rigor biases, and technology biases. Accounting for these biases, and ensuring they don’t end up warping the scholarship, requires explicit planning.
- Another consideration is how much funding is restricted toward particular expenses; obviously, having unrestricted funding is the most valuable, as it allows for a project to meet whatever needs come, rather than being constrained. It’s also the least common and hardest to obtain.
- There are almost always politics that influence how long money can be held and what it is spent on, whether it’s a corporate politics game or an academic or foundation politics game. It’s key to have someone who can manage those politics and relationships and ensure that they do not interfere with project goals in problematic ways. That could be a project lead, or a principal investigator, or even a funder or corporate partner who provides cover.
- There was a tangible sense of a continuum from deception to omission when it came to reporting and persuading funders. Everyone agreed that deception is unacceptable, but everyone also agreed that sometimes omission was necessary to amplify the outcomes that a funder might care most about, while hiding other details that might be in alignment with academic or innovation goals, but in tension with funder goals.
- It was quite common for successful projects to have one or two people with semi-permanent positions as the backbone of a project, from a maintenance perspective and from a resource perspective. This might be a corporate job or academic position. But it also means that projects have a single point of failure.
- Backend infrastructure can be a key risk to project sustainability. It requires regular maintenance, cloud costs, and staffing, so committing to it is a significant decision with long term consequences. Using university bandwidth and hosting is one way to avoid these costs, but it is often restricted to static hosting or imposes costs on university IT. The quality of university IT service can also be highly variable, and can require some political negotiation to navigate policy restrictions.
- It’s important to consider other sources of revenue. Communities can be one: user events can have registration fees, which can generate unrestricted funding, though where to store that money can be complicated and impose accounting challenges. Donations are another; sometimes people will just give, especially when requests are targeted towards those with philanthropic capacity. Some talked about offering nearly meaningless premium services that provide almost nothing extra, but allow corporations that are often reluctant to donate to pay for a service instead. None of these alone might sustain a project, but each can help. All of this requires a place to keep money, which may or may not be the backbone organization.
- We also talked about other sources of staffing, such as ways of trying to onboard, supervise, and engage students to contribute, for credit or for modest pay. There are all kinds of challenges of doing this, including ensuring they have sufficient expertise, that there is onboarding for them, and that they get feedback through code reviews or other mechanisms. We also talked about open source contributions and the limited value of “drive by” contributors that don’t have enough context for the project to make meaningful contributions. Some talked about ways of engaging contributors socially first, to get context, and then have them contribute later after they have it.
In addition to just being a relaxing week with great colleagues, it was also an inspiring week. It reminded me not just how creative our community can be in creating learning technologies for computing education, but also how committed: most of the projects I learned about are labors of love done with very few resources. And despite the instability that comes with that, everyone scrappily marches forward, creating the platforms that tens of thousands, hundreds of thousands, and sometimes millions of people use to structure and shape their conceptions of computing. I look forward to spending the rest of my sabbatical joining this club of passionate, underfunded programming systems maintainers, trying to make the lives of learners a little more fun, a little less painful, and in my particular project’s case, a lot more inclusive.