(Almost) the largest CHI ever, which isn’t apparent at all in this front row seat.

CHI 2019 trip report: the interlocking threads of HCI and learning

Amy J. Ko
Bits and Behavior

--

The ACM Conference on Human Factors in Computing Systems (CHI, pronounced “kai”) was my first academic community. And what a community! It’s grown so rapidly that one might imagine it’s incoherent, but I find its radical interdisciplinarity to be its greatest strength. I’ve published most of my work in this community, I’ve learned the most from it, and I always leave the conference with some notable shift in my ideas, perspectives, and beliefs about people and computing.

Unfortunately, for a variety of personal and professional reasons, I haven’t been for two years. I missed South Korea and then I missed Montreal. Missing CHI made me miss CHI, and so this year I was super excited to re-engage, especially since it was in Glasgow, Scotland, which I hadn’t ever visited.

Of course, because CHI is so big (20+ parallel tracks, thousands of people), this trip report couldn’t possibly summarize everything that happened at the conference. Instead, you’ll get my exceptionally narrow view, which this year focused heavily on the intersections between HCI and learning. At the same time, it was a long week with a lot of content, and so this trip report is going to be a bit long. I’ll try to keep it concise!

The conference kicked off with bagpipes, of course

Monday

The conference began with the usual plenary on what is becoming the usual unhinged growth of HCI. The conference was the biggest ever: 3,855 attendees, spanning 68 different countries (though about half were from the US). Nearly half of the attendees were first-timers. And SIGCHI continues to be the second largest ACM special interest group (I don’t recall the largest, but seem to recall it being SIGGRAPH).

The scale of the conference was mirrored by the number of submissions. There was a 14% increase in the papers track, to 2,700+ submissions — that’s not a typo. That’s 11,175 reviews by thousands of reviewers. The committee ultimately accepted 703 papers. These are absurd, incomprehensible numbers, perhaps comical to researchers in other areas of computing. They also mean that about 150 papers were awarded best paper or a best paper nomination (5% of submissions).

I don’t know what to make of this growth. Is CHI a bubble? Or are human interactions with digital technologies just that much more interesting and important than the rest of computing? Or perhaps HCI is just such a radically inclusive scholarly community that everyone finds a space for their ideas? Whatever the reasons, I don’t see HCI shrinking any time soon. If anything, it will continue to be a broad representation of many perspectives on computing in society, drawing upon humanist, artistic, engineering, scientific, and even philosophical perspectives. It’s that intellectual inclusion that draws me back each year. It’s like an intellectual gift bag, full of surprises.

Aleksandra kicks off her highly sensory keynote.

Aleksandra Krotoski reminds us that we have many senses

Aleksandra Krotoski gave the opening keynote. A journalist by trade, she argued that the medium matters. For example, she talked about how the information density of television is very low compared to radio and other media, and discussed how this allows radio to convey more complex ideas. Another example she gave was her very own business card, which she printed with scratch-and-sniff ink; her point was that the business card format actually had a rich capacity for information not possible in other media. She then discussed some interesting observations about how headphones, relative to speakers, had changed the nature of audio as a medium, shifting it from more broadcast-oriented genres to much more personal, podcast-style direct communication. She speculated that this had largely to do with the way that headphones are so centered in the head, whereas speakers in a room are more ambient.

There were a few odd things about her delivery. She is a very successful person, and drew heavily from her many skills and merits, while also appearing somewhat blind to the privileged and successful life she lives. There was a lot of chatter at the conference about how the talk was superficially “cute,” but somewhat ignorant of the many more fundamental issues of justice and inclusion in media.

A panorama of the huge community of HCI researchers interested in learning

Learning+HCI researchers gather

For a long time, the topic of learning had very little presence at CHI. From my earliest days attending in 2003, there would be a paper or two about learning, but it was clearly a niche group, dominated by researchers from Carnegie Mellon University (CMU). This was even more apparent to me when I was a graduate student at CMU, where there was (and still is) a high concentration of learning scientists in the HCI Institute.

How times have changed! My outstanding doctoral student Benji Xie helped organize a 90-minute special interest group (SIG) to bring together the many researchers interested in learning and HCI. He did a fantastic job organizing many faculty and other senior researchers, drawing upon the new CHI subcommittee on Learning, Families, and Education.

The SIG was a huge success. As shown in the photo above, there were more than 50 attendees. The organizers ran it in a “world cafe” format in which each table had one of three topics and a facilitator. We discussed “what does learning mean?”, a conversation in which I encountered many fascinating views of learning as change, joy, survival, mindset, identity, agency, inequity, and even pain. There were fascinating discussions about the intersections between learning and HCI, covering educational technology, learning about interfaces, learning about computing, reframing HCI phenomena as learning phenomena, opportunities for methods exchange, and the need for HCI in learning sciences and education to reduce complexity. And there was also the more epistemological topic of how to evaluate learning interventions, with discussions of the need for epistemological inclusion and the difficulty of measuring learning at single points or over time, especially in informal contexts. I found the most interesting ideas to be ones that linked the many interactions between interfaces and learning, such as poor usability as a limiting factor in the value of educational technologies, the learning of interfaces themselves, and the lurking phenomena of learning in many interactions with computing.

Lunch selfie with one of my academic families!

Lunch with Margaret Burnett and her students

I spent lunch catching up with my undergraduate research advisor, Margaret Burnett, and her many other current and former academic advisees. I had many fascinating discussions: about API learning with new Ph.D. student Amber Horvath, about disseminating Margaret’s research on GenderMag in Namibia, and about the many forms of invisible work that occur in professional practice. I love lunches like these because they’re both reunions and a way of networking with new generations of students. It’s probably time for me to start doing the same, but I’m running out of lunch slots each year!

Work by my academic grandchild, Laton Vermette.

Technology in classrooms

One of my academic grandchildren, Laton Vermette (Parmit Chilana’s student), presented a paper on digital classrooms, studying teachers’ practices and motivations. They had a really diverse set of K-12 teachers in the US and Canada, but because of selection bias, most of the 20 teacher participants considered themselves “tech savvy.” They conducted interviews with each.

They found that most of the adoption of digital technology was to design an ecosystem that jointly improved teacher productivity and student learning. The teachers used an incredible variety of software — 107 different applications — and their usage was incredibly diverse too, as most tools were only used by one of the teachers. Many of the applications were also not intentionally designed as educational technology, but rather appropriated for classroom use and therefore customized or adapted in some way. They described two types of customizations: 1) some that adapted the user interface, such as its layout and visual design, and 2) some that changed the content of the applications. But teachers were often hesitant to make these customizations, because they feared negatively impacting their students by erecting barriers to learning. They had to strike a balance between their own needs for teaching and their students’ learning.

A few thousand people cautiously eating haggis.

Conference reception

The first day of CHI typically ends with the opening of the demonstration and poster exposition. This year’s expo was full of interactive demos, corporate booths, and (radical but unconsidered) visions of our digital future. What I find most fascinating about these settings is how they bring together technologists blindly innovating with the community’s most skeptical critical theorists, leading to what I experience as a joyous interdisciplinary discomfort.

I spent much of the reception catching up with many outstanding folks about access technologies, accessible computing, and teaching accessibility, as well as reconnecting with many junior faculty about their research and daily efforts to balance work and life. In many ways, what I learned most was that the next generation of faculty are far more concerned with balancing their lives, enjoying their work, and having fun than with maximizing their reputation. It’s so much fun to see my communities embracing a more humane vision of academia than some of the horror stories I hear about other communities in academia.

Microsoft Research reception

I spent the evening at an invitation-only Microsoft Research reception, which was held at the lovely Clydeside Distillery along Glasgow’s river. I was really impressed with the new museum, even though it was relatively simplistic, and it was a fantastic place for conversation and networking, full of little nooks and crannies for chatting.

I spent the evening meeting new people, which was a great chance to practice communicating about my lab’s research on learning programming, design literacy, machine learning literacy, and teaching accessibility. I found that, at least how I communicated the ideas, people found the work fascinating and important. I also got to connect with a huge number of doctoral students about their ideas, which I always find invigorating, because they’re full of raw new ideas and perspectives. They reshape my thinking, while I also get to help them mold their thoughts into research trajectories.

The beautiful, quiet walk back to the hotel Monday night.

Tuesday

Monday was already a long day, but rather than going out for drinks and completely exhausting myself, I decided to rest up for an even longer Tuesday — not just one reception that night, but three! I got up early, caught up on some email, and then headed to a morning session on algorithms and explainability.

Co-designing creative tools for ideation

Interacting with AI

Janin Koch from Aalto University gave a talk titled “Design ideation with cooperative contextual bandits,” which referred to a collaborative agent for finding inspirational material to support design. The basic idea was to support ideation by passively identifying inspirational images. The most interesting thing about the paper was that it analyzed the contents of a designer’s mood board, extracting visual features of its content to find other related content. The designer could also provide feedback to the recommendation system, influencing the types of images retrieved. They tested it with a range of designers, giving them some tasks and asking them to reflect on their experiences with the recommendations.
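
For readers unfamiliar with the technique, a contextual bandit is a recommender that balances exploiting what it has learned about preferences with exploring new options. Here is a minimal sketch in the LinUCB style (my own illustration, not the paper’s actual system), where the context would be visual features of the mood board and the reward would be the designer’s feedback:

```python
import numpy as np

class LinUCBArm:
    """One arm of a LinUCB contextual bandit (e.g., one image source or
    visual style to recommend from). A sketch of the general technique,
    with hypothetical names, not the paper's implementation."""

    def __init__(self, dim, alpha=1.0):
        self.alpha = alpha          # exploration strength
        self.A = np.eye(dim)        # regularized feature covariance
        self.b = np.zeros(dim)      # reward-weighted feature sum

    def ucb(self, x):
        """Upper confidence bound on expected reward for context x
        (e.g., visual features of the current mood board)."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b      # ridge-regression estimate
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        """Incorporate designer feedback (e.g., 1 = kept, 0 = dismissed)."""
        self.A += np.outer(x, x)
        self.b += reward * x

# Recommend by picking the arm with the highest UCB for the current
# context, then update that arm with the designer's feedback.
arms = [LinUCBArm(dim=4) for _ in range(3)]
context = np.array([0.2, 0.9, 0.1, 0.5])
chosen = max(range(len(arms)), key=lambda i: arms[i].ucb(context))
arms[chosen].update(context, reward=1.0)
```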

The next talk in the session was “Designing Theory-Driven User-Centric Explainable AI.” There is of course a lot of work on explainable AI, but most of it is not focused on the human experience of explainability. The basic idea of the talk was to develop “theories of reasoning” about users’ reasoning, then generate explanations from those theories. Their theories included reasoning approaches such as induction, deduction, and analogy. Within this framework, explanations for each type of reasoning require specific content. For example, when people are reasoning about causality, it’s necessary to include attribution in explanations. They mapped many types of human reasoning to specific explanation content, then tested the mapping in a medical application.

The last talk in the session was “Will you accept an imperfect AI,” which discussed end-user expectations of AI systems. The big question here was how to manage expectations, given the imperfection of AI systems and the inflated expectations the public has about AI. They investigated expectations by considering the Outlook Scheduling Assistant, which tries to recommend meeting times, highlighting the relevant information and generating shortcuts for directly scheduling them. They showed many different aspects of explanation, including accuracy and confidence indicators and visual explanations. In their first study, accuracy indicators successfully lowered expectations, marginally increased understanding, and marginally increased users’ sense of control. The second study investigated whether these impacts lasted as users used the system, finding that high recall led to a higher accuracy perception, but high precision led to a lower accuracy perception. Their theory was that false negatives were costly for users to correct, essentially amplifying the perceived cost of the high-precision system’s mistakes.
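
To make that tradeoff concrete, here is a tiny worked example (my own illustrative numbers, not the study’s) showing how two systems can have identical accuracy but opposite error profiles:

```python
# Suppose 100 emails, 50 of which contain a meeting request to detect.
def metrics(tp, fp, fn, tn):
    """Standard classification metrics from a confusion matrix."""
    return {"precision": tp / (tp + fp),
            "recall": tp / (tp + fn),
            "accuracy": (tp + tn) / (tp + fp + fn + tn)}

# High precision, low recall: never wrong when it flags, but misses 25.
print(metrics(tp=25, fp=0, fn=25, tn=50))
# High recall, low precision: catches all 50, but falsely flags 25.
print(metrics(tp=50, fp=25, fn=0, tn=25))
# Both are 75% accurate, yet users of the first must manually handle 25
# missed requests, while users of the second just dismiss 25 false flags.
```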

What makes model interpretability so important? A lot of things.

Data science education and practice

During the break, I had a great conversation with an old collaborator, Mauro Cherubini, about teaching data science to first-year economics students. We talked about long-term learning outcomes, such as the idea of data science education as a “time capsule”: something that students might not use right away, but would open up later to reconnect with and apply in their work. This was a more concrete example of some of the broader conversations at the learning SIG on Monday about the really broad space of learning outcomes to consider when measuring learning.

I then went to a talk by some of my collaborators at Microsoft on data science professionals and interpretability. The big question was what the value of model interpretability is. There were many surprising needs. One was that interpretability was key for communication, as data scientists needed insights to help identify explanations for non-data-science audiences. Another was model debugging, to help fix or optimize models. Finally, a lot of models were used for understanding and hypothesis generation, not as an end product.

On my way to lunch, I ran into Andrew Head, a Ph.D. student at Berkeley, and my doctoral student Benji Xie. We had an interesting conversation about data science notebooks and their inherent messiness. I probed into Andrew’s perspectives on what makes the messiness inherent, and we found some interesting connections between the practices that data scientists follow and the practices they learn over time. In complicating the notion of process, I think we found some interesting tensions between designing for people as they are (which HCI usually does) and designing for people as they could be (which is what learning sciences and education usually do). We also connected a lot of these ideas to communities of practice theory, speculating that tools for scaffolding process to avoid messiness are probably great for newcomers to data science, but that reinventing process to avoid inherent messiness is probably something best done by the experts at its center.

Lunch with James Lin of Google

Sometimes it’s fun to have a really intimate one-on-one lunch to really dig into ideas. I met up with long-time conference friend Jimmy Lin. Jimmy and I haven’t often had a lot of overlap in research interests, and he’s long been in product groups at Google even less related to my work, but he just joined the Engineering Productivity team at Google, which is quite close to my interests and full of many colleagues from my other community of software engineering. I caught up with him about his new position.

We ended up discussing wide-ranging issues of learning at Google, the tensions between algorithms and data at Google, and the loss of editing and content curation in society as Google and other companies have become our dominant media curators. One of the ideas I found most fascinating in our discussion was imagining Google as the modern analog of the printing press or telegraph. Within this metaphor, our world would essentially be one in which the manufacturers of the printing press and telegraph hardware were also our curators. This was rarely true in the past; it wasn’t the manufacturers of the press that were in charge, but curators like chief editors of the New York Times or head librarians, who were the consumers of printing presses. Somehow, though, the toolmakers are now in charge, and our sense of wonder about the tools is blinding us to their complete rejection of responsibility for curating information in the world. We also talked about the primacy of information over algorithms, and the strange modern reversal in which algorithms and automation pretend to be more powerful than information when in reality they are just amplifying the power of information (at least when used as information technology).

Educational games are taking over but poor usability is getting in the way.

Learning and games

After lunch, I jumped into the middle of a session on learning games. I’ve dabbled in this area in some of our work on programming games, and I was curious about some of the latest thoughts on the role of game paradigms in learning.

The first talk I saw was on an educational game designed for learning about phishing attacks. This work was by Zikai Alex Wen of Cornell, who had designed a role-playing simulation game called What.Hack. The work built on anti-phishing trainings, such as videos and evidence-based games in prior work like Anti-Phishing Phil. Alex’s primary critique of these was that they were too scaffolded; they don’t simulate the reality of actually reading phishing emails in the context of life. So Alex designed a game that tried to simulate real life. They mirrored their game design on existing games that make document review fun, inspired by “Papers, Please,” an immigration processing game. I really liked this mimicry approach to game design; it leverages the hard work of game design while also highlighting the challenges of translating game mechanics from one domain to another. Their user study showed effective translation: learners got better at recognizing phishing emails, unlike with the university training videos or Anti-Phishing Phil. Surprise: authentic practice works. Not surprisingly, the game was also far more subjectively engaging than the videos and Anti-Phishing Phil. Of course, the true test of all of this would have been retention over time, which the study did not measure.

The second talk concerned the poor state of instructional design in educational games for children. The specific focus was on literacy games and the ways in which there were learning breakdowns during gameplay (e.g., boredom, not knowing what to do next, or not understanding some concept). The authors identified many aspects of instructional design as key ways to engage learners in productive understanding; their study investigated the causes of failures to achieve this. They evaluated a few popular educational games and found that the complexity of game mechanics and interfaces to these mechanics were a huge barrier to learning and that challenging concepts were rarely scaffolded well. This suggests that while educational games can be quite effective, they may not be if the interfaces get in the way of learning.

The kinds of metaphorical reasoning supported in the Metaphoria system

Supporting creativity

My last session of Tuesday was on creativity support tools. I attended because of its interesting connections to ability: creativity support tools, especially those intended for professionals, are very much about amplifying ability, but also potentially changing ability, which is the point of learning.

The first talk, on the Metaphoria ideation tool, took a deep dive into how to support writing, specifically investigating opportunities in metaphorical creativity. What they invented was a site where people enter a word, and it pairs the word with random concrete nouns to come up with potential ways they might be related. To do this, they piled on a lot of common sense reasoning and WordNet data to link concepts. The most interesting thing was their study of how writers collaborated with the system to ideate. One fear was that everyone would end up writing really similar things; what they found was the opposite: most writers were more divergent than without suggestions, one describing it as “being swept away” into new ideas. Every poet they tested with ended up using the tool in very different ways, some letting it drive the process, some using it as a peripheral tool for offloading cognitive work, and some feeling very threatened by the loss of agency.
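
As a rough illustration of the WordNet idea (my own sketch, not Metaphoria’s actual pipeline), one can surface candidate bridges between a seed word and a concrete noun by looking for their shared abstractions:

```python
# Requires: pip install nltk, then nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def metaphor_bridges(seed, noun, top=3):
    """Rank shared abstractions (lowest common hypernyms) that could
    link two words, as rough metaphorical bridge candidates."""
    bridges = set()
    for s1 in wn.synsets(seed, pos=wn.NOUN):
        for s2 in wn.synsets(noun, pos=wn.NOUN):
            for common in s1.lowest_common_hypernyms(s2):
                sim = s1.path_similarity(s2) or 0.0
                bridges.add((sim, common.name(), common.definition()))
    return sorted(bridges, reverse=True)[:top]

# e.g., what abstractions might connect "love" and "ocean"?
for sim, name, definition in metaphor_bridges("love", "ocean"):
    print(f"{sim:.2f}  {name}: {definition}")
```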

Another interesting paper in the creativity session was by folks at the University of London on empowering creative work by people with aphasia, also in the domain of poetry. Aphasia is a range of speech and writing production impairments, often induced by stroke; sometimes it manifests as word recall difficulty, sometimes as sentence construction difficulties. The authors were interested in supporting creative writing with people who have fundamental difficulties writing. They made a tool called MakeWrite, which followed an interesting “erasing” process: rather than having to produce or recall words, writers move and delete words. This redaction process, combined with a word generation feature, allowed people with aphasia to construct poetry without relying on word recall abilities.

Our UW, Michigan, and Georgia Tech shared reception

Tuesday evening receptions

Evening receptions are a long-standing tradition at CHI. They started off as a way for companies and universities to recruit and brand themselves, but they’ve evolved into a way of bringing a very large conference down to size. On Tuesday, I attended a small private book signing to celebrate some of the many books UW faculty have published, then attended the shared reception that UW, Georgia Tech, and Michigan have hosted for several years. Throughout these receptions, I had several fascinating conversations.

The first was with David Ribes, a colleague in UW HCDE. I was sharing some of my experiences in Washington state policy, and this led to a much broader and fascinating conversation about the distribution of power in the tech industry, the different and incompatible discourses that occur in the arts/humanities and STEM, and the need for translators between the two. For example, we talked about how tech often views the world very narrowly and explicitly plays a game of power brokering, whereas the humanities often view the world from a justice and humanist perspective and explicitly play a game of rhetoric. I speculated that power brokering and rhetoric don’t really play nice together, especially in a democratic and capitalist system, and that this limits the impact of humanities rhetoric. David argued that despite this tension, the conflict between these two discourses is coming rapidly, and will lead to policy, regulation, or revolt.

When I arrived at the DUB party, I soon met Eva Wolfangel, a journalist who would soon be visiting MIT for a year to learn and write about academia and technology. We talked about the many ways in which the recurring narratives that journalists reuse about technology often ignore the dominant narratives in academia. I invited Eva to visit UW and learn about some of these narratives in more depth.

I also caught up with Nell O’Rourke and Joseph Jay Williams to talk about wide-ranging issues related to mindset, transfer, and self-regulation. Nell is doing some fascinating work with her students on the social and cultural origins of mindset. For example, she talked about students forming normative judgements about ability based on such superficial things as the facial expressions they saw stereotypically white male and Asian students wearing, or the speed with which those students typed. I talked about my student Dastyni Loksa’s work on developing self-regulation skills as a way to intercede on these mindset problems. Joseph talked about the many challenges of developing transferable, general-purpose problem solving skills such as these, and speculated that anchoring them on everyday skills might be an effective way to develop them.

I never quite made it to the CMU party unfortunately—it was too late, the DUB party was too fun, and I had already been to the distillery for the Microsoft reception. Sorry CMU buddies!

Wednesday

The days were getting longer, the parties were getting later, but my body still wanted to rise at 6 am. I got an early breakfast, caught up on email, and spent a bit of time writing this trip report, then headed to a morning session on the practice of design.

Colin Gray talks about talking about UX online

The practice of design

The first talk that I attended was on the research/practice gap between design education and design practice. Colin Gray from Purdue was reflecting on the many gaps that students and practitioners report. The big question here was what exactly user experience practice is and how it is evolving. They looked at three activities: exchanging experiences, conducting critique, and professional disclosure, and considered explicitly the conversations on the Reddit userexperience subreddit and the StackExchange User Experience Q&A community. The analysis was a linguistic one. Some notable findings were that they didn’t find any meaningful differences between UI design and interaction design, but also that these conceptions and practices are rapidly evolving. This imposes exceptionally hard constraints on curriculum design.

The last talk of the session was an honorable-mention-winning paper that surfaced the emotional work of design practice. The paper, presented by Rachel Clarke, emerged from a decade of thinking about how emotions actually shape and enrich experiences, sometimes more so than functional issues. This is of course particularly true in more sensitive design contexts, such as abuse or illness. But it’s also true of designers themselves, who have to encounter emotions and process their own. The paper argued that we have failed to study and leverage this in design. It began with a theory of the emotional regulation required in professional contexts, found in many service professions such as therapy, teaching, and customer service. They analyzed these issues through case studies of design process, finding that many conventional human-centered design methods and processes are fundamentally broken in their narrow focus on tasks and functionality.

More data science education

During the morning break, I met up again with Mauro Cherubini and we talked further about opportunities for data science education in his economics course, which all students at his university are required to take, and which uniquely involves programming with data science notebooks. One of the most interesting ideas that emerged from our conversation was the interconnection between self-determination theory and communities of practice theory, and how they might explain motivation in his required course. We talked about how he might use them to explain mediating factors in the effectiveness of any teaching of data analysis in the class.

Varsha Koushik demonstrates her audio narration programming environment

Inclusive education

There were so many interesting sessions after the morning break, some on online education, some on developers, and some on inclusive teaching, particularly around accessibility. I ended up going to the session I knew the least about, so I would learn the most.

The first talk in the session was by UW alumni Catherine Baker and Lauren Milne with my colleague Richard Ladner. The paper focused on teachers of the visually impaired in the United States and how those teachers use assistive technology with children. They specifically considered what factors teachers use when selecting technology and how teachers think about youth adoption of technologies. Interviews showed that in addition to the usual mainstream technologies (e.g., smartphones), there was broad use of Braille displays, Braille note takers, video magnifiers, screen readers, and screen magnifiers, and that the major factor that shaped technology selection was trying to avoid stigma by matching technology used by other students, followed by cost. The teachers’ perspectives on student factors largely concerned the challenges of learning to use the access technologies themselves, which often required considerable training. The root cause of this need for training was fundamentally software that wasn’t designed to be accessible.

The next speaker, Varsha Koushik from CU Boulder, presented on a tangible programming game for creating audio stories. The platform was an interesting example of a domain-specific language for creative expression. What was unique about this medium was the focus on structured audio narration. This raised all kinds of interesting challenges in how to support the entire process of programming, including not just authoring, but also debugging.

Varsha gave the next talk too, which focused on challenges that students with disabilities face when learning CS. The study focused on something called CodeClub, which covered IT support skills, physical computing, and block-based programming in Scratch. Students had Alzheimer’s, autism, brain injuries, memory disorders, developmental disabilities, and learning disabilities. They primarily investigated what students created and the barriers they faced in creating it. Most of these barriers were classic accessibility problems: misuse of color for learners with vision impairments, color mismatches between on-screen guidance and printed materials, and incompatibilities between applications and access technologies. The instructors used a lot of strategies to overcome these challenges, just as in the first talk in the session.

Lunch with the Brad Myers academic family

An annual tradition: my former advisor Brad Myers likes to bring together all of his former and current students for lunch to network and reconnect. There were 10 of us this year at CHI, with a lot of diversity in seniority and research. I had a great time catching up with Mary Beth Kery about her excellent work on supporting data science developers with tools and practices. I also got to hear about my colleague Jake Wobbrock’s plans for sabbatical and another grad school friend Jeff Nichols’s escapades in the culinary aspects of cocktails and research management (intentionally ambiguous parse!)

I wasn’t there, but I supported the message, and my student Yim’s leadership and participation.

Accessibility at CHI, advocating for change

I had intended to attend a few sessions after lunch, but I arrived a bit late and ran into my student Yim Register, who had just helped stage a sit-in protest over the poor accessibility of the conference for people with disabilities. We had a detailed Slack discussion before lunch about how best to communicate feedback to current and future organizers, and I was eager to hear how the protest had gone. We had a wide-ranging conversation about ways in which bottom-up movements can inspire literacy and change over the long term, and sometimes effect change in the short term when leadership isn’t listening. We also talked about how structural change can be hard to implement but have more sustainable impact over time. This was relevant both for the accessibility problems at CHI and for Yim’s research, which is beginning to investigate universal machine learning literacies and their relation to self-advocacy.

The line into the Glasgow Science Center was enormous.

Conference reception at the Glasgow Science Center

After a brief break at my hotel to take an antibiotic for an ear infection, I made my way back to the convention center to find some folks to walk with to the conference reception. I wasn’t really in the mood for a big bursting reception of social chaos, but luckily I ran into an old grad school friend, Ian Li, who is now a UX engineer at Google.

After catching up about our lives, we walked over to the Science Center and had a long interesting conversation about some of the tensions between data and algorithms, and how they played out in his former role on Google Search, and how they might play out in his new role on a health-related team. I shared some of the many big ideas I’ve learned about information being in an information school, such as the idea that information, like algorithms, can do harm and good in the world, that algorithms can amplify that harm and good, and that in most cases, the value of algorithms is inherently dependent on information. We continued our discussion during the reception, which I found far more interesting than the pretty generic exhibits at the museum. (I’m usually in the mood for play at a science museum, but I think I was just in a reflective, intellectual mood instead).

The Google reception was at the Riverside museum, which was full of early 20th century artifacts from Glasgow history

I eventually decided to leave, and Ian did too, and we walked over to the invite-only Google party at the Riverside Museum. Our conversation shifted to my wife’s recent pivot to nursing, the shift from fee-for-service to preventive models of primary care, and the many ways that these shifting compensation models are changing the role of information in health. We talked about how the particular metadata choices used to model people end up constraining the use of health systems.

Eventually, I broke off from Ian and had a great conversation with Mor Naaman about information, algorithms, and democracy. We poked and prodded some of the explainable AI work, which I argued focused too much on algorithms, when instead we should be focusing on how the data that AI is trained on best explains the decisions that AI makes. Mor gave some great examples of how this plays out in search and social media. James Landay eventually joined, which shifted our conversation to trust, agency, and authority, and how those can play a significant role in perceptions of AI.

I eventually ran out of gas and had a nice 30 minute walk home along the river to my hotel. The Hydro stadium was leaking attendees of something, the sky was dripping, and my eyes were drooping.

The Hydro stadium was lit, literally and figuratively.

Thursday

I’ve never liked the fourth day of CHI. It was added early on in the community’s growth to accommodate more presentations, but I always thought the community should instead compress talks, or even eliminate them, finding more innovative ways of sharing. Instead, we have this horrible, exhausting fourth day, where everyone either leaves or stays but disengages. Since I was staying for the weekend to give an invited talk at ETH Zurich on Monday, and my wonderful student Amanda Swearngin was giving a talk on her tappability study from her Google internship, I slogged through.

Alberto Monge Roffarello shares work on debugging the kind of trigger-action rules now ubiquitous in Internet of Things devices.

Developers, debugging, and learning

After a slow and early breakfast and a looong email session, I wandered in early to a morning session on developers and developer tools. As usual at CHI, the talks were a random assortment of topics, but also a good snapshot of explorations in how to support programming with interactive tools. This was followed by an entire second session after the morning break on computing education. Yay!

The first talk addressed the increasingly common paradigm of trigger-action rules found in Internet of Things devices, and the challenges this paradigm imposes on debugging. These rules are hard to debug because they are inherently programs that stitch together multiple devices through multiple web services, attempting to produce reliable behavior out of inherently complex and often unreliable systems. Their approach was not to support debugging itself, but to support automated verification at edit time, such as detecting infinite loops, redundancies, and contradictions in trigger-action rule programs. What was interesting and new about this system was the way it provided simulated rule conflicts to help explain the warnings. This mirrors work in software engineering on model checking, but better explores the formalism and user experience of warnings in this under-explored paradigm.
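
To make the edit-time verification idea concrete, here is a minimal sketch of loop detection (my own illustration under an assumed rule representation, not the authors’ actual system): treat each rule as an edge from its trigger event to the event its action emits, then search that graph for cycles:

```python
from collections import namedtuple

# Assumed representation: a rule fires when `trigger` occurs, and its
# action emits `effect`, which may in turn trigger other rules.
Rule = namedtuple("Rule", ["trigger", "effect"])

def find_loops(rules):
    """Detect cycles in the trigger -> effect event graph via DFS."""
    graph = {}
    for r in rules:
        graph.setdefault(r.trigger, []).append(r.effect)

    loops, visiting, done = [], set(), set()

    def dfs(event, path):
        if event in visiting:                    # back edge: a cycle
            loops.append(path[path.index(event):-1])
            return
        if event in done:
            return
        visiting.add(event)
        for nxt in graph.get(event, []):
            dfs(nxt, path + [nxt])
        visiting.remove(event)
        done.add(event)

    for event in list(graph):
        dfs(event, [event])
    return loops

# Hypothetical rules that chain into an infinite loop.
rules = [Rule("motion_detected", "lights_on"),
         Rule("lights_on", "camera_on"),
         Rule("camera_on", "motion_detected")]
print(find_loops(rules))  # [['motion_detected', 'lights_on', 'camera_on']]
```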

The second talk also addressed debugging in new media, but this time for microcontroller projects. Presented by Mitchell Karchemsky, the core idea in this paper built upon recent prior work on hardware debugging, with the new idea of helping experts provide remote support for debugging (e.g., in teaching contexts). The system provided a remote visualization overlay tool and a hardware inspection station that essentially act like the kind of instrumented environment that software debugging supports. The paper was a nice exploration of old ideas in the hardware context.

Coding puzzles vary in difficulty. How can we measure that difficulty?

My grad school friend Caitlin Kelleher, faculty at Washington University in St. Louis, gave the third talk, “Predicting Cognitive Load in Future Code Puzzles.” Cognitive load is a theory from educational psychology, which theorizes that learning is constrained by the limited capacity of human working memory, and that the effectiveness of instructional design is partly explained by these constraints. Caitlin and her student investigated a technique for predicting the cognitive load of coding puzzles, with the goal of enabling adaptive coding puzzle platforms that aren’t static and sequential, and are therefore more responsive to varying prior knowledge. They measured cognitive load by simply asking for self-reports of mental effort, while also observing difficulty. This worked reasonably well from a predictive perspective: it was better than guessing, and it could reliably select the more difficult of any given two puzzles, but there was a lot of room for improvement.
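
Their pairwise result is worth making concrete. A minimal sketch of this kind of predictor (my own illustration with invented features and effort ratings, not Caitlin’s actual model) trains a classifier on differences between puzzle feature vectors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical puzzle features (e.g., block count, nesting depth, novel
# constructs) and mean self-reported mental effort for each puzzle.
features = np.array([[3, 1, 0], [7, 2, 1], [5, 3, 2], [9, 4, 3]], dtype=float)
effort = np.array([2.1, 4.5, 5.0, 6.8])

# Pairwise training data: feature difference, labeled 1 if the first
# puzzle demanded more reported effort than the second.
X, y = [], []
for i in range(len(features)):
    for j in range(len(features)):
        if i != j:
            X.append(features[i] - features[j])
            y.append(int(effort[i] > effort[j]))

model = LogisticRegression().fit(X, y)

# Given two new puzzles, predict which is more cognitively demanding.
a, b = np.array([8, 3, 2], dtype=float), np.array([4, 1, 1], dtype=float)
print(model.predict([a - b]))  # [1] means puzzle `a` is predicted harder
```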

After the break, I went to the 11 am session, which was entirely on the learning and teaching of computing. The first talk of the session explored how youth as young as 10 reason about the black boxes of machine learning algorithms. The authors’ particular path into this problem was to construct an interactive learning environment that reveals some of the complexity of machine learning, specifically for simple classifiers. It involves children training a physical gesturing device, labeling samples, and then building classifiers with the data. They had youth engage in a few tasks that revealed strengths and limitations of classifiers, such as the need for negative examples, more examples, etc. Based on a pre/post test that prompted for explanations of how machine learning tasks worked, working through the learning environment appeared to increase the volume and types of examples youth mentioned in their explanations of classifier behavior. Interviews revealed that youth were able to envision new applications of machine learning.

The next talk in the session, by Joshua Shi and Armaan Shah (with Nell O’Rourke at Northwestern), presented work on how to leverage games to teach programming problem solving behaviors. This work built on my student Dastyni Loksa’s work on teaching programming problem solving, and on my former student Mike Lee’s work on Gidget. They contributed an educational game called Pyrus that taught planning skills like the kind that Dastyni defined. They borrowed ideas from test-driven development and pair programming, translating them into game mechanics that incentivized planning. In their within-subjects evaluation, their goal was to assess how much more planning learners did relative to conventional pair programming, but also how well this distributed labor between the paired players. Not surprisingly, learners did more planning because the game incentivized more planning. The big question, which the paper didn’t investigate, was whether this planning skill transferred to other programming contexts.

Megan Venn-Wycherley presented the next paper, on the sociotechnical infrastructure of computing education in UK schools, which covered the hugely important problem of coordinating computing education efforts across primary, secondary, and higher education. The focus was on trying to understand the role of technology platforms in mediating the quality of experiences in schools, particularly the BBC micro:bit platform. Using an action research method, Megan found that there were core structural issues with teacher and student incentives, resource constraints in schools, the structural limitations of IT restrictions, and the limited role of IT in solving these problems.

The personas of the teachers in the data science education study.

The last talk in the session was on data science education in higher education, from Philip Guo’s student Sean Kross, titled “Practitioners Teaching Data Science in Industry and Academia.” The focus of the study was what data science educators are doing and what challenges they face, especially relative to more conventional computing education. The instructors they interviewed came from psychology, biostatistics, marine biology, statistics, math, library science, and many other disciplines, including industry. The interviews revealed that teachers had widely varying prior experience with programming (often self-taught), widely varying needs for API learning, and key skill gaps in version control and Unix command line scripting. For teaching, a huge challenge was just setting up infrastructure to scaffold student learning, such as virtual machines that encapsulated a learning environment, and finding good domain-specific data sets suitable for learning.

My first Ph.D. student Parmit Chilana brought our labs together for lunch on Thursday.

My very own academic family lunch

After the morning sessions, my former Ph.D. student Parmit Chilana and I brought our labs together to meet. After a great round of introductions and some food, we got into a fascinating discussion about different motivations for engaging in research. A lot of us talked about curiosity, but a lot of us, like me, also shared our very practical motivation of financial security through job security. This conversation about growing up in poverty led to an even more important discussion about the need to focus our research on much more socioeconomically diverse groups.

Closing keynote

The closing plenary included a keynote by Ivan Poupyrev from Google, an interaction designer who explores blended physical and digital experiences. He’s currently an engineering director in Google’s advanced projects division, working especially on AR/VR, haptics, and 3D printing.

Ivan began by lamenting a world in which interface technologies haven’t changed for 25 years. (I don’t know if that’s a problem, but I patiently stuck with his argument). He continued, arguing that the window into the virtual world is staying the same, or arguably even shrinking. He mentioned Hiroshi Ishii’s vision of tangible bits as an inspiration, and then quickly raised examples of voice interfaces as a peek into the future. But Ivan wants not just ambient interactions, but physical interactions. He wants the world to be our interface, so the computer can be invisible.

Ivan focused on how to make these things real: 1) making things into interfaces, 2) making those interfaces into products, and 3) making these products scalable. In the first thread, he mentioned work by people like Chris Harrison and Shwetak Patel, who exploit our existing physical infrastructure as interfaces; for example, their work uses things like surfaces, water, and plants as interfaces. Ivan viewed this kind of work as pure, basic research that assesses the feasibility of making the world into an interface. Another example was creating really tiny radar sensors that could detect interaction in an electromagnetic field and interpret gestures (Project Soli).

The feasibility of these things is one thing; showing that these technologies can be actual products is an entirely different challenge. For this work, Ivan wondered how to make it possible for everyone in the world to construct useful interactive objects themselves. They explored this in the context of textiles, creating touch panels with existing textile factories and tailors in Japan. They used this experiment to approach Levi’s and envision a touch-sensitive jacket, which consisted of the jacket, a tag, an app, and services for the jacket, all designed holistically as a single wearable platform.

The range of objects Ivan wants to make interactive

The third phase of this work was scaling the availability of these products. How do you go from a single product to every product, making existing things better? This is hard because every product has its own process, its own materials, its own unique kinds of sensors, etc. The approach here was to support makers in beginning to think about the services products could provide. The initial work on this thread is a single small computer that serves as a universal sensor platform for different wearable products, dynamically adapting to the product in which it is embedded. The vision is to use the platform as part of the material of a product, envisioning its use.

Being from Google, not surprisingly, Ivan’s vision was utopian. The audience, however, was much more skeptical, wondering about privacy, ethics, and other social implications, as well as fragmentation in the marketplace. Personally, it’s one of the few visions of computing that I find sufficiently humanistic, but I wonder how to prevent ubiquitous wearable computing from becoming ubiquitous technical support hell, or ubiquitous online addiction. Ivan, as a technologist, had pretty weak answers to this, but at least he recognized he needed our community’s expertise to inform the pitfalls of this vision.

Reflections

For me, CHI has always been, and continues to be, more like a confluence of communities than a single community. There are so many ideas, so many perspectives, so many different problems and phenomena, that attending is more like sampling from a garden than participating in an academic discourse. That doesn’t mean we don’t build on each other’s ideas — it’s just not a conference where this happens. Instead, it’s all of the smaller HCI conferences, and the other conferences from communities from which HCI draws, where these discourses occur.

All that said, I still find attending CHI the best way to get a sense of what our broad community is investigating, while still being able to have a focused experience, as I did this year in relation to learning and computing education. It’s a way of seeing a cross section of computing’s thinking in a uniquely holistic manner.

For learning in particular, my focused experience revealed a few interesting things about the current state of HCI:

  • There are a lot more people in HCI interested in designing to support learning than there have ever been.
  • There are many kinds of learning that people want to support, including data science skills, programming skills, design skills, and fabrication skills, and many ways that HCI wants to use computing to support it. This is much broader than I’ve seen in other fields interested in learning.
  • A lot of this work goes well beyond the typical scope of education research, focusing more on contexts, domains, families, and online settings. This is similar in scope to the learning sciences, but with less theoretical grounding and a lot more focus on design and innovation.

In a way, this breadth and diversity is exactly what HCI is best at. There’s no guarantee that any of it will deeply address the hardest problems in learning and education, but it will at least provoke. Next year, I suspect a lot of my students will be sharing work on learning + HCI of their very own…between sessions on the beach in Hawai’i :)

--

Amy J. Ko

Professor, University of Washington iSchool (she/her). Code, learning, design, justice. Trans, queer, parent, and lover of learning.