Bits and Behavior

This is the blog for Amy J. Ko, Ph.D. at the University of Washington and her advisees. Here we reflect on our individual and collective struggle to understand computing and harness it for justice. See our work at https://faculty.washington.edu/ajko

A large canvas showing Japanese calligraphy with the acronym CHI 2025.
The conference begins with a Japanese calligraphy performance.

CHI 2025: Frayed edges

Amy J. Ko
17 min read · May 1, 2025

--

In 1995, thirty years ago on my first day of high school, I attended my first day of Japanese language classes. My teacher, Hitomi Tamura, was warm but demanding: she welcomed us into a linguistic world of precision, formality, and Japanese cultural traditions. It was my first experience in formal language learning, but also the first time I saw a world beyond my sleepy rural and urban life in Oregon.

My first CHI, in 2003 in Fort Lauderdale, felt similar. I met more than 500 human-computer interaction researchers for three days of three parallel tracks, and I once again felt my world open up. There were researchers from all around the world, studying every imaginable aspect of interactive computing. I left that first conference inspired to be joining a global community of people who centered curiosity, dreaming, and critical questions.

Coming to Japan for CHI 2025 this year was a collision of these two moments in my life in which education and research transformed how I saw the world. But it was also at a moment in history when these institutions of learning and discovery have never been under more threat. Public funding, including my grants, is being canceled; my work is being politically censored; my own ability to travel safely and even be in public in my own country is being eroded daily by a heartless, ignorant despot. A culmination of 30 years of helping to create and shape a global community of scholars who pursue truth over profit has converged toward the reckless destruction of a fragile, invaluable institution of humanist progress.

I therefore arrived melancholy, hoping to make the most of what might be one of the last times I attend CHI for some time, one of the last times I leave the United States for some time, and maybe the last time any of us gathers in person to celebrate a pinnacle of collective discovery.

A slide reading “Computing for a Better World, Designing for All People”.
Mutale Nkonde begins her talk.

Monday: Sharing and reconnecting

The opening plenary was in the epic Pacifico National Hall. There was a beautiful calligraphy performance, the usual announcements about record attendance (>5,500 people) and record submissions (>5,000 papers), and a somewhat ironic sense of the inevitable growth and success of our community at a time when it is being cut down by authoritarianism. There were small indirect hints of this assault — allusions to attendees who could not make the journey due to funding cuts, or threats of deportation and cancellation of visas — but no explicit recognition of the moment, reinforcing an uncomfortable sense of business as usual. There were also some surreal welcomes by the Japanese science council and a message from the Prime Minister of Japan, welcoming us and expressing the importance of humanist views on computing, also with no acknowledgement of the moment.

After the introductions, we had a keynote from Mutale Nkonde, a PhD student at the University of Cambridge. She had recently founded a non-profit, AI for the People, and came with a queer, Black feminist positionality to share more liberatory visions of AI and the future of humanity. She started by talking about the failures of trying to make change through ethics checklists in industry, and eventually finding more success through policy advocacy, working with the U.S. federal government and technology companies to try to shape policy and regulation. She talked about queer theory and its ability to create spaces of opposition to dominant norms and the importance of centering sociotechnical perspectives in imagining futures of interactive computing, connecting AI to cultures of Nazism in the origins of the internet. She also centered the importance of emotionality and impact in design, asking hard questions about how to account for emotional reactions and the negative outcomes of computing to anticipate unintended consequences and also advance radical good. Much of the rest of her talk focused on the many forms of bias and ignorance baked into uncritical use and advance of AI, disproportionately impacting groups marginalized by race, gender, and class. Overall, it was short and apt, but I don’t know that it stretched the community to new places (except for attendees who haven’t been paying attention to decades of scholarship on equity and justice in computing).

Immediately after the plenary, I went to our session on language and culture, where my wonderful undergrads Carlos Aldana Lira and Isabel Amaya, and teacher collaborator Adrienne Gifford, shared our work on Wordplay, a multilingual, accessible, interactive, programmable typography platform. They did a fantastic job sharing our lightning fast 10 minute talk; alas, I forgot to set up captioning, subverting some of the accessibility goals we were espousing in the talk. The rest of the session was full of fascinating talks about language and accessibility, non-verbal communication, and youth perspectives on AI in education. After, we had a vibrant lunch with several of the session attendees, talking about multiculturalism and learning.

The rest of the day was hallway conversations, ranging from the demise of science in the United States, to the uncritical, destructive obsession with large language models by industry, to the transformative opportunities of sharing power with teachers and children. After the evening reception, I went to a lively Northeastern party, and then a queer in HCI meetup at a local taproom in Yokohama. It was a riveting and exhausting first day, but I felt calibrated to the community’s reluctant submission to technological determinism, dismay at authoritarianism, and unease about the future.

Alexis takes the stage.

Tuesday: Listening and Learning

On Tuesday morning, I had a buffet breakfast at my hotel and then a tasty latte at Blue Bottle (!), and headed to the session on “advances in software development.”

  • Katie Cunningham’s students Arif and Yoshee (UIUC) talked about plan-based LLM-assisted pedagogies, demonstrating the feasibility of generating example programs for instructors to refine into groups of plans to share with students, generating lecture slides and worked examples. Their system, PLAID, helped instructors make plans more productively and with more ease than without assistance.
  • Erik Rawns and Sarah Chasins (UC Berkeley) presented Pagebreak, a way of scoping computation in computational notebooks, to allow for function use. They generally found that it helped notebook users be more systematic in their explorations and use of data, while preserving the “messiness” of notebooks.
  • A paper from Yanna Lin (Hong Kong University of Science and Technology) on notebooks explored how to separate output and text, while maintaining links between them, using a multi-column layout instead of a single column.
  • Andrew McNutt (University of Utah) talked about the selection and design of creative coding platforms in teaching. They identified three key values that teachers considered when choosing platforms: slowness (productive friction in creativity), politics (being critical about materials and outputs), and joy. This was one of my favorite papers, and closest to home to our current work on creative coding with teachers.
  • Siyu Wang (Wuhan University) talked about the use of Scratch in Chinese. Scratch’s localization is better than that of text-based languages, which is good, but through a series of tasks examining the Chinese and English keywords, they found that many of the translations had misleading semantics, overly technical words, and conflicts with the words that teachers used. They recommended balancing literal and interpretive translations, considering cross-language word formation differences, minimizing assumptions, and using simple, culturally relatable keywords.
  • Murtaza Ali (UW) presented on non-CS student learning experiences, finding that students were very frustrated by the lack of practical, real-world, domain-specific applications of CS tied to student motivation, and the lack of active learning opportunities.

Next, I went to a SIG on the ethics of interacting with children using emerging technologies. The group of roughly 30 included doctoral students, postdocs, faculty, and even some industry folks from around the world, working on numerous aspects of interaction design for children, including ethics, accessibility, emerging technology, social media, STEM education, and mental health. We broke out and had some robust group discussion and share-outs. One of the key insights I left with was more meta: many attendees held deep cultural assumptions about what parenting is and should be, and projected those into their questions and perspectives. Hearing about the global diversity of these experiences helped break down these assumptions in ways only a global scholarly community can do. Another interesting trend was the sense that technological change, and its acceleration, was a major complexity in managing research ethics, as any best practices we develop get disrupted in just a year or two.

After the SIG, I got lunch with several folks from Germany and the UK and had lively conversations about research funding, interdisciplinarity, and the boundaries of computer science. We waited quite a while for a table, so I ended up arriving halfway through a post-lunch developer tools session, catching half of a talk on vibe coding, creative approaches to notebook version control, and more.

After the session, I went to the societal impact session, where I went to celebrate and cheerlead for my colleague Alexis Hiniker. She beautifully characterized the stakes of knowledge about the impact of technology on children. She toured through some of our greatest research efforts to understand the stakes, which broadly found that platforms really are doing everything they can to capture youth attention, to youth’s detriment. Her analysis centered on dark patterns in interaction design, which are fundamentally exploitative and disrespectful, and not playing a meaningful role in addressing isolation. At the same time, her work also found that computing enables children to thrive in ways they could not before, particularly when it helped youth transcend their digital experiences into physical ones. She concluded with a call that kids deserve a better online world and that it is our job to create it.

The next award winner was Kentaro Toyama (University of Michigan). Kentaro started from the premise that positive societal impact with computing is extremely difficult, and maybe impossible. He argued that poverty has not really changed, despite incredible shifts in technological change and usage; political division has increased; fossil fuel usage has increased. He argued that design isn’t really capable of changing people, because people themselves have to change. His book, Geek Heresy, articulates this argument, centering on technology as fundamentally an amplifying force for human agency, rather than a transformative force. He built on this further, arguing that unregulated free markets ultimately mean that the least ethical actors win. He argued for collective action to transform and regulate big tech.

The third and final speaker was Tiago Guerreiro, who talked about two decades of community engagement and embedded research. He described the wide-ranging focus of his work on access technologies to enable people with motor and visual impairments to fully interact with computers. He surfaced a recurring conflict: technologists would claim problems “solved”, but throughout, things continued to not work at the edges. He argued that if we continue to design with stereotypes, we will continue to have superficial impact on the diversity of people and their needs. He then positioned deep engagement with communities as a central strategy for avoiding reductive stereotypes, to such a degree that researchers are genuinely part of communities, not just interlopers who have decontextualized goals. A key part of this was shaping, deploying, and supporting technologies that impacted their lives, and publishing about them when appropriate, but also deepening knowledge about how to create self-supporting, self-maintaining communities. He connected this to Sasha Costanza-Chock’s book on Design Justice, as a guide for those who want to think about these methods more deeply.

After the session, I had a wonderful reunion with my first PhD graduate, Parmit Chilana, who is now an Associate Professor at Simon Fraser University. We had a lovely sushi dinner, and then I went to the HCI Institute’s evening party on the pier.

A welcome slide for our SIG on LGBTQ solidarity in HCI.

Wednesday: Resistance

I woke up (too) early and got a coffee and toast, then headed to a morning session on how generative AI is reshaping things for better and worse.

  • One study by Rana Varanasi (NYU) looked at how generative AI is impacting writing. They studied 25 writing professionals who had some experience using the technology. Some writers were forced to use it, decreasing their morale, and leading to “sneaking in” their own writing to keep their spirit alive. Some expanded their roles to graphic design and management. Others were competing, specializing their writing to stand out from generative AI writing, developing new types of craftsmanship.
  • Kelly Wagman (U Chicago) considered organizational use of gen AI chatbots, studying several people in science and operations roles at a large organization. Through a survey and series of interviews, they found use rose over time, partly through augmentative “copilot” interactions and partly through workflow agents that automated some work (writing emails, generating reports); workers also wanted insights from large unstructured text data. Workers saw real risks of leaking classified and proprietary data, producing incorrect information, and loss of operations jobs.
  • Tarini Saka (U Edinburgh) investigated inaccurate phishing advice from AI. In a controlled experiment with different levels of quality of phishing recommendations, she found that inaccurate guidance reduces the quality of decisions, and that specific guidance is more helpful than generic.
  • Eike Schneiders (U Southampton) reported on lawyers’ use of LLMs. In a relatively controlled setting, they compared responses to prompts by lawyers and ChatGPT, to see whether people were more willing to rely on LLM- or lawyer-generated advice and whether they could identify the source. Participants were more willing to act on LLM advice, and didn’t mind using it when they knew the source; participants could also detect the source.
  • Elizabeth Ankrah (UC Irvine) talked about “socio-tecture” in the context of small and medium African businesses’ use of LLMs. She started from the observation that technology use in Africa has paralleled or diverged from Western use, not because of deficits, but because of variation in cultural appropriation. “Socio-tecture” referred to the idea that social and business relationships in Africa are entangled in a relational way. They did a contextual inquiry of 7 businesses and found that knowledge was distributed, and that WhatsApp was key for simple relational communication and knowledge sharing. These and other findings suggested that the design of generative AI is in direct tension with these practices and values.

It struck me that some of these papers approached the work dispassionately, just trying to describe the shifts, but others seemed to bring a lurking corporate interest in highlighting positive use cases. Underneath all of it, I saw anxiety, reluctance, and survival, but very little eagerness around generative AI.

After, I went to hang in the hallway with the co-organizers of our transnational SIG on LGBTQ issues in HCI. We shared struggles and highlights from the week, and then led the session in a loose, community-centered way. The attendees were most excited to connect around mutual aid, intersecting identities, isolation in oppressive countries, academic activism, and the intersections between HCI and trans studies. As an organizer, I couldn’t quite tell how everyone received it, but there was a lot of laughter, engagement, and rich ideas about collective action, which was a good sign.

Lunch wasn’t quite resistance themed: it was with my former advisor Brad Myers and his many former students in attendance. It was nice to catch up with my academic peers, though most of our conversation steered toward the authoritarian turn in the U.S. and strategies for institutional survival. I tried to encourage everyone to mobilize and act, but I think many were still in a place of feeling helpless and resigned to capitalist and political forces.

After lunch, I attended a popup session on responding to attacks on science. After a meticulously organized setup, the roughly twenty attendees spent about 20 minutes identifying concrete opportunities for collective action. I was in a group with my colleague Kate Starbird, brainstorming ways to use our existing communities, shared commitments around periodic gatherings (e.g., seminars), and our physical space to create public contexts for collective work. We fumbled through how the infrastructure for the disjoint, multi-coalition progressive left might be different than that of the more top-down, thirty-year coalition of the conservative right. All of this, of course, is relevant to scholarship on interactive computing because the right is actively defunding our ability to advance such scholarship.

I had great aspirations of a quiet afternoon before the SIGCHI awards banquet, but ended up in a lively conversation about rural/urban divides, and the neglect of small towns in the U.S. and Canada that has contributed to the current moment. It was fascinating to hear the perspectives of people in Canada and Germany, but who had also lived in Pakistan, Beijing, and elsewhere, trying to make visible the complexity of U.S. political divides. I don’t think we reached a consensus on anything, but I do think I helped build some empathy for the sense of loss the rural right feels about their way of life over our many decades of globalization.

I ended my day at an invite-only awards banquet, which, as much as it was a wonderful moment to celebrate the many amazing accomplishments of our community, also didn’t feel aligned with my focus on activism and collective action today. So I decided to approach the event with the goal of radicalizing as many of my renowned peers and elders as I could, without distracting from the purpose of the event. None of that worked out; it wasn’t the right time or place, as most of the event was celebratory speeches over a banquet dinner, and so there wasn’t much time to talk to anyone. It was, however, a nice space to pause, rest, and recognize the amazing careers of many of HCI’s committed advocates around accessibility, poverty, and other intersections between equity and interactive computing. I’m particularly proud of my many colleagues who were recognized, including Alexis Hiniker for her impact on youth experiences with technology, James Fogarty for his work at the intersection of machine learning and HCI, Kate Starbird for her contributions to the study of mis- and disinformation, Nadya Peek for her creative imaginings of fabrication, and Cecelia Aragon for her role modeling in and beyond computing.

Seeds of imagination: planting seeds of hope for a 100-year life with AI.
Masako begins her talk.

Thursday: Futures and farewells

After a buffet breakfast, an advisor coordination meeting, and a tasty latte, I went to one last session, this one on programming.

  • Jenny Ma (Columbia) talked about a system called DynEx, an LLM-based code generation app for generating functional user interfaces. The key insight of this approach was scaffolding design ideation with LLM idea generation and specification, helping guide creators’ use of LLMs to generate a reasonable implementation.
  • James Mattei (Tufts) spoke next about information needs in automated reverse engineering from binaries, deepening our knowledge about how to scaffold and augment this intricate engineering task.
  • Madison Pickering (U Chicago) examined end-user programming of information processing tasks with and without LLM assistance, finding that people of varying programming expertise could communicate simple tasks to LLMs as test cases, but complex test cases were hard for most to communicate, reinforcing a long-known fact in software engineering that writing specifications is hard. LLMs consistently struggled to produce correct test code, simple or otherwise.
  • Zhongyi Zhou (Google) presented InstructPipe, an approach to generating dataflow programs with LLMs. It helped creators of varying expertise create prescribed tasks faster, but participants still struggled significantly when generating accurate prompts that met their requirements.
  • Xiaohan Peng (Université Paris-Saclay) presented FusAIn, an approach to generating LLM prompts with pen-based interactions, as an alternative to text prompts. It essentially used image selections as prompts for generated images, allowing for an interesting kind of iterative, LLM-fueled compositional iteration during visual design.
  • Ryan Yen (U Waterloo) presented a concept of “code shaping”, a way to iteratively edit code with freeform sketching (awarded a best paper). It essentially allowed for annotations over and about code, which are used to generate prompts to revise a program. Participants in an evaluation really didn’t like many aspects of it, but it was a compelling inquiry into one possibility.
  • Gabrielle O’Brien (U Michigan) investigated how scientists are using LLMs and how they think about verification and vulnerabilities (at least at the University of Michigan, where there is a custom ChatGPT designed for the university’s scientists). They analyzed interviews and interaction logs, finding that scientists primarily use LLMs to learn unfamiliar programming languages and libraries (Python, JavaScript, R, STATA, Mathematica), that verification strategies are informal and unsystematic, leading to acceptance of numerous defects, and that these practices were largely fueled by misconceptions about how LLMs work.

From this collection of mostly LLM-fueled imaginings, I learned that LLM-based programming experiences continue to be of lukewarm value, producing brittle, expertise-dependent experiences and outcomes, without eliminating any of the fundamentally difficult parts of software engineering, while increasing risks of defects and overconfidence. In contrast, the visual design application described by Xiaohan showed a compelling vision of a new kind of visual ideation. This reinforces a belief I’ve been shaping over the past few years that whether LLMs are useful depends fundamentally on domain, application, and context, amongst other things (far from the universal value big tech needs us to believe to prop up their quarterly revenue).

After a coffee break, I did the long walk to the National Hall plenary venue to see the closing plenary, chatting with colleagues and students from around the world along the way. I was tired, a bit homesick, but also a bit trepidatious about returning home to the chaos of research, teaching, administration, and trans civil rights. I hoped the keynote, from 90-year-old Masako Wakamiya, would give me a bit of hope. She began by talking about her interest in technology from an early age, directing social networking sites for seniors, teaching in schools, creating art in Excel, and her philosophical inquiries into what computing might mean for the home, for children, or for women. Underlying all of this was a quiet, feminist spirit that positioned computing as a liberatory tool. As she turned her focus to AI, however, she became more critical, arguing that AI doesn’t have the materials or context to predict the future, or think about the future; she argued it will be, at best, a co-pilot to our agency. Her utopian view was that, at best, it might be a support, providing assistance and play. She described AI as “still a child”, very studious, and within this metaphor, we should keep a watchful eye over it, lest it become a nuisance. But she also described technology in general as always a kind of limited friend, but rarely more.

After four days of re-immersing myself in the HCI community, I left feeling a bit unsettled. At least in my corner of the world, there is a strong sense of losing my country, my right to be in public, my ability to do research, all while being crushed by the weight of leadership chaos and the decay of pedagogy as LLMs erode interpersonal communication by intermediating relationships. While I found pockets of community at CHI who faced similar challenges, the majority of people I met, whether outside the U.S. or in it, just seemed to be in a kind of denial about the instability of the world and the precarity of the status quo. Some didn’t mention it at all; others hoped that the chaos would slow down; some admitted that we risk losing everything but felt helpless; only a handful were ready to act. The vast majority I met, however, just seemed fully captured by the capitalist LLM hype machine, trying to build their academic reputations around validating LLM applications that might promise jobs and continued funding, in the exact same way industry seems to be desperately betting on a techno-utopian deus ex machina for our global unrest about diversity and climate. HCI has always been a bit dominated by corporate pop culture, but this extreme, at this moment, has never felt more tragic and heartless.

I want to be clear that these feelings about our community are not about its individuals: students are doing what they need to do to survive, and finding a path to action in fascist takeovers is never easy, historically. The organizers have done everything they can to bring us together to have these moments of reflection, and for that I am immensely grateful. People are doing their best within systems as they are, and not everyone will be a radical or lead the transformations we need, as much as we might need them to.

What I wish I had seen this year, at the very least, is an acknowledgement of the frayed edges of our community, our values, and our future, and how they risk unraveling the delicate fabric of our work for decades. I fear, however, that too many in HCI do not see our world as a fabric, or do not believe the frays need mending, or think that if it unravels, we can simply buy another. I worry that we will soon be in the cold, with only foundation models to keep us warm.
