CRA Snowbird 2024 trip report: computing contrasts
Back in 2016, I was fortunate to be invited to the CRA Snowbird conference. It’s a unique event: computing leaders from around North America flock to the mountains of Utah outside Salt Lake City to talk about the future of computing, develop peer support for new and emerging leaders, and discuss the risks and opportunities of the field, both in research and education. I was invited to speak on a panel about computing education research, enticing chairs to hire in the emerging area. But looking back on my trip report, it was more than just a panel opportunity: it was the first time I felt I had a role in the broader community of computing (especially as faculty in an iSchool), and could see myself as a leader in it.
Fast forward eight years, and I found myself invited again. This time it was as my iSchool’s incoming Associate Dean for Academics, as well as part of the new CRA future leaders program. But much more had changed: living out and proud as queer and trans, being a full professor, and having six years of undergraduate program director experience, I came to the event with a different orientation. One orientation was trepidation: Utah’s decided that I’m not allowed to use public restrooms, and the right has heightened the government assault against trans people in the state. I thought of myself as coming to hostile territory. But I also came to the event thinking of myself as the leader I’d imagined I might be eight years ago, excited to learn from other more experienced leaders, and to prepare myself for my new Associate Dean role.
After connecting with some local trans community Tuesday afternoon, I arrived at the Snowbird resort in the evening for a happy hour, dinner, and panel. The happy hour ended up being a bit of a computing ed reunion, with many familiar faces attending to present on panels or represent their units. We gossiped about power and abuses of it; we also talked a lot about tomatoes and the small mammals begging on the patio for little bites.
After drinks, we headed into the ballroom for a buffet dinner and panel. The panel was Kamau Bobb (Georgia Tech), Carla Brodley (Northeastern University), and Jeff Forbes (National Science Foundation). Each spoke about the urgent need to resist the right’s silencing of broadening participation efforts, and offered strategies for continuing the work under the right’s suppression of speech and advocacy. While the remarks were a solid foundation for shared context, the real action was in the hour-plus of questions and discussion. It was striking, compared to eight years ago, how much the leaders who came to the mic were willing to talk openly and passionately about racism, whiteness, ability, and civil rights attacks on trans and gender non-conforming people. Eight years ago, it felt like three days of Google cheerleading, with a brief blip of criticality from Kentaro Toyama. While this was heartening, I also couldn’t help but look around the room and notice the sea of white and Asian men. I wondered if I was the only trans person in the room.
Wednesday: awards, disability, policy
After breakfast with random leaders, which was mostly sharing about what iSchools are and why they are awesome, we all went into the ballroom to hear from Tracy Camp with an update on CRA. The first big announcement was that the conference would be renamed the CRA Summit and moved to the Mystic Lake Casino Hotel on tribal land in Minnesota. (Phew! No more hostile territory). She then discussed the outcomes of the LEVEL UP workshops, the many opportunities to engage with CRA, and CRA’s commitment to being community-driven.
Nancy Amato then announced some awards:
- The CRA Nico Habermann Award went to Mary Ann Leung, founder and president of the Sustainable Horizons Institute, which focuses on broadening participation in STEM.
- The CRA Distinguished Service Award went to Lynne Parker (University of Tennessee), who has focused on intensive policy work around AI; it also went to Manish Parashar (University of Utah), who has focused on national cyberinfrastructure.
- The Anita Borg Early Career Award went to Yakun Sophia Shao (UC Berkeley) for her research, mentorship, and service.
- The Skip Ellis Early Career Award went to Martez Mott (Microsoft Research, a UW iSchool alumnus!), for his research on accessible computing.
- The Service to CRA Award went to Peter Harsha (CRA, Senior Director of Government Affairs), in recognition of 20 years of service to CRA.
The rest of the morning session was led by Katie Siek (Indiana University), who opened a conversation with Haben Girma, a human rights lawyer who is deafblind and also a Black woman. Haben began by talking about her desire for community and her explorations and innovations with touch-based input (e.g., braille displays). She discussed how small changes can have big impacts on communities, and how the biggest barrier is not her deafblindness, but ableism. She advocated for planning around the diversity of human abilities and disabilities, and argued that doing so requires engineers and designers who are themselves disabled. She also talked about how often new innovations come from communities of people with disabilities, giving email and curb cuts as examples.
In the second half of her talk, she turned to stories about her encounters with deaf, blind, and deafblind communities around the world, highlighting the innovations in communication across cultures. She discussed examples from her early days of advocacy in college, learning how to harness the law to fight for rights. She gave several examples of how empowering AI has been at opening up the world of information, but also how strongly ableism is embedded in large language models, creating new challenges about when to trust information. For example, she was testing a machine vision app for crossing crosswalks and got a false positive in DC, which would have led to her being hit by a car had her friend not been there. Q&A focused a lot on AI, techno-ableism, and technology use in classrooms. Throughout, Haben made a consistent call for computing leaders to end ableism in computing education and research.
After a short break were several parallel sessions on best practices for teaching track recruiting, industry affiliate programs, growth and funding strategies, and managing legal assaults on human diversity. I decided on the last session, to do some networking around equity work. The session was mostly attended by department chairs, and predominantly those in states hostile to everyone but cis white non-disabled heterosexual Christian men. The group talked about tactics for working around these unconstitutional violations of free speech, and strategies for organizing. I was the annoying radical advocating for strategically organizing a coalition of colleges and universities to intentionally break all of these state laws, to foment a national recommitment to academic freedom. A few people came to me afterwards and agreed, but for obvious reasons, didn’t feel safe agreeing publicly. And many explicitly asked to not be named or photographed, demonstrating the high stakes of assaults on free speech by the right and its well-funded Christian nationalist policy groups.
After a lively lunch chatting with faculty and a PhD student, and a UW group photo, we joined for the future CRA leaders lightning talks (of which I was one). The group was full of impressive faculty in not-quite-chair roles, and we had a chance to share research, teaching, and service that excited us, 2 minutes at a time. I talked about Wordplay, STEP CS, Teaching Accessible Computing, and my emerging plans for professional learning about equitable teaching at the UW iSchool. Afterwards, it was great fun to connect with the other leaders and learn about their contexts.
Immediately after was some “not” working time — get it, networking, but not working? I decided to go on a short walk to an observation deck with a group. I’m definitely fit enough to have done the advanced hike that was 6 miles round trip, but I didn’t bring the shoes for it, was feeling the limits of altitude sickness, and didn’t want to pass out on a mountain. (I’ve done that once already, on Mauna Kea, and it was not fun!) I had some good conversations about interdisciplinarity, intellectual humility, entrepreneurship, and Portland. (I outed myself as a Portland weirdo in my lightning talk). I came back with plenty of time to cool down, manage asthma from the wildfire smoke, and sit quietly in the atrium to do some reading and writing.
At dinner time, we had a talk from Peter Harsha, CRA’s closest thing to a lobbyist. He talked about four major areas of policy. The first was research security, which stems from a lot of unfounded xenophobia about China. CRA has been trying to ensure that policies are based on actual risks, not perceived ones. The second was mis- and disinformation, where the House Judiciary Committee has been trying to generate outrage by attacking the researchers and tech companies who study it. CRA has no easy way to stop these attacks, but it has helped coordinate legal cost funds through foundations and defended the value of the research, especially from a national security perspective. The third was CRA’s monitoring of anti-DEI state legislation, which has directly impacted the ability of computing units to engage students in learning, and CRA’s ability to run its own programs. The fourth and final topic was the next possible administrations. A Harris administration would bring new and unknown priorities, and Trump’s Project 2025 would dismantle the Department of Education, narrow what is funded in research, eliminate student research visas, eliminate tenure, and more. He talked about CRA’s quadrennial papers, which are 2–4 page whitepapers that inform new administrations about computing priorities.
After the talk, there were card games and social tables for an evening of networking. I played Diana Franklin’s fun new quantum computing themed variant on Exploding Kittens … with Diana!
Thursday: Policy, panels, processors, oh my
In the morning, we started with a session on the Biden-Harris administration’s actions on AI and workforce. Deirdre Mulligan spoke in her role as Principal Deputy U.S. Chief Technology Officer and the Director of the National Artificial Intelligence Initiative Office at the White House Office of Science and Technology Policy. She began by acknowledging the “obnoxiously long title”, but contrasted that with just how much it matters that the science and technology advisor sits directly in the President’s cabinet. Deirdre covers accessibility, broadband, and AI, while others on her team cover climate, environment, and other areas of science. Her role is to advise the President on key issues and coordinate government policies.
She focused her talk on the Biden administration’s focus on ensuring AI is harnessed in equitable, democratic ways that support the public interest. The focal point was the AI executive order, which focuses on regulatory standards, privacy, and American innovation. She emphasized two parts of it:
- One directive provided guidance on how federal government agencies use AI. It establishes risk management practices around rights and safety, such as oversight of biometric use, documentation of the intended purposes of AI systems, modeling of benefits and risks, evaluation of data quality, operational testing requirements, and more. It is almost certainly the most rigorous governance framework in the world, and directly informed by decades of research on AI bias, privacy, and security.
- Another directive focused on an AI talent search for experts who want to engage in public service. This includes data scientists and engineers, but also AI ethicists, product managers, and more. They’re recruiting through the U.S. Digital Service, the Presidential Innovation Fellowship, the U.S. Digital Corps, and more.
Her ask for the community was a unique kind of public service: joining the government for a period of time to help inform practice. She talked about sabbatical terms, for example, as a sweet spot for engagement. I asked a question about how to make the experiences of those who have engaged more visible to faculty and students, and she offered some suggestions about tech-to-gov career fairs and the need for more storytelling from people in government.
Tracy Camp (CRA Executive Director) recapped some of the emerging tasks for leaders from the conversation:
- Encourage your students to look at ai.gov for jobs
- Host a government career fair or participate in one
- Submit to the NSF Civic program and a new DoD program
- Donate to efforts to change the culture of public service
- Join CRA roundtables about changing the culture of public service
- Think about externships and experiential learning for government service
After Deirdre’s talk, Fernando Pereira (Google) talked about large language models, essentially giving a primer on how they work and the history of discoveries that enabled them. He began with an analogy from Darwin: “He who understands baboon would do more towards metaphysics than Locke”, suggesting that making sense of what these models are may reveal deeper insights about humanity. (Insert social science red flags here). He talked about language models as an information retrieval system that finds likely text that could exist based on text that does exist, using poetry and rhyming as an example. He talked about knowledge, and whether models know things, or just retrieve things. He didn’t engage with the century of research on what knowledge is in information science; rather, he defined knowledge as “islands” of information that are not necessarily explicitly connected, but can be through prompts. He said that his experience is that models of any size have this property of disconnected constellations of likelihood.
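To make his framing concrete, here’s a toy sketch of my own (not his code, and vastly simpler than a transformer): even a trivial bigram model “writes” purely by retrieving likely continuations of text it has already seen.

```python
# A toy "language model": next-token prediction from bigram counts.
# Illustrative only; real LLMs condition on long contexts with
# billions of parameters, but the mechanism is still likelihood.
from collections import Counter, defaultdict
import random

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample likely continuations; 'knowledge' here is just co-occurrence."""
    words = [start]
    for _ in range(length):
        followers = bigrams[words[-1]]
        if not followers:
            break
        choices, counts = zip(*followers.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the dog sat on the mat . the cat"
```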
A third example he raised was classification of things (once again not leveraging the century of knowledge about classification in information science). He demonstrated how models classified things differently depending on subtleties of the prompt. He moved on to argue that social psychologists have demonstrated that humans exhibit the same behavior, citing the books The Enigma of Reason, Language vs. Reality, and The Instruction of Imagination. (He was careful to say that the behavior is not equivalent, but parallel.) He went on to talk about other tasks as well, including transformation and reasoning, and how fickle model performance is on these human tasks because of acute sensitivity to language. He explained that there’s a fundamental reason for this: language transformer models essentially cannot do function composition reliably.
His core point in all of this was that “reasoning” through language via likelihood isn’t particularly reliable. Neither is human reasoning, though it isn’t wrong in such unpredictable ways. He postulated that this was because not all of human knowledge is written down. (I have other explanations: human reasoning is not a statistical model, even though it can be modeled with one). Either way, the implications are clear: language models are not problem-solving tools, they are pattern recognizers, and they should not be used for problem solving purposes in isolation.
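To unpack the composition claim with an example of my own (not one from the talk): composing two functions means carrying an exact intermediate result from one step to the next, which a program does by construction and likelihood-based text prediction does not guarantee.

```python
# What "function composition" demands: exact intermediate results.
def f(x: int) -> int:
    return x + 3

def g(x: int) -> int:
    return x * 2

# A program composes exactly: g(f(5)) = g(8) = 16, every time.
assert g(f(5)) == 16

# A language model instead predicts likely text for a prompt like this,
# with no mechanism guaranteeing the implicit middle step f(5) = 8;
# any error there compounds, which is one reason multi-step
# "reasoning" through likelihood is unreliable.
prompt = "If f(x) = x + 3 and g(x) = x * 2, then g(f(5)) = "
print(prompt + str(g(f(5))))  # the ground truth a model must match
```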
After the break were some parallel sessions. I went to the session on the future of graduate education. Gillian Hayes facilitated a panel with Alexander Wolf (UC Santa Cruz), Emily Miller (Association of American Universities), Steve Swanson (UC San Diego), and Kinnis Gosha (Morehouse College). The panel largely focused on coming challenges:
- Who is going to pay for doctoral education? Rising costs of education and housing, food insecurity, and who can afford to participate are huge questions.
- What talent are units going to need to support research? Complex teams of graduate students, postdocs, staff, and more may require a reimagining of academic structures.
- Should there be remote PhD programs? If so, how do we do it well, and if not, how do we stop it? What are the risks and benefits of online programs that open access, but frame education as training instead of transformation?
- Will there be graduate students from China in the future? This depends heavily on the next administration, the culture of supporting students from other countries, and the absolute chaos of the U.S. immigration system. Relatedly, are there other parts of the world that might want to engage in doctoral education in North America?
- How will we entice students to go into research when industry is so aggressive at creating a culture that only legitimizes for-profit careers? We can’t blame students for seeking economic security, but if we can’t provide it, will there be doctoral students?
- How will unions change the relationship between graduate students and faculty? Unionization is a rational response to cost of living increases, but there’s little sense on the back end of where increased funding would come from. (See bullet 1). As the relationship between students and faculty becomes more transactional, will relational advising be possible?
- How will the much more dire funding realities of the rest of arts, humanities, and social sciences impact computing’s ability to center humanist and social perspectives on computing? The reality of the cost structure and job market is threatening interdisciplinarity.
After the prepared questions covering the scope above, there was a good series of audience questions about other structures for graduate education, naming the values that drive our education, and the immense inequities in access to graduate education. I asked a question about the diversity of supports for scholarship in the history of society (e.g., churches, patrons, industry, government), and how we might imagine the future of scholarly funding, rather than react by defending the status quo. There were some good answers about where the money is (e.g., industry). I liked Emily’s answer about the importance of communicating the purpose of graduate education in positive ways. If I were on the panel, I would have shared a vision that all future educators’ education should be fully subsidized by public funding (K-12 and doctoral education) if we want future capacity to educate the public.
After a great lunch talking about faculty recruiting in rural communities, sabbaticals, and mystery food, I went to the session on accessibility and generative AI. Richard gave a broad overview of disability in CS, accessibility research, and departmental plans mentioning disability. The panel consisted of Raja Kushalnagar (Gallaudet University), Cecilia Aragon (University of Washington), Dhruv Jain (University of Michigan), and Cynthia Bennett (Google). They talked about the many ways that ableism is woven through higher education in undergraduate, graduate, faculty, and staff experiences, and the burden of advocacy that people with disabilities carry because departments do not invest in equity around ability diversity.
Kate Glazko (University of Washington) then talked about generative AI bias, giving examples of representation bias, inaccurate prosthetic uses, stereotyping about ability, and more. She also reported research that found that 1) LLMs show clear disability bias, 2) it is possible to customize LLMs to remove bias, but only to a limited extent, 3) generative AI can be a more accessible tool for prototyping, 4) it can be terrible at text summarization for people with brain fog, 5) it can be helpful for reducing anxiety about interpersonal communication for some autistic people, and 6) it can be horrible for generating accessible data visualizations.
My last parallel session of the day was the future of computing education, which featured Adrienne Decker (University at Buffalo), Diana Franklin (University of Chicago), Leo Porter (University of California San Diego), Alfred Spector (Massachusetts Institute of Technology), and Mark Weiss (Florida International University).
- Adrienne and Mark talked about the future of computing workshop and its outcomes, which included a call to substantially reimagine curricula around equity and justice.
- Alfred talked about aspiring to teach “apolitical” ideas of computing in partnership with other disciplines to teach the political parts of computing.
- Leo talked about reimagining CS courses with generative AI, his book Learn AI-Assisted Python Programming, and studies of student learning and teaching perceptions. His big point was that gen AI can easily solve introductory programming assignments and exam items.
- Diana talked about quantum computing and examples of its possible applications, such as cryptography, optimal routing, and molecular simulation, and the different ways of teaching it at the K-12 and college level.
- Ran Libeskind-Hadas, the facilitator, talked about X+CS, embedding computing in a natural sciences curriculum called Integrated Sciences at Claremont McKenna College. Students gain computing literacy from day one, but always framed around contemporary questions in the natural sciences.
Questions were abundant. A lot focused on strategies for change, conflicts between different visions of change, the importance of curricular flexibility, and the problem of recalcitrant senior faculty.
After a nice networking session with the CRA Board and other future CRA leaders, it was time for one more dinner, with Bill Dally (NVIDIA) speaking about the hardware side of deep learning and language models. He dove deep into GPU hardware designs and how they enabled such large models. It was fun to hear how excited he was about all of the innovations necessary to get repeated increases in speed, but definitely not my jam. I thought the most interesting thing he said was an allusion to the geopolitical dynamics of NVIDIA’s intimate partnership with TSMC in Taiwan, and the intellectual partnership between data curators, software, and hardware. But he went on, and on, deeper into the weeds about number representation. I’m not one to yuck someone’s yum, and many in the audience seemed captivated, so I hope it resonated with others. All that said, it was a depressing way to end a provocative three days about disability justice and policy, listening to a representative of one of the most valuable companies in the world talk about how it will continue to absorb more wealth while increasing the warming of the planet.
That was until the first question demanded an explanation for why NVIDIA was not taking leadership on its contributions to climate change. It was by far the highlight of the evening, watching a brief two-minute spar between an attendee who would not accept a deflection and a speaker who only had corporate equivocations to offer.
Reflection
Coming to Snowbird again, and to Utah, felt like closing a door and opening another. It was closure on a muted part of my life before my gender transition, and an opening of being proudly out amongst a small but powerful community of leaders. It was closure on a junior period of my life, where I felt like I was always looking up to other leaders, and an opening of a senior phase where I felt like I was amongst peers. It was a closure of a time in life where I could freely explore the world, and an opening of a time where I have to think carefully about where I travel, and what rights I will lose when I cross an imaginary geographical line. But it was also a closure of a period of computing that uncritically obsessed over power and dominance, and an opening of a period that leads with humility, questions, and a desire for partnership.
All of these transitions, for me, draw a sharp contrast between the past and the present. This is a time, and this was a meeting, that makes sharp the lines around which the coming debates and battles will unfold: will we focus on the climate, or ignore it? Will we double down on generative AI hype, or draw nuance around its limitations? Will North America be a place that loves and supports everyone, or draws hard lines around who gets to be free? Computing, as it has always been in my life, will be at the center of much of this, whether the world likes it or not. And I will keep finding my way through it, as a leader, and as a person.