Published in Bits and Behavior

[Image: a blurry collection of unfocused pink lights amidst a background of bluish grey. Caption: What’s possible isn’t always clear, but we have to find out.]

Academia needs to dream bigger

When I was a child, my dreams were small: I wished my parents would stop hating each other, I wished we didn’t have to keep moving, I wished we could eat Taco Bell instead of canned soup. As I aged, these dreams grew, and I began to wish for independence, for safety, for a right to be myself. I imagine that my dreams weren’t much smaller or larger than most people’s: we have lives to live, after all, so it’s only appropriate that most of our dreams are about ourselves.

Working in research, then, was always a bit disorienting. How could I wake up in the morning as a grad student, wishing I had enough money to take my family on vacation, and then one hour later start wondering about the possible futures of interactive programming tools, or about the inner workings of our minds when reasoning about computation? Working at the deepest foundations of phenomena, or imagining the most distant futures, is just so fundamentally different from trying to secure our material worlds or social space for our identities.

But after doing this for 20 years, I can’t help but wonder whether we think big enough in academia. So many of the ideas I see, and work on myself, fit in a middle space: one abstract and distant enough from today’s realities to seem, to most, completely irrelevant at first glance, but also not nearly ambitious enough to fundamentally change our material realities.

I’ll criticize my own work to illustrate. Some of the work I’m best known for was my dissertation work on the Whyline, a tool that imagined debugging not as a process of guessing and checking for possible causes of a failure, but as working backwards from a question about a failure to its root causes. To many in computer science, I think this reimagining was profound: it showed a different way of relating to computers when we program them, more as something to interrogate, analyze, and study, like a conversation with a machine about itself, and less as a machine to disassemble.

And yet, that future I imagined as a graduate student wasn’t nearly profound enough. It primarily imagined the world exactly as it was and still is: full of developers fixing bugs and wanting to fix them faster, so that their companies could ship faster, win the market, and make everyone rich, consequences be damned. Had my vision come fully to pass, we would mostly live in the world we have today, with just a bit more software, shipped a bit faster, and developers’ lives a little easier. The scope of my dreams was limited to an industrialized vision of human existence dreamt hundreds of years earlier by the first capitalists.

But when I say we should dream bigger, I don’t just mean in a social or political sense. Many others in computing, disillusioned with the particular route that computing has taken since the 1950s, have been trying to find ways to imagine different paths for computing. That is quite a big dream: it argues that we could have had a different world, and we still could, if only we had the collective ambition to create it. But most of us do not dream that big.

And this is where we have to talk about our institutions, and how they have evolved to limit our dreams. In my experience, in nearly every context, our institutions encourage, incentivize, and reward us to take risks, but just small ones. Ones that are just big enough to justify being called research, but not so big that funding agencies, academic departments, and doctoral students risk losing the resources that enable them to do research in the first place. A faculty member writing a grant proposal can’t dream too big, or their panelists (e.g., their peers at NSF) will say it is too risky and not worth the money. A peer reviewer of a publication has some incentive to celebrate big risks, but only if they are successful, and not if they have flaws; otherwise the reviewer risks being seen as incapable of seeing those flaws and compromising rigor. A doctoral student choosing a new project can dream, but only within the limits of what future faculty hiring committees might deem ambitious enough to bolster their reputation, yet not so ambitious that it dooms their ability to attract talent and raise funding.

All of these forces ultimately stem from a blending of two competing goals: one of progress and discovery, which necessarily requires taking risks, and one of accountability, where public and private funders want us to spend some money on discovery, but not too much, and to spend it wisely. Both of these seem essential. There will never be enough funding to take all risks, or even all the big ones, and humanity certainly shouldn’t spend all or even most of its money on the future. And researchers, without resource constraints, will generally want to take bigger risks than are possible within their resources.

And yet, despite these tensions and limitations, I think that those of us in academia must find ways to think bigger, even within our constraints. There are a few reasons for this. First, we can’t afford not to fundamentally change our material realities; with climate change promising to disrupt every facet of life, with democratic institutions fragmenting, with income inequality threatening peace, and with the ever-present racial, ethnic, gender, ability, and class oppression eating away at our collective ability to thrive, we can’t waste any more time polishing edges and refining long-accepted ideas. We have to forge ahead on different visions of the future; the public needs to be able to imagine them, and we need to be able to realize them. Only a fraction of our work in research is helping with these goals.

Second, I think academia itself is at risk if we don’t. The more we focus on near-term, incremental, largely feasible work, the more our institution looks like everything else in the world. There was a time when academia seemed to most people to be a place of the future. And even though I believe it still largely is, it doesn’t seem that way, because so much of our time and attention goes to being slightly more certain about the things that we largely already understand. I want the public to look at academia and regularly say, “I don’t know what that invention, idea, or social future is, or whether it is good, but I have never seen it before, and I want us to find out more.”

Some of those in computing might be thinking that industry has already created this, in the form of things like X (Google’s “moonshot” factory) or various other industry and not-for-profit R&D labs. Silicon Valley certainly seems to think it can disrupt academia and address all of its core problems. I’m quite fond of the spirit of these experiments, but they lose me in the details: X projects, for example, are famously only two years long, and the time horizon for impact is only 10 years. I don’t think that’s dreaming big enough; those are timelines optimized for changing markets. I want timelines optimized for changing human civilization. And I know industry isn’t going to stomach that, because it can’t ultimately draw the through-line to profit. Academia, with its firewall from capitalism just barely standing, can.

There are a lot of things we could do to try to reimagine academia in these directions.

I know, I know, all of this seems unlikely. The naysayers will say that easy change is hard, and hard change is impossible. But surely we can dream? And isn’t academia the place to do it?



This is the blog for the Code & Cognition lab, directed by professor Amy J. Ko, Ph.D. at the University of Washington. Here we reflect on our individual and collective struggle to understand computing and harness it for justice. See our work at

Amy J. Ko

Professor of programming + learning + design + justice at the University of Washington Information School. Trans; she/her. #BlackLivesMatter.