A sad-looking, robot-like face with a single eye, in black and white
DALL•E 2’s version of “a sad robot head without a body in a black void”. Credit: All of the people who have shared on the public internet ever.

That time I wanted to be an AI researcher

Amy J. Ko
Bits and Behavior

--

Since the age of 15, I have been fascinated by consciousness. I read everything I could about it, from popular books summarizing the science to non-fiction memoirs like Aldous Huxley’s The Doors of Perception. What struck me about the topic was how unexplainable it was: how could we feel existence so richly and convincingly? What was it that allowed an assemblage of half illusory perceptions to become the experience that seemed to be the very essence of our rationality, emotions, decisions, and behaviors?

In hindsight, I think that’s just how I processed adolescence. I couldn’t just handle being, because that would have required reckoning with the puberty I couldn’t block. And so I retreated into my mind, trying to understand what it was, why it made the thoughts it did. I found comfort in the idea that everything I felt could only be partially explained by science, and what was left was a mysterious leap into sentience.

At the same time, I’d become enthralled with bringing things to life on my computer. The quirky animations I made, the 3D worlds, the silly games, and the endless, unwinnable text adventures — these virtual realities were another escape from being. They helped me put my attention on things I could control, instead of things I couldn’t, like the testosterone irreversibly altering my vocal cords and bones. Code was something comparatively malleable; if I didn’t like how I’d made something, I could just make it a different way.

So when I found my way to college, and stumbled into human-computer interaction research about programming, my questions about intelligence lingered. While I helped run studies trying to help people find spreadsheet defects, and studied how people struggled at the intersection of programming and statistics, my wandering mind continued to wonder about the virtual world. What kinds of intelligence are possible inside a machine? Could it have consciousness? And if it did, would changing a line of code be the same as what puberty was doing to me, violently altering a being in a way it did not consent to?

As I began to think about doctoral studies, encouraged by my mentor in human-computer interaction (HCI), I also explored AI. I remember talking to one of our faculty at Oregon State about some of my ideas; he had taught the AI class I had taken. I told him how I was interested in embodied cognition, and the knowledge that comes from having a body and how it relates to the bodies and spaces around it. I had been reading books about the kinematics of human motion and wanted to create simulations in which artificial intelligences had bodies, and used them to reason about the world, building a kind of embodied common sense reasoning. I wondered what kinds of worlds we might build for them, what kinds of intelligence those worlds might help them learn, and what kinds of reasoning would be necessary to think spatially. Most of all, I wondered how much a body, in a spatial, social context, was an essential part of intelligence. I wanted to use AI simulations to answer this.

He told me that these ideas were ridiculous. That intelligence was either a matter of symbolic reasoning or a matter of prediction, and that bodies had nothing to do with it. He told me that I might be better off studying sports psychology if I wanted to know about bodies, or settling for my little user studies about spreadsheets. He recommended I stay far away from AI, and that if my interest was people, I should probably leave computer science and definitely not pursue a doctorate.

I was mature enough to know he was wrong. But I also took it as a strong signal that the world of AI seemed to have no interest in questions of existence or materiality. And that it wasn’t particularly concerned with asking new questions, but with answers. I also inferred that the field he represented didn’t seem particularly curious: it seemed to be a place where engineers went to optimize marionettes, rather than question reality, being, and meaning.

And so I chose HCI. It was a place that seemed endlessly curious about people, about their behavior, and about how the people and technology around them shape their behavior. It might not be a place that was particularly interested in the philosophical questions of consciousness, but it was a place that was deeply interested in how conscious beings interact with unconscious ones. That was good enough for me.

That was 25 years ago. I look back on those early days of my curiosity about the mind, and want to tell that teen: bodies do matter. Your body matters. You don’t need to make a mindless robot in a dark void to know that. You don’t need to be a mindless robot in a dark void to know that. You just need to see and feel and be embodied, and that will be proof enough that your mind and your matter are one and the same.

--

Amy J. Ko
Bits and Behavior

Professor, University of Washington iSchool (she/her). Code, learning, design, justice. Trans, queer, parent, and lover of learning.