There is no such thing as a computational person

We need to get past the essentialist mindset about coding

Ashley Juavinett
The Spike
Aug 29, 2018

Like these plants, you too can grow on keyboards. (Photo: Ashley Juavinett)

When our summer intern Daniel first joined the lab, he seemed terrified of MATLAB.

And I can’t blame him — encountering a programming language in the wild is intimidating, especially when you sign up for a summer research program and they ask you to declare upfront whether you’re “computational” or not. Daniel is categorically not.

Daniel signed up for a “biological” neuroscience lab, so he was surprised to encounter code in almost every aspect of our work. We use it to collect raw data, present stimuli, and analyze gigabytes of output. Collectively, our lab codes in many languages, and most of us are proficient in two or three. We’re not labeled as a computational lab, but each of us has had to learn how to code and employ math for one reason or another.

So, I encouraged him to give coding a shot. I borrowed a copy of MATLAB for Neuroscientists, gave him a fairly straightforward task to code, and let him loose. I wasn’t really sure how to train him, but I knew one thing for sure: he could learn how to code if he wanted to.
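
For readers curious what a first task like that might look like, here is a minimal MATLAB sketch in the same spirit (this is not Daniel’s actual assignment; the spike times and stimulus onsets below are made-up example data). It computes a peri-stimulus time histogram, one of the bread-and-butter analyses in systems neuroscience:

```matlab
% Minimal sketch: compute a peri-stimulus time histogram (PSTH)
% from a vector of spike times. All data here are made up.

spike_times = sort(rand(1, 500) * 10);   % 500 fake spikes across 10 s
stim_onsets = 1:2:9;                     % fake stimulus onset times (s)

win    = [-0.5, 1.0];                    % window around each onset (s)
binsz  = 0.05;                           % 50 ms bins
edges  = win(1):binsz:win(2);
counts = zeros(1, numel(edges) - 1);

for onset = stim_onsets
    aligned = spike_times - onset;       % align spikes to this onset
    counts  = counts + histcounts(aligned, edges);
end

rate = counts / (numel(stim_onsets) * binsz);  % average spikes per second
bar(edges(1:end-1), rate, 'histc');
xlabel('Time from stimulus onset (s)');
ylabel('Firing rate (spikes/s)');
```

Nothing fancy: a loop, a histogram function, and a plot. But wrangling even a toy dataset like this builds the muscle memory that real analyses rest on.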

Coding on the fly

Like many neuroscientists, I never had a formal coding education. I didn’t take any computer science courses; they were encouraged but not required (and even now, only a quarter of K-12 schools offer coding classes). I’ve never had to teach anybody how to code, and I have only recently begun to consider myself proficient at it.

“How did you learn how to code?” Daniel asks me.

“Truthfully, through trial and error. And a lot of Google searches.” Oh, and I probably bugged my gracious labmates quite a bit.

I’ve long thought of myself as a Math & Science Person, so I was open to the idea that I could also be a Coding & Computation Person. We tend to lump all of these skillsets into one larger (STEM) identity: the person who flies through rows of multiplication problems in grade school, excels on quantitative sections of standardized tests, and eventually becomes a coder.

However, not everyone in neuroscience research identifies as math-y. In fact, many have internalized the idea that they’re “not a math person,” a phrase that is loaded with deep-seated beliefs about how intellectual ability works. More complex computational approaches certainly benefit from advanced mathematics backgrounds, but math experience doesn’t mean you automatically know how to code. Still, because we so strongly associate math, coding, and computation, many folks assume that coding is not for them.

Mindsets and labels change how we learn

For the past two decades, Carol Dweck and colleagues have been developing a body of research that encourages educators and parents to reconsider the way that we teach children, especially in mathematics. If you ask students and educators to reflect on what makes people good at math, you’ll find that some believe more strongly in fixed or innate abilities, whereas others believe that intellectual abilities are malleable and can be learned. Dweck coined the terms “fixed” and “growth” mindsets to describe our implicit beliefs about intellectual ability. The punchline for math and science education is startling: students with growth mindsets perform better over time (even in organic chemistry).

The story for computer science is similar. Our beliefs about whether computational abilities are fixed or malleable impact our sense of belonging, how we respond to difficulties, and ultimately, our achievement.

In the U.S., we have strong cultural ideas about what it means to be a person who can code. Fresh summer interns are picking up on more than just experimental skills — they’re also learning our cultural norms. When we ask students to declare themselves as computational or not, the underlying message is that these skillsets are innate, and only some students are computational.

Implicit beliefs about computational abilities also intersect with biases about race and gender. Well-meaning people still hold implicit biases about gender, race, and math; I’ll be the first to admit that I’m not completely rid of these beliefs myself, but I’m actively working against them. Worse than unintentionally weighty survey questions or lab descriptors are the more explicit beliefs that some people are mathematically inclined and others aren’t. Even if the predominant and politically correct message is that men and women are equal, there are still outspoken sexist Google employees and racist Nobel Prize winners. Women and minorities especially take these messages to heart, and the effects can be incredibly damaging.

Early lab experiences are critical windows for young scientists to gain insight into our field and envision their place in it. During this period especially, we should be spreading the notion that everyone can develop coding and computational skillsets (or any other skills, for that matter). These types of positive, growth-oriented messages are impactful, especially for first-generation and minority students.

Subfield names can be misleading

I’m certainly not trying to gloss over the interesting subfields of neuroscience. “Computational” can be a meaningful descriptor for labs that do not conduct experimental work. But it doesn’t necessarily work the other way: many molecular biology labs are turning to larger-scale analyses across proteomes or genomes, and cognitive neuroscience labs have long been grounded in computational approaches. And systems neuroscience is increasingly moving into the world of big data as we obtain data from more and more neurons and brain regions.

Perhaps “theoretical” neuroscience is a better term for some labs, and less loaded with preconceived notions about who is mathematical, and who isn’t. Still, I understand that “computational” has become a primary way to describe research that involves modeling and more complex data analyses, so it may be too late to redefine an entire field. We should at least be cautious when we label labs as computational or not when we’re inviting students in — sometimes it’s just plain misleading.

Regardless, if you’re new to neuroscience research, know this: you can be (and may need to be) a neuroscientist who uses code, math, and statistics to study the brain. Coding is a skill, just like learning how to play a sport or an instrument. You might feel like you’re making more mistakes than everyone else; you’re not. When I first encountered a coding challenge in graduate school, I was greeted with rows and rows of glaring red error messages. Years later, I still get errors, but I furrow my brow much, much less.

In thinking about this article, I sat down with a few of the summer interns to chat about academia and ask them their thoughts about the supposed divide between computation and biology. A few had crossed over in their summer internship, either expectedly or unexpectedly. When we discussed Daniel’s experience of encountering lines and lines of code in a “biology” lab, they all nodded in agreement — they didn’t see a firm divide between biological and computational labs. And they’re right, there isn’t. When we focus on the research itself, the boundary crumbles.

Daniel’s final presentation for the CSHL Undergraduate Research Program (Photo credit: Anne Urai)

Daniel is a first-generation college student from Colombia. He’s overcome many obstacles to be here, and has made amazing strides this summer in MATLAB. During his final presentation, he discussed his analysis of gigabytes of behavioral and electrophysiology data, which he performed using his own code. He’s not a computational summer intern, but he sure looks like one.

Like what you read here?

This article is a piece of my book, So You Want to Be a Neuroscientist? (Columbia University Press, 2020). The goal is to offer aspiring neuroscientists honest, informative insight into our field, as well as into the education and careers within it. You can order the book on Amazon (paid link, #ad) or through Columbia University Press.

Twitter: @analog_ashley
