Open any introductory textbook in psychology or neuroscience, and you will find the adage: the mind is what the brain does.
But in truth, there are still fundamental disagreements in the scientific community about how to study the mind and brain, and how to know when we’ve understood something about them.
Here’s what I mean.
Imagine that you are tasked with studying something really complicated: a brain, a computer program, or an anthill. The question is what level of analysis to use if you want to understand the system. For the brain: at the level of neurons, or regions, or networks, or the computations and algorithms it is carrying out? For the program: in terms of source or machine code, or the compiler that converts the former to the latter? …