Social Sciences Will Never Be Physics, But That Is Okay

An academic theme that keeps rearing its head on the web is the disparagement of all attempts to quantify the social sciences. Economics has been the favorite whipping boy, but no field goes unscathed. These arguments usually sound something like: “people aren’t rational automatons or robots”; “human behavior is erratic and won’t conform to equations”; or the apocryphal “lies, damned lies, and statistics.” This recent piece by Michael Lind takes it to another level:

“Let’s just abolish them all and go back to humanities”

In this series I’ll explore why the study of economics is not only viable in an irrational world but, in fact, critical. In this first segment, though, I’ll just tackle the fear of math and statistics that generally underlies these criticisms of the scientific approach.

The most important thing to understand about math and statistics is that they are simply a language for communicating complex ideas in universal terms. Most critics of using math just don’t speak the language. It is only natural to feel that a lifetime of study in a field can only truly be captured in extensive literature and a diverse academic vocabulary (probably English). But what will you do when the language itself is imprecise? Or when the best peer reviewers speak Mandarin? In an age of globalization, the surest way to bring all of our research prowess to bear on a problem is to put it into symbols and numbers that transcend spoken languages. Economics, for all its flaws, has joined the natural sciences by standardizing its language and throwing open the doors to the best minds from around the world, and it shows in the diversity of its graduate programs. “Mathification” isn’t just about generating the best output; it is also about bringing in the best people.

Adopting a quantitative approach does not mean accepting less robust theories and models just to attract talent, either. Treating the study of human behavior like a hard science brings more to the table. While it is easy to fret about the unreliability of statistical models in the social sciences, it is important to remember what the scientific method is actually good for: falsification. It is fun to accuse sociologists and economists of “physics envy,” but physics is continually revising itself as well. Science doesn’t prove things; it disproves them. It is this falsification that the modern social sciences benefit from the most.

And have you tried disproving your friend’s subjective arguments? Their elaborate theories on relationships or politics? Those arguments aren’t easy to win. When people disagree subjectively, they can indefinitely maintain differing views in complete confidence that their opponent is wrong and hopelessly misguided. Taking the objective approach challenges us to do better. Express your hypothesis in an equation, run an experiment or gather the data for a statistical analysis, and test your hypothesis. If the data show no effect or the opposite, you are wrong. But that is fine. In our human obsession with being right, we forget how important it is to know what is NOT true. We learn at least as much from discovering we are wrong, and we pick up some humility along the way.
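To make that concrete, here is a minimal sketch in Python of what “test your hypothesis” can look like in practice. The scenario (a job-training program and hourly wages) and all of the numbers are invented for illustration; the point is simply that the procedure produces a result we can be wrong about.

```python
# A minimal sketch of falsification in practice (illustrative, invented data).
# Hypothesis: a hypothetical job-training program raises hourly wages.
# If the data show no effect, or the opposite effect, we have to admit we were wrong.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical samples: wages for a treated group and a control group.
treated = rng.normal(loc=21.0, scale=4.0, size=200)  # took the program
control = rng.normal(loc=20.0, scale=4.0, size=200)  # did not

# Two-sample t-test: could a gap this large plausibly be chance?
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"difference in means: {treated.mean() - control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A large p-value means the data fail to support the hypothesis, and that is
# exactly the point: we learn something by finding out we might be wrong.
```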

Of course, sometimes we think we get it right! p < .05, R squared above 90%, oh happy day, break out the F statistic, I’ve got degrees of freedom to spare!
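For readers who want to see where those numbers come from, here is a tiny toy regression in Python (using statsmodels on invented data). It is a sketch of the outputs being celebrated above, not a real study.

```python
# Toy regression showing where the celebrated statistics come from (invented data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

x = rng.uniform(0, 10, size=100)                  # a hypothetical explanatory variable
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, size=100)  # outcome = signal + noise

X = sm.add_constant(x)    # add an intercept term
fit = sm.OLS(y, X).fit()  # ordinary least squares

print(f"R-squared:             {fit.rsquared:.2f}")
print(f"F-statistic:           {fit.fvalue:.1f} (p = {fit.f_pvalue:.3g})")
print(f"slope p-value:         {fit.pvalues[1]:.3g}")
print(f"residual deg. freedom: {fit.df_resid:.0f}")
```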

This is the point where a cynic could really lose all faith in statistics. Now you have a fancy model that predicts the future and deserves publication and fame and fortune. At least, that is what we all want to think. Sadly, research doesn’t work this way; all a significant result means is that we can’t disprove your hypothesis, so we will have to accept it for now (begrudgingly). I think this is truly where the rift forms between the geeks and the skeptics. It is hard to accept a thing as unknowable, even when we have a nice clear prediction in front of us. “When Are You Really Right After You Think You Are Right?” is a hard question and deserves its own follow-up piece. In the interest of brevity, let’s just summarize:

I am just as skeptical of geeks with predictive models as the Michael Lind crowd is, but I am equally skeptical of charlatans who insist that the only way to grasp the truth is to read their (long and expensive) book on the subject.

A scholar in any field needs to construct a position that can be concretely refuted in order to advance human knowledge in a meaningful way. Furthermore, when we are forced to put our theories into concise, systematic statements which can be shown to be true or false, it elucidates our own thinking and simplifies our task of writing for a broader audience and spreading knowledge. As we expose ourselves to the possibility of being definitively wrong, we offer future researchers the ability to move on and focus on newer and better explanations of the world.

Now you may be thinking I have sidestepped the issue. Sure, we all agree that it is nice to disprove simple, wrong ideas. But what about those complex worldviews which may be right but can’t be distilled into equations? Aren’t we sacrificing the beauty and complexity of the world to create false confidence in our models? To answer this question, we need to revisit my opening premise that math is merely a language and should struggle no more or less than any spoken language at describing the incredible and complex world we live in. If we are having a hard time describing what we see, perhaps we just need to broaden our vocabularies. The same concerns the non-mathy critics express are shared by the stat geeks; they just express them in different ways. So I’ll conclude this first segment of the series with the official EcCentric (copyright not quite pending) English-to-Geek dictionary.

This time is different/each situation is unique — out-of-sample forecast

You ignored the contrary evidence — sampling bias

This problem is too complex to model — large standard error, low R^2

Something deeper is going on and driving this — omitted variable bias

Neither explains the other, it is a delicate balance — endogeneity

This seems overly precise for such a challenging problem — overfitting the model

This ought to have happened… — ŷ (“y-hat”) or y′ (“y-prime”), the predicted value

…But it didn’t — error term

I doubt both of these happened together by chance — low p-value (usually below 0.05), high t or z-score

But that only matters when ___ is true — interaction variables

At some level, this will have a disproportionate effect — functional form
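To make a couple of those translations concrete, here is a small Python sketch (invented data again) of “overfitting the model” and why the “out-of-sample forecast” entry matters: a very flexible fit can look wonderful on the sample it was built on and then stumble on new data.

```python
# Sketch of two dictionary entries: "overfitting the model" and the
# "out-of-sample forecast" (invented data; the pattern is the point).
import numpy as np

rng = np.random.default_rng(2)

def make_data(n):
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)  # true curve plus noise
    return x, y

x_train, y_train = make_data(20)  # the sample the model is built on
x_test, y_test = make_data(20)    # new data: "this time is different"

for degree in (3, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    in_sample = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    out_sample = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree}: in-sample MSE {in_sample:.3f}, "
          f"out-of-sample MSE {out_sample:.3f}")

# The more flexible fit chases the noise: it tends to show a lower in-sample
# error but a worse out-of-sample forecast, which is what "overfitting" means.
```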

These translations are only a few examples, but it should be pretty clear that, if you know the lingo, all academics are trying to tackle the same problems. In the next segment I’ll dig a little deeper into when statistics are more likely to be useful in the study of human behavior, and when they tend to break down and we have to fall back on intellect and intuition.
