Teaching Social Sciences vs. Natural Sciences, or Where Feynman Might Not Work.

Henry Kim
Aug 12, 2017


Analogies are powerful things: they are, in a sense, at the core of the so-called Feynman method. The requisite steps are to pretend that you’re teaching a topic to students, to keep going back to the literature when and if things don’t make sense, to simplify whenever possible, and to use analogies to explain concepts.

Long-held words of wisdom on political campaigns are quite the opposite of the Feynman method. I’ve noted before my favorite quote by Sam Popkin on the art of campaigning: “if you explain, you lose.” Underlying this quote is the realization that, on matters of politics, society, culture, or any other topic of the social sciences, people believe that they already have the essential understanding of how things work. Unlike in the sciences, they don’t need to be given an explanation of how things are — they already “know.” They only need to be reminded of why and how what they already believe is right and that this should lead them to make certain choices. The catch, of course, is that there are many issues where existing beliefs and values can lead to different choices — as Tip O’Neill mentioned in his memoirs, economic aid to communist-era Poland could be explained to Irish and Italian Catholic voters in Massachusetts either as a vile act supporting a godless communist country or as help for fellow Catholics suffering under difficult conditions. Same values, same cultural beliefs, completely opposite choices, depending on how those choices are structured and presented. (This is not necessarily different even for allegedly more sophisticated voters: O’Neill’s memoirs are full of instances where he worked out deals, extracting supporting votes from members of Congress on matters they (mildly) opposed in return for promises of support on issues in which they were (much more) interested, along with other subterfuge. The lack of a clear one-to-one correspondence between “beliefs” and “actions,” especially when “politics” is working properly, is precisely why one should be skeptical of claims that one could simply crunch voting data, whether from the electorate or from legislatures, and discern the underlying “ideologies,” unless politics has so utterly failed that there is no room left for the wheeling and dealing that introduces the noise.)

Note that the attitude of “politics” runs precisely opposite to that of a “science.” The audience is not looking to learn about things from the “politicians.” The “politicians” are not trying to teach the voters. They are simply finding ways in which they can, essentially, help each other within the framework of the values and beliefs that they already hold. Or, in other words, the only people who need to do any kind of learning are the politicians, learning about the values and beliefs of the voters, and only so that they can talk to them better — that is, if they find it necessary. From the Feynman perspective, the ordering of the process is exactly backwards: analogies are already everywhere in the political universe (or indeed, in any other explanation of social phenomena). The point of an analogy is to make a complex and unfamiliar topic approachable by invoking something that the audience is already somewhat familiar with. Subatomic particles are completely alien to the audience, but the audience might already know something about the planets or the family. So drawing analogies between subatomic particles and the planetary system might give them a sense of familiarity with the concept that they might not get from the abstraction alone. This works because most people do not naturally think of subatomic particles as being like planets to begin with. But most people already think of social, economic, or political matters through analogies to familial and other social relationships in which they are already deeply embedded. (A point frequently raised by George Lakoff, when he’s not too busy doing political advocacy. If you take away the references to liberals and conservatives, the point of his whole book is that different groups of people are characterized by the kinds of familiar analogies they use to make sense of politics and society.) The central problem in explaining political, social, and cultural concepts is not to make the abstract familiar to the audience, but exactly the opposite — to show that politics, the economy, and society do not actually work like the social relationships that the audiences are already familiar with, and that they need to think about them more abstractly.

Successful “politicians,” whether consciously or not, understand this. They do not try to “teach” their audiences things that they do not already “know.” They remind them of the things that they already believe, only to credibly lead them on to suggest that “…therefore, you should choose X.” Good scientists often make lousy politicians, on the other hand: a good scientist-politician addressing a group of creationists would strive to find some rhetorical tricks that start from the premise that they are all believers in some core set of principles, and therefore, through steps A, B, and C, each of which they all agree on, they should all accept that natural selection is how ecosystems change. The steps A, B, and C will be hard to find, and I am not going to pretend that I know what they are, or even suggest that they should be the same for all subsets of creationists — but that’s why you should study the audience studiously. But most science advocates do not study their audiences studiously — especially the potential audiences that do not normally listen to them. Instead, they are far more likely to abuse these unfriendly audiences with invectives and insults. Some chance that they’ll get through.

This is where the social sciences should come in, and why the “science” part becomes much harder. Different groups of people will have different sets of beliefs, values, and norms. The most interesting peoples will not share the beliefs of the scientist. Their choices and behavior, however, will be the product of how they understand reality — the information disseminated by reality, refracted through the prism of their beliefs, which will not be the same as the view through your own prism. The point of the analysis is, if group X thinks that 1+1=3, to ask why and how they came to believe that 1+1=3, under what conditions, and based on what observations, not to insist that X is irrational and foolish because 1+1=2 is obviously true.

NB: This is an interesting lacuna in the otherwise insightful 1984. Why aren’t you free to think that 2+2=5? It is only Big Brother who insists that 2+2=5, and not being able to say something that is obviously true to both the characters and the readers, that 2+2=4, is the oppression. If Big Brother insisted that 2+2=4, and enforced this dictate with the same instruments of terror, would that be any less of an oppression, because you agree with that view? Dostoevsky, writing decades before Orwell, had a different perspective:

“Once it’s proved to you that, essentially speaking, one little drop of your own fat should be dearer to you than a hundred thousand of your fellow men, and that in this result all so-called virtues and obligations and other ravings and prejudices will finally be resolved, go ahead and accept it, there’s nothing to be done, because two times two is — mathematics. Try objecting to that.”

The task of a social scientist comes closer to Dostoevsky than to Orwell. The premise must be that 2x2=4 is not an intuitively obvious thing — it might be if we are mathematicians. If we are studying a group of people to whom 2x2=5, our duty is to understand why they should arrive at that conclusion, and if we are to teach about this, we need to be able to teach people who believe that 2x2=4 that it is not simply irrationality and foolishness that leads some people to think 2x2=5, but a different kind of logical process which is alien to the 2x2=4 tribe — without actually claiming that 2x2=5, because that’s mathematics, not social science, and we can’t object to that. Successfully achieving this still requires going through the first three steps of the Feynman process: it still requires developing a model that is simple and robust, it still requires constantly going back to the literature when things don’t make sense, and it still requires finding explanations that make sense to those who do not share our presumptions about how societies work. But, precisely in order to arrive at the simple model that strips away those presumptions, facile analogies need to be thrown out. Common commentaries on politics, as well as our own preconceptions (especially when we are factually right, so to speak), are the biggest obstacles to this.

In a sense, this is not really a critique of the Feynman process: Feynman, I’d imagine, would agree every step of the way, as I am aware that he was most annoyed by the use of facile, misleading, and foolish analogies in the teaching of physics and math. (He had some choice commentaries on K-12 math and science textbooks for this reason.) It is more a comment on the dangers of misusing analogies: analogies may be how we learn, but they are also how we learn the wrong things. Facile and misleading analogies are destructive to successful learning.

PS. There is an interesting analogy between my thinking on this topic and what Orwell wrote in 1945, in an essay aptly titled “What is Science?” The following passage deserves attention:

This confusion of meaning, which is partly deliberate, has in it a great danger. Implied in the demand for more scientific education is the claim that if one has been scientifically trained one’s approach to all subjects will be more intelligent than if one had had no such training. A scientist’s political opinions, it is assumed, his opinions on sociological questions, on morals, on philosophy, perhaps even on the arts, will be more valuable than those of a layman.

Orwell is writing about the opinions of “scientists” outside their fields of expertise, and asking whether they are necessarily “better,” by virtue of their scientific training in the narrow sense, than the layman’s. I am raising a version of the same question: whether the layman’s opinion, by virtue of being “factually wrong,” is necessarily worse than the scientist’s. Let us remember that what a layman thinks about evolution, for example, is, in the end, his opinion, reflecting his values and beliefs but not necessarily the product of a carefully reasoned and thought-through process. Nitpicking over people’s opinions is simply asking for a fight. To teach a creationist evolution is not to force a change in his opinion, but to ask him to acquire a scientific habit of mind and to subject his beliefs to the test, without prejudice about the conclusions that he might reach. Or, as Orwell put it:

…scientific education ought to mean the implanting of a rational, sceptical, experimental habit of mind. It ought to mean acquiring a method — a method that can be used on any problem that one meets — and not simply piling up a lot of facts.

But at the same time, it does pose a challenge to the “scientists,” defined in the narrow sense. So you believe X about the people whom you do not understand very well. Are you sure about this? What do you (really) know about the way the other people(s) tick? Unfortunately, this is a dangerous question: it asks people to seriously consider how others can really believe that 2+2=5, or the equivalent statements about contemporary political and social matters, all of which are taken to be “intuitively obvious” — without delving into simplistic and caricatured answers. The problem with social science is that the practitioner and the scientist need to do exactly opposite things — a political scientist cannot act and think like a politician, an economist cannot think like a corporate executive, and so forth, far more so than, say, an engineer relative to a scientist. I suppose there is huge value in a political scientist who can act like a politician but think like a scientist (or, indeed, like any scientist), but that is a challenging goal to accomplish.

PPS. I think another way of describing this is that, in a sense, topics of social science are more easily understood intuitively but not at the conceptual level, while the opposite is true of the natural sciences. In the end, nobody sticks with analogies beyond the basic levels of teaching the natural sciences. The analogy between the planetary system and subatomic particles is made at very elementary levels, but one would be crazy to stick to it after the students’ feet got wet, so to speak. The more abstract, complex concepts need to be introduced, one way or another, even if that means having to introduce other analogies that challenge the previous ones — in fact, beyond some level, there should be conflicting analogies. Logical contradictions, I firmly believe, prompt the search for an understanding, going back to the story of the blind men and the elephant. But for topics in the social sciences, unless we are dealing with autistic people, an intuitive understanding of how society operates, along with simple-to-simplistic analogies, already exists (and is often incomplete, wrong, or misleading)! They don’t need a planetary analogy because most students already think the social-science equivalents of subatomic particles are like planets. They need more abstraction, or at least a departure from the common analogies, not further reinforcement of things that they already think they know.
