Everyone Has a Worldview

Freisinnige Zeitung
Jan 20, 2018


I have written about what I call “worldviews”; see, for example, this post here. I think of them as intuitive panoramas of how the world works. They are made up of assumptions about what you think is the case, what the relevant entities are, and how they are related to each other at a given moment or as storylines over time. I will try to flesh this idea out below.

One misunderstanding might be that some of my comments on various writers or political movements could sound like this: “Gotcha, you have a worldview! And that’s why your argument is wrong.” But that is not what I mean. I don’t think there is anything inherently wrong with having a worldview. Everyone has one, and it cannot be otherwise, so it is beside the point to fault someone for it and to dismiss their arguments in this way.

In my view, it works the other way around. Arguments have to be evaluated on their own merits. If they are good, there is no reason to reject them because they were perhaps informed by some worldview. An extreme example would be a mathematical proof. Pointing out that the author was a Communist, bourgeois, American, a supporter of a minimum wage, a Buddhist, or whatever is simply irrelevant. It is like claiming that a mathematical proof was only meant ironically: if it works, it is still correct, no matter what else you have in mind.

However, if you find that an explicit argument is patently wrong and yet someone finds it self-evident, one possibility is that there is an implicit argument behind it that comes from a worldview. Even then, it does not have to be so. Suppose you find a gross mistake in a mathematical proof. In that case, you do not have to speculate on how it was motivated by intuitive thinking in the background. It is probably nothing but an outright mistake. Intuition may also play a role here because someone gets carried away with an argument and that’s why they miss some glaring problem with it. But mostly it is just a mistake.

The right way to go about such observations is this: You have to address the explicit argument first and show that it is wrong, perhaps even wrong in a way that is otherwise inexplicable. Only then can you wonder, as a second step, whether an underlying worldview might be behind the error. Often that is really the case. Some things seem too obvious to someone to be questioned, and then this can introduce false assumptions into an argument.

Another pitfall may be that an author perhaps even realizes that something is not correct with a proof, but is still so convinced it must work in some way that they do not bother to make a stringent argument and feel satisfied with something that an outside observer views as an obvious logical leap. All in all, my point about worldviews is a postmortem after something has gone wrong.

— — —

Let me now try to flesh out what I mean by a worldview, so this is not all vague talk. My understanding is that what is behind it is probably not one system, but many that are integrated. There are some very low-level functions, and their outputs then go into higher-level functions. They are part of our human nature. Some of it is pretty fixed, and some of it can also be more or less plastic.

Here are three examples:

(1) The auditory system is structured on different levels, the most basic of which is very old. We share it with many species. Basically, what it does is first acquire the data, the pressure waves that come in. Then those are processed in various ways. One step is to distinguish a background from sounds that may be important. The former might be some type of general noise, such as the sound of the wind or of the waves. This part is mostly blanked out early on. You have to focus on the background sounds deliberately to notice them.

What your auditory system is after are the sounds that are not part of the background noise. You try to identify where the sources are in space (better in the horizontal plane, worse in the vertical direction) and you track them over time. So, a bird sings in a forest at a close distance to your right. Or it might be a call from someone far away behind you. The tendency here is to interpret such sounds as coming from specific sources that have continuity over time. The bird might not sing at this very moment, the person might not be calling, but you make a mental note that they are still there. So when they continue with their sounds, you attribute the further information to them as it comes in. The sources are actors in a scene.

Basically, you create a mental representation of the sources that are around you. I would say that this explains many things about how we perceive music, a topic I will write about in further posts. We not only use auditory data to create this scene, but also data from other sources, e.g., from what we see or what we have as background information. You don’t have to start from scratch and infer that it is a bird or someone calling. Instead, you might already know you are in a forest and expect something like a bird. And you might also know that other hikers follow behind you, so it is easy to conclude who is calling without further analysis.

Integrating different senses can also help. There is, for example, this funny phenomenon in a cinema: You see the actors move their lips, and you hear what they say. But the sound comes not from the screen, but from the speakers. However, your mind simply forces the two together: you see the actors speak on the screen, and not really a flat screen at all, but a three-dimensional scene. As this example shows, there can be pitfalls where assumptions mislead you, and it can be so automatic that this is hard to resist. It takes a lot of conscious effort to disconnect the lip movements from the sound you hear. And once you stop paying attention, you do it again.

An important point here is also that there are stringent constraints. You can track one voice over time, maybe two or even three or four at the same time. But then it becomes difficult or even impossible to go beyond this. That’s when your mind tries to integrate the data in a different way: this is the background noise at a cocktail party, or this is a dozen people speaking, but more as an ensemble, not as individuals. You create a meta-actor in the scene: “this group of people over there” or something like that.

(2) The visual system works in a similar way and is also organized on different levels. On a basic level, you do the same thing: distinguish an unimportant background from the important entities. And then your mind creates a mental scene from that. Actually, you can only focus sharply on something about the size of a thumbnail at arm’s length; the rest is blurry. Your gaze moves around and captures more, but still rather little. However, the impression in your mind is that you see the whole scene although that cannot really be the case. You do not notice either that you blink your eyelids and see nothing for short periods of time. And if someone switched the light out, you would still have an idea where you are and what is around you.

The important point here is that on this intuitive level, which is not conscious, your mind interpolates a lot from sometimes rather scarce data. You plug in a lot of background information. Some of that is probably innate, other parts are learned, but perhaps often only as a choice from an innate menu or with limited plasticity.

The embedded assumptions can lead to funny visual illusions. And as in the cinema example above, there are also auditory illusions, and illusions that result from falsely integrating different senses. All this so far was rather low-level. It is still fascinating how it works, and so “low-level” is not meant as a judgment. Trying to make a computer do this is extremely hard work. My knowledge here is dated, but as far as I know this is in many ways still too hard.

Here is another example that plays out on a higher plane:

(3) Suppose you read a novel about fictitious events. Nothing real is there or behind it. However, it is still easy, at least if the author is good at telling a story, to get dragged into it. A vivid picture arises of different characters with their motivations, thoughts, and emotions, how they interact with each other, how they are related, where they come from and where they are going, and so forth. You also automatically separate an unimportant background from the real story.

Here again, a lot of background information goes in: perhaps innate assumptions about how people behave, but also your life experience. If you read the same novel at different times in your life, it may be like a new book. What you found convincing when young may now seem silly. What did not mean anything to you earlier might now resonate deeply with you because you have had similar experiences and can understand the internal logic of events or how characters behave.

And again there are constraints: There are only so many characters your mind can track. If there are too many, they are lumped together as groups, and there is again a tendency to view them as some kind of meta-individuals. Much of what you perceive as being there is actually not there, but only your interpolation. The same goes for a movie. If you think about it, it is silly to interpret a sequence of scenes as actual events when they were filmed at different times. You can still feel with the characters although they are only actors who may have had no such feelings during the shoot because they never realized where the scene would land in the movie.

— — —

I would say that the intuitive part of our minds is not specifically human. My hunch is that many animals have something similar although it is perhaps not as sophisticated. We have language and can hence organize information better. But then the auditory and visual systems of animals work similarly to ours. They will also do the same things, like separating background and foreground, identifying important actors and tracking them over time, creating a mental representation of what is around them, even expectations about what other actors are up to.

As I explain in other posts, I am deeply skeptical about Darwinian explanations, also in the version of “evolutionary psychology.” Still, it is unmistakable that this intuitive part of our minds is extremely useful. We can find our way in a complicated world and interact with it. That’s why I don’t think of it as silly or stupid. Quite on the contrary, it is fascinating and very clever.

However, there are also pitfalls here, as I have alluded to already. There are stringent constraints on what we can do, e.g., how complicated things can become. We often have to work with very little input and interpolate a lot from background assumptions. Since most of this does not work on a conscious level and is very fast, it feels self-evident, but it is also hard to challenge when it goes wrong.

And it can go wrong. We may easily dismiss things as unimportant noise, interpolate in wrong ways, integrate different senses incorrectly, or suffer from mistaken background assumptions. Intuitive thinking seems to be made for a rather small range: Try to understand what is around you, try to understand who the actors are and what they are up to, but only rather few of them. We also tend to assume that actors are humans, because there we can draw on background knowledge.

What can cause additional problems is that intuitive thinking is broad. It handles factual assertions as well as ethical and often even aesthetic judgments in one go. Consistency is often more important than correctness. It is perhaps better to find your way with somewhat mistaken assumptions than to be shell-shocked because you try to get everything right.

It is like learning to drive a car: at first it is hard because you have to do everything consciously. Once you have learned it, it is automatic, and that works much better. However, it can also lead to errors. A stark example is a hot iron that falls down. Before you even start to think about it consciously, you try to catch it in midair, which is stupid. Mostly, it is a very good reaction to catch something that falls down, so it won’t break, but sometimes it is not.

When we go beyond a small range, we try to work with similar techniques that may no longer be appropriate. No one can really understand intuitively how a society of millions of people works, or an economy, or a political system. Still, we are tempted to handle it in the same way as the little world around us: We lump people together until we have a manageable number of actors, then we impute human motivations to those collective persons and treat developments like a novel on a meta-level. This is almost impossible to resist.

Even though intuitive thinking outside its normal range is dubious, it need not lead to false conclusions. So just pointing to the fact that you do it is not an argument that you are wrong, only a warning that you might easily mess up. I hope it is obvious that that can happen and hence that a feeling of self-evidence is not good enough as a proof in and of itself.

That’s where rational thinking comes in, which is slow, cumbersome, and often frustrating, so we tend to avoid it. We focus our attention consciously and keep asking questions like: Is this really true? Have I not missed something? Could there be another explanation? We will also split a complicated question up into smaller chunks that are easier to handle. Then we can walk through the steps and check individually whether they work. Only then are we convinced that the whole argument is correct, although we may not be able to understand it in an intuitive way.

The most extreme example of such an approach is a mathematical proof. Mathematicians do this for a reason. It is possible to develop your intuition somewhat by adding further concepts, and that can help a lot. However, that is not different from what other people do. Basically, mathematicians cannot work with only intuitive thinking either. And since you are outside a narrow domain, intuition can mislead you even more than otherwise. There are famous examples where very smart people had an intuition that later proved wrong. Hence the point about a mathematical proof is to overcome this with diligent rational thought that splits the problem up into small steps, exactly because mathematicians are also human beings.

Despite all these problems, there is nothing wrong with generating hypotheses via intuitive thinking. They can be right. There is no general argument that they have to be wrong. But then those are only hypotheses, and we should be careful and check them. It is a valiant goal to do that all the time. But then there are also stringent constraints: There is only so much time and energy for rational thought. Most of the time we will and must make do with intuitive thinking. And that’s why even the “most rational” people will often draw upon a “worldview,” by which term I denote the whole ensemble of ideas that we work with on an intuitive level.

— — —

What to do about all this?

One way to go seriously wrong is to pose the question as: What is better: intuitive or rational thinking? This is a false alternative because we have to work with both, and while we can perhaps avoid rational thinking, we cannot do so for intuitive thinking, or else we would have to shut down all mental activity.

One extreme answer is the Enlightenment worldview (!): The underlying assumption is that human beings are or should be only rational. Intuitive thinking is rejected as irrational, as the opposite of reason, as worthless or even worse. As I have explained above, I do not think that is true. It is not the opposite, just a different approach that is prone to certain errors. But Enlightenment thinkers usually do not see it that way. They view it as their goal to purge your thinking of its intuitive side and make it wholly rational. The ideal is mathematics and physics, where rational thinking really is all there is to it.

However, this view of human beings leads to problems of its own. If I am right that you cannot avoid intuitive thinking and for good reason should not reject it outright, Enlightenment thinkers just delude themselves. Instead of purging their minds of intuitive thinking, they engage in it without remorse because they are not aware that they are doing it.

One of the funny outcomes of the Enlightenment was, for example, the silly view that all problems could be easily solved just with a little rational thought. Shine the light of reason on something, and anything will turn out to be very simple. All you perhaps need is some new calculus to handle it, but essentially it is so straightforward that you can write it down in a simple proof like in mathematics. Hence all you have to do is set some smart minds to work, and everything is possible.

However, where does this assumption come from? Not from rational thought, but from a knee-jerk intuitive assumption that everything is like Newton’s theory in physics: there are just a few natural laws and everything can then be deduced from them via calculus, now literally. That’s why Enlightenment thinkers can fall for ludicrously false explanations if they look like mathematical proofs.

A funny example of this is Jeremy Bentham, who tried to reduce political decisions to an optimization problem for the “greatest happiness of the greatest number,” as if optimizing for two different objective functions at the same time were even possible in most situations. He then went on to postulate that he had found a “felicific calculus” to calculate “happiness” exactly. But it is at best a sketch. Still, despite this, Bentham felt confident that he had solved the problem. This had repercussions for many Liberal, and even more so Socialist, thinkers.
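To make the point about two objective functions concrete, here is a minimal sketch with made-up numbers; the utility functions, the happiness threshold, and the allocation setup are purely illustrative assumptions of mine, not anything Bentham wrote. It shows that maximizing total happiness and maximizing the number of happy people can single out different allocations of the same good.

```python
# A toy model (hypothetical numbers): distribute 10 units of a good between
# person A and person B and compare the two objectives that the slogan bundles.

def happiness_a(units):
    return 3 * units   # A gets a lot out of every unit

def happiness_b(units):
    return 1 * units   # B gets less out of each unit

allocations = [(a, 10 - a) for a in range(11)]

# Objective 1: the greatest total happiness.
best_total = max(allocations,
                 key=lambda ab: happiness_a(ab[0]) + happiness_b(ab[1]))

# Objective 2: the greatest number of people above a "happy" threshold of 5.
def count_happy(ab):
    return (happiness_a(ab[0]) >= 5) + (happiness_b(ab[1]) >= 5)

best_count = max(allocations, key=count_happy)

print(best_total)  # (10, 0): total happiness 30, but only one person is happy
print(best_count)  # (2, 8): both are happy, but total happiness is only 14
```

Already in this toy case the two criteria come apart, which is all the point requires: “greatest happiness” and “greatest number” are two different objective functions, and in general no single choice maximizes both.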

But you don’t have to take Jeremy Bentham as an example. In my experience, especially mathematicians and physicists, but also economists, suffer from this syndrome. They are always confident that in everyday situations all you have to do is shine the light of reason a little on something and a solution will be forthcoming, while everybody else so far has been too stupid to do this. It can be hilarious when the supposed solutions are far inferior to what other people can handle with intuitive thinking. If you have been around such people, you will know what I mean.

— — —

The opposite extreme is equally silly, though. The Romantics understood that there was something wrong with the intuitive assumptions of the Enlightenment, and that there was and should be more to the human mind than only rational thought. But they then drew the conclusion that you should throw yourself behind intuitive thinking alone. If you can feel it and it comes across as self-evident, it must be much truer than what you can ever understand with your reason. It is no coincidence that the idea of a “worldview” (German: Weltanschauung or Weltsicht) arose in this context. I use the term in a descriptive sense, but the Romantics would have viewed it as normative: That’s what it is all about and should be.

But then that is equally silly. There is no need to disparage or even deny that our minds often work with intuition and not with reason. However, that does not do away with the problems that come with intuitive thinking: the severe constraints on what we can handle, the biases, the knee-jerk conclusions that can be patently wrong, especially outside a narrow domain. A strong feeling of self-evidence cannot replace truth. At best, you can generate good hypotheses. But those have to be checked to be believed, and you can only do that by using the rational side of your mind. Feeling right about something is okay, but being right about it is better, especially if it is important.

— — —

What’s my conclusion from this?

First off, our minds have both sides, an intuitive and a rational one. That’s what we are as human beings. Trying to purge one side is silly and pointless because it throws something important away or is just impossible.

As a first step, we use intuitive thinking, which is easy and feels natural. There is nothing wrong with that, and it is no problem to stop there for most things. A consistent worldview that may be wrong on some points is perhaps in some sense better than trying to have none at all and not being able to make any decisions on anything.

But if something is important, we should not be lazy and shun rational thought. We have to use our rational side to get things right. That means work and is perhaps not as satisfying as a feeling of self-evidence. It is also slow, and we may have to suspend judgment for a long time. But then being right is a good thing, especially when it is about something important.

The two sides of our minds are not disconnected. Just as we can generate food for rational thinking by intuition, and that may work very well, we can also work on our worldviews. What we understand rationally can then be integrated into our worldview. And doing so will make it better, although much of it will still be beyond reach for our rational side.

As an example: We now know intuitively that the earth is a sphere. People in earlier times assumed it was a plane. And actually, if you think about it, that intuitively makes much more sense. At any point where we stand, the curvature of the earth’s surface is barely noticeable. So an approximation by a plane is quite good and much easier to work with. If you have a city map, it is nonsense to demand that it be curved ever so slightly. You can work with it just fine if it is flat. And once you draw this conclusion, it is natural to assume that the earth is a plane as a whole. The problem only comes with going outside a narrow domain.
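As a back-of-the-envelope illustration of just how good the flat approximation is locally, here is a small sketch; it only assumes a mean Earth radius of roughly 6371 km and the standard approximation that the surface drops about d²/(2R) below a tangent plane over a distance d.

```python
# How far the Earth's surface curves away from a flat tangent plane
# over various distances (rough approximation: drop ~ d^2 / (2 * R)).
R = 6371.0  # mean Earth radius in km

for d in (1, 10, 100, 1000):            # distance along the ground in km
    drop_m = (d ** 2 / (2 * R)) * 1000  # deviation in meters
    print(f"{d:>5} km -> about {drop_m:,.1f} m below a flat plane")
```

Over a city-sized distance of 10 km the deviation is only about 8 meters, so a flat map is an excellent approximation; only on a continental or global scale does the error become enormous, which is exactly the problem of going outside a narrow domain.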

It is much less intuitive to think of the earth as a sphere. You will have the same feeling as I do, that the people on the other side of the world must hang with their heads down and will fall off into space at any moment. Still, over time we have integrated the correct view into our worldviews, apart from some holdouts. And it has its uses: e.g., if you plan to go on a cruise around the world, you do not have to worry that the ship will fall off the edge if you go too far. The correct idea was initially only in some people’s minds; the first were some thinkers in ancient Greece. But it then spread slowly and became ubiquitous.

So, there can be real progress with worldviews. And it is useful to update them and improve what can lead you astray, especially when you go outside the domain where a simpler intuition works just fine. That is David Stove’s simple point against postmodernist thinkers who want to conclude that “anything goes.” Who says that flat-earthers are stupid when they just have a different “narrative”? And Stove’s answer is: Look, there has clearly been progress in our knowledge of the world. Not everything is equally good; some views are better and others ludicrous.

To take another example: It is simply better to know that infectious diseases are caused by bacteria, viruses, fungi, etc., because you can do something about them and it works. And it is much worse to assume that infectious diseases come from “miasmas” (putrid air) or witches, or because the Jews poisoned the wells, because that is all nonsense. Unfortunately, as these counterexamples show, it is not so that everybody has absorbed these insights yet. But they have at least become pretty common by now.

As these examples show, as does the one about how we still work with the idea of a flat earth on a small scale, older worldviews can still be around. And I think that is very often the case. Sometimes, as with the assumption of a flat earth, this is perhaps innate. In other cases, it is an easy conclusion for intuitive thinking and will always arise anew when people disregard rational arguments. Unfortunately, much of what we take for granted is of this kind, and it is probably much more common than we would like to concede. Especially if many people share the same assumptions, as a “culture,” it is easy to miss underlying claims that can be quite silly.

What this shows, in my view, is that we should not overestimate the progress that has been made while acknowledging it at the same time. If you have a scale from 0 to 100, where 100 is perfect insight, then going from 5 to 20 is huge progress, and the result can still be rather imperfect. And even if some people have made the move to 20, this does not have to mean that everybody has. Large parts of the population might still be at 5 or somewhere between 5 and 20, and some might even revert to 0. That’s why it is important both to make progress from 20 onwards and to spread the previous progress around.

Hopefully, I can help with both a little, but probably only very little. It can only be done by rational thought and, at the same time, by working on your worldview to make it better, so that your intuition is more often right. That also applies to me: I also have a worldview, and there must be a lot wrong with it. It is easy to see this with other people, but for yourself you have a blind spot. And it can be extremely hard if everybody has the same worldview, as happens on a cultural level.

I will try, and I will fail, and I will still try again. And I will have a worldview all along, with all the good and bad that comes with it. Nothing to be ashamed of; we are all human after all.
