# Science for Heretics

**(extracted from the book Science for Heretics by Barrie Condon)**

*Numbers Shmumbers: Why Being ‘Bad with Numbers’ May Actually be a Good Thing*

*The invention of the laws of numbers was made on the basis of the error, dominant even from the earliest times, that there are identical things (but in fact nothing is identical with anything else)* Nietzsche

Nietzsche was a German philosopher who died in 1900, perhaps from a brain tumour, or tertiary syphilis, or from an early form of dementia, depending on the source you choose to believe. At the age of 44 he suffered a complete nervous breakdown that may have been precipitated by witnessing the savage whipping of a horse. After that he never really recovered his mental faculties.

So he seems hardly worth quoting at all, but many authors find themselves compelled to do so. This is because he was a dangerous and heretical thinker who was very good at producing rather terse and pointed quotes. ‘God is dead’ is probably his most famous; ‘That which does not kill us makes us stronger’ is another good one. Indeed one whole book of his was given over to these aphorisms [1]. This plethora of quotable phrases makes Nietzsche irresistible to many authors because you can usually find one to support whatever case you are trying to make. So wide-ranging are these scattergun quotes that Nietzsche’s work was at one point misappropriated by the Nazis, though he appears to have been neither anti-semitic nor nationalistic.

But Nietzsche is worth quoting here because he was one of the first of the modern philosophers to harbour grave doubts about science and also about its most powerful tool: mathematics. Much of this doubt came from his own maverick nature but also from his early career as a classical scholar of Greek and Roman texts. Concerns about mathematics have been around for thousands of years but, eclipsed by maths’ apparent success, they are often forgotten. These concerns had produced disbelief and even anger in some mathematicians but have, over time, been airbrushed out of its teaching so that few realise they constitute any problems at all.

In this chapter we will examine some fundamental problems with mathematics and how a lack of understanding of the innate unreality of mathematics has led to some of the most absurd concepts in science (Black Holes: I’m looking at you!).

For those who don’t like or understand maths I want to reassure you that we’re only going to use one simple arithmetical equation in this chapter. Lest you think I am being patronising to those who aren’t ‘good with numbers’, I can assure you I mean quite the opposite, as you will see later.

Here comes the simple but extremely dangerous equation:

1 + 1 = ?

What’s the answer?

Before we go any further, I want you to focus on the certainty you feel about knowing the correct answer. I’m going to take a wild guess and assume the answer you came to was ‘2’.

Well now I’m going to tell you the most awful truth about that equation. A truth that you never hear in schools or universities and that has, in many ways, been ignored for centuries. A truth that some might even argue is the root of all the evils in the world.

The truth is simply that, in the real world, 1+1 NEVER equals 2.

How can that be? Add one thing, say a soccer ball, to another identical soccer ball and you’ve got two soccer balls, surely?

But the fact is, as Nietzsche pointed out, that there are no two things that are identical in the whole universe. From grains of sand all the way up to galaxies: all have differences. Some of these differences are extremely subtle but the fact is they are still there. No matter how things are closely machined by nature (for example sand grains) or by man (for example footballs or sugar grains) there are always tiny differences.

Now you may consider this nit-picking. Grains of sand are so similar that it hardly matters that they are not identical. And for most of the works of man, for example shovelling sand into a cement mixer to help construct a building, that’s true. But by making that simple approximation, by ignoring the leap we’ve taken for the purposes of simple expediency, we’ve disregarded what may be the most profound truth of all.
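As an aside, the same habit of silent approximation is baked into our machines. A small Python sketch, purely illustrative: the decimal numbers 0.1 and 0.2 are stored as the nearest available binary fractions, so even the computer’s version of their sum is not quite the number we write as 0.3.

```python
# Computers also equate things that are only approximately the same:
# 0.1 and 0.2 are stored as the nearest binary fractions available,
# so their sum is not the number we write as 0.3.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004
print(total == 0.3)  # False
```

The discrepancy is tiny, and for most purposes ignorable, which is precisely the point being made about grains of sand.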

Science, in whatever shape or form, is the search for truth. All hard science, and much of the so-called ‘softer’ social sciences, is based on numbers. It’s how we handle the universe. And all our equations and calculations are based on simple arithmetic. And that arithmetic is based on one plus one equalling two.

But, however you cut it, all our mathematics is built on a fundamental untruth because it supposes that two things can be considered identical.

Basing whole systems, whether they be scientific or political or whatever, on untruths can still work, for a while anyway. You could, for example, design a well lubricated mechanical system to run forever by pretending that friction didn’t exist. It may run for a while but, sooner or later, any such system will fail. Nazi ideology was based on the idea that Germanic races were inherently superior to others. How did that work out in the end?

For quite a while now I’ve been putting forward these arguments to physics colleagues and friends. This has elicited a range of comments and ripostes. Let me try and summarise these comments, leaving out the occasional rude word, and supply an appropriate response.

*What about atoms? They’re identical. One carbon atom plus one carbon atom equals two carbon atoms. Give me a break!*

The truth is I don’t know if atoms are identical and neither does anyone else because we can’t see down to that scale. As far as we are aware they behave pretty much identically using our coarse methods of measurement. This might lead some to suppose they are identical but it certainly isn’t proof. There are tiny differences in everything else we can see, so why shouldn’t the same thing apply at the atomic scale? In earlier times, grains of sand, fleas and just about anything else that was tiny were thought to be identical until the microscope was invented. The finer the detail we can resolve, the greater the variation we inevitably find. The idea that atoms are really identical supposes a radical departure from our real world experience.

*It doesn’t matter because maths works! Where would your iPad and your smartphone and your Kindle be if it didn’t? You’re wasting my time.*

Maybe. Maths does indeed work, within certain limits we will discuss later, and will continue to work up to a point. Then it won’t, just like any other system based on a fundamental untruth. Maybe we should be at least cognisant that this approach sets fundamental limits on our perception of reality. Nothing is identical to anything else but in everything we do, from calculating population size to ordering a round of drinks, we ignore this. What are we missing by ignoring the one thing which appears to be universally true: that nothing is the same as anything else?

*1+1=2 was never meant to be taken literally. It’s about representation. One dollar bill plus one dollar bill may not give you two identical dollar bills because of slight variations in their physicality such as their weight (and not least because of the varying amounts of cocaine adhering to them), but what we are actually talking about is the representation of their worth.*

Though a more subtle response than the others, it is falling into the basic trap set by the equation in the first place. Financial value, corresponding to what a dollar represents is, like the identical objects required for arithmetic, a notional concept that resides only in our heads. It is not some hard, absolute, real world thing, though I can’t help but accept that not having financial worth, notional or otherwise, can make life very real indeed.

*The equation is only an approximation. Two grains of sand may vary in terms of their shape and mass but they are approximately the same.*

That’s a better answer but it only comes *after* you’ve pointed out that the equation is never actually true. Ask a hundred people what 1+1 equals and I doubt a single one will say: *approximately 2*.

And it is that word *approximately* that is so crucial here. Two objects can only be identical enough for the 1+1 to equal 2 if we determinedly ignore all the things that make them different. If, for example, we ignore the different shapes and masses and colours and elemental contaminants that go into each grain of sand. One Nelson Mandela plus one Joseph Stalin can indeed equal two humans but only if we ignore all their manifold differences.

The point is that even this most simple of arithmetic is an artificial construct that never directly corresponds to the real world. And yet this artificial way of thinking is drummed into us from an early age. It’s one of the ‘three Rs’: reading, ‘riting and ‘rithmetic. Yet most teachers, themselves unaware they have taken this step into unreality (in other words treating some things as equal by the simple expedient of ignoring all the aspects of them that are different) do not pass on this awareness to the young people they teach in turn.

Perhaps this is why some people struggle with arithmetic and mathematics generally. These are often intelligent people who are perhaps labelled as ‘artistic’ because they are ‘bad with numbers’. Maybe the reason for this is that arithmetic doesn’t make sense to them at a fundamental level. Perhaps they are innately aware of the complexity of life where nothing is identical to anything else. Yet in their maths classes they have to pretend things are identical and will get punished if they don’t toe the party line.

The problem for such people is exacerbated because they are taught arithmetic when they are only a few years old, when they are far too young to articulate any feelings of unease. Arithmetic is taught in a dogmatic and unquestioned fashion. How can a six year old stand up to that? Instead they retreat, tail between their legs, convinced they are somehow inferior because they are ‘bad with numbers’.

As well as making perfectly intelligent people feel inferior, the nature of arithmetic, or perhaps the mindset from which it springs, arguably paves the way for some unfortunate consequences.

Early teaching of arithmetic is perhaps the first and most forceful way we are inculcated with the view that ‘things’ can indeed be regarded as identical. It is a powerful tool. After all, if in later life you own a pub then it is important to work out how many bottles of beer you need to order a month. It doesn’t help you to focus on the slight differences in the shape and weight of the bottles and the minor variations in their contents.

The propensity to regard two objects as equal probably didn’t originate with arithmetic, but instead may represent aspects of how our brains are hard-wired to work, as we will see later. A universe where everything is different from everything else is a scary place. If you were on the African plains ten thousand years ago, and were starving and needing to hunt an antelope, then you couldn’t allow yourself to be distracted by the fact that each one was different. Instead, you needed to focus on the commonalities: for example the footprints antelopes make, the way they move and so on.

For our ancestors, each animal didn’t have a unique identity; they just became *antelopes*. It’s how our limited, as opposed to omniscient, minds handle the notion of them. In a complicated universe one has to simplify to survive. There will, of course, be some nuances on top of this; each animal won’t act identically, and an experienced huntsman will factor this in when tracking and hunting them.

As man moved on from hunting single animals and supplementing his diet with the odd misshapen fruit, he made the move towards agriculture. In doing so he began to deal with many animals, such as chickens, or cultivated vast orchards and fields of fruit and vegetables. Selective breeding over time made all these animals and plants more and more alike, and this made the need to develop mechanisms for handling these greater numbers even more compelling. Arithmetic was invented, and awareness of the non-identical nature of animals and plants began to fade away. What room is there for individual identity in a factory-farmed chicken amongst a multitude of others?

And arithmetic spoke to something in the way our limited minds worked. An example is found in the matter of tribal identity. This concept allows us to deal intellectually with a ragtag bunch of wildly different individuals. If you want to warn a child not to go into the territory of a few hundred hostile individuals, you don’t want to go through a list of all of them. It’s easier to define them all as a specific tribe.

It’s a handy, shorthand way of dealing with a complex situation but the problem is that by stripping out the complexity of individuality it starts to characterise all the people within each tribe as the same, especially tribes which aren’t friendly.

And that is the first step on a very slippery slope. As soon as we start to mentally handle large groups of individuals by ignoring their differences, then we start to see them as ‘all the same’ and terrible things can happen. To Hitler, Jews were an undifferentiated mass, which meant his earlier bad experiences with a few individuals tarred them all with the same brush.

Perhaps the earliest and clearest example of this linkage between arithmetic and evil can be found in the ancient concept of decimation. The word is often used wrongly nowadays to denote a massacre in which all, or at least many, of a group of people are killed. In fact this was a technique originated in the early days of the Roman Republic. It was employed as a form of punishment for groups of people. The first recorded use was in 471 BC, when decimation was applied to soldiers in a legion that had shown cowardice or had misbehaved in some way. Men were divided into groups of ten and drew lots. The man who drew the fatal lot was then bludgeoned to death by his colleagues. In other words, only one man in ten was killed.

No attempt was made to discern individual guilt or to discriminate in terms of any actions, good or bad, perpetrated by each individual. Instead they were all equated as ‘the same’ and their level of guilt was considered exactly equal.

Incidentally, though decimation suggests one in ten were killed, the number could be one in five or whatever was thought appropriate by the one doing the decimating.

Decimation did not begin and end with the Romans but was practised by the Italians during the First World War and the Soviets during the Second. Indeed a Soviet general at Stalingrad personally shot every tenth man until his ammunition ran out [2].

And that’s only what armies did to their own troops; decimating their enemies, such as captured prisoners, was also practised.

But the arithmetical ‘quotas’ which exactly equated one man with another and consigned them to death in vast multitudes reached their pinnacle in Russia at the time of Stalin. It was known as the Great Purge.

Russia faced huge potential unrest due to a famine largely caused by Stalin’s forced collectivisation of farming. In order to tame this general disenchantment, Stalin developed an essentially random mechanism to keep Soviet citizens in a state of perpetual terror. Stalin’s stated aim was to reduce the reservoir of terrorists and spies. Later on, it was used to reduce the threat from other wings of the communist party led by Bukharin and Trotsky.

Perhaps a million people were killed during the Great Purge or died due to the terrible conditions in the prison camps they were consigned to as part of other arithmetically determined quotas. Victims came from the top to the bottom of Soviet society, with five out of six of the original Politburo members succumbing, three out of five army marshals, eight out of nine admirals. Intellectuals of all persuasions were imprisoned and perhaps only a quarter survived. Peasants, churchgoers and clergy suffered similar fates.

At first, victims were targeted because of at least some suspected activities; for example the study of sunspots was considered un-Marxist and nearly thirty astronomers paid for this with their lives. However, the situation became even grimmer when the Soviet leaders resorted to arithmetic alone. Top-down directives provided the actual numbers to be executed within the military and across the regions of the country. Tens of thousands of executions were ordered without naming any specific individuals. Local party officials, to show their zeal and loyalty, sometimes asked for their quotas to be increased.

Arithmetic can clearly be a dangerous business, playing as it does to the limitations of how our minds work, so where did our interest in numbers come from and how did it lead to other aspects of what we call mathematics?

**The History of Numbers**

Counting has been around for at least 30,000 years, as the 55 marks in groups of five found on a wolf bone in the Czech Republic would seem to attest [3]. The grouping in fives is presumably because of the number of fingers on a human hand. Bearing in mind the material on which the marks were made, it’s surprising the person had any fingers or even a hand left at all. That was one tough arithmetician!

Nowadays we’re used to counting in units of ten and it was the Egyptians who started using this decimal system in about 3000 BC.

Pythagoras took numbers to a new level in the 6th Century BC. Though a Greek, he learnt his numbers from the Egyptians but took the whole business farther, transcending their basic use in counting. Numbers became sacred things in themselves. ‘Number is the first principle’, ‘Number is the essence of all things’, ‘All is number’, he is said to have taught (he left no writings of his own).

From being used to count chickens to becoming sacred is quite a jump for arithmetic, but this is only the first example of where something that was really only a tool has become the subject of veneration. As we will see, in the millennia that followed Pythagoras, scientists and mathematicians would often elevate the tools of their trade to the point of worship. The tools were the theories (mathematical, physical, biological) that tried to explain reality, but somehow in the process these often *became* that reality. The tools became the Laws.

Describing the history of numbers is dull stuff. Books that deal with it struggle to find much humour so, when it comes to Pythagoras, authors inevitably focus on the issue of beans.

Poor old Pythagoras couldn’t stand beans. Not only were they the cause of flatulence but he also thought they too closely resembled human genitals. No, I’m not sure why either.

So great was his aversion to beans that he would rather have had his throat cut than run across a bean field. And indeed one day, chased by his enemies and finding his way blocked by just such a bean field, that is exactly what happened.

Pythagoras did other unusual things. He began his own sect and even, at one point, claimed he was a god. The sect was secret and new disciples weren’t allowed to speak or to make any other noises during their first years of membership.

Pythagoras even had a man called Hippasus killed because he had the temerity to give away the most dangerous secret of Pythagoras’ sect. I am going to explain what this terrible secret was, so you’d better brace yourself for a major revelation.

In Pythagoras’ perfect world of sacred numbers, any number could be expressed as the ratio of two whole numbers. For example: three-and-a-half can be expressed as 7 divided by 2.

All was indeed perfect until Hippasus came up with his filthy heresy, namely that some numbers cannot be expressed as the ratio of two whole numbers. The example traditionally linked to Hippasus is the square root of two, the length of the diagonal of a square whose sides are of length one; the pesky number pi (3.14159… and on and on) is another.
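For the curious, here is a small Python sketch of the distinction. Code can only show that no modest ratio works (the watertight proof is ancient and rigorous), but it makes the heresy concrete: three-and-a-half really is a ratio of whole numbers, while no fraction you try squares to exactly 2.

```python
from fractions import Fraction
import math

# Three-and-a-half genuinely is the ratio of two whole numbers.
assert Fraction(7, 2) == 3.5

# But hunt for a ratio p/q whose square is exactly 2 (Hippasus'
# number, the diagonal of a square of side 1) and you find none.
hits = [(p, q)
        for q in range(1, 1000)
        for p in (round(q * math.sqrt(2)),)
        if Fraction(p, q) ** 2 == 2]
print(hits)  # [] -- no exact ratio, however far you care to search
```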

If that’s not worth killing someone for, then I don’t know what is!

As we will sadly and repeatedly see, Hippasus was only the first in a long line of people to suffer and even die because they came up with an uncomfortable truth that did not conform to prevailing theory.

That’s a terrible shame and waste because theory may *never* be the truth.

It would appear from Pythagoras’ statements such as ‘All is number’ that he was perhaps the first one not to understand that numbers could only ever be an approximation to reality. Certainly Euclid, whose work followed on from Pythagoras, stated as his first ‘common notion’ that: ‘Things that are equal to the same thing are also equal to one another.’ This implies that, as far as Euclid was concerned, there were at least three identical things in the universe whereas the truth is that there aren’t even two.

The Greeks thought all of nature could eventually be understood through mathematics and that all its workings could be unearthed by mathematical reasoning.

They’re all long dead, and therefore can’t sue, so I’m going to blame them for all the problems with science and mathematics described in this book.

That said, it must have been easy and comforting to be taken in by this way of thinking. If you’d never conceived of numbers then they might indeed seem like powerful magic. For example, any system which uses coinage is only possible by the rules of arithmetic. Numbers make many things possible and seem to put the world on a firmer footing. It’s not surprising that some ancient peoples thought numbers held magical powers, often using them in their religious rituals.

The early Muslim world did a lot of thinking about mathematics and numbers (they weren’t afraid of the irrational ones like Pythagoras) but then Muslim theologians pretty much stopped further development. The reason seems to be that they were concerned it would uncover secrets Allah might want to remain hidden.

It would seem that in the Roman, Greek and Muslim worlds there was a widespread belief in the absolute, and indeed sacred, meaning of numbers. Left far behind was the awareness that numbers were just simple tools that could only ever reflect reality in approximate ways.

Even in these supposedly enlightened times this is still essentially the case.

Before we leave the Greeks and Romans we should quickly discuss another aspect of mathematics that is neither an accurate reflection of reality nor something immutable and sacred, yet it is taught in the most dogmatic of ways. I am referring to Geometry.

**Geometry.**

‘There are 180 degrees in a triangle’. That’s a mantra we all learn at school. It is also, in the real world, never true. Never ever. In the real world no triangle has perfectly straight sides. Whether it be tiny unevennesses in the paper we draw it on, or the pixelation on a computer’s digital image, lines are never exactly straight.

In a sense this goes far deeper than inevitable tiny imperfections in our materials, at least according to what is now the conventional wisdom of General Relativity theory. This says that space is innately curved due to the proximity of any objects with mass, like planets or even just your iPhone. In space, and everywhere for that matter, there is no such thing as a straight line.

With no straight lines in the universe, there can be no perfect triangles with exactly 180 degrees. ‘There are 180 degrees in a triangle’ is therefore never true.
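This can even be put into numbers. As a sketch (the arithmetic here is mine, using a sphere as a humble stand-in for curved space): draw a triangle from the pole down to the equator, a quarter of the way round, and back up, and every corner is a right angle. The classical ‘spherical excess’ formula, area divided by radius squared, gives the surplus over 180 degrees.

```python
import math

# Triangle on a sphere of radius R: pole -> equator -> a quarter of
# the way round the equator -> back to the pole. Every corner is a
# right angle. The angle sum exceeds 180 degrees by the 'spherical
# excess', which equals (triangle area) / R^2, measured in radians.
R = 1.0
area = 4 * math.pi * R**2 / 8          # the triangle covers one octant
excess = area / R**2                   # in radians
angle_sum = 180 + math.degrees(excess)
print(angle_sum)  # 270.0 -- three right angles, not two
```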

It’s not that mathematicians don’t know this but again, like Pythagoras making numbers sacred, they will instead talk of the perfection of their subject. For example, according to Berlinski: ‘…the Euclidean triangle, at once perfect and controlled, a fantastic extrapolation from experience, and entry into the absolute’ [4].

The truth is that all we know of the universe is the imperfect reality we experience every day. All else is falsehood. Mathematicians have produced a fantasy. Sort of like *Lord of the Rings*, but with more utility.

So even our most basic mathematical concepts are flawed because they do not correspond to reality. That would be OK, just like *Lord of the Rings* is OK, if left confined to an imaginary world. Unfortunately this imaginary world of mathematical perfection breeds concepts which bleed back into the real world. Things which are in fact phantasms are imbued with realistic connotations and as a result spawn some of the most absurd notions in modern physics, as we will see in the next three chapters.

If you base reality on any fantasy then, sooner or later, you’re going to come a cropper.

Here are the first two of these mathematical phantasms:

**Zero and Infinity (maths’ evil twins)**

Zero is a concept that has wormed its way into our consciousness, though it was not always so. In fact for most of mankind’s mathematical history the notion of zero did not exist.

Man may have been counting on wolf bones, or whatever was to hand, for 30,000 years, and yet in all that time he never once needed something called ‘zero’. After all, when you count your fingers you don’t start with zero.

So where did zero come from?

Back in about 300 BC the Babylonians had something that could be confused with a zero, though in fact it was more like a spacer. Where we count in tens, Babylonians counted in sixties. So, much as we use the 0 in 160 to tell it apart from 16, they might write 16*, where the * is the spacer marking an empty column.

So the Babylonian spacer didn’t actually represent ‘nothing’ as zero does nowadays.
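To see why a spacer matters in any place-value system, here is a quick Python sketch of base-60 counting (the function is my own illustration, not anything Babylonian):

```python
# A sketch of Babylonian-style base-60 place value. Without some mark
# for an empty column, 16 and 16-times-60 would look identical.
def to_base60(n):
    """Return the base-60 digits of n, most significant first."""
    digits = []
    while n:
        digits.append(n % 60)
        n //= 60
    return digits[::-1] or [0]

print(to_base60(16))       # [16]
print(to_base60(16 * 60))  # [16, 0] -- that 0 is the Babylonian 'spacer'
```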

Even the Romans didn’t have zero which is why their calendar doesn’t have 0 AD. Instead it starts at 1 AD, and that is why purists maintained that celebrations for the new millennium should have started as midnight approached on December 31st 2000, not 1999 as everyone else thought. However, the appeal to the modern mind of nice round numbers triumphed over this pedantry.

Zero also went against an axiom of Archimedes, the third century BC Greek scientist and mathematician: add a number to itself and you get a different number. Of course adding zero to zero just gets you zero, so that would no longer work. Bearing in mind that ‘axiom’ means *a self-evident proposition requiring no proof*, that would really be a spanner in the works if zero did exist.

It’s also a bit unsettling that when you multiply any number by zero you always get zero, no matter how big the first number is.
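Both oddities are easy to check for yourself; a minimal Python confirmation:

```python
# Archimedes' pattern: adding any number to itself gives a new number...
assert all(n + n != n for n in range(1, 1000))
# ...except zero, which just gives itself back.
assert 0 + 0 == 0
# And zero annihilates whatever it multiplies, however vast.
assert 10**100 * 0 == 0    # even a googol is flattened
print("zero misbehaves exactly as advertised")
```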

It’s like zero is screwing up the whole number system. No wonder the Greeks and Romans didn’t like it.

It also raised fears of ‘the void’ amongst the Greeks and, later, the Christians. God had created the world, so the idea of a void suggested somewhere there was no God. This was frightening, and indeed the void even became identified with the Devil himself.

Indian mathematicians, coming from a Hindu philosophy where everything sprang from the void, were however much more comfortable with the idea. Brahmagupta, the Indian mathematician, introduced rules for trying to deal with zero in 628 AD [5].

Indians came to the concept of zero by way of negative numbers, themselves first mentioned in China in the Second Century BC. Negative numbers are far from being an obvious idea. In terms of flesh and blood reality, how can you have a negative ox, for example? You can perhaps have one less ox than you used to, but that’s not the same as having a living, breathing negative ox. The only ‘real world’ meaning for negative numbers is essentially in regard to debt in one form or another, which again is only a human concept and has no absolute meaning in reality.

However, if you do accept the notion of negative numbers then suddenly you have a discontinuity, or break, between negative and positive numbers. The idea of zero filled that gap in the sense that it came between minus one and plus one.

Zero therefore came from the non-real world concept of negative numbers. To illustrate just how non-real negative numbers themselves are, consider what happens when positive numbers are divided by negative numbers. According to the rules of arithmetic this leaves a negative number as the result. When pupils come across this for the first time they invariably dislike it because they can’t equate it with everyday reality. I bet a few of you still don’t like it today. After all, if you take six cars and divide them by minus two cars (what are minus two cars, for a start?) you get minus three cars. Earlier on I suggested that, in the real world, negative numbers might be associated with debt, but even this breaks down in this example. If you divide six cars you own by two cars you owe someone else, how do you wind up still owing three cars?
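For the record, the sign rules themselves are perfectly consistent, whatever they ‘mean’ for cars; a quick check in Python:

```python
# The arithmetic sign rules in action:
print(6 / -2)    # -3.0 -- positive divided by negative is negative
print(-6 / -2)   # 3.0  -- two negatives cancel
```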

Perhaps in order to disguise what nonsense this all is, mathematicians have the temerity to define positive and negative numbers as ‘real’ numbers, perhaps in the same way that Orwell in his book *1984* called the bureaucracy for waging war ‘The Ministry of Peace’. *Doublethink*, Orwell called the mentality behind it.

Getting a negative number when you divide a positive number by a negative number may ‘fit’ as far as the mathematics are concerned, but it doesn’t ‘fit’ reality.

Nowadays mathematicians go even further in their fictions concerning negative numbers. The square root of minus one is central to some of the most important techniques they use. Indeed, I used one such technique myself nearly every day of my professional life as a physicist (it was the Fourier Transform but, don’t worry, I’m never going to mention it again).

Even if you assume you can get a negative anything, then how can you find its square root? This is a number which, when multiplied by itself, produces the negative result. Perhaps not surprisingly, mathematicians took to calling the numbers built from these strange new things *complex* numbers. René Descartes made up another term, calling the square root of minus one itself an ‘imaginary’ number. Tellingly, this tag stuck.

Squaring any ordinary number, negative or positive, gives a positive result. Even minus 2 multiplied by minus 2 is plus 4. Yet multiplying an imaginary number by another imaginary number gives a negative number.
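Python, as it happens, has these numbers built in (1j denotes the square root of minus one), so the oddity can be displayed directly:

```python
# Squaring ordinary numbers, positive or negative, gives a positive:
print((-2) * (-2))   # 4
# But the imaginary unit squares to minus one...
print(1j * 1j)       # (-1+0j)
# ...and imaginary times imaginary is negative in general:
print(2j * 3j)       # (-6+0j)
```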

This weirdness led many mathematicians at the time to disbelief and even anger. Yet these determinedly non-realistic numbers found utility in certain mathematical processes which in themselves had major and often beneficial real world impacts. For example, such mathematics is used in the processing of signals from everything from radio waves to the vibrations from seismic surveying.

This utility made it easy to paper over the rupture between the real world and the world of mathematics. Over time, negative and imaginary numbers came to be taught in schools as stone-cold fact. No matter how greatly schoolkids were baffled by the concepts, no matter how divorced from reality they seemed, they had to learn them if they wanted to pass their exams.

But let’s get back to zero. Zero means nothing. Certainly, in reality, there is no place anywhere in the universe where there is ‘nothing’, not even in the vacuum of space. No matter how far from the nearest star, space always contains something whether it is just a few atoms of matter or photons of radiation. In reality there is no such thing as ‘nothing’.

Still need convincing about how strange zero is? If, according to the mathematics of zero, multiplying 1 by 0 gives you zero then, presumably, by then dividing this all by zero we should get back to 1 again.

Nope. According to our mathematicians you don’t get back to 1, because dividing by zero isn’t allowed at all: zero divided by zero is simply left undefined, ‘indeterminate’.
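Computers, for their part, refuse to pick an answer; a tiny Python demonstration:

```python
# Ask a computer to divide by zero and it declines to name any result.
try:
    print((1 * 0) / 0)
except ZeroDivisionError as err:
    print("undefined:", err)   # undefined: division by zero
```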

It’s almost as though, even within the framework of numbers (themselves unrealistic as they assume objects are identical) zero just doesn’t belong.

Despite huge problems like this, Islamic scholars followed the Indian mathematicians in accepting the concept of zero, though it would take a lot longer for the West to follow suit. Indeed it wasn’t until the 13th Century AD that the West cast off the stranglehold of Greek scientific and mathematical thought regarding the devil-like void (zero).

Western scientists began to believe that zero existed…

…or rather didn’t exist, as zero is supposed to mean nothing.

The idea of zero is so strange that language, like mathematics, has difficulty coping with it and begins to break down.

Whether zero exists or not, it's a concept that percolates through all aspects of modern science, particularly physics, and helps engender theories that purportedly explain everything. A concept which may itself be entirely unrealistic is widely used to explain reality.

It gets much worse, though. I hope you have found this section on zero mildly disturbing at the very least. If you have, then you’d better hold onto your hat when you read the next section because Pandora’s Box really does swing open.

Infinity

*There is one concept that corrupts and confuses the others. I am not speaking of Evil, whose limited sphere is Ethics, I am speaking of the infinite.* Jorge Luis Borges in *Otras Inquisitiones*

The worst thing about zero is that, according to received wisdom, if you divide any number by zero you get an even more disturbing beast called infinity.

Nobody wanted the infinite. Right from the start Aristotle said that mathematicians ‘do not need the infinite’. The reason for this is that in the real world nothing is infinite, a point which just about everyone today has forgotten because of habituation to the concept. There may be an awful lot of grains of sand on the beaches of the world, or atoms in the universe, but that isn’t what infinity means. Infinity means without limit, or without end, or an indefinably great number.

Part of the hang-up about infinity, as with the concept of zero, came originally from religion. If God created the world/universe then before that there must have been that nasty devilish void. If you wanted to avoid that idea then you could instead believe that the universe had always existed, which would mean it was infinitely old. From that point of view there is either infinity or there is zero, but not both.

Thus concepts of zero and infinity are either inextricably intertwined or, from the latter view, mutually exclusive. The heretical view is that both are entirely fictitious.

Infinities and zeros, appearing like vermin in equations, send scientists through all sorts of contortions to make their equations 'work'. For example, zero divided by zero is officially 'indeterminate': depending on how it is approached, mathematicians can make it come out as one, or indeed as any number they please. These contortions spawn all sorts of outlandish concepts.
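One way to see the trouble with zero divided by zero: two expressions that both collapse to 0/0 at x = 0 can nevertheless head towards completely different values as x shrinks. A minimal Python sketch:

```python
# Both x/x and (2*x)/x become 0/0 at x = 0, yet they approach
# different values: one heads for 1, the other for 2
for x in [0.1, 0.001, 0.00001]:
    print(x, x / x, (2 * x) / x)
```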

Infinity strains credulity even further because it needs to have its own rules otherwise it just doesn’t work. For example, no other number can be equal to or greater than infinity (Galileo’s rules). Divide infinity by any other number and you get: infinity.

Scratch the surface and the concept of infinity really doesn't make sense. For example, think of the infinity of even numbers. There will also be an infinite number of odd numbers. Does that mean there is a double infinity when you include both even and odd numbers? No, according to mathematicians there is just the one infinity.
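Computer arithmetic has a stand-in for this: IEEE floating point includes an `inf` value that obeys exactly these special rules. A minimal Python sketch (a numerical convention, not a claim about mathematical infinity itself):

```python
import math

inf = math.inf
print(inf > 10 ** 100)  # True: no ordinary number equals or exceeds it
print(inf / 7)          # inf: divide infinity by any number, get infinity
print(inf + inf)        # inf: an 'infinity of evens' plus an 'infinity
                        # of odds' is still just infinity
```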

Many mathematical equations or expressions may routinely produce numbers that increase to infinity, but that does not mean they have any correspondence with reality. Though habitual exposure to the concept has dulled our awareness of just how outlandish it is, in the same way as a lifetime of heavy manual work produces calluses which dull the initial pain, the thought of infinity is nevertheless still anathema to many mathematicians and theoretical physicists. Indeed, when infinity appears in physics equations purporting to represent reality, the first thought physicists have is that something has gone wrong. Generally it leads them to conclude that whatever model of reality is being used cannot be taking into account all the relevant factors.

Nothing in the universe that we can see is infinite. Even cosmologists don’t believe the universe extends to infinity. Time might possibly be infinite but we don’t know one way or the other. Even physicists accepting the present received wisdom of the Big Bang theory believe our universe ‘popped’ into existence out of a fluctuation in the zero point energy (more on this later). This suggests that time is not infinite even under their belief system.

With no evidence that anything in reality is infinite, there has been considerable dispute amongst scientists and mathematicians as to what infinity actually means. To try to bring this under control, the mathematician Georg Cantor divided infinities into three types:

1) *Mathematical* infinities: these appear in equations and mathematical expressions. They are abstractions and exist only there and in the minds of men

2) *Physical* infinities: these actually appear in our universe

3) *Absolute* infinity: the sum total of everything

Famous mathematicians and philosophers have believed in different combinations of these, illustrating just how tricky or nonsensical this subject is (take your pick). For example:

Abraham Robinson (Mathematical NO, Physical NO, Absolute NO)

David Hilbert (Mathematical YES, Physical NO, Absolute YES)

Bertrand Russell (Mathematical YES, Physical YES, Absolute NO)

Kurt Gödel (Mathematical YES, Physical NO, Absolute NO)

LEJ Brouwer (Mathematical NO, Physical YES, Absolute YES)

Georg Cantor (Mathematical YES, Physical YES, Absolute YES)

*(see references [6] and [7] for even more runners and riders in the infinity sweepstakes)*

No wonder physicists are uneasy about infinities when they crop up in their equations. When they appeared in Einstein's work he shied away from them. These annoying infinities produced something called a 'singularity'. This is a point in space where gravitational forces would, according to the mathematics, cause matter to have infinite density and infinitesimal volume, thus making both space and time infinitely distorted.

Einstein said of this mess: ‘A singularity brings so much arbitrariness into the theory… it actually nullifies its laws. Every field theory…must therefore adhere to the fundamental principle that singularities of the field are to be excluded’.

Peter Bergmann, a collaborator of Einstein’s, went on to say ‘…a theory that involves singularities…unavoidably carries within itself the seeds of its own destruction’.

Despite these warnings from Einstein, perhaps the greatest scientist of all time, other physicists and non-scientists take these infinities at face value and construct all sorts of esoteric phenomena out of them. These include Black Holes.

In engineering, where science meets gritty, complex reality, if an infinity appears then something is definitely wrong and it always means that some component, like friction, has been left out of the model or equation.
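A minimal sketch of that engineering point, assuming a textbook damped driven oscillator (the formula and numbers here are illustrative): with friction included the predicted response is finite, but let the friction term vanish and the model blows up.

```python
# Steady-state amplitude of a driven oscillator at resonance:
# amplitude = force / (damping * frequency)
def peak_amplitude(force, damping, omega):
    return force / (damping * omega)

print(peak_amplitude(1.0, 0.5, 2.0))    # 1.0: finite, friction included
print(peak_amplitude(1.0, 1e-12, 2.0))  # enormous: friction almost left out
```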

In quantum physics, where calculations often throw up infinities, physicists resort to splitting the calculations into finite and infinite components (see ‘renormalisation’ in the next chapter) then effectively subtracting out the inconvenient infinite components. As we will see, this is basically a fudge on a mind-bending scale.

So far we’ve only been talking about the infinitely large, but what about the infinitely small? It must be said that the notion of something infinitesimally small has had great utility in a branch of mathematics called calculus. This involves integration and differentiation but, if you are faint of mathematical heart, don’t worry because we are not going into those. This is a field which has found massive and successful application even though it is apparently based on a nonsense.

All forms of calculus are based on 'infinitesimals'. These are numbers which are smaller than any ordinary positive number and yet are not zero. An infinitesimal can be added to itself as many times as you like and the total will still remain less than any given number. In other words you can add together as many infinitesimally small numbers as you want and still somehow end up with something that is infinitesimally small.
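The flavour of how infinitesimals get used can be seen numerically: in calculus a derivative is a ratio of two vanishingly small quantities, yet that ratio settles on a perfectly finite answer. A minimal Python sketch (the function and step sizes are illustrative choices):

```python
# Approximate the slope of f(x) = x**2 at x = 3 (the calculus answer is 6)
def f(x):
    return x ** 2

# As the step h shrinks towards the 'infinitesimal', the ratio of two
# ever-smaller quantities homes in on a finite value
for h in [0.1, 0.001, 0.00001]:
    print(h, (f(3 + h) - f(3)) / h)
```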

Already this isn’t making sense, and people have been arguing against it for centuries, even doubting Isaac Newton’s calculus, which is built on these beasts. Bishop Berkeley, an eighteenth-century philosopher and cleric, argued that the calculus lacked rigorous theoretical foundations because its adherents sometimes used infinitesimals as positive non-zero numbers and at other times as actually meaning zero. No wonder Berkeley called infinitesimals ‘the ghosts of departed quantities’.

Newton, vying with Einstein for the title of Greatest Scientist of All Time, tried to gloss over this nonsensical aspect.

As Friedrich Engels [5] said when the infinitely small and infinitely large were introduced into mathematics: ‘The virgin state of absolute validity and irrefutable proof of everything mathematical was gone forever, the realm of controversy was inaugurated, and we have reached the point where people differentiate and integrate not because they understand what they are doing but from pure faith, because up to now it has always come out right’.

And indeed the attitude Engels was describing is still widespread in the scientific community.

Whilst many mathematicians objected to infinitesimals, more and more calculus was performed, with apparent success. Over time the fundamentally nonsensical notion on which it was based became tacitly accepted. Another crack was slowly papered over.

But no matter how well something appears to work, if at its heart it is nonsense, then it should never be mistaken for the truth. Calculus has had a good run for three hundred years but it is based on assumptions which have no correspondence with reality.

The wallpaper may make the room look nice but it also hides nasty cracks in the supporting walls. Sooner or later the whole building may fall down and, judging by the insane concepts physicists have to resort to when it comes to dealing with the very large and the very small, it’s already starting to crumble.

Not that it was ever on firm foundations to begin with as we will see in the next section.