Trigonometry, Polynomials, Logarithms, and Being Wrong on Purpose

In realistic mathematical inquiry, we almost never try to find exact answers to complicated questions. Whether we are studying the whirring of an engine, the ups and downs of the stock market, or the bumps and dips of a mountain range, we begin by asking, “How simple can we pretend this system is? What do we have to pay attention to, and which annoying complications can we ignore?” This need to simplify problems in order to make them workable pervades every level of the mathematical process, from the simplest calculations to the deepest insights about the nature of information. I’d like to share a few cases where this theme is important to high school math but is rarely discussed in the 21st-century classroom. I know this is long for a text piece, but I hope that in video or graphical form it would be a manageable amount of content.

Any third-grader can tell you that addition is easier than multiplication. To multiply two numbers, after all, we essentially perform several addition operations with careful bookkeeping, and it’s always easier to do something once than to do it several times. Astronomers in the 16th century had the same problem: as their measurements became better and better, they had to multiply larger and larger numbers in order to make predictions about planetary orbits. The same problem arose again in geographical surveying, where accurate maps depended on precise calculations with inconveniently complicated data. Some people were spending essentially all of their time doing arithmetic. Something had to change.

In the late 16th century, a Scottish landowner named John Napier, with some help from other European mathematicians, solved the problem by exploiting what high school students would now recognize as an exponent law: x^(a+b) = (x^a)(x^b). If you add two numbers in the exponent of a power, you get the same answer as if you multiplied the powers with each exponent separately. Napier and his contemporaries realized that if you had a table of powers, listing the results of raising a given number to a vast range of exponents, you could rewrite multiplication problems as addition problems by looking up the appropriate entries in the table. These looked-up exponents, known as logarithms, were the biggest advance in calculation between the adoption of Hindu-Arabic numerals and the invention of the mechanical calculator. It is largely because of logarithms that 17th-century astronomers were able to turn decades of astronomical observations into a coherent mathematical picture of the heavens.
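To see the trick in action, here is a small sketch in modern Python standing in for Napier’s tables (he worked in a slightly different base, and by hand, but the procedure is the same): every multiplication becomes one addition plus a few table lookups.

```python
import math

# Multiplying 456.2 by 7.89 the way a 17th-century calculator would:
# look up each factor's logarithm, add, then look up the result in reverse.
a, b = 456.2, 7.89

log_a = math.log10(a)        # table lookup: log(456.2) is about 2.6592
log_b = math.log10(b)        # table lookup: log(7.89) is about 0.8971
log_product = log_a + log_b  # the only arithmetic needed: one addition

product = 10 ** log_product  # reverse lookup in the table of powers
print(product)               # about 3599.418, matching 456.2 * 7.89
```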

Another topic that takes up a great deal of time in high school without adequate motivation is that of polynomials. Why do we study them? The typical examples given in high school courses involve volumes, areas, prices, projectile motion, and the occasional parabolic suspension bridge. These are genuine areas of application, but they don’t touch the most important properties of polynomials.

Polynomials are important because they can be written equally well as sums (of powers of a variable) and as products (of linear functions of that variable). They can also be evaluated easily, requiring only addition, subtraction, and multiplication. For those reasons, they are the easiest algebraic expressions for us to work with. It is often worth replacing more complicated functions with polynomials that take on similar values in the region we care about, in order to work with those functions more quickly and easily. These approximations are called Taylor or Maclaurin polynomials, and for almost any function we care to study, the approximation can be made as precise as we like by adding more and more terms. This is how your calculator handles functions like sines and cosines, and it is how many basic results about complicated mathematical systems are obtained.
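As a sketch of the idea (real calculators use more carefully tuned polynomials, and they first reduce the input to a small interval, but the principle is the same), here is the Maclaurin polynomial for sine, built from nothing but the basic arithmetic operations:

```python
import math

def sin_approx(x, terms=8):
    """Approximate sin(x) by its Maclaurin polynomial:
    x - x^3/3! + x^5/5! - x^7/7! + ...
    Adding more terms makes the approximation more precise."""
    total = 0.0
    for n in range(terms):
        k = 2 * n + 1
        total += (-1) ** n * x ** k / math.factorial(k)
    return total

print(sin_approx(1.0))  # about 0.8414709848...
print(math.sin(1.0))    # agrees to many decimal places
```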

Speaking of sines and cosines, approximation and simplification are not always about numerical calculation. We are often more interested in understanding how complicated information can be transmitted through physical systems, whether through natural processes like the vibration of the Earth in an earthquake or over man-made channels, from radio to fibre-optic cables. Polynomials aren’t very good at describing objects or processes of this kind, because every non-constant polynomial eventually explodes to positive or negative infinity, which doesn’t happen to things like mass, price, temperature, or any other real quantities we are interested in measuring.

Trigonometric functions, on the other hand, are perfect for this sort of thing: they stay nicely bounded within a certain range and vibrate back and forth at a wide range of speeds and strengths, making them perfect imitations of everything from planetary orbits to ocean tides to the buzzing of sound waves. We break down functions describing complicated physical behaviour into sums of trigonometric functions, which interfere with each other to produce spikes and flat areas like children bouncing each other out on trampolines. This mathematical technique is called Fourier analysis, and in terms of modern mathematical science it is probably even more important than the work of Taylor and Maclaurin. All of our information technology relies on, among other things, the work of a few long-dead Greeks who drew circles and triangles in the sand to understand the motion of the Mediterranean sun and to appease their own aesthetic curiosity.
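To give a taste of how the interference works, here is a standard first example (a sketch, not a full treatment): a square wave, flat plateaus and sharp jumps included, built out of nothing but sine waves. Each added harmonic sharpens the corners a little more.

```python
import math

def square_wave_approx(t, harmonics=50):
    """Partial Fourier series of a square wave:
    (4/pi) * [sin(t) + sin(3t)/3 + sin(5t)/5 + ...].
    The sines interfere to build flat plateaus and sharp jumps."""
    total = 0.0
    for n in range(harmonics):
        k = 2 * n + 1
        total += math.sin(k * t) / k
    return 4.0 / math.pi * total

# The sum hovers near +1 on (0, pi) and near -1 on (pi, 2*pi).
for t in [0.5, 1.5, 2.5, 4.0, 5.5]:
    print(f"t = {t:.1f}: {square_wave_approx(t):+.3f}")
```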

By the way, if you tell your high school math teacher that I’ve told you about this stuff, they will probably get very annoyed, because it goes against all conventional ideas about how to teach mathematical material and in what order to present it. In the next article I’ll continue to annoy them by explaining how the idea of close approximation hints at another fundamental idea: that the important information about quantities we can measure lies in how fast those quantities change.