From deep learning down

An excavation of mathematics reveals the continuity of our knowledge

Gene Kogan
Dec 28, 2017

If deep learning is what used to be called machine learning, then machine learning is what used to be called computational statistics. After all, few would have been familiar with the term machine learning in the early 1980s, even as people were already writing programs for logistic regression, principal component analysis, and other techniques that would be taken for granted a generation later. Maybe they'd heard it at some upstart conference, but it certainly wasn't in common usage. They had been busy building upon the work of their own predecessors, who had solved those same equations by hand while anticipating that, one day, a machine might be able to make sense of vast amounts of data. So they began building ever more complex electronics, and eventually the software to control them, and the field of computer science was born.

Betty Snyder, Marlyn Wescoff, and Ruth Lichterman programming ENIAC, one of the first general-purpose computers ever built, in 1946.

“What’s computer science?”

That’s what someone would ask you if you’d said it to them while they were trying to get a machine the size of a submarine to solve linear equations in the 1940s and 50s. They had only just figured out how to make them multiply numbers together without vacuum tubes exploding, and hadn’t gotten to things like operating systems and programming languages yet. Computer science would have sounded more like science fiction to them, although they had certainly heard of artificial intelligence by then.

That’s what most people called it after Alan Turing first began to pose abstract questions involving infinite rolls of tape to help him understand the theoretical limits of computation. Widely regarded as one of the fathers of AI, he never described himself as such. In the seminal paper in which he originally proposed the “imitation game,” later known as the “Turing test,” he cited Charles Babbage’s Analytical Engine as the predecessor to what we now call a computer.

Building the Analytical Engine had been Babbage’s lifelong goal, and it was his protégée, Ada Lovelace, who wrote the first program intended to be carried out by a mechanical device, nearly a century before Turing began posing his questions about tape. But neither of them would live to see the machine completed. Engineers of the time had only just begun to harness electricity, let alone produce machinery as intricate as the engine’s design required. Yet long before anyone had heard of a neural network, Lovelace had described an ambition to build what she called “a calculus of the nervous system.”

Calcu-what?

Historians still debate whether calculus was invented by Newton or Leibniz, who independently, and almost simultaneously, came to see that physical motion could be described by equations relating interdependent variables to one another. But in hindsight, it should have seemed obvious once they had Cartesian coordinates to work with. By linking variables to each other, Descartes had paved the way for functions to be formally expressed in the first place, eventually leading to the now-familiar f(x) notation we’ve used to write the equations that model our universe ever since.
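As a purely illustrative aside (the equations below are mine, not the essay’s, and the symbols h_0 for starting height and g for gravitational acceleration are assumptions for the sake of the example), consider what that linkage makes possible: once a falling object’s height is written as a function of time in Cartesian coordinates, the calculus of Newton and Leibniz turns questions about motion into routine operations on that function.

```latex
% Illustrative only: the height of a dropped object as a function of time t,
% with h_0 the starting height and g the acceleration due to gravity.
\[ h(t) = h_0 - \tfrac{1}{2} g t^2 \]

% Differentiating the same function relates the interdependent variables
% to one another: first the velocity, then the constant acceleration.
\[ \frac{dh}{dt} = -g\,t \qquad \frac{d^2 h}{dt^2} = -g \]
```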

The star charts from Su Song’s Xin Yi Xiang Fa Yao (1092) featured a cylindrical projection, similar to the Mercator projection that would be independently developed later in Europe.

Not that you’d have been able to read them if you had been studying the same ideas in Middle Chinese half a millennium earlier. Astronomers of the Song dynasty had kept meticulous records of the night sky, and knew they needed to model the relationships between variables to predict the paths of stars. The statesman and scientist Shen Kuo helped them to develop the most sophisticated and accurate system of trigonometry seen to that point, with elaborate methods for measuring arc lengths, calculating coordinates on spheres, and predicting the trajectories of celestial bodies to a precision that would not be matched in Europe for centuries to come.

Great distance prevented Eastern and Western mathematics from interacting much in those times, yet they seemed to share a curiosity for the same pursuits. Much like his counterparts in China, the Italian polymath Fibonacci had been searching for a way to formalize the nascent rules of arithmetic symbolically, and he would go on to introduce Europe to a number system it would eventually adopt. Traveling along the Mediterranean coast during the late 12th century, he regularly encountered Arab merchants who balanced their books with a set of numerals that would look familiar to us: those we now call the Hindu-Arabic numeral system and still use to this day, an unbounded base-10 positional system built around zero.

The Bakhshali manuscript, from modern-day northern Pakistan, is of an uncertain date itself, but is based on a 3rd- or 4th-century AD script. It is thought to be the earliest extant record of Hindu-Arabic numerals and their basic arithmetic operations. The representation of zero as a dot lives on in the Hindu symbol of the bindu.

“But zero isn’t a number”

That was the conventional wisdom before the 9th-century Persian scholar Muhammad al-Khwarizmi first expressed it in Arabic. He and his colleagues had just invented a branch of math they called al-jabr (algebra), and they needed a way of assigning zero to variables. It was steadfastly held that numbers were for counting things, and you couldn’t count something that didn’t exist. But in devising step-by-step procedures for multiplication and division, al-Khwarizmi, from whose name the word algorithm would later derive, knew that a zero digit was needed, and borrowed an idea that had made its way to Baghdad from South Asia.

It had been Brahmi-writing mathematicians of Gupta dynasty India who first elevated zero to a true number. Before then, all anyone had known was how to count with the positive integers, inherited through a lineage that reached back to the Babylonians of the early 1st millennium BC, who had in turn taken up the oldest numeric characters we know of from the ancient Sumerians. Pressing simple glyphs of cuneiform script into clay tablets, these Mesopotamian scribes left us the earliest record of a positional number system, laying the foundation for arithmetic and every mathematical expression that ever followed.
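To see why a positional system eventually demands a placeholder, a small worked example (my own, in modern notation rather than anything taken from these tablets) makes the stakes concrete: every numeral is a sum of digits weighted by powers of the base, so a symbol for an empty place is the only thing keeping 2,031 distinct from 231.

```latex
% In a positional base-10 system, each digit is weighted by a power of ten:
\[ 2031 = 2 \cdot 10^3 + 0 \cdot 10^2 + 3 \cdot 10^1 + 1 \cdot 10^0 \]

% Without a symbol for "nothing in this place," the remaining digits
% collapse into a different number entirely:
\[ 231 = 2 \cdot 10^2 + 3 \cdot 10^1 + 1 \cdot 10^0 \]
```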

Little did any of them know that 10,000 miles away, in a part of the world they didn’t even know existed, the ancient Mayans had done the same thing.

Left: A collection of Mayan numerals and hieroglyphics used for calculating astronomical tables, found in the Dresden Codex, one of the oldest known texts from the Americas. Right: A cuneiform clay tablet from Mesopotamia depicting mathematical exercises for Babylonian scribes, passed down to them from the ancient Sumerians.
