Transcending the Anthropocene in Science

A mathematical proof bigger than the size of Wikipedia

Paul Erdös was a very strange guy. He couldn’t (or wouldn’t) butter his own toast. His entire wardrobe fitted in one suitcase, and he couldn’t (or wouldn’t) wash his clothes himself. If you extended your hand, he wouldn’t shake it. A mathematical prodigy who could multiply three-digit numbers in his head at the age of three and discovered negative numbers at the age of four, he never bothered about anything other than mathematics. High on amphetamines and caffeine (he was fond of remarking that “a mathematician is a machine for turning coffee into theorems”), he’d often turn up at your doorstep and announce, “My brain is open,” which meant that he was willing to work with you on an unsolved maths problem. During his lifetime, Erdös collaborated with more than 500 authors and published more than 1500 papers.

In the 1930s, Erdös posed what became known to mathematicians as the ‘Erdös Discrepancy Problem’. He offered $500 to anyone who could come up with a proof. It lay unproven for more than seventy years, until the hint of a proof finally came last month from an unlikely source: a computer. The computer scientists Alexei Lisitsa and Boris Konev of the University of Liverpool used computers to get things moving and came up with a proof of sorts. But there is one problem: the proof is 13 gigabytes long (compare this with the size of Wikipedia, which is nearly 10 gigabytes). No human being can be expected to go through that amount of data looking for inconsistencies. If so, can it really be considered a ‘proof’? If human beings are incapable of verifying some ‘knowledge’ produced by a machine, how does that change the way we do science or mathematics?
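
To make the problem concrete: take any infinite sequence of +1s and −1s. Erdös conjectured that, for any bound C you care to name, there is always some step size d and some length k for which the sum of the terms at positions d, 2d, …, kd exceeds C in absolute value. What Lisitsa and Konev showed with a SAT solver is the case C = 2: every ±1 sequence of length 1161 contains such a sub-sum. The snippet below is only a toy sketch of the notion of discrepancy, written for this article; the function name and example sequence are mine, and it has nothing to do with the authors’ actual SAT encoding.

```python
def discrepancy(xs):
    """Largest |x_d + x_2d + ... + x_kd| over all step sizes d and lengths k,
    for a finite list xs of +1/-1 values (positions are 1-indexed in the maths,
    0-indexed in the list)."""
    n = len(xs)
    worst = 0
    for d in range(1, n + 1):          # step size
        partial = 0
        for k in range(d, n + 1, d):   # positions d, 2d, 3d, ...
            partial += xs[k - 1]
            worst = max(worst, abs(partial))
    return worst

# An alternating sequence keeps its step-1 sums bounded, but its step-2
# subsequence is +1, +1, +1, ... and drifts, so the discrepancy grows.
seq = [(-1) ** i for i in range(1, 25)]   # -1, +1, -1, +1, ...
print(discrepancy(seq))                   # prints 12
```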

Questions like these are not new in science or mathematics. Around 1960, Gelernter and his colleagues at IBM set out on an ambitious project: writing a program to find proofs of theorems in elementary Euclidean geometry. One day the program came up with an ingenious, and rather elegant, proof of one of the basic theorems (that the base angles of an isosceles triangle are equal), using a method completely different from the one used by Euclid. It later transpired that one Pappus of Alexandria had arrived at the same proof some six hundred years after Euclid. But the proof generated by the program sparked a debate about who should get the credit for it. Was the proof lying deep within the programmer, with the program merely bringing it to the surface? Or did it lie hidden somewhere in the computational universe, with the program merely arriving at it and its authors having no control over the trajectory?
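
For readers curious about the proof itself (as recounted in Hofstadter’s Gödel, Escher, Bach): instead of dropping a bisector the way Euclid did, the program treated the triangle and its mirror image as two separate triangles and compared them. A worked sketch of that Pappus-style argument, in my own wording:

```latex
\begin{proof}[Sketch of the Pappus-style argument]
Let $\triangle ABC$ have $AB = AC$. Compare $\triangle ABC$ with $\triangle ACB$,
the same triangle read in the opposite order. Then $AB = AC$, $AC = AB$, and the
included angle $\angle A$ is common, so $\triangle ABC \cong \triangle ACB$ by
side--angle--side. Corresponding angles are therefore equal, hence
$\angle ABC = \angle ACB$.
\end{proof}
```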

In this article, we aim to answer such questions and, in the process, to raise more about our changing relationship with science and the scientific method as we make exponential progress in computation and artificial intelligence. First, we go back to the origins of the modern scientific method, look at how science has been done for the past four hundred or so years, and ask how, if at all, artificial intelligence is showing signs of changing that.

The English Knights: Sir Francis Bacon and Sir Karl Popper

Almost the entirety of modern science, as we know it, is the result of what is known as the ‘scientific method’. It is our most robust and reliable tool for probing the mechanisms by which the universe operates. As self-evident as its usefulness might seem to us, the scientific method is a fairly recent phenomenon, four hundred or so years old, when compared with the history of our species. From the prehistoric period up to the Enlightenment, despite our best attempts to produce knowledge of the useful kind, we made almost no progress. Nearly everything in modern science that is useful came after the advent of the scientific method; knowledge acquired before it can, for all practical purposes, be safely discarded. What changed during the Enlightenment? What gave birth to the scientific method?

Physicist David Deutsch says, “The ‘Enlightenment’, at its root, was a philosophical paradigm shift.” The Royal Society’s motto, coined in 1660, ‘Nullius in verba’ (‘take nobody’s word for it’), sums up that shift. Most knowledge prior to this period was arrived at through some kind of argument from authority, and trying to verify the knowledge handed down by authorities was seen as a sort of blasphemy. The Enlightenment was a rebellion of sorts. It started a tradition of criticism and doubt: theories were to be tested before people accepted them. What Galileo called ‘cimenti’, or ‘trials by ordeal’, became the hallmark of science, and testability became what Popper would later call “the criterion of demarcation between science and non-science.” This created a culture in which testable hypotheses generated by human beings were passed through the sieve of experimentation and observation, and that culture has been the source of the sustained and rapid generation of knowledge that has stood the test of time, unlike the knowledge generated before the scientific method arrived. How we arrive at these hypotheses is a difficult question to answer, and it is central to our discussion of how artificial intelligence is going to affect the scientific method. Here we look at the views of Francis Bacon and Karl Popper.

We started this journey with a quote by Bacon from 1630. As it turns out, Bacon had definite ideas about how science is done. His proposition was that we observe the world carefully and then arrive at ‘laws’ via induction to make sense of those observations. He asserted that this inductive method would provide humankind with knowledge the way fine liquor is strained from ‘countless grapes, ripe and fully seasoned, collected in clusters’: observe the world with care, and we will be able to make the connections, via induction, that add to the ever-growing body of scientific knowledge. But can we make observations if we don’t know what to observe, or if we have no a priori framework of background knowledge in which to make them? Karl Popper, the other knight in our story, had a lot to say.

“Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure.” (Popper, 1963)

Popper had a completely different idea of how we do science. His view is that we form conjectures about how the world works well before we set out to collect data or make observations. Hence all our observations are, to borrow his phrase, “theory-laden”. We make observations and try to fit them to our conjectures; if they don’t fit, the conjectures are ‘falsified’ and we discard them, make new ones, and set out again to see whether the data fit. We progress by ‘falsificationism’.

Now, the battle of the knights has been long-drawn, and it is hard to pick a winner. For our purposes, we will look at the history of the field of artificial intelligence, at how the two methods have been employed to generate lasting scientific knowledge, and at what lies in the future of the field.

The Robot Physician: Artificial Intelligence Through the Years

As a medical student, I often used to wonder about the purpose of the mindless rote learning that seemed to be the norm in med school. Why don’t they allow us to use Google or other search engines during exams? If the internet has made searching for knowledge so universal, why not use it? Why waste precious neural real estate storing facts that can be looked up on a mobile screen? I never gathered the courage to put these questions to my professors. But given the progress artificial intelligence is making, such questions may soon matter a great deal to how we run the healthcare system, and they will probably change the role of humans in the process.

When IBM’s Watson won the popular television quiz show ‘Jeopardy!’ against former human champions, it generated a lot of interest in the media and the public. Since then, Watson has shifted its focus to the less glamorous pursuit of a career in clinical oncology. Vast amounts of data are fed to it every day, and with algorithms that keep improving over time, it now helps clinicians determine treatment options for patients already diagnosed with lung cancer. But how does it function? How have we been training computers and programs to do these tasks? Are they using Baconian induction or Popperian falsificationism? In this section we look at how data mining algorithms have evolved across various fields over the years. First, we step away from the murky world of clinical medicine to the more elegant and inspiring world of the stars, and take a look at Kepler and his laws of planetary motion.

Tycho Brahe was a Danish astronomer who made some of the most detailed observations of his time of the night sky and the celestial bodies, without the aid of optical prosthetics like the telescope. When Johannes Kepler came across this huge body of data, he analysed it and came up with his three laws of planetary motion. This looks like an example of scientific knowledge derived through the Baconian process of induction: here were the ‘ripe grapes’ of Brahe’s observations, all ready to be strained into the ‘fine liquor’ of natural laws, and all Kepler did was mechanically derive the laws from them. But the Popperians have a completely different view of the process. Kepler was a Copernican at heart, who did not believe in the geocentric model of the universe as preached by the church and held as the dominant view at the time. He approached the data by measuring the varying distances of the planets from a fixed sun. Had he considered the earth to be a static object, he might never have arrived at the conclusions that he did. Even Brahe had his own ‘conjectures’ when he set out to gather the data. Thus, the Popperians claim, the discovery of Kepler’s laws of planetary motion was a result of falsificationism.

More than three hundred years later, the computer scientist Pat Langley and his group attempted to retrace Kepler’s footsteps and derive his third law from the dataset generated by Brahe. The prevailing tradition of thought in artificial intelligence that led to this was best summarised by Herbert Simon, who also collaborated on the project:

“Computer programs exist today that, given the same initial conditions that confronted certain human scientists, remake the discoveries the scientists made.”

They created a program, aptly named BACON1, that tried to derive natural laws when fed the datasets encountered by the human scientists who discovered them. BACON1 came up with Kepler’s Third Law (and, when fed the relevant data, Boyle’s Law, Ohm’s Law, and Galileo’s Law). The upgraded BACON3 came up with the Ideal Gas Law and Coulomb’s Law as well. This was seen as something of a triumph for the Baconian tradition of induction. The computer surely needed some background knowledge to make sense of the data, but this was about as mechanical as it gets. As the philosopher of science Donald Gillies observes in his book ‘Artificial Intelligence and Scientific Method’, “Baconian or mechanical induction, although advocated by Bacon in 1620, was used, either not at all, or hardly at all, in science until the rise of artificial intelligence or the emergence of machine learning programs…”
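
To give a flavour of what ‘mechanical induction’ means here, the sketch below mimics the spirit of BACON1’s search on Kepler’s third law: given each planet’s mean distance D from the sun and its orbital period P, it tries simple products of powers of the two variables and notices that D³/P² is very nearly constant across the planets. This is a toy reconstruction under my own assumptions, not Langley’s actual program; the data are rounded textbook values in astronomical units and years.

```python
from itertools import product

# Rounded textbook values: (planet, mean distance in AU, period in years)
planets = [("Mercury", 0.387, 0.241), ("Venus", 0.723, 0.615),
           ("Earth", 1.000, 1.000), ("Mars", 1.524, 1.881),
           ("Jupiter", 5.203, 11.862), ("Saturn", 9.539, 29.458)]

def spread(values):
    """Relative spread of a list of positive numbers; near 0 means 'constant'."""
    return (max(values) - min(values)) / min(values)

# BACON-style search: try D^a * P^b for small integer exponents and keep
# the combination whose value varies least across the planets.
best = None
for a, b in product(range(-3, 4), repeat=2):
    if a == 0 and b == 0:
        continue
    values = [d ** a * p ** b for _, d, p in planets]
    score = spread(values)
    if best is None or score < best[0]:
        best = (score, a, b)

score, a, b = best
print(f"Most nearly constant: D^{a} * P^{b} (relative spread {score:.3%})")
# Expected: D^3 * P^-2 (or the equivalent D^-3 * P^2), i.e. Kepler's third law.
```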

Bacon’s prescience is evident from the quote we started with. Popper and the other philosophers who rejected Baconian induction had not witnessed the rise of artificial intelligence when they formed their ideas, so it is easy to see why they were so dismissive of it: human discoveries had almost all been made with a falsificationist approach. Artificial intelligence might have started to change that.

Now we return to the field of medicine, which opened this section, and look at its recent developments in light of the changing paradigms in how we do the science that drives clinical medicine forward.

It turns out that my anxieties and frustrations were shared by the artificial intelligence community (or they might have been motivated by something nobler, but I like to think of it this way), who have been trying to train computers to take over from humans the more mundane and routine task of the correlational analysis of diseases, symptoms, and available treatment options. We have already encountered Watson, but machine learning in clinical medicine has a long history.

MYCIN was a program (technically an ‘expert system’, loosely so called because experts are what such programs try to outdo) developed by the Stanford Heuristic Programming Project in the 1970s. It was used to identify the bacteria causing an infection by analysing the symptoms and then to prescribe the best antibiotic to cure it. When MYCIN was pitted against faculty and students of the Stanford Medical School, it outdid all of them (though not by a significant margin). MYCIN never went into clinical practice, mostly because computers at the time were a pain to use: it was more time-consuming and expensive to feed the data to a machine than to consult a human expert. But things were changing, and changing fast. In the 1980s another expert system, CADUCEUS, was developed, this time covering the much broader domain of internal medicine; it has been dubbed the “most knowledge-intensive expert system in existence.” In the next decade yet another expert system, ASSISTANT, saw the light of day. It was trained to learn in three domains: lymphography, prognosis of breast cancer, and locating the primary tumour in a patient. As the results showed, ASSISTANT outperformed the experts in all three domains.
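
To give a flavour of how such a system encodes knowledge: MYCIN worked from a few hundred if-then rules supplied by clinicians, each carrying a ‘certainty factor’ expressing how strongly the premises support the conclusion. The rules and numbers below are invented for illustration and follow only the general shape of MYCIN-style rules, not its actual knowledge base.

```python
# Toy MYCIN-style rules: IF the premises hold, conclude the hypothesis
# with the rule's certainty factor (CF), a number between -1 and 1.
RULES = [
    {"if": {"gram_stain": "negative", "morphology": "rod", "site": "blood"},
     "then": ("organism", "e_coli"), "cf": 0.6},       # invented rule and CF
    {"if": {"gram_stain": "positive", "morphology": "coccus"},
     "then": ("organism", "staphylococcus"), "cf": 0.5},
]

def conclude(findings):
    """Return hypotheses whose premises all match the reported findings."""
    conclusions = {}
    for rule in RULES:
        if all(findings.get(k) == v for k, v in rule["if"].items()):
            attr, value = rule["then"]
            # Combine evidence from multiple rules the way positive certainty
            # factors are classically combined: cf_new = cf_old + cf * (1 - cf_old)
            old = conclusions.get((attr, value), 0.0)
            conclusions[(attr, value)] = old + rule["cf"] * (1 - old)
    return conclusions

print(conclude({"gram_stain": "negative", "morphology": "rod", "site": "blood"}))
# {('organism', 'e_coli'): 0.6}
```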

The evolution of machine learning in medical diagnosis has been a fascinating testament to both the Baconian and Popperian modes of the scientific method. At first, experts fed background knowledge into the systems, providing grounds for hypothesised conjectures. But soon the deluge of data became too large to be sieved by human experts. The machines now arrive at conclusions by a process we might be forced to call ‘induction’, since we have no way of keeping track of the ‘conjectures’, if any, behind them. This disconnect between the correlations found by the machines and the human understanding of disease is going to raise important questions as the technology becomes more ubiquitous, as the sketch after this paragraph tries to illustrate in miniature.
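
In the toy sketch below (data and names of my own invention), nothing resembling a medical ‘conjecture’ goes in: the program simply counts how often each finding co-occurs with each diagnosis and reports the strongest associations, which is induction in Bacon’s sense, however shallow.

```python
from collections import Counter, defaultdict

# Invented toy records: (observed findings, final diagnosis)
records = [
    ({"fever", "cough"}, "pneumonia"),
    ({"fever", "rash"}, "measles"),
    ({"cough", "wheeze"}, "asthma"),
    ({"fever", "cough", "wheeze"}, "pneumonia"),
    ({"rash"}, "allergy"),
]

# Count how often each finding appears with each diagnosis ...
pair_counts = defaultdict(Counter)
finding_totals = Counter()
for findings, diagnosis in records:
    for f in findings:
        pair_counts[f][diagnosis] += 1
        finding_totals[f] += 1

# ... and report the empirical P(diagnosis | finding), with no medical
# theory anywhere in sight.
for finding, diag_counts in pair_counts.items():
    diagnosis, count = diag_counts.most_common(1)[0]
    print(f"P({diagnosis} | {finding}) = {count / finding_totals[finding]:.2f}")
```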

If a machine makes an error during diagnosis, whom do we hold responsible? Do we grant machines the same latitude that we grant human experts, or will the guidelines be stricter? It turns out these are not the only anxieties caused by artificial intelligence.

‘The Impenetrable Scientific Oracle’

We are part of the species Homo sapiens, Latin for ‘wise man’. Of all the attributes we hold dear, our cognitive repository, our ability to think, is probably the one we are most proud of. The apprehension, the sense of impending doom, whenever we hear about machines starting to ‘think’ better than us is what Hollywood has been feeding on for years now. The fear of rogue machines taking over is real enough, but for the most part it is the fear of ‘threatened egotism’: the fear of losing out to machines in the one thing we pride ourselves on, our thinking prowess. Roger Penrose, in his book ‘The Emperor’s New Mind’, put it this way:

“… to be able to think, that has been a very human prerogative. It has after all, been the ability to think which, when translated to physical terms, has enabled us to transcend our physical limitations and which has seemed to set us above our fellow creatures in achievement. If machines can one day excel us in that one important quality in which we have believed ourselves to be superior, shall we not then have surrendered that unique superiority to our creations?”

As we have seen, the field of artificial intelligence is making progress at an unprecedented rate. With the huge amounts of data we are feeding into programs, and with data mining algorithms that keep getting better at sifting through them, are we soon to encounter correlations and predictions made by computers that cannot be understood by humans? Are we to encounter more shots at ‘Laws of Nature’ like Rule 12 on the secondary structure of proteins, discovered by the GOLEM program? When I put these questions to the philosopher Daniel Dennett, he hinted at a future of “an era of science by impenetrable scientific oracle” brought forth by the “new use of big data and data mining algorithms.”

A more anthropocentric view of how science should be done is held by the scientist and philosopher Massimo Pigliucci. He said, “If we are reduced to pattern finding algorithms without any understanding of what’s going on I’d say we are no longer doing science. I don’t subscribe to the ‘shut up and calculate’ school of quantum mechanics, for instance. I regard it as a cop out on what science is supposed to be doing: increase our understanding of the world.”

“… what science is supposed to be doing: increase our understanding of the world.” -Massimo Pigliucci

Conclusion: Putting the Human back in Science

Science is a human enterprise to make sense of the universe around us. It starts off with the assumption that the universe is understandable in terms comprehensible to us humans. Certainly, the utilitarian benefits of science far outweigh those brought about by any other line of thought or mode of enquiry. But the primary driving force behind science has always been human curiosity. When George Mallory was asked, “Why do you want to climb Mount Everest?”, he famously replied, “Because it’s there.” Science has been an equally audacious effort on our part to tame the laws of nature into the confines of human understanding.

If and when the ‘era of the impenetrable scientific oracle’ brought forth by artificial intelligence arrives, a part of us will still be longing to take the journey, to ‘understand’ the wisdom produced by the oracle. Some random guy will always be turning up at your doorstep in shabby clothes, suitcase in hand, to announce, “My brain is open,” ready to work with you on an unsolved mathematical problem.

References

• For a wonderful biography of Paul Erdös, read The Man Who Loved Only Numbers: The Story of Paul Erdös and the Search for Mathematical Truth by Paul Hoffman

• The proof of the Erdös Discrepancy Conjecture, as done with computers, can be read here: http://arxiv.org/abs/1402.2184

• Gelernter’s program’s proof of the equality of the base angles of an isosceles triangle was made famous in Douglas Hofstadter’s wonderful book Gödel, Escher, Bach. For a more detailed description, read Mind as Machine by Margaret Boden

• David Deutsch, The Beginning of Infinity, 2011

• The philosophical works of Francis Bacon have mainly been revisited here through various online encyclopaedias of philosophy

• The Logic of Scientific Discovery by Karl Popper

• For a more detailed discussion of the BACON programs and their background stories, read Scientific Discovery: Computational Explorations of the Creative Processes by Pat Langley et al.

• Machine Learning for Medical Diagnosis: History, State of the Art and Perspective by Igor Kononenko: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.96.184&rep=rep1&type=pdf (pdf link)

• Artificial Intelligence and Scientific Method by Donald F. Gillies, 1996 (a must-read for anyone interested in delving deeper into the topic)

• Roger Penrose, The Emperor’s New Mind, 1989

• Massimo Pigliucci, professor at the Graduate Center & Lehman College, City University of New York, further elaborated on the topic via email: “If we are getting to the point where computers generate ‘knowledge’ that we cannot understand, then that doesn’t count as human knowledge. The computers may very well be right, but if we can’t understand what they are doing they will indeed become ‘oracles’ to just be trusted. Would that still count as ‘science’? Not in the sense of a human endeavor to understand the universe, which I think is what science is supposed to be. That said, of course, computer-generated knowledge will still likely be useful to human beings. We’ll just have to accept it on, ahem, faith…”