How to Solve a Problem, or the Foundations of Evolutionary Epistemology

Evolutionary epistemology is a diverse set of naturalistic approaches to knowledge that make use of evolutionary theory. I believe that an understanding of this obscure area of philosophy holds the key to some critical questions that relate to problem-solving in the human mind, in biological adaptation, and in systems of artificial intelligence. The fundamental problem that evolutionary epistemology tries to answer is simple: how do you solve a problem?

Here, I aim to provide a clear rationale for evolutionary epistemology’s rather counter-intuitive approach to knowledge. Campbell (1974) proposed the term ‘evolutionary epistemology’ to describe Karl Popper’s approach to a scientific metaphysics but, since then, the term has been generalised to cover a vast set of very different beliefs: naturalised epistemology, universal Darwinism, genetic epistemology, philosophical Darwinism, and so on. These philosophies are highly diverse, but they do share a common theme: a scientific approach to metaphysics, one that makes particular use of evolutionary theory. I therefore use the label ‘evolutionary epistemology’ as a blanket term, but all I personally really mean is ‘problem-solving from a position of total ignorance’.


From a human perspective, the first port of call in problem-solving is usually deduction or logic. Given the things that you already know, you can try to formulate an answer to the problem from existing knowledge. However, a deductive approach demands that you have a relevant set of memories through which to make a deduction. In the absence of a complete set of relevant premises, a deduction in the real world can be wrong, which can expose an inadequacy in the formulation of the problem.

The obvious philosophical alternative, in the absence of relevant premises, might be induction. Using the memories you do have, you could generalise and extrapolate. An inductive approach is risky because the strength of a generalisation rests on the sufficiency of existing knowledge to make a reliable extrapolation. In other words, induction also requires a relevant set of existing knowledge. Thus, again, induction in the real world can be wrong, the error stemming from the leap of reason between the evidence and the solution.

It is worth noting that a third method, abduction, is often cited, but there is nothing new in abduction. To my mind, abduction is ‘sensible induction’, where a generalisation has a degree of uncertainty attached to it, often in comparison to some alternative. In this sense, abduction is described as ‘inference to the best explanation’, which, if true, is methodologically deductive and therefore necessarily leads to the best answer (!). I do not consider abduction to be clearly distinct from deduction.

In this way, the classical approaches to problem-solving within philosophy, deduction and induction, are reliant on existing knowledge. Far from being suitable methods for the growth of knowledge, it is apparent that neither deduction nor induction yields any new knowledge. The conclusion from a deduction is inherent in the premises, so no new knowledge has been acquired. For induction, any new knowledge that has been acquired has not been acquired through necessity, but rather through masking existing uncertainty in the quality of the knowledge; if this uncertainty is acknowledged (i.e. abduction), then the inductive procedure is operating deductively and no new knowledge has been gained.

Therefore, classical approaches to problem-solving rely upon the problem being a tautology, or in other words not being a real problem. This conclusion is, fittingly, due to the way in which I have defined the word ‘problem’, as something for which new knowledge is required. You may think this definition is inaccurate, but it is very much a common-sense approach to knowledge as a minimum set of statements that define a position. Let us analyse this position.

Under the definition of ‘problem-solving as requiring new knowledge’, we would have to accept that computers do not solve real problems. A computer can be imagined as a set of axiomatic rules about symbolic manipulation or syntax. The problem-solving ability or intelligence of a computer is therefore in the usefulness of the axiomatic rules. In principle, the computer could be given rules for some computation that we would consider to be incorrect (e.g. ‘when I say plus, do minus’). In this case, the computer would be entirely logical in making this deduction, in so far as it has not had a malfunction, but it would not be very useful for problem-solving.
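
To make this concrete, here is a minimal sketch, purely illustrative, of a ‘computer’ as a table of designer-supplied rules; the rule names and the swapped rule are hypothetical, chosen only to echo the ‘when I say plus, do minus’ example.

```python
# A 'computer' imagined as a table of syntactic rules supplied by a designer.
# The machine only manipulates symbols; the usefulness of the rules is the
# designer's responsibility. (Illustrative sketch; names are hypothetical.)

correct_rules = {"plus": lambda a, b: a + b, "minus": lambda a, b: a - b}
swapped_rules = {"plus": lambda a, b: a - b, "minus": lambda a, b: a + b}

def run(rules, op, a, b):
    """Apply whatever rule the designer supplied for the symbol `op`."""
    return rules[op](a, b)

# Both machines are perfectly 'logical' given their rules; only the
# usefulness of those rules differs.
print(run(correct_rules, "plus", 2, 2))  # 4
print(run(swapped_rules, "plus", 2, 2))  # 0, faithful to its faulty rules
```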

I think this example shows something that the philosopher John Searle has ardently defended in his Chinese Room thought experiment: namely, that the intelligence of a computer comes entirely from the designer. In Searle’s imagination, the designer specifies a set of syntactical rules that instruct the behaviour of the system in each instance. In this sense, if there is a mistake in the behaviour of the system, blame would fall to the designer, whose oversight is the cause of the error; the system itself is incapable of culpability because it is incapable of originality.

In some senses, Searle is right, but the designer cannot be assumed to be omniscient. In a system where the designer is exactly specifying the output sentences in response to input sentences, the behaviour of the system is clearly the exact consequence of the designer’s input. But, when the designer specifies rules, the system may behave in ways that the designer could not have predicted.

A clear analogy of this could be a chaotic system like a triple pendulum. In this system, the positions of the swinging pendula are determined by deterministic mathematics, such that, if rerun with the same parameters, the system behaves in exactly the same way. Yet, because the system is chaotic, with tiny differences in starting conditions producing wildly different trajectories, the behaviour of the pendula cannot be known by the designer before a given simulation is run.
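
The same point can be made with any deterministic but chaotic rule. The sketch below uses the logistic map as a simpler stand-in for a triple pendulum (a substitution for the sake of brevity, not part of the original analogy): rerunning with identical parameters reproduces the behaviour exactly, yet the outcome cannot be anticipated without actually running the system.

```python
# Determinism without predictability: the logistic map as a simple
# stand-in for a chaotic pendulum simulation (illustrative sketch).

def trajectory(x0, r=3.9, steps=30):
    """Iterate the deterministic rule x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Rerun with the same parameters: exactly the same behaviour every time.
assert trajectory(0.2) == trajectory(0.2)

# A minute change in the starting condition soon leads somewhere very
# different, so the designer can only know the outcome by running it.
print(trajectory(0.2)[-1])
print(trajectory(0.2000001)[-1])  # typically a markedly different value
```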

Let us apply this analogy: to have a Chinese Room that is capable of conversation so as to pass the Turing test, the output sentence would have to depend upon the input sentence. I am not suggesting that the room has semantic qualities, which still reside solely with the designer, but rather that this feedback process may be unpredictable, such that the designer themselves could be fooled by the apparent semantic comprehension of a Chinese Room system that they have designed, if the system behaves chaotically.

If the designer of a system does not have knowledge as to how the system will behave in a particular circumstance, then something remarkable has happened! First, through its rules, the computer can solve problems that the designer has not explicitly foreseen. Second, through feedback and chaos, the computer can reach answers that the designer could not have predicted (by deduction); to be clear, in order to understand a purely syntactical system’s behaviour, the designer requires semantics — i.e. new knowledge about the system beyond its syntactical rules. In this way, complex syntax can demand semantic interpretation. (As an interesting tangent, this is analogous to saying that chemistry (syntax) can yield biology (semantics) as an irreducible field of study.)

Yet, despite this, the system is unable to change its behaviour. In other words, the system is not autonomous, and the ‘intelligence’ of the system is exclusively derived from limits instilled by the designer, even if the designer does not intend or understand them. In this way, the notion that a computer, as described in the Chinese Room thought experiment, is incapable of problem-solving is upheld.

But how does this bear on the definition of a ‘problem as something requiring new knowledge’? Let us tackle this sidelong. When a school child is set a mathematics problem, the solution is a genuine revelation because deduction does not come naturally; if it did, there would be no need for mathematics to be taught because it would all be so obvious and everyone would score 100% on all their tests. Yet, imagine two school children sitting a multiplication test: one child decides to learn their times tables off by heart so they can recite the answer to any problem, the other child decides to learn a set of rules for how to multiply numbers together. At the end of the test there is a question that requires multiplying together two very large numbers, which the first child has not learnt but the second child can solve without extra effort.
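
The two strategies can be caricatured in code: a finite lookup table versus a general rule. This is a rough sketch only; the repeated-addition rule stands in for whatever multiplication method the second child actually learns.

```python
# The two test strategies, caricatured (rough sketch).

# Child 1: rote learning, a finite lookup table (syntax only).
rote_table = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}

# Child 2: a general rule for multiplying any two whole numbers.
def multiply(a, b):
    total = 0
    for _ in range(b):
        total += a
    return total

print(rote_table[(7, 8)])    # 56, recited from memory
print(multiply(7, 8))        # 56, derived from the rule
print(multiply(1234, 5678))  # the rule copes with numbers never memorised
# rote_table[(1234, 5678)]   # KeyError: the table has nothing to say
```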

My question is, in this school exam scenario: where is the knowledge? The first child has learnt a lot of knowledge, whilst the second has learnt very little; and yet, the second child has understanding or semantics whereas the first child only has syntax. The second child’s understanding of the problem means that the calculation itself contains no knowledge. Any knowledge that the second child has is of a method, and any instance of using the method provides no information because the answer directly follows from the question. (If the method threw up a result that contradicted some other expectation, the example would be more complex.) But regardless, in this way, the second child does not need lots of knowledge in order to get 100%.

My first point is this: to the second child, the test does not pose a problem, it is just a set of questions to which the child can work out the answers. Here, I am looking to make a distinction between a question, which can be given an answer without new knowledge, and a problem, which requires new knowledge.

But, let us take this idea further. Say the first child learnt from a multiplication table that contained an error: the child would take the mistake as the solution, which, from the other child’s perspective, is obviously wrong. Both children have a method of solving the problem and, given their respective methods, each child’s deduction proceeds differently. By framing the problem with different guiding rules for syntax, the children have different deductive powers over that syntax. Yet, we would describe the first child as ‘not understanding the problem’ and the second child as ‘understanding the problem’.

Semantics is not arbitrarily defined, but relates to the real world. For the first child’s answer to be coherent in the real world, the mistaken solution would have to be the label for the correct answer — and hence both children could be talking about the same quantity. The likelihood is, however, that a child who has rote-learnt their multiplication tables simply would not recognise their answer as a mistake. In this way, the first child is using mathematics only in name (syntax) and not meaning (semantics) because they do not see a relation between the symbols of mathematics and entities in the real world. In other words, the first child is simply using syntax and so has no natural checks, whilst the second child can check their answer against common sense from the real world.

The picture I am building is this: deduction cannot lead to knowledge, but the relations of deductions to the real world can lead to knowledge because of how the problem is framed within the real world. A question, as previously defined, is purely about syntax, whereas a problem is purely about semantics but may use syntax to convey those semantics. Mathematics is a purely deductive set of operations from a set of axioms, but the truth of mathematics in the real world is obscured by the abstraction to symbolic representation, allowing mathematical problem-solving to mean something (i.e. to have semantic content from syntactic operations).

There is one last piece I wish to consider. In principle, it is possible to deduce the entirety of mathematics from its foundational axioms. Yet, deriving a particular result may take a considerable amount of time. In the real world, time is precious, and so deduction, which is incapable of yielding knowledge, may be replaced by simply stating the result. For example, the proof of Pythagoras’ Theorem is reasonably long, and your average trigonometry test almost certainly allows less time than it would take a student to derive the theorem de novo. Given this, the deductive argument for Pythagoras’ Theorem can be taken for granted and the theorem learnt as a handy shortcut. A student who learns the theorem by rote is clearly the same as the first child above, but the likelihood of the student making a mistake in the derivation of the theorem, which could lead to a different and false result, is significantly larger than that of the student mis-remembering a short string of syntax. Pythagoras’ Theorem, as a shortcut through long-winded deductive reasoning, is knowledge to someone who cannot see how to derive the answer.
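
The value of the shortcut can be caricatured as follows: recalling the theorem gives the answer at once, whereas the brute-force search below stands in, loosely and only for illustration, for the cost of working the result out from scratch.

```python
import math

# 'Knowledge as a shortcut': a remembered result versus laboriously
# reconstructing it. (The search is a loose stand-in for the cost of a
# full derivation, not a real derivation.)

def hypotenuse_by_theorem(a, b):
    # The memorised shortcut: c = sqrt(a^2 + b^2).
    return math.sqrt(a * a + b * b)

def hypotenuse_by_search(a, b, step=1e-5):
    # Step upwards until c^2 matches a^2 + b^2.
    target = a * a + b * b
    c = 0.0
    while c * c < target:
        c += step
    return c

print(hypotenuse_by_theorem(3, 4))  # 5.0, instantly
print(hypotenuse_by_search(3, 4))   # ~5.0, after roughly half a million steps
```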

In this way, the relationship between a question and a problem can be explicated: a question is a problem until it is solved because until it is solved it is not clear that the given method is capable of a solution. If, upon inspection, a method can be understood to be valid, then the problem becomes a question. However, until an answer is generated and shown to be the correct or best answer, all questions are problems.

Before concluding, there is one last point I wish to consider: why does deduction work? This may sound like a nonsensical question, because presumably I would have to use deduction to answer it, but the question in fact exposes a whole raft of nonsense that I wish to avoid. Deduction works because it is how the world works. In this way, the universe can be imagined as a gigantic set of computable operations — i.e. the universe could inhabit a computer, not in the structural or pre-deterministic sense but in the sense of determinism.

People have traditionally struggled to reconcile the notion of determinism with the obvious existence of free will. Determinism does not deny the existence of free will; rather, it enables it. In a world that did not make deductive sense (i.e. not a toy universe in a computer), free will would be impossible because your choice would have no bearing on the consequences of your choice. Free will demands that you can know what you are choosing. Fundamental indeterminism is really another form of pre-determinism because the future is independent of the past — including your choices — such that only ‘the mind of God’ outside of time would know the direction of change in the universe.

But determinism does not imply pre-determinism. Pre-determinism states that the future is entirely determined by the past. Whilst this may be fundamentally true at the very foundations of reality, we do not operate at this level — to use Richard Dawkins’ phrase, we are the ‘denizens of the middle world’. Determinism is a pragmatic approach that acknowledges the limitations of knowledge: the world may well be pre-determined, but my world is not.

Science is an intellectual endeavour that has reached this fundamental conclusion. Relying on fundamental determinism, science has realised the practical indeterminism of events outside of the ‘middle world’ — at the scale of the very, very small in quantum mechanics and of the very, very large and complex, as in meteorology. I will offer further comments on science elsewhere, but here it is sufficient to say that science is not an attack on free will. Science has pushed back uncertainty and uncovered false knowledge, but there is no sense in which science has ‘disproved’ the freedom that is self-evident to your being. In many senses, the knowledge of your nature and your biases provided by science has, in fact, given you the opportunity to be liberated from yourself.

Lastly, if the foundations of reality are practically indeterminable, this has huge implications for philosophy and science. I have argued that a child who knows how to solve a problem can do so without acquiring new knowledge. As the world is effectively a giant deductive computer, the universe without life contains no knowledge. However, life introduces knowledge into the universe because living things cannot get access to the foundations of the universe, even if they depend upon them. Knowledge is therefore a prerequisite shortcut to the ‘middle world’ that we live in.

High quality or ‘true’ knowledge is, almost by definition, more nearly consistent with the real world than low quality or ‘false’ knowledge, but all knowledge is inherently an approximation without direct foundation in reality. Going back to the example of Pythagoras’ Theorem, this is analogous to saying that we cannot prove the theorem’s truth because we do not know how to derive it, but it is nevertheless possible to benefit from using the theorem to solve problems. (What mathematicians actually do is state that Pythagoras’ Theorem is correct given certain limits and axioms because they do not care about the truth of the knowledge in reality but simply want to explore trigonometry in a mathematical reality.)

In other words, no system of knowledge is ever going to provide absolute truth or absolute falsity, but this does not annihilate the possibility of knowledge. All knowledge about the real world contains elements of uncertainty, without which it would not be knowledge. The resounding question, and the aim of my argument, is this: how do you solve a real-world problem if not by starting with its foundations?

Herein lies the utility of evolutionary epistemology, which can solve problems by starting from existing knowledge even if that knowledge is wrong. Unlike common sense or philosophical epistemology, which would start from deduction, the abduction-like approach of evolutionary epistemology can justify existing knowledge because this knowledge must have been subject to evolutionary forces that lead to truer knowledge outcompeting falser knowledge. This knowledge can still be falser than other possibilities, but it remains a better solution than the given alternatives.
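
As a minimal sketch of this style of problem-solving — an illustration with hypothetical numbers, not a formal model — variation and selection can improve a badly wrong starting guess without any deduction from foundations:

```python
import random

# Variation and selection: improve a guess at a hidden target, starting
# from existing 'knowledge', however wrong it is. (Illustrative sketch;
# the target, error function and parameters are hypothetical.)

def error(guess, target):
    return abs(guess - target)

def evolve(initial_guess, target, generations=200, spread=1.0):
    best = initial_guess
    for _ in range(generations):
        # Variation: blind changes to the current best guess.
        variants = [best + random.gauss(0, spread) for _ in range(20)]
        # Selection: keep whichever candidate survives the test best.
        best = min(variants + [best], key=lambda g: error(g, target))
    return best

# A very poor starting point is still a starting point: the answer that
# survives is better than the given alternatives, not guaranteed perfect.
print(evolve(initial_guess=-50.0, target=42.0))
```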

With the backdrop I have provided, I hope this ‘inversion of reasoning’ within evolutionary theory is better understood. Evolutionary epistemology does not need axioms to provide a foundation, but it nevertheless provides one. For there to be a problem, there must be a living subject who is being challenged by the problem because a problem requires new knowledge and knowledge is the property of living things. Any knowledge that has bearing on the problem is therefore dependent upon the living subject. For this living subject to exist in the first place, the universe must be deterministic, at least to a large degree, for there to be regularities within the universe which the subject can exploit to survive and reproduce so as to transcend the division between chemistry and biology. Therefore, the existence of knowledge is entirely dependent upon a deterministic universe; the existence of knowledge gives an a priori vindication that the universe is fundamentally deterministic, even if it is practically indeterministic. But more importantly, the existence of knowledge is entirely dependent upon evolutionary forces; the existence of knowledge gives an a priori vindication that this knowledge is a better starting point than previous alternatives, which can be assumed to have had a random starting point.

It is also worth mentioning that, for this reason, evolutionary epistemology is generally understood by philosophers to be a science, not a philosophy. The methods of philosophy typically involve deduction from premises, but the methods of science are vastly more anarchic because they rely upon the real world. This means that science can attack the premises behind conclusions for reasons beyond incoherence, such as natural absurdity; for example, it may be that someone has a perfectly coherent theory of evolution that would work successfully in a computer simulation, but that does not mean that the theory is applicable to the real world. In this regard, the reason why mathematics, as a prime example of deductive logic, is useful is because of the application it has to the real world; without this, mathematics is nothing but coherent speculation. As such, if mathematics could coherently prove something nonsensical (e.g. 2+2=5), then the axioms of mathematics would be wrong and in need of updating to retain their semantic value; it would not transpire that reality itself is incoherent.


Conclusion: a summary of the foundations of evolutionary epistemology, the science of problem-solving:

1. Problem-solving is reliant on existing knowledge, which classical philosophy cannot justify. Further, classical philosophy cannot explain the growth of knowledge, that is, the acquisition of new knowledge, because deduction is tautologous.

2. A problem is something that you cannot and do not know the solution to a priori:

a. Problems can be thought to lie on a continuum between enigmas (unanswered) and questions (answered).

b. Questions are purely syntactical with no semantic connotations; question-solving via symbolic manipulation cannot yield new knowledge.

c. Problems are semantic, though often expressed with syntax; all problems are questions but not vice versa.

d. The universe is syntactical, which can lead to life that is semantic; all knowledge is relative to a knowing subject.

3. Knowledge is a shortcut:

a. With foundational knowledge alone, a problem could be solved by pure deduction, but there are costs of computation time that make knowledge useful.

b. Science has shown that the real world does not have knowable foundations, such that a purely deductive ‘theory of everything’ is not possible for ‘middle worlders’.

c. In the real world, you cannot solve a problem by starting from its foundations; neither is there a deductive justification (beyond evolutionary epistemology) for starting from pre-existing knowledge.

d. Real-world knowledge is an approximation.

Evolutionary epistemology enables problem-solving to be justified on the basis of existing knowledge, because the existence of knowledge entails that it has outcompeted previous alternatives, and it is this that lends it reliability. Yet, despite this, evolutionary epistemology does not offer any reason to suppose that the existing knowledge is sacred or inviolable, so there is a great deal of flexibility. The point of evolutionary epistemology is not only to justify a starting point, but to provide practical tools by which one can modify existing knowledge to improve a web of knowledge. Here, I have spent a long time justifying the former, and next I must move on to understanding the methods of the latter.

Thanks for reading!


Further reading (by year):

Lorenz, K. (1973). Behind the Mirror: a search for a natural history of human knowledge, Harvest Books

Munz, P. (1993). Philosophical Darwinism: on the origin of knowledge by means of natural selection, Routledge

Plotkin, H.C. (1994). Darwin Machines and the Nature of Knowledge, Harvard University Press

Cziko, G. (1995). Without Miracles: universal selection theory and the second Darwinian revolution, MIT Press

Dennett, D. (2003). Freedom Evolves, Viking Press