A Historical Analysis of the Identity of Indiscernibles
The Principle of the Identity of Indiscernibles (PII) states, in its least formal form, that two things that share all of their properties are in fact one and the same thing. It is the converse of the undisputed Principle of the Indiscernibility of Identicals, which states, intuitively, that two identical things cannot differ in any way. Taken together, the two are sometimes referred to as Leibniz’s Law.
The PII has served as an important metaphysical axiom since Leibniz formulated it in the seventeenth century and is quite uncontroversial when applied to the objects with which we interact on a daily basis (trees, rocks, etc.). However, as we would expect such metaphysical principles to be perfectly universalizable, or at least to have a clearly defined scope of applicability, advances in the sciences and philosophy over the past century have brought the PII under heavy scrutiny.
Here, we take several fields — including philosophy, quantum mechanics, and computer science — as case studies in the reception of the PII and the controversy that has accumulated around it, a debate that has yet to be settled. We will ask whether metaphysical principles have guided the definition and elaboration of scientific concepts, or whether scientific discoveries have forced metaphysics to evolve alongside technical advancement.
The PII has come under the most scrutiny in its own domain, philosophy. Not only is it a contentious statement in itself, but, owing to its fundamental nature, it has been called upon to argue for other theories, raising the stakes and drawing further fire from opponents of these “descendant” doctrines. As such, it remains perennially in use and, in consequence, has been forced to evolve alongside philosophy and logic, departing from Leibniz’s original intent and hardening into an almost axiomatic rule.
Although most famously used in his correspondence with Samuel Clarke to deny the existence of Newtonian absolute space, Leibniz’s Law was originally formulated in the Discourse on Metaphysics, in which Leibniz set forth his philosophy of a perfect world stemming from God’s perfection. Specifically, Leibniz deduces the PII in order to distinguish the actions of God from created substances, as such substances must include all their predicates, which extends to the idea that a substance carries within itself the whole universe, as recognized by God’s infinite wisdom. The PII follows naturally from this “containment” of all predicates, as Leibniz also declares that the very nature of a substance is defined by this act of containing. Leibniz builds on this statement by asserting that the number of substances is fixed, concluding that “the universe is multiplied as many times as there are substances, and in the same way the glory of God is magnified by so many quite different representations of his work,” and continues his argument through 37 sections. The PII was thus first designed simply to slot into a wider reasoning, and one can question later controversy that fails to take this context into account. For example, does any discussion of the principle require admitting Leibniz’s theory of predicate containment? If so, later restrictions of the law to pure (and intrinsic) properties would be illegitimate, since such restrictions narrow the universe and thereby contradict the perfection of divine understanding. Certainly, the arguments of Max Black and others treat the principle as if it were an axiom out of the ether; the PII seems to have taken on a life of its own.
Another example of the principle’s independence is its adaptation to the evolution of philosophical logic. According to Fred Sommers, the very notion of identity has changed since Leibniz’s day, coming to be defined as a relation (since Frege and Russell); nevertheless, the PII still holds and is still argued over. Another instance of the constancy of the PII within an evolving logical framework is the treatment of properties themselves. The subject has become contentious, with some positing the existence of “universals,” instantiated by objects, while others accept only an ontology of particulars (nominalism), which, at first glance, threatens the PII, as there would then be no meaning to “having the same properties.” However, the PII has adapted, for example via the works of Boolos and Linnebo and their idea of plural quantification. The PII has thus truly transcended its original context, sharpening and refining itself to retain its essence while sidestepping potential contention. The PII, therefore, is certainly not what it was when Leibniz wrote it, having lost its theological baggage and surviving merely as a seemingly reasonable postulate.
We now turn towards examining how such philosophical developments have influenced scientific theorizing. The PII extends its historical shadow into Descartes’s Meditations, in which he uses the Indiscernibility of Identicals (specifically, its contrapositive) to establish the separation of mind and body, asserting that the former possesses properties that the latter does not, namely regarding our ability to know of its existence. However, recent philosophy has questioned this argument, contending that a person’s knowledge about an object cannot be counted as a property of that object. In “Alter Egos and Their Names,” David Pitt uses the identity of “Clark Kent” and “Superman” to contradict Leibniz’s Law via the statement “Lois Lane thinks Superman can fly yet thinks Clark Kent can’t,” which implies that the two have different properties. To rescue the PII, philosophers began to exclude statements about the information one has about an object from consideration when determining equality. Philosophers thus restricted the PII to its so-called “strong” form, considering only intrinsic, quasi-non-relational (“pure”) properties.
Yet even in its most refined form, the PII has been subjected to ontological argument, most implacably by Max Black in “The Identity of Indiscernibles,” which proposes the thought experiment of a perfectly symmetrical universe containing two spheres, each the mirror image of the other. Barring any property of haecceity (“thisness”), which Black rejects as tautological, these two separate spheres are indiscernible and must thus be the same, yielding a contradiction. Hacking later addresses this point by re-interpreting Black’s symmetric space as non-Euclidean; however, this remains a limited stopgap, as Adams’s “continuity argument” undercuts it by recreating Black’s universe while giving one sphere a slight distinguishing property, such as a scratch. Under Hacking’s defense, “space would have been different if there had been a scratch” — an absurd notion. A final defense thus appears in “The Bundle Theory of Substance and the Identity of Indiscernibles” by O’Leary-Hawthorne, advancing the idea that Black’s two spheres may be one and the same sphere in two different locations, a seriously counter-intuitive concept that will nevertheless reappear in quantum mechanics.
In Quantum Mechanics
This restriction on the properties within the scope of the PII is scientifically significant: in particle physics, the strong form of the PII fails. Each individual electron shares the same intrinsic properties as every other electron (such as rest mass, charge, etc.), and thus two electrons can only be distinguished via spatiotemporal properties, that is, their location in relation to other objects at a given time. This location is not an intrinsic property of the electron unless one holds that each electron has an intrinsic, individual, and unchanging world line. However, this supposition entails the existence of a Newtonian absolute space, the exact thing the PII was originally conceived to argue against! Here, metaphysics has shaped scientific work, as this chain of reasoning issued in the counter-intuitive position that all electrons are actually a single electron, a concept proposed by Wheeler and expanded upon by Feynman.
Leibniz’s notion of equality depends on the properties fulfilled by the two objects in question; yet, in practice, our reliance on sensory evidence determines the properties that we can observe and thus molds our perception of object equality. Diderot’s “blind man” is a commonly cited example that brings into question our ability to perceive equality. In his early philosophical works, Diderot seeks to show the effects of blindness on one’s life, coming to the conclusion that those who are blind are not “lacking,” but merely have a different measure of space and the objects it encompasses. If we consider two balls in the same setting as the one used by Max Black, differing only in color, then to the blind the balls are identical, as they share all properties verifiable by the observer. Our interpretation of equality, even in the Leibnizian sense, is therefore entirely dependent on the observer.
Furthermore, there exists an analogy between such blindness and man’s limited ability to observe nature. As methods of scientific measurement become sharper, so do our conceptions of identity. For instance, the Anopheles gambiae species complex, seven morphologically indistinguishable species of mosquito, was long thought to be a single species before advances in genome sequencing revealed vastly different DNA. Applying Quine’s implementation of the PII to a simplified universe where entities are defined by their species, humanity was “cured” of its blindness upon observing DNA and thus able to perceive seven discernible objects. In such cases, advances in scientific technique (and thus increases in perception) have provided insight into metaphysical theses, as they show that the judgements of equality we make are always situated within the world, and thus that the PII may be valid in many scientific situations despite being questionable in the abstract. As we do not live in a symmetric universe containing only two spheres, we can safely apply the PII in many practical situations. Science thus provides a scope and limitations for the applicability of metaphysical principles.
For scientists, the motivation for holding onto the PII is clear: it reduces all knowledge about the universe to knowledge of a finite set of phenomena. This puts all such knowledge within the reach of experimental science, as numerical distinctness can be reduced to qualitative difference and similarity by restricting the PII to “non-identity-involving properties.” This would conveniently situate scientists as objective observers, or at least as approximations of objective observers. However, many contend that in certain quantum mechanical systems the PII simply does not hold, either in theory or in experiment. Since the PII appears to hold in some cases but not others, the field has arrived at a very thorny metaphysical question that is underdetermined by the available physical evidence.
Opponents of the PII in quantum mechanics most often cite the history of statistical mechanics. In Maxwell-Boltzmann statistics, for example, particles are held to be individuals but, crucially, not distinguishable. In other words, in a system with two boxes and two particles, the two states in which one particle sits in each box are considered different for probability purposes but are not physically distinguishable. This was even elevated to a principle of thermodynamics by Boltzmann. The assumption that two particles can be considered numerically distinct but indistinguishable necessarily rests on metaphysical foundations; namely, these particles are only weakly discernible by their spatiotemporal location, but are not strongly discernible, as they are assumed to be qualitatively identical. Bose-Einstein and Fermi-Dirac quantum statistics would both go on to reject the difference between permutations of a system; that is, they hold that any permutation of particles in a system does not change the state function of the system, an assumption sometimes called the Indistinguishability Postulate. These new statistical frameworks had important metaphysical implications. First, they continue to work with weakly discernible objects, but the objects under consideration are now the states of the system rather than the particles themselves. This has the effect of making the particles themselves indistinct, in the sense that the labels we put on them (particle 1, 2, etc.) no longer matter for determining the statistical behavior of the system. Second, they reject distinguishing particles in terms of haecceity, or primitive “thisness” (more formally, they do not consider the property “A=A” as one that can distinguish A from B in any circumstances).
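The counting difference between the two statistical frameworks can be made concrete. The following sketch (in Python, which is our own illustration; the variable names are not from any of the cited sources) enumerates the microstates of two particles in two boxes under each assumption:

```python
from itertools import product
from fractions import Fraction

BOXES = ("A", "B")

# Maxwell-Boltzmann: particles are labeled individuals, so
# (particle 1 in A, particle 2 in B) and (particle 1 in B,
# particle 2 in A) count as two distinct microstates.
mb_states = list(product(BOXES, repeat=2))
assert len(mb_states) == 4  # AA, AB, BA, BB

# Bose-Einstein: permuting the labels leaves the state unchanged,
# so only the occupation numbers of the boxes matter.
be_states = sorted({tuple(sorted(s)) for s in mb_states})
assert len(be_states) == 3  # AA, AB, BB

# Probability of the "one particle in each box" outcome, assuming
# equiprobable microstates within each framework:
p_mb = Fraction(sum(1 for s in mb_states if set(s) == {"A", "B"}), len(mb_states))
p_be = Fraction(sum(1 for s in be_states if set(s) == {"A", "B"}), len(be_states))
print(p_mb, p_be)  # 1/2 vs 1/3
```

Erasing the labels collapses the two “split” microstates into one, shifting the probability of that outcome from 1/2 to 1/3; this is precisely the statistical signature of treating permutations of particles as one and the same state.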
Later defenses find ways to uncouple classical thermodynamic statistics from haecceitism, and thus rescue a weak form of the PII by including relational spatiotemporal properties, that is, by considering particles distinct because they cannot be in the same place at the same time. We thus find that practical scientific assumptions guide the elaboration of metaphysics: since scientists adopted these theses, metaphysicians of science have bent to follow, being forced to defend a weak PII that admits impure properties.
As quantum mechanics developed, the shift from considering particles to considering probability waves brought more pressure on the PII, as the weak form noted above assumed the impenetrability of objects. Now it seemed perfectly mathematically possible that two electrons be in the same place at the same time, and even that, following the Copenhagen interpretation, one electron be in a possibly infinite number of states before observation. This fundamental challenge to the PII was stressed by Schrödinger himself, who chose to refer to “indiscernibles” rather than “particles.” Under this pressure, the notion of a quantum object was heavily criticized, and rejected altogether by many. However, the allure of reducing all quantitative interactions in the universe to a finite number of qualitative interactions describable by empirical research remained strong. Many theorists, such as Friebe, mount complex defenses of the principle, attempting to reconcile specific counter-examples, such as bosons in symmetric product states, which do not follow the Pauli Exclusion Principle. Ultimately, the PII seems too epistemologically valuable for the experimental sciences to abandon fully, as evidenced by the tenacity of its defenders, but experimental evidence has severely limited its scope and applicability. Physics and the metaphysics of physics have thus interacted along a two-way street, each influencing the development of the other. Schrödinger’s letters famously prompted Quine to say that “matter goes by the board,” demonstrating the weight the philosopher gave to his colleagues in the physics department.
In Computer Science
Physics, with the birth of quantum mechanics in the 20th century, is not the only science to bring a new perspective to the identity of indiscernibles. The development of theoretical computer science and, in particular, the abstract formalization of what is commonly called a “program” raise intriguing questions about the equality of two “things.” A natural question to ask is: under what conditions are two programs identical?
Before we can begin discussing the notion of equality for two programs, we must first explore an issue related to the very identity of a program. Many computer theorists, such as James H. Moor and Timothy Colburn, have suggested that programs have a dual nature: a program can be described either as written text following some grammar with a specific semantic account, or as the physical manifestation of that text on a given machine. As these two do not possess all of the same properties, applying the identity of indiscernibles to both interpretations leads us to conclude that they are indeed quite different. How, then, can the two interpretations be reconciled? Various solutions have been proposed, yet there is no clear consensus among computer scientists. For instance, some have described the relationship between the program-text and the program-process as analogous to someone making a plan and then physically acting it out. Others have instead argued that the program-text and the program-process are related via causation; under this interpretation, the program-text directly causes the program-process. The latter view is quite controversial, with theorists like Colburn arguing that the text itself does not cause the process, which is rather the result of a physical interpretation by the machine; in this sense, software has been labeled a concrete abstraction that has a medium of description (the text, the abstraction) and a medium of execution. With these different interpretations of duality in mind, how do we differentiate between two programs? If we consider only the program-text, then by the identity of indiscernibles the identity of the program is bound to the appearance of the text; changing the spacing or the font of a program would make it a completely new object. This is quite unsatisfactory and has led computer theorists to delve into the program-process, which is directly related to a program’s semantic value.
There are two main categories of programming language semantics: operational and denotational. According to P.J. Landin, in operational semantics the properties of a program are verified through the construction of proofs derived from logical statements about its execution and procedures. On the other hand, Robert Milne describes denotational semantics as constructing mathematical objects that describe the meaning of expressions in the language. To illustrate the difference, consider two programs that compute the Fibonacci sequence, one of which iterates through a loop while the other is recursively defined. According to denotational semantics, the two programs are deemed equal, whereas they are distinct according to operational semantics (since the execution of the two programs differs).
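The Fibonacci example can be written out explicitly. Below is a minimal sketch in Python (our own illustration; the function names are hypothetical):

```python
def fib_iterative(n: int) -> int:
    """Fibonacci by looping: a linear number of steps."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_recursive(n: int) -> int:
    """Fibonacci by self-reference: exponentially many calls,
    since each call re-computes both subproblems."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

# Denotationally, both programs denote the same mathematical
# function from naturals to naturals:
assert all(fib_iterative(n) == fib_recursive(n) for n in range(15))
```

An operational semantics that tracks execution (for instance, by counting reduction steps or procedure calls) distinguishes the two, even though their input-output behavior agrees everywhere; a denotational semantics, which assigns each program the mathematical function it computes, identifies them.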
Operational and denotational semantics thus provide two different criteria for program equality. These conflicting semantic accounts are quite problematic, which is why a new notion, called observational equivalence, was deemed necessary. Two programs P and Q are observationally equivalent if and only if, in any context where P is a valid program, Q is also a valid program with the same semantic value in that context. Here, a context can refer to anything: the machine the program is run on, the input that is used, and so on. For practical purposes, Turner notes that it is obviously impossible to verify every possible context and thereby “formally” prove any two programs’ observational equivalence.
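Turner’s point that observational equivalence cannot be established by exhaustive testing can be illustrated with a small sketch (again in Python, our own illustration; the names and the toy “programs” are hypothetical), where programs are modeled as functions and contexts as inputs:

```python
import random

def observationally_distinguish(p, q, contexts):
    """Return a context on which p and q disagree, or None if every
    tested context agrees. Agreement on a finite sample is only
    evidence of observational equivalence, never a proof."""
    for ctx in contexts:
        if p(ctx) != q(ctx):
            return ctx
    return None

def double(n):
    return n * 2

def double_with_bug(n):
    # Agrees with double everywhere except one remote input.
    return 0 if n == 10**6 else n * 2

sample = random.sample(range(1000), 100)  # a finite set of contexts
assert observationally_distinguish(double, double_with_bug, sample) is None
assert observationally_distinguish(double, double_with_bug, [10**6]) == 10**6
```

No finite sample of contexts can rule out a disagreement on an untested one, which is why observational equivalence must be proved semantically rather than empirically.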
From observational equivalence, another concept, soundness, was derived. If all non-observationally-equivalent programs have distinct denotations, then the semantics is deemed sound. It follows that the notion of identity induced by a sound semantics satisfies the indiscernibility of identicals (if we take two programs, P and Q, such that the denotation of P is the same as the denotation of Q, then P and Q are observationally equivalent in every context). To obtain the identity of indiscernibles, it was necessary to define another notion, completeness, which states that any two programs with different denotations cannot be observationally equivalent (equivalently, if P and Q are observationally equivalent, then P and Q have the same denotation). If we define equality for programs as having the same semantic value (denotation), then a semantics that is both sound and complete (called “fully abstract”) satisfies Leibniz’s law. As in our discussion of quantum mechanics, we see that the metaphysical Leibnizian notion of equality and the more traditional sciences are heavily intertwined.
In conclusion, science and the metaphysics of objects interact via a complex array of mutual feedback loops. The metaphysical concept of the PII has historically provided a foundation for the elaboration of many scientific concepts, such as the one-electron universe or the denotational and observational equivalence of programs. Inversely, scientific advances, such as the greater biological insight provided by DNA sequencing or the strange description of the world in terms of quantum mechanical probability waves, have demonstrated the limited scope of metaphysical statements and required a shift to “impure” properties. Over time, the evolution of the PII has thus resulted not solely from philosophical disputes over its veracity, but from a confluence of factors, including the evolution of scientific knowledge. In this light, our original research question was perhaps misguided, as it assumed that one discipline would necessarily take precedence over the other, an assumption that is clearly unsound given the nuance and complexity of the interdisciplinary interaction and knowledge production we have identified in the history of the PII.
 G.W. Leibniz, Discourse on Metaphysics, trans. Jonathan Bennett, 2007, p. 9 <https://www.earlymoderntexts.com/assets/pdfs/leibniz1686d.pdf>
 Fred Sommers, “Leibniz’s Program for the Development of Logic,” in Boston Studies in the Philosophy of Science, Vol. XXXIX, Essays in Memory of Imre Lakatos.
 George Boolos, “To Be Is To Be a Value of a Variable (or to Be Some Values of Some Variables),” Journal of Philosophy (1984), 81: pp. 430–50.
 O Linnebo, “Plural Quantification”, The Stanford Encyclopedia of Philosophy (Spring 2009 Edition), Edward N. Zalta (ed.), <https://plato.stanford.edu/archives/spr2009/entries/plural-quant/>.
 Pitt, David. “Alter Egos and Their Names.” The Journal of Philosophy 98, no. 10 (2001): 531. doi:10.2307/3649468.
 Black, M., “The Identity of Indiscernibles”, Mind, 1952
 Hacking, I., 1975, “The Identity of Indiscernibles”, Journal of Philosophy, 72 (9): pp. 249–256.
 O’Leary-Hawthorne, J., 1995, “The Bundle Theory of Substance and the Identity of Indiscernibles”, Analysis, 55: pp. 191–196.
 Richard Feynman, “Nobel Lecture”, 1965, <https://www.nobelprize.org/prizes/physics/1965/feynman/lecture/>.
 Margaret Jourdain, Diderot’s Early Philosophical Works (1916),< https://archive.org/details/diderotsearlyphi010275mbp/page/n13 >
 Lawniczak MK, Emrich SJ, Holloway AK, et al. “Widespread divergence between incipient Anopheles gambiae species revealed by whole genome sequences”. Science (Oct 2010)
 Quine, W.V.O., 1976, “Grades of Discriminability”, Journal of Philosophy, 73: pp. 113–116.
 James Ladyman and Tomasz Bigaj, “The Principle of Identity of Indiscernibles in Quantum Mechanics,” Philosophy of Science, 77 (January 2010), pp.117–136.
 S French, “Identity and Individuality in Quantum Theory”, The Stanford Encyclopedia of Philosophy (Spring 2006 Edition), Edward N. Zalta (ed.), <https://plato.stanford.edu/archives/spr2006/entries/qt-idind/>.
 S French, “Identity and Individuality in Classical and Quantum Physics”, Australasian Journal of Philosophy 67 (1989): pp. 432–446.
 F.A. Muller, “Withering Away, Weakly,” Synthese, 2009.
 Cord Friebe, “Individuality, distinguishability, and (non-)entanglement: A defense of Leibniz’s Principle”, Studies in History and Philosophy of Modern Physics, 2014.
 Moor, J.H., 1978, “Three Myths of Computer Science”, The British Journal for the Philosophy of Science 29(3): pp. 213–222.
 Colburn, T., and Shute, G., 2007, “Abstraction in Computer Science”, Minds and Machines 17(2): pp. 169–184.
 Turner, Raymond, and Amnon Eden. “The Philosophy of Computer Science.” Stanford Encyclopedia of Philosophy. December 12, 2008. <https://stanford.library.sydney.edu.au/archives/sum2009/entries/computer-science/>
 Landin, P.J., 1964, “The mechanical evaluation of expressions”, Computer Journal 6(4): pp. 308–320.
 Milne, R. and Strachey, C., 1977, A Theory of Programming Language Semantics, New York, NY: Halsted Press.
This paper was written by Louis de Benoist, Tristan Blot, Paul Seghers, Phoebe Mac Donald, and Deivis Banys as part of a course at Ecole Polytechnique.
© Louis de Benoist 2021