On the Origin of Synthetic Life


Attribution of Output to a Particular Algorithm

Roman V. Yampolskiy

Computer Engineering and Computer Science

University of Louisville

roman.yampolskiy@louisville.edu

Abstract

With unprecedented advances in genetic engineering, we are starting to see progressively more original examples of synthetic life. As such organisms become more common, it is desirable to be able to distinguish between natural and artificial life forms. In this paper, we present this challenge as a generalized version of Darwin’s original problem, which he so brilliantly addressed in On the Origin of Species. After formalizing the problem of determining the origin of samples, we demonstrate that the problem is, in the general case, unsolvable if the computational resources of the candidate originator algorithms are unlimited and the priors for such algorithms are equal. Our results should be of interest to astrobiologists, to scientists interested in producing a more complete theory of life, and to AI-Safety researchers.

Keywords: Designometry, Evolution, Falsifiability, Genetic Engineering, GMO, Synthetic Life, Robot Evolution.

1. Introduction

In 1859, Charles Darwin published his famous work, On the Origin of Species. In it, he provided a naturalistic explanation for the origins of fossilized and living biological samples collected in different regions of planet Earth. Before the publication of Darwin’s theory of natural selection (currently integrated into what is known as the theory of evolution), the prevailing theory explaining such samples attributed their origins to a supernatural cause, commonly assumed to be God(s). Darwin’s theory quickly became dominant, accepted by the majority of scientists as the best explanation for the origins of different species. Evolutionary theory has only consolidated its position over the years due to strong additional evidence from such diverse fields as genetics, anthropology, and computer science [1].

In particular, research in genetics, which was not available during Darwin’s lifetime, has provided a treasure trove of experiments confirming Darwin’s theory. At the same time, recent unprecedented advances in genetic engineering [2], directed evolution [3, 4], reprogramming [5] and synthetic genomics [6, 7] have allowed scientists to create Genetically Modified Organisms (GMOs) [8], expand the genetic code [9, 10], create synthetic DNA [11] and synthetic life [12, 13], and consider the creation of synthetic human genomes [14]. With the development of the latest tool for genetic manipulation (CRISPR [6]), no fundamental limits remain to the engineering of novel synthetic life forms. With fields like Evolutionary Robotics [15, 16], Artificial Life [17–20] and Evolutionary Computation [21] providing theoretical and experimental support for the creation of evolvable synthetic life, it is worthwhile to think about future directions in post-Darwinian evolutionary theory [22].

A major challenge we are likely to face in the near future is being able to tell synthetic life forms from natural ones. We are already experiencing a need to identify GMOs for proper labeling and compliance, with some early work reported in that domain [23–25]. With advances in space exploration, particularly with spacecraft visiting moons and planets of the solar system, bacterial contamination of space objects by organisms from Earth becomes a real possibility. If such organisms are later rediscovered, we would need to be able to determine their origin. Likewise, spacecraft returning from a mission may bring unknown organisms to Earth, despite our best precautions [26], again presenting us with what, for want of a better expression, we call the sample attribution problem. There is also the possibility of discovering extraterrestrial life, but we will not concentrate on that situation here.

We can now set up an artificial environment in which the samples’ origin and distribution are known in advance (unlike in Darwin’s original problem) and attempt to select the correct explanation between modern evolutionary [27] and non-evolutionary theories [28, 29] in a side-by-side test, something we were previously not able to accomplish. Can science accurately distinguish between naturally evolved and genetically engineered life forms if the distribution at the outset is known? This would be easy to set up in the lab once we have access to a large number of synthetic life forms. We could, for instance, take the special case where the distributions are initially equal by placing 50 naturally evolved organisms (class A) and 50 engineered organisms (class B) into a lab setting and challenging the scientific community to accurately attribute each sample to class A or B. Individual samples would be represented by standalone artifacts, not by historical records of multiple related samples. As a thought experiment, we can imagine setting this up on another planet as a challenge for alien scientists/explorers, to reduce the impact of knowing something about natural organisms on Earth. This produces a very clean and decisive experiment, as our artificial setup removes any bias associated with results directly affecting ourselves as people on Earth and allows us to perform an experiment whose results can be evaluated against known truth-values. This would give us a chance to evaluate our theories of the origin of biological samples.
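
To make the scoring of such a challenge concrete, here is a minimal Python sketch (all names are our own, and random bit strings stand in for actual organism encodings) of how proposed classifiers could be evaluated against the known truth-values:

```python
import random

def attribution_accuracy(samples, classifier):
    """Fraction of samples whose predicted class matches the known label.

    samples: list of (artifact, label) pairs, label "A" (evolved) or
    "B" (engineered), with the true labels known to the experimenters.
    """
    return sum(classifier(artifact) == label
               for artifact, label in samples) / len(samples)

# 50 class-A and 50 class-B artifacts; random bit strings stand in for
# real organisms here, so no classifier can beat chance on this toy data.
rng = random.Random(42)
samples = [("".join(rng.choice("01") for _ in range(64)), label)
           for label in ["A"] * 50 + ["B"] * 50]

print(attribution_accuracy(samples, lambda a: "A"))               # 0.5 baseline
print(attribution_accuracy(samples, lambda a: rng.choice("AB")))  # ~0.5 chance
```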

Occam’s Razor [30], which states that among multiple possible hypotheses the simpler one should be selected, is typically used to argue that evolutionary theory provides a superior explanation to theories which include an engineer, as such theories also have to explain the nature and origins of the said engineer, resulting in a more complicated hypothesis. However, in our proposed experiment, the nature of the engineer is known, and the samples are chosen to have equal likelihood of being generated by evolutionary or synthetic means, making the application of Occam’s razor inappropriate.

2. Generalized Sample Attribution Problem

The proposed problem of discerning synthetic life from naturally evolved life forms can be seen as a special case of the general problem of selecting, from a number of possible algorithms, the algorithm responsible for generating the observed samples, in contrast to the original problem faced by Darwin of developing a naturalistic algorithm that could explain the collected biological samples. This problem is a subset of Solomonoff Induction (SI) [31, 32] and of science in general [33]: given a set of observations, determine which of many theories best accounts for what was observed and most accurately predicts future observations.

To distinguish it from Darwin’s original problem, we call this problem the Generalized Sample Attribution Problem (GSAP) or Generalized Darwin’s Problem. GSAP can be expressed as a computer science problem, in terms of algorithms and digital data. Any type of scientific sample, and DNA code in particular, can be represented as a bit string. Algorithms capable of generating bit strings encoding the collected samples can be subdivided into two main types: evolutionary algorithms (Genetic Algorithms, Genetic Programming, etc.) and engineered algorithms (Expert Systems, Cognitive Systems, etc.). Hybrid types, such as algorithms engineered to evolve [34] and those which evolve the capability to do engineering, are also possible. In the biological domain, such mixed types can also result from crossbreeding between genetically engineered and naturally occurring organisms.
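
Framed this way, GSAP becomes Bayesian model selection over candidate generators: the posterior for each candidate is proportional to its prior times the likelihood it assigns to the observed string. The sketch below is a toy illustration only; the i.i.d. likelihoods are made-up placeholders standing in for real statistical models of evolutionary and engineered algorithms:

```python
import math

def posteriors(sample_bits, candidates):
    """Posterior probability of each candidate generator given a sample.

    candidates maps a name to (prior, likelihood_fn), where
    likelihood_fn(bits) returns P(bits | that generator).
    """
    joint = {name: prior * likelihood(sample_bits)
             for name, (prior, likelihood) in candidates.items()}
    total = sum(joint.values())
    return {name: p / total for name, p in joint.items()}

def iid_likelihood(p_one):
    """Toy model: bits are independent with P(1) = p_one."""
    return lambda bits: math.prod(p_one if b else 1.0 - p_one for b in bits)

# Equal priors, as in the proposed 50/50 experiment; the two likelihoods
# are placeholders, not models of any real algorithm's output.
candidates = {
    "evolutionary": (0.5, iid_likelihood(0.5)),
    "engineered":   (0.5, iid_likelihood(0.8)),
}
print(posteriors([1, 1, 0, 1, 1, 1, 0, 1], candidates))
```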

For the purposes of our work, it is important to establish clear criteria for determining whether something is artificial or natural, as many samples will combine properties of both: well-engineered designs are capable of adaptation, and some evolved systems are capable of engineering. For example, Shapiro argues that we observe natural genetic engineering in evolution: “… much of genome change in evolution results from a genetic engineering process utilizing the biochemical systems for mobilizing and reorganizing DNA structures present in living cells” [35]. We will define engineered samples as those which include any contribution from an intentional agent, such as a human engineer, a definition which excludes natural evolution: an intelligent [36] and powerful, but not purposeful or intentional [37], optimization process [38].

Finally, the possibility remains that a third type of algorithm, one outputting random bits, will also hit the target string[1] [39], but as the size of the bit string grows, exponentially increasing computational resources would be required by this algorithm. Random algorithms could correspond to the appearance of living forms by chance in some parts of the multiverse, given the availability of the necessary probabilistic resources [40]. This will happen if Everett’s many-worlds interpretation of quantum physics [41] is true, or if an algorithm is used to generate every possible universe [42], leading to the generation of all conceivable strings in some universe; but as we are looking for a generic procedure to evaluate samples from particular universes, random algorithms can be safely ignored.

3. Distinguishing Naturally Evolved Life from Engineered Life

Analyzing the properties of a particular evolutionary algorithm may allow us to discover features which can be used to distinguish between engineered and evolved organisms. For one, we know that evolution takes a very long time to work, so if we learned that only a limited amount of time was available for the formation of a complex sample, that would indicate it was not a product of natural evolution. Also, some features have not been found in natural systems, so their inclusion may indicate that engineering took place. For example, Minsky wrote: “Many computers maintain unused copies of their most critical ‘system’ programs, and routinely check their integrity. However, no animals have evolved like schemes, presumably because such algorithms cannot develop through natural selection. The trouble is that error correction then would stop mutation — which would ultimately slow the rate of evolution of an animal’s descendants so much that they would be unable to adapt to changes in their environments” [43].

Many respected scientists discuss the apparent difficulty of distinguishing between natural and engineered systems. For example, Shapiro says: “It is very important to recognize that living cells resemble man-made systems for information processing and communication in their use of mechanisms for error detection and correction.” [35]. Similarly, Dawkins says: “Biology is the study of complicated things that give the appearance of having been designed for a purpose” [44] and continues “We may say that a living body or organ is well designed if it has attributes that an intelligent and knowledgeable engineer might have built into it in order to achieve some sensible purpose… any engineer can recognize an object that has been designed, even poorly designed, for a purpose, and he can usually work out what that purpose is just by looking at the structure of the object” [44]. More generally, Minsky addresses the need to change our thinking regarding teleological explanations: “We now can design systems based on new kinds of ‘unnatural selection’ that can exploit explicit plans and goals, and can also exploit the inheritance of acquired characteristics. It took a century for evolutionists to train themselves to avoid such ideas — biologists call them ‘teleological’ and ‘Lamarckian’ — but now we may have to change those rules!” [43]. Because evolution is a powerful optimization process, it is capable of producing designs (springs [45], gears [46], compasses [47], Boolean logic networks [48], digital codes [49], etc.) which are just as complex as those produced by intelligent agents, meaning that any test designed to detect intelligence via examination of artifacts will fail to determine the causal source [50].

The difficulty of identifying how any one particular sample originated is exacerbated by the fact that most observed evidence is equally consistent with either the synthetic- or the natural-origins hypothesis. Compare observations of certain properties in naturally evolved biological organisms with similar observations from engineered organisms or software: DNA similarities between organisms indicate that later samples evolved from earlier ones (e.g. Homo sapiens evolved from Homo erectus), but code similarities between different releases of a software project indicate that much code was reused (e.g. Windows NT and Windows XP). Poor design in nature can be explained by the fact that the evolutionary process has no foresight (e.g. the blind spot in human eyes due to the location of nerve fibers in front of the retina), but poor design in engineered systems can be explained by the incompetence of the engineer (e.g. the Toyota brake problems). Vestigial organs in some animals (e.g. the wings of flightless birds) are well explained by deducing that the species is in the process of adapting to a changed environment, but in the world of engineering outdated features are frequently observed because it may be costly to redesign the system to remove them (e.g. ashtrays on airplanes) or because the system must remain backwards compatible (able to be used with older hardware/software). Animals evolved the ability to adapt to changing environments (e.g. seasonal fur change), but software is frequently designed to adapt to user preferences (e.g. Netflix learning what movies you like).

Similar analysis can be applied to other evidence frequently used to justify attribution of samples to only a single hypothesis. It is important to note that this dual explanation for evidence is symmetric, so “classical” evidence of engineering has a well-fitting explanation in naturalistic evolution and vice versa. Figure 1 illustrates why it may be difficult to distinguish natural and engineered specimens via simple observation.

[Figure 1 images: JCVI-syn3.0 cells [12]; glow-in-the-dark cat [51]; multi-grafted fruit tree [52]; pyrite cubic crystals [53]; Clathrus ruber mushroom [54]; Issus coleoptratus gears [46]]

Figure 1: Engineered (top row) and natural samples may be difficult to distinguish.

3.1 GMO Detection Methods

In order to comply with recent GMO regulations, a number of techniques have been proposed to identify GMOs [23–25]. Many protein- and nucleic-acid-based detection methods have been developed and used for the identification and quantification of GMOs. Such techniques typically rely on direct matching of samples against available reference materials stored in databases of known GMOs, which may include sequence information for exogenous inserts as well as endogenous reference genes [55]. Such direct-matching methods do not work for undisclosed modifications.
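
In computational terms, direct matching reduces to searching a sample’s sequence for known exogenous inserts. The sketch below illustrates the idea; the insert names and sequences are hypothetical placeholders, not entries from any real reference database:

```python
def match_known_inserts(sample_seq, reference_inserts):
    """Return names of reference inserts found verbatim in the sample.

    Direct matching only detects disclosed modifications: an insert
    absent from the reference database will never be flagged.
    """
    return [name for name, insert in reference_inserts.items()
            if insert in sample_seq]

# Hypothetical screening targets; the sequences below are made-up
# placeholders, not real promoter/terminator sequences.
reference_inserts = {
    "35S-promoter-fragment": "GATAGTGGAAAAGGAAGG",
    "nos-terminator-fragment": "GAATCCTGTTGCCGGTCT",
}
sample = "ACCGTAGATAGTGGAAAAGGAAGGTTCA"
print(match_known_inserts(sample, reference_inserts))
# ['35S-promoter-fragment']
```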

3.2 Unevolvable Elements

An interesting sub-problem in the forensic investigation of the origins of biological samples is the study of Unevolvable Elements (UE). Such elements are components of a sample that could not arise via an evolutionary process, because the precursor elements fail to improve, and may even lower, the fitness of the organism, preventing the module (code fragment) in question from arising. We distinguish two types of such elements: Type A, which decode to a meaningful plaintext and are too long to happen by chance, and Type B, which represent narrow targets in the space of possible solutions, surrounded by broad moats of negative fitness. While unevolvable elements of the first type are well documented [56, 57], the existence of elements of the second type in the real world remains an open question. Let us examine each type of UE and review some examples of each.

In many cases, the genetic engineers behind a project have no reason to hide their contribution and may in fact be interested in making sure that the organism is labeled in such a way that it is obviously seen as synthetic, for example with watermarks [58]. Labeling is also useful for tracing an organism’s descendants to the originator, which can be very important, for example, in patent disputes [59]. Such labeling may take the form of a digital signature or of plain-text metadata, such as the inserted text “Made in USA”.

Meaningful [39] text encoded in DNA on purpose, or left there by mistake during the design process (as comments or inactive code), could be detected and extracted [57]. In fact, over the last few decades scientists have inserted text messages into natural living organisms [60], GMOs [61] and synthetic life forms [58]. Such messages range in length from a few symbols (such as “E=MC2” [60]) to the full text of books [62], with complete archival systems in the works [63]. The actual encoding and decoding process is beyond the scope of this paper, but the interested reader is advised to consult the survey of the topic by Beck et al. [57]. As long as the length of the discovered message is non-trivial, an investigator can conclude that engineering took place and the organism is not 100% natural. Efforts to find such text [64, 65] preceded the ability of scientists to insert such messages. A search for signs of engineering in the biological (genomic) information of any unattributed biological sample is just as reasonable as the SETI search of astronomical data. In particular, for any samples acquired from extraterrestrial sources, such Biological SETI[2] [66] should be a recommended first step.
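
As a minimal illustration of the underlying idea (actual schemes surveyed in [57] use more robust, error-tolerant encodings), a naive two-bits-per-base mapping is enough to hide ASCII text in a DNA string and recover it:

```python
BASE_FOR = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR = {base: bits for bits, base in BASE_FOR.items()}

def text_to_dna(text):
    """Pack 8-bit ASCII into bases, two bits per base."""
    bits = "".join(f"{ord(ch):08b}" for ch in text)
    return "".join(BASE_FOR[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(seq):
    """Inverse mapping: four bases back to one ASCII character."""
    bits = "".join(BITS_FOR[base] for base in seq)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

watermark = text_to_dna("E=MC2")   # the message inserted in [60]
print(watermark)                   # CACCATTCCATCCAATATAG
print(dna_to_text(watermark))      # E=MC2
```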

The main challenge comes from recognizing text as “meaningful”, particularly in cases of non-human engineers. Many attempts have been made to formalize “meaningfulness” as “the value of a message as the amount of mathematical or other work plausibly done by its originator” [67]. Different measures have been proposed under such names as “potential” (Adleman [68]), “incomplete sequence” (Levin and V’jugin [69]), “hitting time” (Levin [70]), “sophistication” (Koppel [71]) and “intelligence based complexity” (Yampolskiy [39]), the best known being the logical depth [67] of a string. Bennett describes this concept as follows: “Of course, the receiver of a message does not know exactly how it originated; it might even have been produced by coin tossing. However, the receiver of an obviously non-random message, such as the first million bits of pi, would reject this “null” hypothesis, on the grounds that it entails nearly a million bits worth of ad-hoc assumptions, and would favor an alternative hypothesis that the message originated from some mechanism for computing pi. The plausible work involved in creating a message, then, is the amount of work required to derive it from a hypothetical cause involving no unnecessary, ad-hoc assumptions. It is this notion of the message value that depth attempts to formalize” [67]. Similarly, Gurevich describes a step-by-step process for what he calls “impugning randomness” [72], a method for distinguishing the purposeful from the accidental.
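
Logical depth itself is not practically computable, but a crude and admittedly imperfect proxy for rejecting the “random noise” hypothesis is compressibility, sketched below; note that compressibility flags structure in general and cannot, by itself, distinguish deep structure from trivial repetition:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; values well below 1.0 flag structure."""
    return len(zlib.compress(data, 9)) / len(data)

print(round(compression_ratio(os.urandom(10_000)), 3))  # ~1.0: no structure found
print(round(compression_ratio(b"ATCG" * 2_500), 3))     # ~0.0: highly repetitive
```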

By analogy with the SETI approach, a search for artificiality and cognitive universals could take place instead, with statistical abnormalities and non-randomness being used to detect language-like patterns [73, 74]. To avoid ambiguity, it is desirable to find patterns which are 1) highly statistically significant, 2) exhibiting hallmarks of artificiality such as the “symbol of zero, the privileged decimal syntax and semantical symmetries”, and 3) inconsistent in principle with any natural process, be it Darwinian or Lamarckian evolution [66].

In adversarial scenarios, such as the illegal utilization of GMOs, genetic engineers might be interested in hiding their contribution to the design of the organism, either by explicitly erasing all evidence or at least by making its detection difficult, if not impossible, without privileged information, relying on steganography [75] or deniable cryptography [76, 77]. Deniable cryptography produced by encoding and combining multiple plaintexts is not very efficient in terms of the size of the ciphertext and would produce large segments of DNA with no discernible meaning, something akin to “junk DNA”. However, recent research suggests that such DNA segments are actually very meaningful and language-like [78] and might contain a historic record of modules which evolved in previous environments and might be useful in the future, should environmental conditions return to a previously seen state, or for the control of gene expression.

In cases where it is suspected that unknown engineers are the originators of the organism, the text may be encoded using some unknown coding scheme or language [79], so it might be worthwhile to check for Schelling-point [80, 81] passwords [82] such as the digits of π, prime numbers, Fibonacci numbers, etc.
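
A minimal sketch of such a check might scan a digit string extracted from a sample for prefixes of a few Schelling-point sequences; the targets and length threshold below are arbitrary illustrations, not a vetted protocol:

```python
def fibonacci_digits(n):
    """First n digits of the concatenated Fibonacci sequence 1,1,2,3,5,8,..."""
    a, b, out = 1, 1, ""
    while len(out) < n:
        out += str(a)
        a, b = b, a + b
    return out[:n]

# Arbitrary Schelling-point targets; a real check would use many more.
SCHELLING_TARGETS = {
    "pi":        "314159265358979",
    "fibonacci": fibonacci_digits(15),
    "primes":    "235711131719232931",
}

def scan_for_schelling_points(digit_string, min_len=6):
    """Report the longest prefix (at least min_len digits) of each target found."""
    hits = []
    for name, target in SCHELLING_TARGETS.items():
        for length in range(len(target), min_len - 1, -1):
            if target[:length] in digit_string:
                hits.append((name, length))
                break
    return hits

print(scan_for_schelling_points("9073141592653001123581321"))
# [('pi', 10), ('fibonacci', 10)]
```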

In organisms with no DNA code, or if only external observation of the sample is possible, we may be interested in investigating the presence of unevolvable elements of the second type — functional modules which could not arise via the process of mutation with natural selection due to the “low-fitness moats” around such designs. A low-fitness moat must not just prevent evolution of the module from its components; it must also preclude its appearance as a reduction from a more complex module. Darwin himself put it as follows: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down” [83]. Such modules can happen by random chance only if the number of involved parts is very small, so a component with a significant number of diverse parts is unlikely to arise by chance alone.

Whether low-fitness moats exist in complicated domains, such as biology, is an open question we would like to see addressed. It is possible that they don’t exist or are very rare. The argument is that the search space is so vastly high-dimensional (e.g. 3 billion base pairs in human DNA) that it is unlikely that there is literally no route through this 3-billion-dimensional space to any particular high-fitness point or region. There are similar arguments right now in deep learning about why stochastic gradient descent in large networks of millions of connections (i.e. dimensions) does not seem to be getting caught in local optima to the extent we might expect. It appears many of these “local optima” are actually saddle points and not optima after all, and perhaps genome space is similar.
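
A toy calculation hints at why dimensionality matters here: if fitness values were assigned to genotypes independently at random, the probability that a given point beats all of its d single-mutation neighbors would be 1/(d+1), so strict local optima thin out as d grows. The simulation below checks this under that deliberately simplistic assumption; it is not a model of real fitness landscapes:

```python
import random

def fraction_of_local_optima(dim, trials=20_000, rng=random.Random(1)):
    """Estimate how often a random point beats all dim neighbors when
    fitness values are assigned independently and uniformly at random.
    The exact answer is 1/(dim + 1), so strict optima thin out with dim."""
    wins = 0
    for _ in range(trials):
        fitness = rng.random()
        if all(rng.random() < fitness for _ in range(dim)):
            wins += 1
    return wins / trials

for dim in (2, 10, 100, 1_000):
    print(dim, fraction_of_local_optima(dim))  # ~0.33, ~0.09, ~0.01, ~0.001
```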

Consequently, we propose a challenge to the synthetic biology community: purposefully design and produce an organism with an unevolvable component, one which meets Darwin’s criterion for falsifying his theory. Can such a feat be accomplished? Can it be mathematically proven that a particular design is not evolvable, or at least that it is statistically very unlikely to evolve? We believe these are important questions for genetic engineers to answer, and answering them would reconfirm the falsifiability of the Theory of Evolution [84]. If unevolvable elements do not exist, every design can be naturally occurring, and so it is not possible to distinguish between natural and synthetic origins; otherwise, the presence of unevolvable elements can be used to prove that engineering took place.

A strong connection exists between unevolvable elements of Type A and Type B. A long meaningful text may represent a blueprint for constructing an unevolvable organ, and an unevolvable biological module can be reduced to a complex and meaningful informational pattern. By analogy with AI-Completeness [85, 86], we propose the concept of Intelligence-Completeness (I-Completeness) to indicate that certain elements are not evolvable and require intelligence to be constructed. I-Complete artifacts could be reduced to other representations (text, drawing, 3D model, organism, etc.) without losing the distinctive signature of their origination in a purposeful engineering process.

What distinguishes I-Completeness from AI-Completeness is that AI-Complete systems have no restrictions on how they can be constructed, while objects with the I-Complete property, by definition, cannot be products of an evolutionary process. Consequently, our challenge of constructing an artificial unevolvable biological organ is equivalent to the problem of proving some problem I-Complete. From this first, hypothetical, case, other problems could be shown to be I-Complete via a series of reductions, a well-known method in the theoretical computer science community [87].

AI-Completeness was first established [86, 88] as the property of passing the Turing Test (TT) [89], with other problems shown to be AI-Complete via reductions from the TT. Perhaps we can rely on the same problem for proving I-Completeness, since engineering synthetic life requires at least human-level intelligence, and that is exactly what the TT detects. One possibility is to take the verbatim text of someone passing the TT and encode it in an organism’s DNA, with questions from the test corresponding to specifications and answers to meaningful information. The existence of area-specific TTs in domains such as art and poetry [90] suggests that we could also produce a restricted TT for the domain of genetic engineering and encode descriptions of unevolvable elements as answers to questions asking for descriptions of such structures.

3.3 Forensic Evidence from the Code

In theory, as long as the statistical properties of samples produced by a particular algorithm can be captured, another algorithm can simulate them on purpose, essentially spoofing the behavior of the original algorithm [91]. In fact, the statistical model describing the samples can itself serve as an engineered algorithm for generating an equivalent sample distribution, whether the original was an evolutionary process or any other type of algorithm. Engineered algorithms are capable of both simulating natural evolution and using it as a module in achieving their goals [92]. In principle, an engineered algorithm can produce any computable distribution, and so can an evolutionary algorithm with infinite computational resources, making both types of algorithms universal and making claims about the particular origin of samples unfalsifiable given the unlimited power of either approach. Consequently, we can never have 100 percent certainty as to the originating algorithm, only probabilistic estimates. This analysis applies only to after-the-fact observations of collected samples; if we have a chance to observe and analyze the sample generator at work, we can be certain as to the process used.
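
As a minimal illustration of such spoofing (the observed samples and the first-order Markov model are stand-ins; any sufficiently rich statistical model would serve), an “engineered” generator can fit the statistics of observed samples and then emit new samples matching those statistics:

```python
import random
from collections import Counter, defaultdict

def fit_markov(sequences):
    """Estimate first-order transition probabilities from observed samples."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def spoof(model, start, length, rng=random.Random(0)):
    """An 'engineered' generator emitting sequences with the fitted statistics."""
    out = [start]
    for _ in range(length - 1):
        nxt = model[out[-1]]
        out.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
    return "".join(out)

observed = ["ACGTACGGTAC", "ACGGGTACGTA"]   # stand-ins for evolved samples
model = fit_markov(observed)
print(spoof(model, "A", 20))
```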

4. Designometry — Generalization of the Proposed Analysis

A forensic investigator studying an explosive device, a professor looking at a plagiarized programming project, an art expert examining a potential forgery and numerous other professionals find themselves in situations where they need to infer information about the engineer/designer/author of a product/object/text without direct access to the agent, possessing only the agent’s output. Depending on the domain, the process of making such inferences is called forensic analysis [57], stylometry [93], historiometrics [94] or behavioral profiling [95, 96]. Regardless of the subdomain of inquiry, we will call the generalized process Designometry: uncovering a “signature” of the originator in an artifact and, from it, identifying the agent responsible, or at least learning some properties of the design process which produced the artifact. Designometry can be widely applied to both biological and non-biological artifacts that are products of intentional construction. The field includes such subdomains as:

· Artimetrics — which identifies software and robots based on their outputs or behavior [97, 98].

· Behavioral Biometrics — which quantifies behavioral traits exhibited by users and uses the resulting feature profiles to verify identity [99]. Examples of analyzed artifacts may include text and art, as well as records of direct or indirect human-computer interaction [100].

· CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) — which obtains input from an agent and classifies the producing agent as human or artificial [101, 102].

Designometry could itself be seen as a sub-branch of Intellectology, a field proposed to “study and classify the design space of intelligent agents, work on establishing limits to intelligence (minimum sufficient for general intelligence and maximum subject to physical limits), contribute to consistent measurement of intelligence across intelligent agents, look at recursive self-improving systems, design new intelligences (making AI a sub-field of intellectology) and evaluate capacity for understanding higher level intelligences by lower level ones” [103, 104].

Next, we give one example which falls under the heading of designometry. Stylometry of text relies on statistical analysis of “vocabulary richness, length of sentence, use of function words, layout of paragraphs, and key words” [105] to determine the gender, age [106], native language [107], personality type [108] and even intelligence of a human author, or comparable properties of an artificially intelligent text generator [109]. In general, it seems possible to estimate the scientific knowledge and minimum intelligence necessary to produce, or at least duplicate, a particular artifact by analyzing its complexity, its prerequisite components and the evidence of tools used in its production, be it an artifact/data or an abstract algorithm [39, 71]. This does not imply that anyone with the required level of intelligence would be able to produce the artifact under consideration, just that someone below that level would fail to do so. The reader is encouraged to read the fascinating designometric analyses of the Antikythera mechanism [110], the Egyptian pyramids [111] or the Stuxnet virus [112] for some famous examples of such efforts.
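
A bare-bones sketch of the kind of features stylometry starts from is given below; the function-word list and the three features are arbitrary examples, whereas production systems use hundreds of features:

```python
import re

FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "is", "it", "for"}

def stylometric_features(text):
    """Three classic stylometry features computed from raw text."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "vocabulary_richness": len(set(words)) / len(words),  # type/token ratio
        "mean_sentence_length": len(words) / len(sentences),
        "function_word_rate": sum(w in FUNCTION_WORDS for w in words) / len(words),
    }

sample = ("It is a truth universally acknowledged, that a single man in "
          "possession of a good fortune, must be in want of a wife.")
print(stylometric_features(sample))
```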

As for anticipated future applications of designometry, one example can be given from the domain of AI Safety. Yampolskiy writes about an artificial superintelligent system confined to a restricted environment [113] which attempts to learn the nature of its designers and programmers by inspecting its own source code: “… the AI will have access to covert sources of information such as its own hardware and software and could analyze its design and source code to infer information about the designers. For example analysis of the source code may reveal to the AI that human programmers are slow (based on the file modification dates), inefficient (based on code redundancy), illogical (based on bugs in the code), have bad memory (based on the long and descriptive variable names), and don’t think in code (based on unnecessary comments in the code)” [114]. Another interesting application of designometry would be to the problem of determining whether the environment in which an agent (human or artificial intelligence) finds itself is natural or engineered. This has important applications in the domains of AI Safety [115], self-locating beliefs [116], life choices [117] and general philosophy [29]. Such a capacity would be particularly timely, as our ability to create realistic virtual worlds is improving exponentially [118]. Finally, we foresee great utility in the domain of steganography detection [119] and in general forensic analysis.

Open problems in designometry include the consolidation of analysis methods from specific domains, as well as the development of generalized tools and tests to be used in novel domains of investigation. Man-made [120, 121] and alien-made [122] artificial-object detection, and an exhaustive understanding of the types of information which can be inferred about the originator from an artifact, are current examples of research directions in designometry. It may be useful to be able to tell whether two designs were engineered by the same agent, or whether an agent reused parts from another design. It is also highly likely that this process can be automated via machine learning, as has been demonstrated by recent work in software designometry [123].

5. Most Life in the Universe has Engineered Origins

Inspired by Bostrom’s statistical argument for our universe being an engineered one [29], we suggest a similar argument in the realm of biology. When estimating the distribution of synthetic versus naturally occurring life in the universe, it is likely that designed life (biological robots of any complexity produced by early alien civilizations) is by far the more common default case. Others have made similar observations. For example, Dick: “…cultural evolution may have resulted in a postbiological universe in which machines are the predominant intelligence…”, going on to say “… this means that we are in the minority; the universe over the billions of years that intelligence has had to develop will not be a biological universe, but a postbiological universe” [124]; or Schneider, specifically on high-intelligence agents: “… it may be that [Biologically Inspired Superintelligent Aliens] are the most common form of alien superintelligence out there.” [125]. Similarly, Makukov et al. state: “… at the current age of the Galaxy it might be even more probable for an intelligent being to find itself on a planet where life resulted from directed panspermia rather than on a planet where local abiogenesis took place, and the Earth is not an exception from that. This is not to say that the view that terrestrial life originated locally is flawed. But subscribing largely to this view and dismissing the possibility that terrestrial life might not be a first independent generation in the Galaxy is probably nothing but a manifestation of geo-anthropo-centrism (inappropriately armed with Occam’s razor).” [126].

Unless evidence to the contrary exists, a given life form is statistically more likely to have originated as a product of engineering, and so our priors should be adjusted accordingly. This type of reasoning also applies to Earth: we too are likely to have our origins as synthetic life, as suggested by the theory of directed panspermia [28], seeding [127] or other similar variants [128, 129]. In fact, the approximate probability of a life form being produced by the unaided laws of physics rather than by engineering equals 1 divided by the total number of self-reproducing biological robot species that all generations of intelligent beings across the universe have ever produced. In our estimate (based on Drake’s equation [130]), this tends to zero as the age of the universe increases. In general, as the universe ages, the chance of any life form being an original evolved form, rather than a second- or later-generation design, approaches zero. It is important to note that our statistical argument applies only to the origins of life, not to the process of speciation, which is well explained by the Theory of Evolution; the Theory of Evolution, in contrast, makes no claims regarding the origins of life.
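
The arithmetic behind this argument can be made explicit: if N designed, self-reproducing species have ever been seeded and n independent abiogenesis events have occurred, the prior that a given life form is natural is roughly n/(n+N). The sketch below (every parameter value is a pure Drake-style guess, not a measurement) shows how this prior collapses as seeding scales up:

```python
def p_natural_origin(abiogenesis_events, colonies_per_civilization,
                     civilizations):
    """Prior that a given life form arose by unaided abiogenesis rather
    than by seeding; every input is a Drake-style guess, not a measurement."""
    designed = civilizations * colonies_per_civilization
    return abiogenesis_events / (abiogenesis_events + designed)

# As seeding scales up over the lifetime of the universe, the prior
# that any particular life form is "original" tends to zero:
for colonies in (0, 10, 1_000, 1_000_000):
    print(colonies, p_natural_origin(1, colonies, 100))
```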

Assuming that in our future we will seed thousands, if not millions, of such robot colonies (which in turn may do the same) in our quest to colonize the galaxy, we can observe that the common problem of attributing the origins of life would show up on many planets (this would also happen under the many-worlds interpretation of quantum mechanics and as a result of robots being developed by space aliens). We may refer to this situation as the Many Darwins Problem (a “Darwin” per seeded planet).

Further, let us consider a thought experiment, which we shall call the Robot Planet Problem[3]. Suppose at some point in our future we design a very advanced humanoid (biological) self-replicating robot with the goal of exploring distant planets. We send a group of such robots on a long-term mission to a star known to be orbited by a number of Earth-like planets [131]. Our goal may be to establish a permanent base on one or more such planets, to reserve their resources for us in case competing alien species take an interest in the same solar system. We would also like to make the planets in question habitable for human beings and to instruct our robots to await contact from their human masters. The robots are, of course, designed to be adaptable to variations in their future environment and have a general level of intelligence comparable to that of humans.

Although it may be possible to make them superintelligent [104, 114, 125, 132], it is probably not a rational thing to do, as such robots may present a danger to us and would be harder to control [133]. Also, providing the robots with very specific goals may produce undesirable side effects and may not work well across a large number of planets with unknown conditions. Perhaps our instructions to them will be something like: “Reproduce to a number sufficient to obtain full control of your host planet, make it habitable for yourself and your masters, and await the arrival of your designers”. A number of less important instructions can also be provided, such as: maintain the good condition of each robot, establish a rule of law, do not destroy other robots, etc. It is possible that the planets in question may already contain some forms of life, though probably not highly intelligent life, so additional instructions may be provided to preserve local biodiversity.

As a significant amount of time passes on the Robot Planet, the group’s mission will probably progress fairly well, with the construction of the necessary infrastructure, an increase in population and the development of a sophisticated local culture and religious tradition centered on the robots’ human masters. At some point, most or all robots would have no direct knowledge of their human masters, while considerable advances are likely to have been made in science and technology. At this point it is likely that a “Robot Darwin” would appear, who would criticize the idea of human masters as an irrational belief and propose a naturalistic explanation for the inhabitants of the robot planet not too different from the theory of evolution. Since the robots were designed with the ability to adapt to their new environment, sufficient evidence for evolution would be found, and it would quickly become the dominant, and very reasonable, explanation for the origins of the robot colony, in light of the ideas presented in this paper.

6. Conclusions

In this paper, we have suggested a design for an experiment in which engineered life is as likely as natural life, achieved by normalizing priors. The experiment is intended to test the current assumption that it is possible to determine whether a given sample was produced by natural evolution, while also allowing us to investigate the detectability of genetically modified and fully synthetic life forms, which are quickly becoming common due to the latest advances in genetic engineering. With thought experiments, we attempted to show that most current life is statistically more likely to have synthetic origins, and we showed how such a theory could be tested by translating the problem into the domain of computer science. All investigated theories have fully naturalistic explanations and are completely falsifiable.

In the theoretical case of unlimited resources (mostly time [134], but also multiverses), it is not possible to tell which type of algorithm is responsible for producing the collected samples, as all investigated algorithms are universal in the sense that they can eventually produce any pattern. The suggested analysis is broadly applicable to biological and non-biological samples alike: essentially, everything we can represent as a binary string.

Developments in synthetic biology and evolutionary robotics raise a number of ethical, biosafety and security issues. In addition to the potential development of novel deadly pathogens [135], genetically modified humans [136] and other organisms, we are also facing a potential runaway evolutionary process. An outcome of such a process could be the appearance of dangerous and potentially superintelligent robots [137], which might cause human extinction in the same way that a large number of previously existing species went extinct because of the appearance of an intellectually superior species — Homo sapiens.

We have reviewed a number of cases in which it is possible, as a result of forensic analysis, to conclusively state that a collected sample has been engineered rather than having occurred naturally. Such telltale signs include: complexity in the absence of probabilistic resources, watermarking, multilevel encoding [138], support for future features, physical computation [139], evidence of degradation from the original design, the engineer’s signature, etc. It may even be possible for intelligent agents to perform this analysis on themselves to discover their own origins. Synthetic life forms discovered in the wild will be interesting to study, because they may have a number of features not found in naturally occurring ones, such as backdoor control mechanisms, hidden capabilities and previously unseen features. Studying designed systems may also leak information about the engineers behind the design. Methods for doing so are of interest to forensic investigators, SETI scientists, stylometry practitioners and exobiologists. Finally, it is very important to note that a confirmed detection of synthetic life, even in the wild, would not prove any non-naturalistic notions, be they god(s), creationist myths or religion; it would only show that engineering took place.

Acknowledgements

The author is thankful to Suzanne Lidström, Yana Feygin, Alexey Melkikh, Susan Schneider, Gennadiy Mirochnik, Søren Elverlin and Kenneth Stanley for valuable feedback on this paper.

References

1. Goldberg, D.E., Genetic Algorithms in Search, Optimization and Machine Learning. 1989: Addison-Wesley Pub. Co.

2. Simmons, D., Genetic inequality: Human genetic engineering. Nature Education, 2008. 1(1): p. 173.

3. Packer, M.S. and D.R. Liu, Methods for the directed evolution of proteins. Nature Reviews Genetics, 2015. 16(7): p. 379–394.

4. Jeschek, M., et al., Directed evolution of artificial metalloenzymes for in vivo metathesis. Nature, 2016.

5. Suzuki, T., et al., Mice produced by mitotic reprogramming of sperm injected into haploid parthenogenotes. Nature Communications, 2016. 7.

6. Garfinkel, M.S., et al., Synthetic genomics: options for governance. Industrial Biotechnology, 2007. 3(4): p. 333–365.

7. Ostrov, N., et al., Design, synthesis, and testing toward a 57-codon genome. Science, 2016. 353(6301): p. 819–822.

8. Jackson, D.A., R.H. Symons, and P. Berg, Biochemical method for inserting new genetic information into DNA of Simian Virus 40: circular SV40 DNA molecules containing lambda phage genes and the galactose operon of Escherichia coli. Proceedings of the National Academy of Sciences, 1972. 69(10): p. 2904–2909.

9. Liu, C.C. and P.G. Schultz, Adding new chemistries to the genetic code. Annual review of biochemistry, 2010. 79: p. 413–444.

10. Wang, L., et al., Expanding the genetic code of Escherichia coli. Science, 2001. 292(5516): p. 498–500.

11. Gibson, D.G., et al., Creation of a bacterial cell controlled by a chemically synthesized genome. Science, 2010. 329(5987): p. 52–56.

12. Hutchison, C.A., et al., Design and synthesis of a minimal bacterial genome. Science, 2016. 351(6280): p. aad6253.

13. Park, S.-J., et al., Phototactic guidance of a tissue-engineered soft-robotic ray. Science, 2016. 353(6295): p. 158–162.

14. Endy, D. and L. Zoloth, Should We Synthesize A Human Genome? Available at: https://dspace.mit.edu/handle/1721.1/102449, 2016.

15. Nolfi, S. and D. Floreano, Evolutionary robotics: The biology, intelligence, and technology of self-organizing machines. 2000: MIT Press.

16. Lipson, H. and J.B. Pollack, Automatic design and manufacture of robotic lifeforms. Nature, 2000. 406(6799): p. 974–978.

17. Langton, C.G., Artificial life: An overview. 1997: MIT Press.

18. Soros, L. and K.O. Stanley. Identifying necessary conditions for open-ended evolution through the artificial life world of chromaria. in ALIFE 14: The Fourteenth Conference on the Synthesis and Simulation of Living Systems. 2014.

19. Wehner, M., et al., An integrated design and fabrication strategy for entirely soft, autonomous robots. Nature, 2016. 536(7617): p. 451–455.

20. Ronald, E.M., M. Sipper, and M.S. Capcarrère. Testing for emergence in artificial life. in European Conference on Artificial Life. 1999. Springer.

21. Back, T., D.B. Fogel, and Z. Michalewicz, Handbook of evolutionary computation. 1997: IOP Publishing Ltd.

22. Bostrom, N., The future of human evolution. Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, Ria University Press, Palo Alto, 2004: p. 339–371.

23. Querci, M., et al., New approaches in GMO detection. Analytical and Bioanalytical Chemistry, 2010. 396(6): p. 1991–2002.

24. Elenis, D.S., et al., Advances in molecular techniques for the detection and quantification of genetically modified organisms. Analytical and bioanalytical chemistry, 2008. 392(3): p. 347–354.

25. Ahmed, F.E., Detection of genetically modified organisms in foods. TRENDS in Biotechnology, 2002. 20(5): p. 215–223.

26. Allton, J., J. Bagby, and P. Stabekis, Lessons learned during Apollo lunar sample quarantine and sample curation. Advances in Space Research, 1998. 22(3): p. 373–382.

27. Mayr, E. and W.B. Provine, The evolutionary synthesis: perspectives on the unification of biology. 1998: Harvard University Press.

28. Crick, F.H. and L.E. Orgel, Directed panspermia. Icarus, 1973. 19(3): p. 341–346.

29. Bostrom, N., Are we living in a computer simulation? The Philosophical Quarterly, 2003. 53(211): p. 243–255.

30. Blumer, A., et al., Occam’s razor. Readings in machine learning, 1990: p. 201–204.

31. Solomonoff, R.J., A formal theory of inductive inference. Part I. Information and control, 1964. 7(1): p. 1–22.

32. Solomonoff, R.J., A formal theory of inductive inference. Part II. Information and control, 1964. 7(2): p. 224–254.

33. Rathmanner, S. and M. Hutter, A philosophical treatise of universal induction. Entropy, 2011. 13(6): p. 1076–1136.

34. Lehman, J. and K.O. Stanley, Abandoning objectives: Evolution through the search for novelty alone. Evolutionary computation, 2011. 19(2): p. 189–223.

35. Shapiro, J.A., Natural genetic engineering in evolution, in Transposable elements and evolution. 1993, Springer. p. 325–347.

36. Watson, R.A. and E. Szathmáry, How Can Evolution Learn? Trends in ecology & evolution, 2016. 31(2): p. 147–157.

37. Hodgson, G.M. and T. Knudsen, Why we need a generalized Darwinism, and why generalized Darwinism is not enough. Journal of Economic Behavior & Organization, 2006. 61(1): p. 1–19.

38. Legg, S. and M. Hutter, Universal intelligence: A definition of machine intelligence. Minds and Machines, 2007. 17(4): p. 391–444.

39. Yampolskiy, R.V., Efficiency Theory: a Unifying Theory for Information, Computation and Intelligence. Journal of Discrete Mathematical Sciences & Cryptography, 2013. 16(4–5): p. 259–277.

40. De Simone, A., et al., Boltzmann brains and the scale-factor cutoff measure of the multiverse. Physical Review D, 2010. 82(6): p. 063520.

41. Everett III, H., “Relative state” formulation of quantum mechanics. Reviews of Modern Physics, 1957. 29(3): p. 454.

42. Schmidhuber, J. A computer scientist’s view of life, the universe, and everything. in Foundations of computer science. 1997. Springer.

43. Minsky, M.L., Will robots inherit the Earth? Available at: http://web.media.mit.edu/~minsky/papers/sciam.inherit.txt, 1994.

44. Dawkins, R., The blind watchmaker: Why the evidence of evolution reveals a universe without design. 1986: W.W. Norton & Company.

45. Shin, J.H., et al., Force of an actin spring. Biophysical journal, 2007. 92(10): p. 3729–3733.

46. Burrows, M. and G. Sutton, Interacting gears synchronize propulsive leg movements in a jumping insect. science, 2013. 341(6151): p. 1254–1256.

47. Qin, S., et al., A magnetic protein biocompass. Nature materials, 2016. 15(2): p. 217–226.

48. Robinson, R., Mutations Change the Boolean Logic of Gene Regulation. PLoS Biol, 2006. 4(4): p. e64.

49. Hood, L. and D. Galas, The digital code of DNA. Nature, 2003. 421(6921): p. 444–448.

50. Montanez, G., Detecting Intelligence: The Turing Test and Other Design Detection Methodologies, in 8th International Conference on Agents and Artificial Intelligence. 24–26 February 2016: Rome, Italy.

51. Wongsrikeao, P., et al., Antiviral restriction factor transgenesis in the domestic cat. Nature methods, 2011. 8(10): p. 853–859.

52. Lewis, W.J. and D. Alexander, Grafting and budding: A practical guide for fruit and nut plants and ornamentals. 2008: Landlinks Press.

53. Pyrite, in Wikipedia. May 18, 2016. Available at: https://en.wikipedia.org/wiki/Pyrite.

54. Clathrus ruber, in Wikipedia. May 18, 2016. Available at: https://en.wikipedia.org/wiki/Clathrus_ruber.

55. Dong, W., et al., GMDD: a database of GMO detection methods. BMC bioinformatics, 2008. 9(1): p. 260.

56. Beck, M. and R. Yampolskiy, DNA as a medium for hiding data. BMC Bioinformatics, 2012. 13(Suppl 12): p. A23.

57. Beck, M.B., E.C. Rouchka, and R.V. Yampolskiy, Finding Data in DNA: Computer Forensic Investigations of Living Organisms, in Digital Forensics and Cyber Crime. 2013, Springer Berlin Heidelberg. p. 204–219.

58. Gibson, D.G., et al., Complete chemical synthesis, assembly, and cloning of a Mycoplasma genitalium genome. Science, 2008. 319(5867): p. 1215–1220.

59. Holman, C.M., The impact of human gene patents on innovation and access: A survey of human gene patent litigation. UMKC Law Review, 2007. 76: p. 295.

60. Yachie, N., et al., Alignment‐Based Approach for Durable Data Storage into Living Organisms. Biotechnology progress, 2007. 23(2): p. 501–505.

61. Heider, D. and A. Barnekow, DNA-based watermarks using the DNA-Crypt algorithm. BMC bioinformatics, 2007. 8(1): p. 1.

62. Church, G.M., Y. Gao, and S. Kosuri, Next-generation digital information storage in DNA. Science, 2012. 337(6102): p. 1628–1628.

63. Bornholt, J., et al. A DNA-based archival storage system. in Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. 2016. ACM.

64. Yokoo, H. and T. Oshima, Is bacteriophage φX174 DNA a message from an extraterrestrial intelligence? Icarus, 1979. 38(1): p. 148–153.

65. Nakamura, H., SV40 DNA — A message from ϵ Eri? Acta Astronautica, 1986. 13(9): p. 573–578.

66. shCherbak, V.I. and M.A. Makukov, The “Wow! signal” of the terrestrial genetic code. Icarus, 2013. 224(1): p. 228–242.

67. Bennett, C.H., Logical depth and physical complexity. The Universal Turing Machine A Half-Century Survey, 1995: p. 207–235.

68. Adleman, L.M., Time, space and randomness. 1979: Massachusetts Institute of Technology, Laboratory for Computer Science.

69. Levin, L.A., Invariant properties of informational bulks, in Mathematical Foundations of Computer Science 1977. 1977, Springer. p. 359–364.

70. Levin, L.A., Randomness conservation inequalities; information and independence in mathematical theories. Information and Control, 1984. 61(1): p. 15–37.

71. Koppel, M., Complexity, depth, and sophistication. Complex Systems, 1987. 1(6): p. 1087–1091.

72. Gurevich, Y. and G.O. Passmore, Impugning randomness, convincingly. Studia Logica, 2012. 100(1–2): p. 193–222.

73. Lemarchand, G.A. and J. Lomberg, Universal cognitive maps and the search for intelligent life in the universe. Leonardo, 2009. 42(5): p. 396–402.

74. Elliott, J.R., Detecting the signature of intelligent life. Acta Astronautica, 2010. 67(11): p. 1419–1426.

75. Katzenbeisser, S. and F. Petitcolas, Information hiding techniques for steganography and digital watermarking. 2000: Artech House.

76. Sahai, A. and B. Waters. How to use indistinguishability obfuscation: deniable encryption, and more. in Proceedings of the 46th Annual ACM Symposium on Theory of Computing. 2014. ACM.

77. Yampolskiy, R.V., J.D. Rebolledo-Mendez, and M.M. Hindi, Password Protected Visual Cryptography via Cellular Automaton Rule 30, in Transactions on Data Hiding and Multimedia Security IX. 2014, Springer Berlin Heidelberg. p. 57–67.

78. Mantegna, R.N., et al., Linguistic features of noncoding DNA sequences. Physical review letters, 1994. 73(23): p. 3169.

79. Tsonis, A.A., J.B. Elsner, and P.A. Tsonis, Is DNA a language? Journal of theoretical Biology, 1997. 184(1): p. 25–29.

80. Schelling, T.C., The strategy of conflict. 1980: Harvard University Press.

81. Zuckerman, I., S. Kraus, and J.S. Rosenschein, Using focal point learning to improve human–machine tacit coordination. Autonomous Agents and Multi-Agent Systems, 2011. 22(2): p. 289–316.

82. Yampolskiy, R.V. Analyzing User Password Selection Behavior for Reduction of Password Space. in The IEEE International Carnahan Conference on Security Technology (ICCST06). October 17–19, 2006. Lexington, Kentucky.

83. Darwin, C., On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. 1859: London, John Murray.

84. Popper, K., Conjectures and refutations: The growth of scientific knowledge. 2014: Routledge.

85. Yampolskiy, R.V., AI-Complete, AI-Hard, or AI-Easy — Classification of Problems in AI, in The 23rd Midwest Artificial Intelligence and Cognitive Science Conference. April 21–22, 2012: Cincinnati, OH, USA.

86. Yampolskiy, R.V., Turing test as a defining feature of AI-completeness, in Artificial Intelligence, Evolutionary Computing and Metaheuristics. 2013, Springer Berlin Heidelberg. p. 3–17.

87. Karp, R.M., Reducibility Among Combinatorial Problems, in Complexity of Computer Computations, R.E. Miller and J.W. Thatcher, Editors. 1972, Plenum: New York. p. 85–103.

88. Shahaf, D. and E. Amir, Towards a theory of AI completeness, in 8th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2007). March 26–28, 2007: California.

89. Turing, A., Computing Machinery and Intelligence. Mind, 1950. 59(236): p. 433–460.

90. Pease, A. and S. Colton. On impact and evaluation in computational creativity: a discussion of the turing test and an alternative proposal. in Proceedings of the AISB symposium on AI and Philosophy. 2011.

91. Yampolskiy, R.V. and V. Govindaraju. Use of Behavioral Biometrics in Intrusion Detection and Online Gaming. in Biometric Technology for Human Identification III. SPIE Defense and Security Symposium. 2006. Orlando, Florida

92. Yao, X. and T. Higuchi, Promises and challenges of evolvable hardware. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 1999. 29(1): p. 87–97.

93. Ali, N., M. Hindi, and R.V. Yampolskiy, Evaluation of Authorship Attribution Software on a Chat Bot Corpus, in 23rd International Symposium on Information, Communication and Automation Technologies (ICAT2011). October 27–29, 2011: Sarajevo, Bosnia and Herzegovina. p. 1–6.

94. Simonton, D.K., Reverse engineering genius: historiometric studies of superlative talent. Annals of the New York Academy of Sciences, 2016.

95. Yampolskiy, R.V., Behavioral Modeling: an Overview. American Journal of Applied Sciences, 2008. 5(5): p. 496–503.

96. Yampolskiy, R.V. and V. Govindaraju, Taxonomy of Behavioral Biometrics, in Behavioral Biometrics for Human Identification: Intelligent Applications, L. Wang and X. Geng, Editors. 2009, IGI Global. p. 1–43.

97. Yampolskiy, R. and M. Gavrilova, Artimetrics: Biometrics for Artificial Entities. IEEE Robotics and Automation Magazine (RAM), 2012. 19(4): p. 48–58.

98. Yampolskiy, R., et al., Experiments in Artimetrics: Avatar Face Recognition. Transactions on Computational Science XVI, 2012: p. 77–94.

99. Yampolskiy, R.V. and V. Govindaraju, Behavioral Biometrics: a Survey and Classification. International Journal of Biometrics (IJBM), 2008. 1(1): p. 81–113.

100. Yampolskiy, R.V. and V. Govindaraju, Direct and Indirect Human Computer Interaction Based Biometrics. Journal of Computers, 2007. Volume 2, Issue 8: p. 76–88.

101. Yampolskiy, R.V., AI-Complete CAPTCHAs as Zero Knowledge Proofs of Access to an Artificially Intelligent System. ISRN Artificial Intelligence, 2011. 271878.

102. D’Souza, D., P.C. Polina, and R.V. Yampolskiy, Avatar CAPTCHA: Telling Computers and Humans Apart via Face Classification, in IEEE International Conference on Electro/Information Technology (EIT2012). May 6–8, 2012: Indianapolis, IN, USA.

103. Yampolskiy, R.V., The Universe of Minds. arXiv preprint arXiv:1410.0369, 2014.

104. Yampolskiy, R.V., Artificial Superintelligence: a Futuristic Approach. 2015: Chapman and Hall/CRC.

105. Li, J., R. Zheng, and H. Chen, From fingerprint to writeprint. Communications of the ACM, 2006. 49(4): p. 76–82.

106. Goswami, S., S. Sarkar, and M. Rustagi. Stylometric analysis of bloggers’ age and gender. in Third International AAAI Conference on Weblogs and Social Media. 2009.

107. Brooke, J. and G. Hirst. Native language detection with ‘cheap’ learner corpora. in Twenty Years of Learner Corpus Research. Looking Back, Moving Ahead: Proceedings of the First Learner Corpus Research Conference (LCR 2011). 2013. Presses universitaires de Louvain.

108. Luyckx, K. and W. Daelemans, Using syntactic features to predict author personality from text. Science, 1998. 22: p. 319–346.

109. Ali, N., D. Schaeffer, and R.V. Yampolskiy, Linguistic Profiling and Behavioral Drift in Chat Bots. Midwest Artificial Intelligence and Cognitive Science Conference, 2012: p. 27.

110. Freeth, T., et al., Decoding the ancient Greek astronomical calculator known as the Antikythera Mechanism. Nature, 2006. 444(7119): p. 587–591.

111. Isler, M., Sticks, stones, and shadows: building the Egyptian pyramids. 2001: University of Oklahoma Press.

112. Langner, R., Stuxnet: Dissecting a cyberwarfare weapon. Security & Privacy, IEEE, 2011. 9(3): p. 49–51.

113. Babcock, J., J. Kramar, and R. Yampolskiy, The AGI Containment Problem, in The Ninth Conference on Artificial General Intelligence (AGI2016). July 16–19, 2016: NYC, USA.

114. Yampolskiy, R.V., Leakproofing Singularity — Artificial Intelligence Confinement Problem. Journal of Consciousness Studies (JCS), 2012. 19(1–2): p. 194–214.

115. Chalmers, D., The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 2010. 17: p. 7–65.

116. Bostrom, N., The mysteries of self-locating belief and anthropic reasoning. The Harvard Review of Philosophy, 2003. 11(1): p. 59–73.

117. Hanson, R., How to live in a simulation. Journal of Evolution and Technology, 2001. 7(1).

118. Whitworth, B., The physical world as a virtual reality. arXiv preprint arXiv:0801.0337, 2008.

119. Sleiman, M.D., A.P. Lauf, and R. Yampolskiy. Bitcoin Message: Data Insertion on a Proof-of-Work Cryptocurrency System. in 2015 International Conference on Cyberworlds (CW). 2015. IEEE.

120. Kumar, S. and M. Hebert. Man-made structure detection in natural images using a causal multiscale random field. in Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on. 2003. IEEE.

121. Gruen, A., E. Baltsavias, and O. Henricsson, Automatic extraction of man-made objects from aerial and space images (II). 2012: Birkhäuser.

122. Cirkovic, M.M., Macro-engineering in the galactic context. Macro-Engineering, 2006. 54: p. 281–300.

123. Caliskan-Islam, A., et al., When Coding Style Survives Compilation: De-anonymizing Programmers from Executable Binaries. arXiv preprint arXiv:1512.08546, 2015.

124. Dick, S.J., Cultural evolution, the postbiological universe and SETI. International Journal of Astrobiology, 2003. 2(1): p. 65–74.

125. Schneider, S., Alien Minds. Science Fiction and Philosophy: From Time Travel to Superintelligence, 2016: p. 225.

126. Makukov, M.A., Space ethics to test directed panspermia. Life Sciences in Space Research, 2014. 3: p. 10–17.

127. Miletić, T., Extraterrestrial artificial intelligences and humanity’s cosmic future: Answering the Fermi paradox through the construction of a Bracewell-Von Neumann AGI. Journal of Evolution and Technology 2015. 25(1): p. 56–73.

128. Arrhenius, S., Die verbreitung des lebens im weltenraum. Die Umschau, 1903. 7: p. 481–485.

129. Gold, T., Cosmic garbage. Air Force and Space Digest, 1960: p. 65.

130. Drake, R., A general mathematical survey of the coagulation equation. Topics in current aerosol research (Part 2), 1972. 3: p. 201–376.

131. Cash, W., Detection of Earth-like planets around nearby stars using a petal-shaped occulter. Nature, 2006. 442(7098): p. 51–53.

132. Yampolskiy, R., Leakproofing the Singularity Artificial Intelligence Confinement Problem. Journal of Consciousness Studies, 2012. 19(1–2): p. 1–2.

133. Yampolskiy, R.V., Taxonomy of Pathways to Dangerous AI, in 30th AAAI Conference on Artificial Intelligence (AAAI-2016), 2nd International Workshop on AI, Ethics and Society (AIEthicsSociety2016). February 12–13, 2016: Phoenix, Arizona, USA.

134. Valiant, L.G., Evolvability. Journal of the ACM (JACM), 2009. 56(1): p. 3.

135. Kaiser, J., US halts two dozen risky virus studies. Science, 2014. 346(6208): p. 404–404.

136. Shulman, C. and N. Bostrom, Embryo Selection for Cognitive Enhancement: Curiosity or Game‐changer? Global Policy, 2014. 5(1): p. 85–92.

137. Nijholt, A., No grice: computers that lie, deceive and conceal. 2011.

138. Perez, J.-c., Deciphering Hidden DNA Meta-Codes-The Great Unification & Master Code of Biology. Journal of Glycomics & Lipidomics, 2015. 2015.

139. Binder, P. and G. Ellis, Nature, computation and complexity. Physica Scripta, 2016. 91(6): p. 064004.

[1] In computer science, a string is a finite sequence of characters, a concept not related to string theory in physics.

[2] Search for messages in biological information.

[3] We are aware of the Futurama episode “A Clockwork Origin” (Season 6, Episode 9), which features a similar plot.
