Nine Answers from Logan Thrasher Collins

Bion Howard
Published in Church of Maths
Jan 7, 2018 · 11 min read

The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
— Arthur C. Clarke

Is emotion useful for scientists?

Let’s ask Logan!

Logan Thrasher Collins is a futurist and synthetic biologist at the University of Colorado, Boulder. His recent blog post, Rational Romanticism, argues for a link between emotion and science, one that helps us understand both the world and the self.

Here, we interview him about futurism and Rational Romanticism:

1. Why futurism?

We live in an age of tremendous opportunity and challenge. The technosphere is growing in complexity at an unprecedented rate. We face a choice: we can embrace scientific exploration and have the stars, or we can succumb to fear and meet extinction. Futurism allows us to approach tomorrow with optimism and intelligence. The futurist is free from status quo bias because the futurist possesses enough imagination to simultaneously envision radical positive change, the collateral obstacles, and solutions to those obstacles. I “live in the future” to shape the future and help make it beautiful.

2. How can we build a bright future for everyone?

To build a bright future for all, I propose that we integrate empathy and technology. Currently, humans possess disturbingly low degrees of empathy. This manifests as bullying, prejudice, discrimination, blame, and greed winning out over altruism. People tend to justify sociopathic behavior by inventing circuitous ways to diffuse their own responsibility. On the individual level, these issues can be taxing. On the societal level, they might be responsible for a large part of the inequality and violence in the world. Imagine a world in which most people can picture the stories that lead to homelessness, unemployment, and poverty. For instance, rather than consider an individual on welfare to be “lazy” and undeserving of help, more people would visualize themselves in the same situation and imagine possible sequences of events that may have led to the welfare recipient’s circumstances. Higher empathy would promote more and better altruism and help people understand underlying systemic problems. Such problems would likely be solved more efficiently and effectively by people who experience these empathetic sentiments.

To enhance empathy, we will need to properly apply technology. Early on, incorporating virtual reality into our online experiences may provide an environment with more sensory stimuli that activate empathetic cognitive processes. It is often difficult for people living in developed nations to visit far-away impoverished areas. As such, many feel a sense of removal from the struggles in those regions even when reading about them or observing photographs. With VR, people can witness such struggles firsthand, leading to stronger activation of empathetic neural pathways. People might virtually visit such locations for other reasons (e.g. adventure, exploration, recreation) and then be exposed to the challenges in those regions. Essentially, VR could make the world “smaller” and promote stronger ties across the global community of humans.

But virtual reality is only the beginning. I would advocate for voluntary cognitive enhancement of empathy using pharmacological, genetic (including designer babies), and implant-based methods. It is important to make these enhancements voluntary in order to prevent fascist politics from interfering with the process and causing net harm. However, voluntariness does not preclude social pressure. I propose we should attempt to foster an environment in which empathy enhancement is considered to be “responsible.” In this way, society may shift from fearing empathy enhancement to actively encouraging it. The details of engineering these social pressures are complicated, but intelligent people can help determine how to do so without resorting to fascism. We may eventually achieve a superempathetic majority using such technologies.

Post-singularity, mass-scale mind uploading might be feasible. This could even extend to wild animals. I would argue that, given this capability, we are morally obligated to upload every organism with at least the cognitive complexity of Drosophila melanogaster. After uploading all macroscale biological life, we should develop an empathy-based framework for ending suffering in both humans and animals. For humans, uploading should be voluntary, at least until superintelligence has advanced enough to thoroughly understand objective morality and make decisions without underlying selfish motives. If the biosphere is placed in a computational substrate and we understand the emotional states of uploaded organisms, we may design a collective system to erase suffering, maintain exploratory impulses, and enhance joyful qualia.

3. What are some of your favorite future technologies?

When it comes to future technologies, there is a lot to be excited about. Personally, I’m most looking forward to the innovations that hybridize biological and non-biological systems. I’m particularly excited about soft bionanoelectronics, in vitro meat, bacterial nanorobots for environmental and biomedical applications, neural prosthetics and neuromorphic circuitry, telepathic communication via neural prosthetics (which has already been tested in rats by Berger’s group), nanoscale devices for connectomics, new sensory organs (e.g. Neil Harbisson’s eyeborg), improved computational protein engineering, and scalable brain-computer interfaces. Such technologies will bring us closer to a world where imagination and reality are inseparable. In the longer term, I look forward to mind uploading and radical qualia engineering. By merging our minds with technology, we will be able to engineer the brain’s hypercanvas in new and beautiful ways.

4. Who are some of your biggest influences?

Ray Kurzweil introduced me to the idea that technology supports its own development and so undergoes exponential growth. After reading about Kurzweil’s law of accelerating returns, I saw new possibilities for the future which I had previously thought would take thousands of years to reach. While I think Kurzweil’s prediction that the technological singularity will occur in 2045 is moderately overoptimistic, I would still argue that his model is plausible enough that the singularity may occur prior to 2100. Further, I appreciate Kurzweil’s optimism because it has driven so many people to strive to improve the world. Kurzweil has given us hope for the future, and with hope come people who are willing to make the attempt to build a bright tomorrow.
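As a toy illustration of the accelerating-returns idea (my own sketch, not Kurzweil’s actual model): if a capability grows in proportion to its current level, it doubles on a fixed schedule, which is why even a modest-looking growth rate compounds dramatically over a few decades. The growth rate below is a hypothetical placeholder, not a measured value.

```python
# Toy sketch of exponential (accelerating-returns-style) growth.
# The rate k is a hypothetical placeholder, not a measured value.
import math

k = 0.35                              # assumed fractional growth per year
doubling_time = math.log(2) / k       # time for the capability to double
print(f"doubling time ≈ {doubling_time:.1f} years")

for year in (0, 10, 20, 30):
    capability = math.exp(k * year)   # capability relative to year 0
    print(f"year {year:>2}: {capability:,.0f}× the starting level")
```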

David Pearce, author of The Hedonistic Imperative, helped me flesh out my philosophical approach to transhumanism. Pearce seeks a future in which suffering is abolished. This includes human and animal suffering. In addition, Pearce hopes to use biotechnology to enhance happiness and raise the hedonic baseline so that our most blissful experiences today are far below our average state in the future. He emphasizes that we should engineer ourselves to retain fluctuations about this high emotional baseline. In this way, we would remain productive and continue acting as a curious, creative, and driven species. While the idea of superhappiness is off-putting to many, Pearce develops numerous convincing counterarguments to the most common critiques. These counterarguments are too extensive to detail here, but I encourage you to investigate them at (4).

It should be noted that Pearce and I disagree about the methods for achieving this vision. Pearce believes the entire Earth can be converted into a paradise using solely biological methods. I would argue that this would render the system too susceptible to collapse, resulting in more suffering. In order to ensure suffering vanishes forever, I would instead advocate mass mind uploading and transmutation of the planet Earth into computronium, allowing for any bifurcations into instability to be immediately reversed. I would also say that more transcendent emotional states will likely be achievable with more processing resources available to our minds. For this reason, the path to abolishing suffering involves a combination of neuroengineering, connectomics, nanotechnology, bioengineering, synthetic biology, neuromorphic computing, automation, AI, computational neuroscience, and mathematical qualia science rather than only genetic engineering, pharmacology, and similar techniques.

Some other people who have influenced my thinking on STEM and the future include Jack Andraka, Easton LaChappelle, Brian David Johnson, George Church, and a number of science fiction authors. Jack Andraka and Easton LaChappelle are scientists who, like me, began their research in high school. Jack Andraka developed an inexpensive diagnostic for pancreatic cancer, while Easton LaChappelle developed an inexpensive, 3D printed prosthetic limb and a brain-computer interface to control the arm and hand. These individuals and thousands of other high school researchers (I met many of these people at the International Science and Engineering Fair) helped me realize that age is no barrier to changing the world. Brian David Johnson was the first futurist I met in person; he advocates using science fiction to help design the future. George Church has inspired me with his highly impactful work on synthetic biology innovations at the interface between academia and industry. Finally, many science fiction stories have influenced my approach to futurism. Some of the most influential of these include The Last Question (Asimov), Blood Music (Bear), Understand (Chiang), Utriusque Cosmi (Wilson), and True Names (Doctorow and Rosenbaum).

5. What research are you interested in?

I am particularly excited to use synthetic biology and bionanotechnology to build scalable brain-computer interfaces (BCIs) and to map synaptic connections in vivo. I have written some research proposals for such technologies and I am seeking labs which might have the resources necessary to help me implement these proposals. I cannot yet disclose the full details of my proposals on a public forum due to the problematic IP laws in the U.S., but I will say that they involve polymerosomes, gold nanoparticles, electrochemistry, conjugated antibodies, and X-ray microscopy. I enjoy researching interdisciplinary, outside-the-box solutions to seemingly intractable problems.

I’m in a transition period between antimicrobial synthetic biology and neuroengineering. Over the past five years, I have developed a de novo antimicrobial peptide, OpaL (Overexpressed protein aggregator Lipophilic), which disrupts bacterial homeostasis by forming insoluble aggregates when expressed intracellularly. In addition, I have engineered a bacterial conjugation delivery system for the gene encoding OpaL so that donor bacteria can be used to transfer opaL into recipients. I will submit my manuscript on this research for publication in a few weeks. After wrapping up this project, I intend to dive into my new ideas for neuroengineering.

Star Trek’s half-human, half-Vulcan Spock symbolized the conflict between logic and emotion

6. What is Rational Romanticism (R2)?

Rational romanticism is a philosophy which unifies logic and emotion. I propose that, in order to be truly rational, one must also understand both the intrinsic and practical value of emotion. Conversely, in order to optimize emotional states and pursue existential meaning, one must utilize empiricism and logic. Culturally, rationality and romanticism have long been considered mutually exclusive; according to rational romanticism, they form an inseparable whole. Rational romanticism suggests that effective reasoning requires emotion and logic to be merged.

7. Where could R2 be applied?

A troubling schism exists between the arts and the sciences. Many scientifically-oriented people mistrust the arts and claim that they are out-of-touch with the mechanistic workings of reality. Likewise, many artistically-oriented people mistrust science and claim that it is out-of-touch with the “soul” of the human experience. Both groups have a point, but they are mistaken in believing that art and science cannot merge.
Pure rationalists engage in irrational and destructive behavior when they attempt to frame emotion as a distraction. Emotion is the goal; emotion is the meaning of life. All experiences of fulfillment, spirituality, joy, love, peace, and adventure are emotion. The pervasiveness of pure rationalism in technical communities has caused a disturbing level of risk-aversion. As I discuss on my blog, ambitious research projects are less likely to succeed than more moderate ones, and it is difficult (from a pure rationalist perspective) to gain a reasonable measure of the degree of risk involved in such projects. The pure rationalist would discard such ambitious projects as carrying too much unknown risk. But throughout history, the people who ignore boundaries and even apparent impossibilities, who keep obsessively fighting to make their vision a reality no matter the odds, have been the ones who change the world (e.g. the space program, the human genome project, heavier-than-air flight, the lightbulb, the automobile, the home computer). Rational romanticism embraces these risks and allows outside-the-box, seemingly “crazy” innovators to make a difference.
Pure romantics engage in irrational and destructive behavior when they attempt to frame science and technology as coldly logical automatons with no regard for humanity. This manifests in film and popular fiction, where science and technology usually appear as tools of villains. Frankenstein, Gattaca, Brave New World, Jurassic Park, The Terminator, and countless other fearmongering works exemplify this trend. Popular non-fiction only reinforces this harmful mindset (e.g. most mainstream news articles that discuss philosophical considerations around science, Geek Heresy, anti-GMO books, religious critiques of technology, etc.). By promoting the idea that empiricism is evil and inhuman, society has been turned against technological solutions to problems, causing countless deaths (consider social reactions to Golden Rice), other tragedies, and missed opportunities for exploration and discovery. If we replace these pervasive anti-science attitudes with rational romanticism, technology can fully realize its tremendous potential and improve our lives.

8. How can we test R2 theory?

Initially, I propose that we test rational romanticism by investing in the collection and analysis of sociological, psychological, neurobiological, and political data over multiple timescales. We may identify new strategies for further improving rational romanticism via data-driven insights. When interpreting these data, we should emphasize global optimization of emotional states as the universe-wide system’s main goal. As time goes on, I propose that we develop quantitative theories of consciousness to directly measure and model emotional states. This will allow more precise and powerful pursuit of emotional optimization.

9. Why build emotional machines?

AGI does pose some existential risk, but this risk will be minimized if we construct humanlike AGI with emotions rather than an intelligence that resembles Bostrom’s hypothetical paperclip optimizer. Currently, AI research focuses on designing algorithms which learn how to perform specific tasks extremely well. I would argue that the best route to AGI might involve using many different algorithms, wired together into a cognitive structure that resembles the human brain. For instance, the dopaminergic mesolimbic reward pathway might be emulated by reinforcement learning, and the CA3 region of the hippocampus could be mimicked by Hopfield memory nets. The key would be to combine these algorithms in a way that emulates the overall anatomical structure of the human brain. Other potentially important considerations include inter-region connectivity that more closely approximates brainlike operations, and neuromorphic circuitry that spatially localizes information processing in a brainlike fashion so that the system experiences more humanlike qualia. (Neuromorphic circuitry may also improve energy efficiency in such systems and make them easier to construct in a brainlike way.) An AGI built this way would be quite similar to humans.
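To make the hippocampal piece of that picture concrete, here is a minimal sketch (my own illustration, not a design from the interview) of a Hopfield network acting as CA3-style associative memory: a Hebbian weight matrix stores a pattern, and asynchronous updates recover it from a corrupted cue. In a brain-inspired architecture, modules like this would sit alongside, say, reinforcement-learning components standing in for reward pathways.

```python
# Minimal Hopfield associative memory (toy model of CA3-style pattern completion).
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: build a symmetric weight matrix from +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)            # no self-connections
    return W / patterns.shape[0]

def recall(W, cue, steps=20):
    """Asynchronous updates: the state settles toward the nearest stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 16-unit pattern, then recover it from a partially corrupted cue.
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=16)
W = train_hopfield(pattern[None, :])
cue = pattern.copy()
cue[:4] *= -1                          # flip a few bits to simulate a degraded memory
print(np.array_equal(recall(W, cue), pattern))  # typically True
```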

Once we construct human-level AGI, we can enhance it towards greater-than-human abilities. In this way, even superintelligence will be rooted in humanlike construction. Furthermore, we could build superempathetic AGI which would be driven to help people in a rationally romantic manner. For instance, superempathetic AGI would not euthanize patients without discussion and explicit consent from everyone involved because it would understand the wider ramifications of such an act on the emotions of the patient’s family. Simpler qualia-optimization algorithms might not be so compassionate. On the larger scale, this would decrease the likelihood of existential risk from AGI. In this way, our machines could be made to understand human emotions and nuances, allowing for safer AGI.

Links:

  1. Logan’s blog post on Rational Romanticism
  2. Technical Transhumanism Facebook Group
  3. On Reason and Passion
  4. The Hedonistic Imperative, responses to objections
  5. On the relationship between emotion and cognition
  6. Church Of Maths on Facebook
