“Neanderthal-Brained Crab Robots, Neuralink, and the Frontiers of Intelligence as We Understand It”

Kevin O'Connor
10 min read · Oct 8, 2023


or: How I Learned to Stop Worrying and Love the Crab

Image Credit: deepai.org

Disclaimer

I am not an expert (at anything, really). I am trying to write a light-hearted, slightly-deeper-than-surface-level explainer based on the research I do while some models are running.

Introduction

Generative AI is all the rage right now. I’m working with LLMs. You’re working with LLMs. Your 13-year-old nephew is working with LLMs on his homework. Your marketing team is using them. Your lawyer, hopefully, isn’t using them. On LinkedIn, you’ll see people posting “Generative AI Quiz!!”, followed by some introductory statistics question.

A cynic might say,

“The hype has blown past all reason and logic, and just as the machine learning engineer replaced the data scientist, who replaced the data analyst, who replaced the statistician, we will see flashy new titles that tell the venture capitalists you are hiring for the next big thing.”

Being cynical is boring, so I like to think of all of these titles like convergent evolution — different titles emerging from the same base of skills. Combinations of programming skill, deep math knowledge, storytelling, and business sense are just very helpful in business, and are selected for to solve different problems. It makes sense that specialists who have these characteristics in different proportions live in the niches of their strengths.

With Apologies to Alysson Muotri

In nature, convergent evolution is fairly common. Crabs, however, are the undisputed kings of convergence. So popular, in fact, that we have a word for all the times a crab-like form has evolved independently: carcinization.

The body plan is just too effective.

With that in mind, it should come as little surprise that when it comes to attaching synthetic intelligences to bodies, those bodies would first be of the crabby variety.

This was back in the summer of 2018. Maybe I was distracted by being back in school… or the lure of the Wasatch Mountains in June kept me off of Google News… or this story wasn’t received with much fanfare. Maybe there was some massive global collective trauma soon to come, and we forgot everything that happened back then.

If you forgot or missed this information at the time, let me share what I can only describe as an actual idea.

Legendary UCSD biologist Alysson Muotri wants to put Neanderthal mini-brains on mechanical crab bodies, and pit them against crabs with human mini-brains to study the differences between Homo sapiens sapiens and the extinct hominid some of us share DNA with.

Incidentally, this is exactly what I would tell my department head if I was trying to build a race of cyborg crabs on campus…

Muotri has developed the modern human brain organoids to the stage where his team can detect oscillating electrical signals within the balls of tissue. They are now wiring the organoids to robots that resemble crabs, hoping the organoids will learn to control the robots’ movements. Ultimately, Muotri wants to pit them against robots run by Neanderoids. (Note: Neanderoids are organoids grown from Neanderthal brain tissue.)

I am making light of some absolutely remarkable work on the forefront of biology and computer science, intended to benefit many people… so I will stop the jokes here. Most important to note is that he is connecting computers to artificial brains.

Some of the most astounding technologies of the coming age are emerging, and the brain-computer interface is something we really need to be talking about more.

Neuralink and the Brain-Computer Interface

Elon Musk and his companies dominate the public consciousness.

Elon Musk is a hard person to write about. An army of idolizing followers and an army of detractors follow his every move.

While treading the fine line between these two camps and my own personal beliefs, I think an intellectually honest person would agree that the man funds some very interesting businesses. Most of his attempts are not original ideas. Some of them are right for our time, and some of them don’t really make any sense. (I am sorry, but Hyperloop is not going to happen.)

William Heath’s 1829 engraving — Credit: UNIVERSAL IMAGES GROUP/GETTY IMAGES

Elon’s most successful projects follow this general scheme: Take a working technology — make some realistic strides forward — and fund it through the regulatory hurdles to make it first to market with your improved features.

Electric cars…reusable privately launched rockets… and now the brain-computer interface (BCI).

BCIs largely work due to an incredible characteristic of the brain called cortical plasticity. In simple terms, your brain will adapt to the electrodes placed in its outer layer and treat them like other sensory or effector channels.

This work emerged in the United States in the 1970s at UCLA, funded by the National Science Foundation and later DARPA. This arcane research has progressed dramatically in the past 50 years. Some of the more recent findings have been mind-boggling, to say the least. Multiple research teams have managed to transmit simple thoughts from one human subject’s brain to another’s. In another fascinating saga, scientists have connected the motor cortices of mice in such a manner that when they lift one mouse’s tail, the other mouse, a thousand miles away, is induced to lift its tail. We have made significant strides in interpreting brain activity and, to a lesser degree, in passing information back to the brain through computers.

After many years of research and experimentation on animals, the first functional neuroprosthetics appeared in the 1990s.

Musk founded Neuralink in 2016 with the initial focus of creating a high-bandwidth, low-latency connection between the human brain and external devices such as computers and prosthetic limbs. In less than a decade, they have announced approval for a modest human trial, in which subjects will be implanted with a flexible panel of electrodes that wirelessly transmit brain activity to an app.

I think we can generally agree that a few electrodes in a human brain, read out so that a patient can control a prosthetic device, is important work that will likely improve quality of life for millions of people. Restoring function for those who need it is the first and safest step.

In the longer term, BCIs have the promise of improving function for millions, furthering neurological research and understanding, and eventually enhancing human capability. Who wouldn’t appreciate a little extra memory or processing power?

You already installed this bad boy so you could play Majora’s Mask. The VR games you control with your mind will be kind of like this, but better, and also maybe inside your skull.

A critical distinction to make is between BCI input and output. Would you convey to an AI that you want help drafting an email? Would you let the AI write and send that email for you? One situation is entirely in your control, and helpful. The other is potentially more helpful and far more risky. A BCI reading your brainwaves opens up numerous ethical discussions on its own, but adding in write access ratchets up both the benefits and the risks. Maybe we could cure addiction. Maybe we could lessen the impacts of anxiety. Once we open the door to modifying people’s thoughts and behaviors, there is infinite room for quality-of-life improvement and man-made horrors beyond our comprehension. Let’s take a high-level look at some of the potential risks and considerations.

Ethical Considerations

  1. Mind Privacy and Security: BCIs could potentially dive into the deep recesses of our minds. Keeping our thoughts and neural data private and secure is like guarding the secrets of a treasure chest. Try to imagine the consequences of a malicious actor reading your thoughts. You remembered your bank details? Accounts drained. You accidentally had an unsavory thought and your husband was monitoring you? Husband is out of here. Your government has been taken over by the Communists? Fascists? Neanderthal-brained crab robots? Here is 24 hours of incredibly unpleasant neural activity that drives you into submission after you had a controversial thought. This problem will be compounded by the emergence of cryptography-breaking quantum computers, and quantum-robust encryption of these systems will have to be thought through before, not after, they are implemented at any meaningful scale.
  2. Consent and Knowledge: Before diving into the world of BCIs, it’s crucial that folks understand what they’re getting into. It’s like signing up for a high-stakes adventure; you should know the risks and rewards, lest you sign a thousand-page EULA that gives a company the right to stream McDonald’s advertisements directly into your brain.
  3. Free Will and Control: Advanced BCIs open the door for both coercion and explicit control. We must take aggressive preemptive steps to protect fundamental human rights.
  4. Equality and Access for All: Imagine BCIs as superpowers. There is a high risk of increasing social stratification both internally in the first mover countries, and generally between countries that have access to these advanced technologies. It’s not about where you’re from or who you are; it’s about your right to access and wield this technology. If humanity makes a leap forward, hopefully we can do it without leaving people behind.
  5. Regulatory Complexity: BCIs need rules and regulations. Balancing the desire for innovation with the need for oversight should be a very high priority. Otherwise, there will be some wildly terrible things happening on the fringes of society. As we have observed with AI and Cryptocurrencies, it would appear that complex emerging technology is very difficult for regulators to understand and competently act upon.
  6. Protection for Vulnerable Souls: Some folks, especially those in challenging situations, might be more susceptible to the allure of BCIs. We need safeguards to shield them from harm.
  7. Long-Term Unknowns: BCIs are still a bit of a mystery when it comes to long-term effects. It’s like embarking on a quest without knowing what lies at the end of the road. We must keep a vigilant eye on our adventurers.
  8. Philosophical Musings: BCIs can make us ponder deep questions about ourselves. It’s like staring into the existential abyss, asking who we truly are, and whether our thoughts are really our own. BCIs can lead us on an identity quest, challenging our authenticity. It’s like questioning whether we’re still ourselves after a profound transformation. What will happen when everyone realizes that all of our thoughts are electro-chemically deterministic? Will there be large groups of people who will reject these powers for philosophical reasons? What would we owe to them when they can no longer compete effectively in the economy?

These ideas should not be foreign to us, since these discussions are already happening with silicon-based intelligence. These discussions also echo a broader conversation — the Human Enhancement Dilemma. Whether we’re enhancing human abilities through silicon-based or biological intelligence, the need for methodical preparation is evident, lest we sow the seeds of our own demise.

The Specter of Synthetic Biological Intelligence

Intelligence is a horrendously muddy word.

In the context of artificial intelligence (AI) and synthetic biological intelligence (SBI), the term intelligence often refers to the capacity of a system to process information, learn from data, and make decisions or solve problems autonomously. However, even within these fields, the concept of intelligence can take on different meanings and interpretations, further contributing to its “muddy” nature.

Understanding and discussing intelligence, whether in the context of humans, machines, or biological systems, requires a nuanced and context-dependent approach that acknowledges its multifaceted nature and the challenges associated with its definition and measurement.

Less important practically, but very important ethically, is another equally muddy word: consciousness. When a “computer” stores information exactly as we do, has biological memories, makes its own decisions, and learns biologically, can we call that computer anything other than intelligent? If it is internally aware, made from brain tissue, and more capable than a human, are we more likely to accept it as conscious? Is this fair to silicon-based intelligences? Is there really even that much of a difference between carbon and silicon intelligence?

  1. We are developing silicon-based intelligence
  2. We are developing interfaces between brains and computers
  3. We are growing brains in labs

You don’t need Sherlock Holmes to deduce that if people are already trying to put extinct hominid brain tissue on crab robots, there will be a team trying to put those three things together.

Oh wait…

Cortical Labs grew ~800,000 human neurons over a microchip and, using a simple form of reinforcement learning, taught the brain-in-a-petri-dish to play Pong with a few minutes of training.

Meet Dishbrain.

Electron microscopy image of Dishbrain
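Cortical Labs’ actual training signal is subtler than textbook reinforcement learning (they reward the culture with predictable stimulation and “punish” it with noise), but the closed-loop idea — observe, act, get feedback, adjust — can be sketched with an ordinary tabular learner. Everything below (the states, actions, and reward) is a toy illustration I made up, not their protocol:

```python
import random

# Toy closed-loop "Pong" learner. States describe where the ball is
# relative to the paddle; actions move the paddle. Each loop iteration
# observes, acts, receives a reward, and nudges its action values --
# the same observe/act/feedback cycle used on DishBrain, in spirit only.

STATES = (-1, 0, 1)    # ball left of / aligned with / right of the paddle
ACTIONS = (-1, 0, 1)   # move paddle left / stay / move right

def train(episodes=2000, lr=0.5, eps=0.1, seed=0):
    """Epsilon-greedy tabular learning on the toy task."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)                 # where the ball appears
        if rng.random() < eps:                 # explore occasionally
            a = rng.choice(ACTIONS)
        else:                                  # otherwise act greedily
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        reward = 1.0 if a == s else -1.0       # moving toward the ball pays
        q[(s, a)] += lr * (reward - q[(s, a)])  # nudge the action value
    return q

def policy(q, s):
    """Greedy action for a state after training."""
    return max(ACTIONS, key=lambda act: q[(s, act)])
```

After a couple of thousand episodes the greedy policy tracks the ball — the point being that the feedback loop, not the substrate, is what does the teaching, whether the “table” is a Python dict or 800,000 neurons on a chip.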

The human brain has some 86 billion neurons. Someday in the not-so-distant future, we might bear witness to human brain tissue growing in a clean room over a large scaffold of silicon, gently misted with some kind of culture medium keeping the cells alive. Maybe it will have even more neurons than the human brain.

The horizon of technological possibilities stretches wide: we’re witnessing the growth of human brain tissue on silicon scaffolds, the melding of minds and machines through brain-computer interfaces, and the development of ever-more capable silicon-based intelligence. As we venture into this uncharted territory, we can’t help but wonder what lies ahead. Will synthetic biological intelligence surpass its silicon counterparts? Will we recognize these creations as conscious beings? Could they offer a potential safeguard against the dominance of strong silicon AI, or will they herald our swan song?

These are not just questions for the realm of science fiction; they are the ethical quandaries that will shape the future of technology and society. Whether this type of research is allowed to continue, and how we navigate the intricate web of intelligence, consciousness, and ethics, will ultimately determine the path we tread in this brave new world of possibilities.

I am cautiously optimistic. These avenues of research may be our best shot at not losing out to silicon. We should start having these conversations sooner rather than later.

Thanks for reading!

I’ll see myself out.

Alexa, play “Radioactive” by Imagine Dragons.
