Quantum + Decentralized

Jon Paprocki
The Advancedness Project
16 min read · Feb 1, 2018

This post is the first in a planned series of 3 where I’ll try to sketch some ways in which A+B computing may be interrelated, where A, B are distinct elements of {Quantum, Decentralized, Qualia}. This will be a bird’s eye view of future content.

I expect Quantum+Decentralized (QD from here on) to be the QDQ interlink I’ll write about the most, given that it is the least speculative interlink and the one I know the most about. The current thesis statement for this blog is that biological organisms may be understood as large-scale decentralized quantum computers, some aspects of which may be experienced by a conscious entity via qualia and qualia computing. This is not a precise statement at all, and one primary focus of this exercise is to make it more precise and use it as a motivator for investigating the possible role of quantum information processing in biological organisms and how (quantum) information moves through biological systems. While the potential role of QDQ computing in biology is the primary inspiration for this blog, I expect to focus more on the actual computing side of things most of the time, given my relative lack of expertise in biology.

Today, we will be touching on a few aspects of what makes quantum information different from classical information in terms of storage and transportation. Quantum information networks require an entirely new way of thinking about how agents communicate with one another. This will be followed up by some speculation on what roles hardware, software, and humans play in a quantum internet, direct ways in which quantum computing efficiency may be improved with distributed computing and vice versa, and some thoughts on quantum information in biological systems. Along the way, we will encounter time crystals, teleportation, fractals, lasers, and other friends of quantum computing.

Today’s post is brought to you by lasers. Lasers. They’re the best.

Before we dive into the fun stuff like a decentralized quantum internet and biological quantum information factories, we need to discuss the most important difference between the transmission of classical information and the transmission of quantum information. This hard limitation makes the possible role of quantum computers in a distributed network more complicated, and generally affects every single aspect of working with quantum information. I am speaking of the no-cloning theorem. This elementary theorem states that an unknown quantum state cannot be “copied and pasted”. The very short proof is essentially that a “cloning operator” would have to be nonlinear, and quantum mechanics only permits linear operators.
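
For the curious, here is the standard textbook version of that argument, with α and β the amplitudes of an arbitrary qubit. Suppose a single linear map U cloned both basis states, i.e. U|b⟩|0⟩ = |b⟩|b⟩ for b = 0, 1. Applied to a superposition, linearity forces

```latex
U\big(\alpha|0\rangle + \beta|1\rangle\big)|0\rangle
  = \alpha\,|00\rangle + \beta\,|11\rangle
\;\neq\;
\alpha^2|00\rangle + \alpha\beta\,|01\rangle + \alpha\beta\,|10\rangle + \beta^2|11\rangle
  = \big(\alpha|0\rangle + \beta|1\rangle\big) \otimes \big(\alpha|0\rangle + \beta|1\rangle\big),
```

and the two sides agree only when αβ = 0, i.e. only for the basis states themselves. So no linear (and in particular no unitary) cloner exists.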

Put another way, “copying and pasting” an unknown quantum state is impossible for quantum systems (and quantum computers in particular). Since we are focusing on quantum computing here, we can consider “quantum state” to just mean some superposition of N qubits.

Performing a “cut and paste” of an unknown quantum state, on the other hand, is possible, and it is accomplished via quantum teleportation. A brief description of the quantum teleportation protocol is due here (but it ought to get its own post someday): given a single qubit one wishes to “cut and paste”, or teleport, from A to B, one starts by producing an “EPR pair” of maximally entangled particles (usually photons; thank you, lasers) at A and then sending one of the pair to B. A then performs a certain measurement on her half of the EPR pair and the qubit to be teleported, and sends the results (2 ordinary bits) to B, who is then able to use that information and his half of the EPR pair to reproduce the teleported qubit.
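
To make the bookkeeping concrete, here is a minimal numpy simulation of teleporting one qubit. This is a textbook sketch with simulated measurement, not any lab’s implementation, and the amplitudes alpha and beta are arbitrary made-up values.

```python
import numpy as np

rng = np.random.default_rng()

# Single-qubit gates and the CNOT (control = first qubit)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# The unknown qubit Alice wants to "cut and paste" to Bob
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta], dtype=complex)

# EPR pair shared between Alice (qubit 1) and Bob (qubit 2)
epr = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Three-qubit state: qubit 0 = psi, qubits 1 and 2 = EPR pair
state = np.kron(psi, epr)

# Alice entangles her qubits: CNOT(0 -> 1), then H on qubit 0
state = np.kron(CNOT, I2) @ state
state = np.kron(np.kron(H, I2), I2) @ state

# Alice measures qubits 0 and 1; rows of the reshape index (q0, q1)
probs = (np.abs(state.reshape(4, 2)) ** 2).sum(axis=1)
outcome = rng.choice(4, p=probs)
m0, m1 = outcome >> 1, outcome & 1

# Bob's qubit collapses to the branch Alice measured
bob = state.reshape(4, 2)[outcome]
bob = bob / np.linalg.norm(bob)

# The 2 classical bits tell Bob which corrections to apply
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print("teleported state:", bob)  # (0.6, 0.8j) for every measurement outcome
```

Note that Alice’s original qubit is gone after her measurement (all she has left is two classical bits), which is exactly why this is a “cut and paste” and not a “copy and paste”.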

Making a perfect quantum copy of yourself is impossible. You are “quantum copy protected software” — you cannot reproduce your quantum state somewhere else without destroying the original!

Thus, whenever one wants to “copy” quantum information from one quantum computer to another, what is actually required is the program used to produce that quantum information (i.e. the quantum state you wish to copy), and the quantum computer being “copied to” has to actually run the program itself (or the original quantum computer needs to make a second copy to send). This is very different from how classical computers communicate, where for example someone may spend hundreds of hours of supercomputer time computing a simulation of, say, turbulence, yet you can access all of the velocity data from that simulation without recomputing it yourself. Quantum computers have no such luxury, but at least this dramatically reduces the number of possible ways in which quantum information processing can play a role in biological organisms.

For example, if RNA folding somehow implements quantum computing, in that the folding process results in a protected quantum state delocalized across the molecule and determined by the nucleotide sequence, then it may make sense for some things that directly interact with RNA to have a quantum computational aspect. Thus if important quantum information is stored as a folded-up RNA molecule, the way that quantum information gets copied is by making a copy of the RNA molecule itself. Given the complexity and delicacy of the quantum teleportation protocol, I would be surprised to discover teleportation playing a role in biological systems, but I do not want to rule it out entirely. In general, though, our running assumption will be that if quantum information stored in the quantum state of, say, a protein needs to be “copied”, it will be crudely encoded as a series of amino acids (aka the primary structure), and then the “copied” quantum state is imperfectly reproduced as the new protein molecule folds up. The short (15 minute) answer for why I work on this assumption is this video I linked to in the Introduction post; the long answer ought to be a recurring topic. I should take this opportunity to emphasize that these ideas are nowhere near proven; I just find them reasonable enough to speculate further on and share with a wider audience than they would normally find in academic literature.

There is one very important caveat here that I have not touched on, which is a generalization of the no-cloning theorem called the no-broadcasting theorem. In this bird’s eye view post, I do not want to get into the technical difference between the two theorems (which is the difference between a pure state and a mixed state). This theorem generalizes the no-cloning theorem: you certainly cannot copy your pure quantum state to multiple parties, let alone just one other, but it also tells us that if you have multiple copies of a mixed state (which for now I will abusively just call a “lower quality” quantum state), then you can make more copies of it! This process of making “lower quality” copies out of other lower quality copies is called broadcasting. Bizarrely, you can even improve this “loss of quality” when you make the new copies, in a phenomenon called superbroadcasting! While it sounds ridiculous, I will attempt to speculate on the possible role of something analogous to superbroadcasting in biological systems further below.

At the moment, the relationship between quantum and decentralized computing is entirely hypothetical, though also inevitable, as all computing trends are interrelated. In order to make things interesting, we have to predict what kind of computing environment might exist when both of these technologies come to fruition. The general assumption I will run on is that there will eventually be some kind of “worldwide supercomputer”. Nearly all devices will be able to communicate with one another, and furthermore one will be able to utilize unused processing power on any device in the network (assuming the owner allows you to do so), probably paying for that privilege with some kind of cryptocurrency. Such a computing environment would also provide privacy if desired, easy access to data from any agent in the network that is offering/selling it, AI to process this data, decentralized web hosting and file storage, and many other things cryptonerds like to speculate about. I don’t have a time frame in mind for when I think such a thing will emerge, since the primary obstacles slowing down the emergence of a worldwide supercomputer are political and social in nature. Ceptr will generally be the model of a “worldwide nervous system” and biomimicry-based computing paradigm at the forefront of our mind when writing for this blog, given that it’s the only project in this vein I know about.

The most direct way I can imagine for QD computing to work in harmony is by solving routing problems, such as preventing/mitigating traffic jams and efficient routing on a mesh network. Deciding how to most efficiently route traffic can be attacked with quantum annealing, a particular style of quantum computation which requires much lower quality qubits than a universal quantum computer. The primary player in this field is D-Wave, who currently claim to have a working quantum annealer with more than 2000 qubits (compared to the ~50 qubits of the largest universal quantum computers at the time of writing).
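
For flavor, here is a toy version of the kind of problem an annealer chews on, loosely in the spirit of the Volkswagen taxi experiment mentioned below: each car picks one of a few candidate routes, congestion between cars becomes a quadratic penalty, and the whole thing is a QUBO (quadratic unconstrained binary optimization) instance. All the numbers here are made up, and at this toy size brute force stands in for the annealer.

```python
import itertools
import numpy as np

# Toy traffic-routing QUBO: variable x[c, r] = 1 iff car c takes route r.
n_cars, n_routes = 3, 2
rng = np.random.default_rng(0)
# overlap[c1, r1, c2, r2] = shared road segments between two choices (made up)
overlap = rng.integers(0, 3, size=(n_cars, n_routes, n_cars, n_routes))

n = n_cars * n_routes
idx = lambda c, r: c * n_routes + r
Q = np.zeros((n, n))
P = 10.0  # penalty weight enforcing "exactly one route per car"

for c in range(n_cars):
    # (sum_r x[c,r] - 1)^2 contributes -P on the diagonal, +2P on pairs
    for r in range(n_routes):
        Q[idx(c, r), idx(c, r)] -= P
        for r2 in range(r + 1, n_routes):
            Q[idx(c, r), idx(c, r2)] += 2 * P

# Quadratic congestion cost between distinct cars' route choices
for c1, c2 in itertools.combinations(range(n_cars), 2):
    for r1 in range(n_routes):
        for r2 in range(n_routes):
            Q[idx(c1, r1), idx(c2, r2)] += overlap[c1, r1, c2, r2]

# An annealer samples low-energy bitstrings of x^T Q x; brute force here
energy = lambda x: x @ Q @ x
best = min(itertools.product([0, 1], repeat=n),
           key=lambda bits: energy(np.array(bits)))
print("best assignment:", best, "energy:", energy(np.array(best)))
```

The real experiments are of course far more involved, but the shape is the same: cast the routing decision as minimizing a quadratic form over bits, and let the annealer hunt for low-energy configurations.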

Mesh networking may be the only way the internet will ever be truly free from the control of corporate or state entities. The short version of this story is that mesh networks work by routing connections through ordinary devices rather than along centralized backbones. Thus there are no central points of failure or control: the only way to take down or control the network is to take down nearly every node at once.


A few local mesh networks exist; a couple that I am aware of are in NYC and Spain. The challenges of building them are substantial and not to be underestimated, and as a general rule they are slower than centralized networks. Here is one creative proposal for bootstrapping a worldwide mesh network in Third World countries that already have poor internet service.

One possible way to incentivize setting up mesh network nodes would be via cryptocurrency. When your traffic routes through a given node, you would pay the owner of the node with a cryptocurrency microtransaction. These microtransactions would effectively replace the money currently paid to ISPs, going instead directly to whoever is serving or routing content. One would hope that the economics work out so that this is much cheaper as well! This post balances this optimism for a cryptocurrency-powered mesh network with some pessimism. I think that a mesh network based on Holochain resolves many of the purported issues, but I’ll leave that to a future post to elaborate on.
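
As a sketch of what fee-aware routing might look like, here is a minimal example where a sender picks the path through the mesh that minimizes the total microtransaction fees quoted by relay nodes. Everything here (node names, fee units, topology) is hypothetical, and a real mesh protocol would also weigh latency and reliability.

```python
import heapq

# Hypothetical mesh: each node quotes a per-hop relay fee (made-up units)
fees = {"A": 0, "B": 2, "C": 1, "D": 3, "E": 1}
links = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"], "E": []}

def cheapest_path(src, dst):
    # Dijkstra over cumulative fees charged by the nodes a packet visits
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in links[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + fees[nxt], nxt, path + [nxt]))
    return None

print(cheapest_path("A", "E"))  # -> (2, ['A', 'C', 'E'])
```

A real network would recompute this constantly as nodes join, leave, and reprice themselves.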

A scene from Koyaanisqatsi, a nonverbal movie which explores our relationship with Nature and technology.

Utilizing quantum computing to improve traffic routing is already past the hypothetical stage: Volkswagen has experimented with quantum annealing to improve taxi routing in Beijing. Self-driving cars are predicted to improve traffic by a huge amount on their own; combined with more efficient routing from quantum computers, it’s not impossible to believe that today’s traffic problems will be at least temporarily vanquished by technology sometime in the next 10–25 years.

However, all non-asymptotic growth is unsustainable, and technology and the associated resource consumption can only ever offer stopgap solutions to the problems of growth. The only way to escape this cycle is to stop growing! I don’t know how to convince humanity to stop growing, but I at least have faith that somebody will be able to figure out how to do it in the potential post-scarcity, post-work society that we ought to reach for.

What would a “library of quantum states and programs”, or a “quantum GitHub”, look like given what we know about no-cloning, teleportation, and broadcasting?

Larger quantum computers could be used to produce quantum states that are then teleported to smaller quantum computers that would find them difficult or impossible to produce themselves. This is especially relevant when one considers the short decoherence times expected in the near future. If a larger quantum computer can produce a desired quantum state in 10ns and teleport it to a smaller quantum computer in 2ns, that is better than the smaller quantum computer spending, say, 50ns producing the same quantum state.

For example, if you want to simulate a protein binding to a receptor, instead of producing the quantum state of the protein and receptor yourself on your rickety old garage quantum computer that was already obsolete the second it was created, you could outsource the production of a protein quantum state and a receptor quantum state to some larger, faster quantum computer. These states are then teleported to your junker of a quantum computer (that doesn’t even have time crystal memory, wow), where they can be employed in your simulation. Outsourcing the production of these complicated quantum states to larger, higher fidelity quantum computers saves precious nanoseconds of coherence time and also enables a given quantum computer to perform slightly better when the teleported quantum state was produced with higher quality qubits.

Thus, creating a “quantum internet” (essentially an organized way for quantum computers to exchange quantum information via quantum teleportation) will have a synergistic effect, improving the capabilities of all quantum computers in the network. Nailing down an extremely reliable quantum teleportation protocol that can be replicated on a massive scale is an incredibly difficult task, but it does not appear totally out of reach. In 2017, a large Chinese team successfully teleported qubit states from the ground to an orbiting satellite. The sci-fi dream of teleporting quantum states between labs across the Earth via satellite ought to be realized in the not-so-distant future.

One reason quantum teleportation ought to be possible on a massive scale is that essentially the only huge technical challenges beyond those that already exist for unnetworked quantum computers are the transportation of entangled EPR pairs between the sending and receiving sites, and the translation of quantum information from the original architecture to an optical architecture. The EPR pair is typically a pair of photons, which can be transported via fiber optic cables (thank you, lasers). Quantum computers will enhance our understanding of how to do this effectively, so I am optimistic about the possibilities of a quantum teleportation network.

Given the presumption that the members of the EPR pairs for a quantum teleportation network are photons, and that photonic quantum computation is generally not being considered at a commercial scale (though it is perfectly suited for experiments with long-range entanglement), one can imagine that the equivalent of a quantum computer’s “network interface controller” would be a mechanism that translates between the native architecture of the quantum computer and a photonic architecture, plus some mechanism for producing, sending, and receiving EPR pairs (as well as a classical communication channel).

A much more sophisticated and futuristic possible “quantum internet” would involve being able to entangle qubits between quantum computers in a more complicated manner than just exchanging EPR pairs to perform quantum teleportation. At the most complete level, this would essentially allow all sufficiently advanced quantum computers to coordinate in such a way that they may as well be considered one gigantic quantum computer. We have not even managed to do this with digital computers, but if done with quantum computers, our computing power would truly approach ludicrous levels, because every additional qubit doubles the number of classical values (amplitudes) being manipulated at once.
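
That doubling is easy to see by counting what a classical machine would need just to write an n-qubit state down, assuming 16 bytes per complex amplitude:

```python
# 2^n complex amplitudes are needed to describe a general n-qubit state.
for n in (10, 30, 50, 51):
    amps = 2 ** n
    print(f"{n} qubits: 2^{n} amplitudes, about {amps * 16:.2e} bytes to store")
```

Fifty qubits already demand tens of petabytes; one more qubit doubles it again.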

With quantum computers, Nature promises us the awesome power of Moore’s Law on Crack, but cackles as she tells us we can only reliably read N of 2^N bits!

A quantum computing network capable of “mass entanglement” is unimaginably difficult to create in comparison to one merely capable of mass teleportation. However, baby steps in this direction are possible as soon as a quantum teleportation network is established. With techniques like quantum machine learning, this problem may not be as intractable as it first appears.

Important to mention along these lines again is broadcasting: if you can afford to work with cheap, dirty quantum states sold at a discount price, then perhaps you can just have them broadcast from quantum GitHub instead of ordering them custom-made by the fanciest quantum computer in your arcology.

Storing quantum states for long periods of time is a problem unto itself, appropriately known as quantum memory. Existing proposals for quantum memories sound like they are straight out of a sci-fi novel. As mentioned above, one candidate is time crystals, a form of matter that is periodic in time instead of space. Periodicity in time suggests motion, and the idea is that time crystals are always “in motion”, just like a quantum state is always “in motion” (think of the fluctuations of a wave). Of course this looks wrong immediately, since it suggests that a time crystal is a perpetual motion machine, but this is not the case, because the motion happens in the ground state and so no energy can be extracted. Another candidate is fracton memories. Yes, these are related to fractals. How much cooler can it get? I do not know of a good popular account focusing on fractons, but this Quanta article has a segment on them that does a good job. I might write about them someday as well. Another candidate is optical quantum memories, with our good old friend the laser lending us a hand once again. I don’t know as much about these, but I think they still rely on error-correction techniques, unlike time crystals and fractons, which correct for errors at the physical level, like a hard disk does.

Time crystals and fractons. Advanced enough for you yet?

One recent study on the biological foundations of memory discovered something very interesting about how neurons involved in memory communicate. You really should read the article, but the summary is that there is a particular protein called Arc which behaves in a virus-like fashion and transports RNA molecules between neurons, and that this process is somehow involved in the storage of long-term memory.

This is relevant to us for a couple of reasons: if RNA stores quantum information, Arc is acting as a vehicle for moving that quantum information around; and the behavior of Arc itself is sophisticated enough to urge one to investigate how that behavior is encoded.

First, since we’re running with the hypothesis that folded RNA molecules, and possibly some folded proteins in general, somehow store useful quantum information, the immediate conclusion one would like to draw is that neurons are possibly trading this quantum information around via this Arc protein. Given how much more “dense” quantum information is than classical, this seems like an incredibly efficient way to store and move information. Also, as mentioned in the article, many copies of a given RNA molecule are made. Given that the quantum information will inevitably be stored as messy “mixed states”, having lots of copies of it being spread around could somehow lead to a superbroadcasting-like effect making the memories higher fidelity. But with no real knowledge of how exactly the RNA is involved, this is baseless speculation, and I don’t want to take the idea too seriously; rather, I’m just exploring how quantum information network concepts could apply in biological systems.

Second, the fact that the Arc protein acts in a virus-like fashion, infiltrating neurons to take RNA and then send it off elsewhere, leads one to consider that there has to be some incredibly sophisticated way of storing or “programming” this sort of behavior within the structure of the protein itself, since there is no distinction between hardware and software in biology. From a reductionist standpoint, it is almost tautological to say that this is a result of quantum mechanics, since everything is a result of quantum mechanics. So we should try to be more precise. If the behavior were stored as quantum information within the quantum state of the protein, it would have to somehow be protected from the environment. Another way of saying that in computational terms would be to say that it is self-error-correcting: the environment disturbs the quantum state, creating an “error”, and there is some mechanism which automatically corrects that error. Only very exotic quantum systems correct their own errors, so knowing that such a thing must correct for errors at the physical level really narrows things down enormously.

For every universal quantum computer architecture other than topological quantum computers, error correction is where most of the resources are going. Estimates vary quite a bit, but in general you’re going to need many physical qubits for each logical qubit in these architectures. I don’t want to divert to a discussion of error correction in this post, but suffice it to say that the only real hope I can imagine for quantum information being stored on a protein is if it is topologically protected. This means that local disturbances to the protein, like something bumping into one part of it, will not affect the information being stored; only large-scale violent events that affect the entire protein at once will have a chance at altering that information. And right now the only candidate I know of for a relevant topological quantum field theory is SO(3) Chern-Simons theory, as Jørgen Andersen mentions in the video linked above. This idea desperately needs its own series of posts, but it is also incredibly difficult to write, so it may be a while!
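
To make “many physical qubits per logical qubit” concrete, here is a minimal sketch of the three-qubit bit-flip repetition code, the simplest quantum error-correcting code: one logical qubit is smeared across three physical qubits, and any single bit-flip can be located and undone. (Peeking at the state classically below stands in for measuring the stabilizers Z₀Z₁ and Z₁Z₂, which in real hardware reveal the error without disturbing the encoded amplitudes.)

```python
import numpy as np

rng = np.random.default_rng()
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)

# Encode a|0> + b|1>  ->  a|000> + b|111>  (qubit 0 = most significant bit)
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = alpha, beta

# A bit-flip (X) error strikes one random physical qubit
bad = rng.integers(3)
state = state[np.arange(8) ^ (1 << (2 - bad))]

# Syndrome extraction: parities of qubits (0,1) and (1,2) locate the flip
support = np.flatnonzero(np.abs(state) > 0)
b0, b1, b2 = (((support[0] >> (2 - q)) & 1) for q in range(3))
syndrome = (b0 ^ b1, b1 ^ b2)
flipped = {(1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]

# Recovery: apply X to the qubit the syndrome points at
state = state[np.arange(8) ^ (1 << (2 - flipped))]
print("recovered amplitudes:", state[0b000], state[0b111])  # alpha, beta again
```

Three physical qubits per logical qubit is the bare minimum for this one error type; protecting against realistic noise (phase errors too) pushes the ratio far higher, which is why a self-correcting, topologically protected medium would be such a big deal.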

A handful of pioneers have explored at a mathematical level some similarities between how biological processes behave and basic quantum mechanics. Louis Kauffman, whose work we will be exploring on a regular basis in this blog, laid much of the foundation for this in his under-appreciated paper Biologic. This paper explores some very uncanny isomorphisms between the logic of DNA replication, protein folding, knot theory, and the Dirac formalism of quantum mechanics. It is a primary aim for me this year to write about this and bring it to a wider audience, and relate it to ideas in this post, but given the bird’s eye view of this post I will refrain from diving into this very exciting topic for now.

Does knot theory secretly play a fundamental role in biology?

This post would not be complete without mentioning how quantum computing is relevant to security and cryptography on a decentralized internet. I am sorry to disappoint, but this is something I know very little about! It’s a gap I hope to fill someday. What little I know is that quantum computers that can break cryptography in a bad way are pretty far off (probably at least 15–20 years), there are excellent security protocols you can make use of with quantum computers (like knowing if anyone is eavesdropping), and there are post-quantum encryption methods for digital computers. To me, someone who has never had much interest in security, it just looks like moving the goalposts rather than anything that will totally change the face of security. Certainly anybody who has sensitive information that will still be relevant in 20 years stored in a cryptographic format that quantum computers will break ought to do something about that.
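
The “knowing if anyone is eavesdropping” trick deserves a quick illustration. Here is a classical simulation of the statistics behind BB84-style quantum key distribution: an intercept-resend eavesdropper cannot know which basis each qubit was prepared in, so her measurements inject detectable errors (about 25%) into the positions where Alice’s and Bob’s bases agreed. This is a sketch of the counting argument, not real quantum optics.

```python
import numpy as np

rng = np.random.default_rng()
n = 2000

alice_bits = rng.integers(2, size=n)
alice_bases = rng.integers(2, size=n)   # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(2, size=n)

# Intercept-resend attack: Eve measures in a random basis and resends.
# A wrong-basis measurement randomizes the bit she forwards.
eavesdrop = True
eve_bases = rng.integers(2, size=n)
eve_bits = np.where(eve_bases == alice_bases, alice_bits, rng.integers(2, size=n))
sent_bits, sent_bases = (eve_bits, eve_bases) if eavesdrop else (alice_bits, alice_bases)

# Bob reads the bit correctly only when his basis matches the sender's
bob_bits = np.where(bob_bases == sent_bases, sent_bits, rng.integers(2, size=n))

# Sifting: Alice and Bob publicly compare bases, keep the matches, and
# sacrifice a sample of those bits to estimate the error rate
keep = alice_bases == bob_bases
qber = np.mean(alice_bits[keep] != bob_bits[keep])
print(f"error rate on sifted key: {qber:.1%}")  # ~25% with Eve, ~0% without
```

If the estimated error rate is near zero, the remaining sifted bits become a shared secret key; if it is anywhere near 25%, someone was listening and the key is thrown away.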

QD computing has many interesting challenges and applications. The no-cloning theorem dramatically changes what it means to share and exchange data. The broadcasting effect has really interesting implications for the utility of data redundancy: the more copies of some given quantum information you have, the better the new copies are! Quantum information is shared via teleportation and lasers, and may be stored for longer periods on time crystals and in fracton arrangements. Quantum computers can solve routing problems for distributed networks, and distributed networks increase the power of every quantum computer in them, creating a sort of bootstrap. Biological organisms may already be utilizing some of the best, most efficient models for distributed quantum computing, and basing our distributed quantum computing systems on biomimicry may save us an enormous amount of trial and error. There is a lot more to come from this place, and I look forward to the challenge of understanding it!
