Bio-Computers: Physical Neural Nets

calhoun137 · Published in Analytics Vidhya · 5 min read · Mar 25, 2020


This is the second article in a series based on my original work on the theory of self-reproducing machines. The first article was about bio-programs, which are self-replicating computer programs.

In this article I lay out a new electronic architecture for computers which is exponentially more efficient than modern computers in a sense I will define. I will show how modern computers are a special case of physical neural networks and am calling this subject bio-computers. I will explain what I have discovered and what areas require further research.

NAND gates and modern computers

It is well known that a modern computer has a CPU which processes instructions and performs arithmetic operations, but is otherwise made almost entirely out of NAND gates.

NAND Gate

A NAND gate is very easy to describe: it is a small transistor circuit with 2 wires coming in and one wire going out. The outgoing wire is in the “on” state unless both incoming wires are “on”; in that case the outgoing wire is “off”.
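As a quick sanity check, the NAND rule fits in a few lines of Python (the function name is mine, purely for illustration):

```python
def nand(a, b):
    """NAND: output is "on" (1) unless both inputs are "on"."""
    return 0 if (a == 1 and b == 1) else 1

# Full truth table: only the (1, 1) input turns the output off.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
```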

A “physical neuron” is a transistor circuit with an arbitrary number of incoming wires and one outgoing wire; the outgoing wire is in the “on” state only when a specified condition, called the “threshold function”, is met. The threshold function and the number of incoming wires are free parameters of the system.

(Image credit: the Neural Networks and Deep Learning book — check out that link!)

A first approximation to a bio-computer is a physical object, a circuit board, made out of physical neurons instead of NAND gates specifically, but otherwise the same as a modern computer. It’s clear that a modern computer is a special case of a bio-computer, since a NAND gate is just a physical neuron with two incoming wires and a particular threshold function.
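To make the special-case claim concrete, here is a sketch of a physical neuron as a threshold over weighted inputs. The particular weights and threshold below are one illustrative choice that recovers a NAND gate; they are not the only choice:

```python
def neuron(inputs, weights, threshold):
    """Fire (1) when the weighted sum of the inputs meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# A NAND gate is a 2-input neuron with weights (-1, -1) and threshold -1:
# the weighted sum is minus the number of "on" inputs, which falls below
# the threshold only when both inputs are on.
def nand(a, b):
    return neuron([a, b], [-1, -1], threshold=-1)
```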

Electronic Architecture and Algorithmic Efficiency

It’s generally understood that the efficiency of an algorithm depends on the implementation details and can be measured using Big-O notation. For example, the quicksort algorithm is said to have an average-case efficiency of O(n log n).
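To make that figure tangible, here is a small experiment (my own sketch, not from the article) that counts element comparisons in a textbook quicksort on random data and confirms the total stays far below the n² of a naive sort:

```python
import random

def quicksort(xs, counter):
    """Textbook quicksort that tallies element comparisons in counter[0]."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)  # one comparison against the pivot per element
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

random.seed(0)
data = random.sample(range(10_000), 1_000)
count = [0]
result = quicksort(data, count)
print(count[0])  # on the order of n log n, far below n^2 = 1,000,000
```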

It’s not as well known that the efficiency of an algorithm also depends on the underlying electronic architecture of the computer it is being run on. This is the source of “exponential” gain that is achieved by bio-computers over modern computers.

As we will shortly see, this exponential gain in efficiency is not a direct result of any kind of self-reproduction process; rather, the new architecture is especially well suited to programs which are self-replicating in nature, as described in the previous article in this series.

A quick example will illustrate the point. Suppose we want to write a program which, given any 2 numbers less than 100, returns the result of dividing the first number by the second. It’s clear how to write such a program on a modern computer; say the efficiency of that algorithm is E. Here is a method to build a physical circuit with efficiency O(1): create a lookup table for all 10,000 possible combinations of 2 numbers less than 100, and then, no matter which 2 numbers are input, simply look up the answer in the table.
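The lookup-table circuit can be sketched in software like this (division by zero is left undefined here, a detail the thought experiment glosses over, so the table holds 100 × 99 entries):

```python
# Precompute every quotient for numerators 0..99 and denominators 1..99.
# Building the table costs O(n^2) space up front; each lookup is then O(1).
TABLE = {(a, b): a / b for a in range(100) for b in range(1, 100)}

def divide(a, b):
    """Constant-time division for 0 <= a < 100 and 1 <= b < 100."""
    return TABLE[(a, b)]
```

This is the classic space-for-time trade: the circuit does no arithmetic at all at query time, which is the source of the O(1) claim.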

To see how this kind of exponential gain in efficiency is related to neural networks, we can imagine the image recognition problem as being, roughly speaking, “in the class NP”; yet neural networks seem, at least experimentally, to solve this type of problem “in P time”.

Dynamic Electronic Architecture

The main aspect of a more general bio-computer which distinguishes it from a modern computer is that, as it is used, its underlying electronic architecture is modified in such a way that over time it becomes more efficient at exactly the things it is used for most frequently.

For the sake of discussion, imagine a neural network with an arbitrary number of layers where each physical neuron in each layer has a constant number, say 25, wires coming in from the previous layer.

The way this process works is that a new type of processor, also included in the bio-computer, uses machine learning algorithms to adjust the weights of all the wires, as well as the threshold functions of the various nodes, while the computer is in use.
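The article does not specify which learning algorithm this extra processor runs. As one illustrative possibility, here is the classic perceptron rule, nudging a single neuron’s weights and threshold toward a desired behavior as examples arrive, here teaching a 2-input neuron the AND function:

```python
def train_step(weights, threshold, inputs, target, lr=0.1):
    """One online update toward the target output.

    This is the classic perceptron rule, used here purely as a stand-in
    for whatever algorithm the bio-computer's extra processor might run.
    """
    total = sum(w * x for w, x in zip(weights, inputs))
    output = 1 if total >= threshold else 0
    error = target - output
    weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    threshold = threshold - lr * error  # a lower threshold makes firing easier
    return weights, threshold

# Example: teach a 2-input neuron the AND function through repeated use.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, threshold = [0.0, 0.0], 0.0
for _ in range(100):
    for inputs, target in examples:
        weights, threshold = train_step(weights, threshold, inputs, target)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the updates settle after finitely many mistakes; after training, the neuron reproduces the AND truth table.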

Areas for Further Research

There are many aspects of this process I do not completely understand at the time of writing this note, but I am confident there is a lot to work with here.

The main idea is to follow the way the brain works from the point of view of a bio-computer, and to use methods from pure mathematics and computer science to replicate these types of functions in a physical bio-computer.

It’s not necessary to begin from a starting point of dynamically generating new layers, turning nodes on and off, doing feedback, or any of the other things we see in the human brain.

A much better starting point would be to build a physical bio-computer out of transistors, begin to identify the most important areas of research that will be required, and construct a more detailed plan of attack for this exciting new area of computer science.

In the next article I will explain how the computational power of bio-computers led me to a new conjecture about the P vs NP problem.

tl;dr

bio-computers are physical neural networks that have a new kind of processor and that get faster over time at the things they are most frequently used for, and which suggest new avenues for scientific investigation.
