A.I. Crash Course: The Brain

Deep Dive into Artificial Intelligence

Unicorn
9 min read · Dec 30, 2022
An illustration of a neuron that resembles a tree branch
Thoughts become dreams. Dreams become premonition. Premonition becomes reality. (Clay Unicorn)

In the first part of this series, I wrote about the background of hardware and computing fundamentals, which are important to understand if you want to see the full picture of how A.I. really works. This article is the second piece: where technology meets biology.

But first, a quick anecdote. I want to acknowledge that I am no longer puzzled as to why people are so fascinated by A.I. and also see it as some kind of magic that only an elite few seem to understand. The reality is that A.I. concepts are an area nearly anyone can fully comprehend with some really basic foundational knowledge in three subject areas: math, biology, and computing. Please forgive any over-simplification in my writing style; I’m trying to cut out the jargon and domain-specific knowledge as much as possible to help the average person grasp these principles. If you find yourself wanting to spice things up but still learn about AI, you can go check out the parallel series exploring how I built an Indie Game Studio (almost) entirely with AI.

What we aim to achieve in this article is a precursor describing the fundamental concepts behind an employee management (EM) application. We will break down how the EM app uses various A.I. concepts to power a proprietary recommendation system in our own application.

We’ll use basic terms in this publication as much as possible, but there are definitions along the way that you may want to further explore if you want to fully comprehend them.

While the definition of consciousness is still one of the greatest mysteries of our time, the building blocks have been thoroughly tested and understood. Our brain is mostly just a vast network of tiny, simple cells that send signals. Those signals are actually fairly trivial on their own; it’s how they pass from one neuron to the next that builds a chain of complexity.

That same model exists in computing. There’s really nothing fancy or complex about an algorithm. In software development, an algorithm is basically just a chain of inputs and outputs, with logic to decide where to route each next step.
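As a toy sketch of that definition (all names here are hypothetical, in Python): an algorithm is just a function that takes an input, applies some logic, and routes the result to a next step.

```python
def route_signal(value):
    """Toy algorithm: take an input, apply logic, route to a next step."""
    if value > 0:
        return step_fire(value)
    return step_rest(value)

def step_fire(value):
    """One possible next step in the chain."""
    return f"fire ({value})"

def step_rest(value):
    """The other possible next step."""
    return f"rest ({value})"

print(route_signal(3))   # routed to step_fire
print(route_signal(-1))  # routed to step_rest
```

Chain enough of these input/logic/output steps together and you get the complexity described above.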

Understanding that computing and the brain are built on the same core building blocks, it starts to make a lot of sense that we are able to progress so rapidly in the field of A.I. It’s really all about adding more nodes, more chains, more layers. If we add enough layers, we may recreate a level of intelligence that starts to match what we see in conscious life.

The three elements of the brain that are most critical in developing A.I. systems are:

  • Data separation
  • Neurons
  • Basic logic

Data Separation

We often take this super power for granted: we are able to categorize and sort data very effectively, and we do so all day, every day. In fact, that very process in our brain is so significant that it’s why we need REM sleep and why we dream. That makes this area very easy to understand. Data can be broken into two types for our exploration: linear and non-linear.

Look at the following visual of linear vs non-linear data and you will probably intuitively understand the principles. There are red and blue dots plotted on a graph: can you draw a straight line dividing them, or do you have to separate them with a squiggle?

There are two graphs with red and blue dots. The first graph shows the red and blue dots grouped apart with space between them, so a diagonal line can be drawn separating them linearly. The second graph shows the dots mixed together, and a straight line will not cleanly separate them.

Linear vs non-linear is really all you need to understand when it comes to data separation in the context of AI. However, there are multiple angles you have to consider in how you might group or ungroup data. Consider the following: the dots can be red or blue, or a combination of red and blue (purple). The complexity of this data separation has now increased, because it’s not a binary system but rather a range of values.

So now that we have introduced the concept that red and blue are values that make up the color of a dot, there are a few cases to consider. We want to prefer linear grouping, as it is the simplest, and then deal with our non-linear groups. The following groupings should account for all situations when sorting our data:

  • On a linear scale: the color is red or it is blue.
  • On a linear scale: the color is neither red nor blue.
  • On a non-linear scale: the color might have some red.
  • On a non-linear scale: the color might have some blue.
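A minimal sketch of those four groupings in Python (the function name, group labels, and the 0-to-1 value range are illustrative assumptions, not from the article):

```python
def group_dot(red, blue):
    """Return every grouping (from the list above) that applies to a dot.

    red and blue are amounts in [0, 1] describing how much of each
    color the dot contains.
    """
    groups = []
    if (red, blue) in [(1, 0), (0, 1)]:
        groups.append("linear: red or blue")
    if red == 0 and blue == 0:
        groups.append("linear: neither red nor blue")
    if 0 < red < 1:
        groups.append("non-linear: has some red")
    if 0 < blue < 1:
        groups.append("non-linear: has some blue")
    return groups

print(group_dot(1, 0))      # a pure red dot: the simple linear case
print(group_dot(0.5, 0.5))  # a purple dot: lands in both non-linear groups
```

Notice that a purple dot falls into two groups at once, which is exactly why the non-linear cases take more work to sort.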

Now that we have an established method of how we sort data, we can get into what happens in our brains when we do this intuitively and start to understand how to replicate these data classification systems in a neural network, which is at the heart of all AI.

Neurons

We won’t get into micro-biology too much, except to draw parallels between neurons in the brain and perceptrons, their computer equivalent. If you understand either side of this equation, biology or computing, it’s sufficient to map the matching terminology:

  • A collection of communicating nodes is a neural network.
  • A neuron is a node.
  • Dendrites are inputs.
  • The nucleus is where the algorithms are stored and executed.
  • Axons are data transmitters.
  • A synapse is an output.

The dendrites accept incoming data. In computing, this is the Input in I/O. They receive information from elsewhere. They are pretty basic in concept, but an input may have a specific receptor, meaning it may only receive a certain type of input. I like to call this the round-peg input: it may allow a round peg but not a square one. Aside from that, it doesn’t really have any logic to it, just the types of inputs that fit inside. It’s important to note that most neural networks don’t add the round-peg criteria because they’re built with a specific data type in mind, as is the case in our example: the input will always be a color. So why add extra complexity expecting any other input? Still, it’s worth mentioning, because the input type can be variable if we want our system to account for it.

The nucleus is the decision maker; the algorithm. Just as the nucleus plays a role in a cell’s decision making, a perceptron is a simple algorithm used to classify input data. When a perceptron is presented with an input, it processes that input using weights and biases, then produces an output based on the result. The perceptron can then learn to adjust those weights and biases based on the output, in order to improve its classification accuracy.
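Here is a minimal perceptron sketch in Python built from that description (the learning rate, epoch count, and sample dots are illustrative assumptions):

```python
def perceptron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge weights and bias toward the targets."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - perceptron(x, weights, bias)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn a linearly separable rule: "is the dot more red than blue?"
# Each sample is (red amount, blue amount); label 1 means "red-ish".
samples = [(1.0, 0.0), (0.9, 0.2), (0.1, 0.8), (0.0, 1.0)]
labels = [1, 1, 0, 0]
w, b = train(samples, labels)
print(perceptron((0.8, 0.1), w, b))  # expect 1: classified as red-ish
```

Because "more red than blue" is a linearly separable rule, this single node is enough; the non-linear cases discussed earlier are where one perceptron stops being sufficient.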

Axons are the data transmitters. In the cell, the axon is a long, thin projection extending from the body of a neuron, responsible for transmitting electrical signals away from it. In machine learning algorithms, information is similarly transmitted between different components of the system, in a process known as forward propagation. Additionally, the axon is responsible for transmitting signals back to the originator, in a process known as back propagation, which allows the neuron to adjust its internal state based on the signals it receives from other neurons. In machine learning and AI, backpropagation is a key process for training algorithms: it allows the system to learn from the errors it makes and adjust its internal parameters in order to improve its performance. In this way, the axon of a neuron can be seen as a model for both the transmission of information and the process of learning and adjustment in machine learning and AI systems.
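To make that concrete, here is a single forward pass followed by one backpropagation-style weight update for a single weight (a squared-error loss and all of the numbers are illustrative assumptions):

```python
# One forward pass and one backward adjustment for a single weight.
w = 0.5            # current weight
x, target = 2.0, 3.0

y = w * x                 # forward propagation: the prediction
error = y - target        # how wrong the prediction was
gradient = error * x      # d(loss)/dw for loss = 0.5 * error**2
w = w - 0.1 * gradient    # move the weight against the gradient

print(w)  # 0.9: the weight moved toward a value that predicts 3.0
```

One step moves the weight from 0.5 to 0.9, and the new prediction (1.8) is closer to the target (3.0) than the old one (1.0) was; repeating this loop is, in miniature, how training works.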

The synapse (axon terminals) is basically data output and routing. Technically, a synapse is a small gap between two neurons, but the fundamental principle of how it works is that it allows for the transmission of signals between neurons. In the context of a neural network, a synapse is simply the connection between two nodes, or more colloquially: the I/O exchange. In a neural network, nodes are arranged in layers, and the connections between them are represented by these synapses.

Basic Logic

Logic is a fundamental concept in so many areas: computer science, scientific methods, and human reasoning, just to name a few. It is the study of the principles and methods of reasoning, especially of correct reasoning. In computer science, logic is used to design algorithms, programming languages, and software systems. In scientific methods, logic is used to develop hypotheses and theories, and to evaluate the validity of experimental results. In human reasoning, logic is used to analyze and evaluate arguments and to draw conclusions based on evidence.

One of the basic building blocks of logic is the statement, which is a declarative sentence that often resolves to something simple like true or false. Statements are used to express facts, opinions, or beliefs, and they form the basis for logical arguments.

Another important concept in logic is the operator, which can be represented through symbols or words. You already know many of these operators: less than <, greater than >, equals =, and so on. These operators connect two or more statements in a logical expression. Any of these operators can be used to create algorithms, which we explored in Part 1: the Machine.

This is where we start intersecting with data separation. Using operators we can start representing sorting algorithms using statements inside of a node. Is the color red? Is the color blue? A statement which can answer this question could read: if color is equal to red then it is true. These simple if/then/else based statements can work great for linear data, but not so great on non-linear data. More code is needed, more computational power, etc. So to optimize for this we often look at more advanced operators or using multiple layers in a neural network.
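A minimal sketch of such statements in Python (the function names are illustrative):

```python
def is_red(color):
    """Statement: 'if color is equal to red then it is true'."""
    return color == "red"

def is_blue(color):
    """Statement: 'if color is equal to blue then it is true'."""
    return color == "blue"

# Simple if/then/else sorting works well for linear data like this,
# but a mixed color like purple falls through to the catch-all branch.
for color in ("red", "blue", "purple"):
    if is_red(color):
        print(color, "-> red group")
    elif is_blue(color):
        print(color, "-> blue group")
    else:
        print(color, "-> needs non-linear handling")
```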

The XOR (exclusive OR) operator is one of the more complex operators as it is used to connect two statements in such a way that only one of the statements can be true for the overall expression to be true. The XOR operator is often used in computer science to represent conditional statements, where one action is taken if a certain condition is met, and another action is taken if the condition is not met. This is one way to solve for non-linear separation on a granular level (inside the node).
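In Python, XOR can be written directly from that definition:

```python
def xor(a, b):
    """Exclusive OR: true only when exactly one of the two statements is true."""
    return (a or b) and not (a and b)

# Print the full truth table.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", xor(a, b))
```

XOR's truth table is also the classic example of non-linear separation: no single straight line can split the true rows from the false ones, which is why a lone perceptron cannot learn XOR and layered networks are needed.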

The other way to solve for complex sorting of data is to separate concerns into layers. To keep our nodes as small and efficient as possible, we might have layer one simply test for the truthfulness of the input being red. Then layer two checks for blue. And layer three handles the exceptions that didn’t return true in the previous layers.
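That layered separation of concerns can be sketched like this (the layer functions and the "exception" label are illustrative assumptions):

```python
def layer_one(color):
    """Layer one: test only for red."""
    return "red" if color == "red" else None

def layer_two(color):
    """Layer two: test only for blue."""
    return "blue" if color == "blue" else None

def layer_three(color):
    """Layer three: handle everything the earlier layers passed over."""
    return "exception"

def network(color):
    """Run the input through each layer until one claims it."""
    for layer in (layer_one, layer_two, layer_three):
        result = layer(color)
        if result is not None:
            return result

print(network("red"))     # handled by layer one
print(network("purple"))  # falls through to layer three
```

Each layer stays tiny and single-purpose; the complexity lives in how they chain together, just like neurons.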

All of these concepts of logic sit at the center of everything, hence the comparison to the nucleus of a cell. Statements, logical operators, and logical expressions are used heavily in the construction of neurons and nodes, and what is true for computing is also true for the brain at this level.

Putting it all together

At this point, you have six core foundational principles that make up everything you need to comprehend AI:

  • Processing
  • Information storage
  • Algorithms
  • Data separation
  • Neurons
  • Basic Logic

Now we can start connecting the dots on so many levels and describing all of the different terms you might have read and misunderstood! The next chapter will cover the terminology and names of AI systems: machine learning, training, deep learning, and more.

A pen and watercolor illustration of a unicorn walking down a city street
Author: Clay Unicorn, Founder of Unicorn, a business and tech consultancy. Photo generated using MidJourney.

Subscribe to get on the waiting list and early access to our upcoming book. If you want leading experts to guide your business in AI, technical patent work, or consulting, get in touch with the Unicorn team.

