A.I. Crash Course: The Machine

Deep Dive into Artificial Intelligence

Unicorn
7 min read · Dec 22, 2022
"Thoughts become dreams. Dreams become premonition. Premonition becomes reality." - Clay Unicorn

ChatGPT, DALL-E, Lensa, Midjourney, and many other AI tools have become popular in a remarkably short time. But do you know what AI really is? This article is for the curious-minded who want the basics but not all the technical details only a software engineer would understand.

This is part one of a series I will publish over the next few weeks. It’s also a preview of a comprehensive book I’m writing on how to create, patent, and utilize novel AI systems for your business! Subscribe to get on the waiting list and receive early access! You can collaborate with me and the team at Unicorn if you have questions, need a tech consultant, or are curious to learn more from experts.

This article aims to serve as a precursor, describing the novel and fundamental concepts behind an employee management (EM) application. We will break down how the EM app uses various AI concepts to power a proprietary recommendation system in our own application.

We’ll use the most basic terms possible in this publication, but to fully comprehend the definitions you will need a core understanding of these three key elements of computing, along with the definitions of AI from IBM’s white paper.

Since the dawn of the computer age, society has speculated about and dreamed of the possibility of creating an artificial intelligence similar to its own. Computing breakthroughs have opened our minds to the likeness of our own consciousness and the raw processing power harnessed within. While even simple computers are far more efficient than our own brains at simple algorithms and mathematics, the idea of a machine simulating human thought or choice was beyond our reach until the last decade. So what changed in the last few years to create all the buzz around Artificial Intelligence, and what is AI, really?

To understand what artificial intelligence is, you need to first comprehend the underlying components that make AI possible:

  • Processing power
  • Information storage
  • Algorithms

These base components make up the foundation of an AI system. They’re akin to the essential components of our own brains and ways of thinking. As hardware becomes cheaper and breakthroughs come more frequently, we take giant leaps forward in mimicking a biological brain.

Processing Power

Many are familiar with the concept of a computer’s central processing unit (CPU), even if only at a high level. You may even be on a laptop at the moment, branded with an Intel or AMD sticker advertising which chipset your computer uses (e.g., Intel Core i9). Think of the CPU as the engine of a car. Among all the factors that determine a car’s capability, the engine is the underlying component central to the strength and speed of the vehicle. In many ways this is analogous to the underlying importance of the CPU in a computer, and thus in an AI system. With the exponential growth described by Moore’s Law (which, in layman’s terms, predicts that computing capability roughly doubles every two years), it’s feasible to make predictions about the future of computing and to trace the journey from simple algorithms to the interconnected, advanced systems designed over the last few decades.
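
To make that doubling intuition concrete, here is a tiny back-of-the-envelope sketch in Python. The two-year doubling period and the starting figure are illustrative assumptions, not measurements.

```python
# Illustrative only: project capability assuming it doubles every two years.
def projected_capability(base, years, doubling_period=2.0):
    """Capability after `years`, if it doubles every `doubling_period` years."""
    return base * 2 ** (years / doubling_period)

print(projected_capability(1.0, 10))  # 32.0 -> five doublings over a decade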

The CPU is integral to the speed at which information can be processed. For a rough comparison, the human brain is estimated to process information at around 100 teraflops, while current supercomputers can process 200,000 teraflops (200 petaflops) or more. While a high-end consumer computer delivers only about a tenth of the brain’s estimated throughput, the ability to tap into cloud computing allows anyone to take advantage of processing speeds that rival the human brain.
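
Using the ballpark figures above (rough, debated estimates rather than hard measurements), a quick comparison looks like this:

```python
# Back-of-the-envelope comparison using the rough figures quoted above.
BRAIN_TFLOPS = 100               # one commonly cited (and debated) estimate for the brain
SUPERCOMPUTER_TFLOPS = 200_000   # roughly 200 petaflops
HIGH_END_PC_TFLOPS = 10          # about a tenth of the brain estimate, per the text

print(SUPERCOMPUTER_TFLOPS / BRAIN_TFLOPS)   # 2000.0 -> supercomputers already dwarf the estimate
print(BRAIN_TFLOPS / HIGH_END_PC_TFLOPS)     # 10.0   -> the gap a single consumer machine still has
```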

Having crossed the threshold for raw processing power, we can see the means to start replicating the human mind in terms of processing speed. However, two other core components, storage and algorithms, still have to be examined.

Information Storage

At its most basic, information storage is just a static set of data on some device. Storage is everywhere: flash drives, camera SD cards, hard drives, RAM, and so on. Storage solutions are so abundant that virtually all digital systems have some sort of storage baked in. Information storage comes in a variety of flavors and layers. Here are some examples of how data is typically accessed, with related software or hardware solutions for each (a short code sketch follows the list):

Short term, with fast access

  • RAM

Long term, with slower access

  • Hard drives
  • Flash drives
  • Network or Cloud

Small data sets based on primitive data

  • Document databases (MongoDB, etc.)
  • Key-value databases (Redis, etc.)

Access based on a relationship to other data

  • Relational databases (Postgres, etc.)
  • SQL databases (MySQL, MSSQL, etc.)
  • Graph databases (Neo4j, etc.)
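
To make those access patterns concrete, here is a minimal Python sketch. It uses a plain dictionary as a stand-in for a key-value store like Redis and the built-in sqlite3 module as a stand-in for a relational database like Postgres; the table and field names are invented for the example.

```python
import sqlite3

# Key-value access (the Redis style): fetch a value directly by its key.
cache = {"employee:42:name": "Ada"}   # in-memory stand-in for a key-value store
print(cache["employee:42:name"])      # -> Ada

# Relational access (the Postgres/MySQL style): fetch rows via their relationships.
db = sqlite3.connect(":memory:")      # sqlite3 stands in for a SQL database
db.execute("CREATE TABLE teams (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, team_id INTEGER)")
db.execute("INSERT INTO teams VALUES (1, 'Platform')")
db.execute("INSERT INTO employees VALUES (42, 'Ada', 1)")

row = db.execute(
    "SELECT e.name, t.name FROM employees e JOIN teams t ON e.team_id = t.id"
).fetchone()
print(row)  # -> ('Ada', 'Platform')
```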

Scientists can only guess at the actual storage capacity of the human brain; estimates range from 1 terabyte to 2,500 terabytes. The low end of that range is readily available on most consumer hardware, and even the high end is attainable through cloud-based storage solutions.

However, despite storage being both cheap and abundant, herein lies the first obstacle for the advancement of Artificial Intelligence. Storage in a primitive state is just that: too primitive. What sets the human brain apart is our ability to manipulate information, working in tandem with our own internal “algorithms.” Anecdotally, this is one of the primary reasons we sleep and dream: our internal memory processes decide what information to move to fast-access storage, what to move to long-term storage, and what to throw in the trash. Beyond this sorting barrier, since we have yet to measure the true processing power and storage capacity of the human brain, we don’t know exactly what threshold a computer must reach to replicate it. The interim solution, and the path to discovering those benchmarks, lies in our ability to create and test algorithms.
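
As a loose analogy only (not a claim about how memory consolidation actually works), a toy "tiering" routine might sort items by how often they are used:

```python
# Toy analogy: sort items into "fast", "long-term", or "discard" tiers by usage count.
def consolidate(memories, hot_threshold=10, keep_threshold=2):
    fast, long_term = {}, {}
    for item, uses in memories.items():
        if uses >= hot_threshold:
            fast[item] = uses          # frequently used -> fast-access tier
        elif uses >= keep_threshold:
            long_term[item] = uses     # occasionally used -> long-term tier
        # anything below keep_threshold is simply dropped ("thrown in the trash")
    return fast, long_term

fast, long_term = consolidate({"commute route": 40, "old phone number": 3, "hotel wifi code": 1})
print(fast)        # {'commute route': 40}
print(long_term)   # {'old phone number': 3}
```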

Algorithms

All types of logic and mathematical formulas are algorithms. This all-encompassing word is at the heart of everything in electronics, consciousness, physics, and, even existentially, the known universe. In computer science, advanced algorithms are the building blocks of complex systems. They work in tandem to power some of the most common and powerful software services: Google search results, the posts that appear in your Facebook feed, and the recommended products on Amazon. These programs may start off as small collections of basic algorithms, but as they grow more sophisticated they naturally progress toward models that can be described as the fundamentals of Artificial Intelligence.

Even if you are not consciously aware of it, your brain is constantly running complex algorithms. A decision as simple as what to make for dinner draws on dozens, if not thousands, of data points in your body. Your digestive system may be driving a craving based on enzymes it needs to function effectively, your cardiovascular system could be reporting a need for minerals like iron to produce blood cells, and the list goes on. All of these data points report to your brain constantly and aggregate into an unconscious bias for what you “want” to eat. You then make a conscious decision factoring in this bias, which you would likely describe as a craving or a taste preference, but it’s all grounded in this underlying data. Ultimately you make a choice, probably completely unaware of all the algorithms that went into it.
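
As a rough sketch of that aggregation, imagine each bodily signal as a number and each meal option scored against those signals. The signal names and weights below are invented purely for illustration.

```python
# Illustrative sketch: many small "signals" aggregate into a bias, then a choice.
signals = {
    "iron_low": 0.8,            # cardiovascular system "requesting" iron-rich food
    "salt_craving": 0.3,
    "recently_ate_pasta": -0.5,
}

options = {
    "steak": {"iron_low": 1.0, "salt_craving": 0.4, "recently_ate_pasta": 0.0},
    "pasta": {"iron_low": 0.2, "salt_craving": 0.1, "recently_ate_pasta": 1.0},
}

def score(option_features):
    # The unconscious "bias": a weighted sum of every signal the body reports.
    return sum(signals[s] * option_features.get(s, 0.0) for s in signals)

choice = max(options, key=lambda name: score(options[name]))
print(choice)  # steak: the aggregated signals favor the iron-rich option
```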

Computer programs can mimic all of these same systems. The key is breaking down each system into a cluster of algorithms and then splitting those clusters into individual formulas. Herein lies a problem: writing by hand the individual algorithms that biology developed over millions of years would take far too long. As it currently stands, that feat is practically impossible.

That’s where AI comes in. If you can instead write a program that inspects the inputs and outputs of a process, you can have it analyze that data, mimic the underlying algorithm, and recreate the same results. This, in simple form, is an example of deep learning, a building block and precursor to Artificial Intelligence.
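
Here is a minimal sketch of that idea: a hidden rule produces input/output pairs, and a learner that never sees the rule recovers it from the data alone. Real deep learning stacks many nonlinear layers, but the principle of fitting a function to observed inputs and outputs is the same. The numbers are invented for the example.

```python
# Minimal sketch: recover a hidden rule from examples of its inputs and outputs.
# The "hidden algorithm" is y = 3x + 2; the learner only ever sees (x, y) pairs.
examples = [(x, 3 * x + 2) for x in range(-10, 11)]   # observed input/output pairs

w, b = 0.0, 0.0                                        # the model's guess: y ≈ w*x + b
learning_rate = 0.01

for _ in range(2000):                                  # repeatedly nudge w and b
    for x, y in examples:
        error = (w * x + b) - y
        w -= learning_rate * error * x                 # gradient step for the weight
        b -= learning_rate * error                     # gradient step for the bias

print(round(w, 2), round(b, 2))                        # ≈ 3.0 and 2.0: the rule was recovered
```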

The past, present, and future of AI

We covered the key underlying hardware and software systems that are necessary not just for Artificial Intelligence, but for computing in general. Along the way, we made analogies to human consciousness and to simpler engineering models like the automobile. Hopefully this helped you grasp the core concepts and terminology needed to proceed into the deeper reaches of how AI systems operate.

Before wrapping up this base understanding, it is important to note that any implied or stated limitations that currently exist in software or hardware are on the precipice of radical change at the time of writing. Quantum computing is already breaking barriers and may rival the human brain in many ways. The perceived boundaries are limited only by the speed of adoption of technologies that already exist. When quantum computing reaches critical mass in consumer adoption, the leaps forward on all fronts will likely be a hundredfold or greater.

Subscribe to get on the waiting list and receive early access to our upcoming book. If you want leading experts to guide your business in AI, technical patent work, or consulting, get in touch with the Unicorn team.

Unicorn

We are an incubator and consultancy specializing in software, tech, automation, AI, and retail businesses.