Edmundo Ortega
Machine and Partners
5 min read · Jul 1, 2024

Photo by Midjourney

This is the first post in a series of articles explaining how AI works for non-technical leaders. Here at Machine & Partners, we believe that to make good decisions about your AI strategy, you need more than a superficial understanding of what makes it tick. That's because generative AI is more than a new technology: it's a new paradigm for how we will work and build products. It's truly a different way of working that, because of its accessibility and probabilistic nature, requires a much more cross-functional and collaborative approach.

What is AI?

Let's kick off this series at a fundamental level. The word AI gets used in so many ways, at so many levels of specificity, that at this point it's nearly meaningless. Still, a good working definition is something like: any algorithm that attempts to mimic human intelligence.

You could also say that AI means "machines that think." But that just raises the question: what is thinking? We don't really have a good answer, and I think that's why we struggle to imagine a realistic future with "intelligent" machines. Let's keep it more grounded for now and zoom in on the different historical approaches we've taken to building machines that emulate our minds.

AI has a lot of separate sub-disciplines, but I find it helpful to divide it into two broad categories: Symbolic and Non-Symbolic approaches.

Symbolic AI is logic-based. It has pre-programmed rules that tell the machine how the world works and how it should behave. For example, you could build an autonomous spacecraft that lands itself based on camera and sensor inputs and a worldview that includes the rules of physics. If it fires a rocket, the ship should expect a certain response based on those rules. This has been the historical approach to building AI, and it's based on our models of reality. We call it symbolic because it essentially takes human logic (and all the symbols we use to express it) and encodes it into the machine. Because these systems must follow a discrete set of rules, it's possible to audit a symbolic system by tracing the logic behind any decision it makes.
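
To make that concrete, here's a minimal sketch of what "pre-programmed rules" can look like. The thresholds and thruster logic below are invented for illustration, not real flight software:

```python
# A toy symbolic controller: hand-written rules encode the "worldview."
# All numbers and rules here are invented for illustration.

def landing_decision(altitude_m: float, descent_rate_mps: float) -> str:
    """Return a thruster command. Every decision traces back to a rule."""
    if altitude_m < 100 and descent_rate_mps > 5:
        return "FIRE_THRUSTERS"  # Rule 1: slow down near the ground
    if descent_rate_mps > 20:
        return "FIRE_THRUSTERS"  # Rule 2: never exceed max descent rate
    return "COAST"               # No rule triggered

print(landing_decision(altitude_m=80, descent_rate_mps=12))   # FIRE_THRUSTERS (Rule 1)
print(landing_decision(altitude_m=5000, descent_rate_mps=8))  # COAST
```

Notice that you can point to the exact rule behind every output; that traceability is the defining trait of symbolic systems.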

Non-Symbolic AI is essentially what we know as Machine Learning. This is itself a deep field with many different approaches, but we can generalize a bit by saying that it's very data-oriented, statistical in nature, and does not rely on logical reasoning rules (aka a worldview). Machine learning algorithms can be difficult to "look inside" because their learning isn't encoded in a human-friendly way. In other words, you can see how an ML model is behaving, but it's not always clear why it's behaving that way. In a future article you'll see why this is. Non-symbolic AI includes things like Large Language Models, self-driving cars, facial recognition, recommendation algorithms, fraud detection, and spam filters. It's based on a more fundamental model of how to learn, so that the machine can create its own model, rather than using one we've already worked out and handed over.
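
For contrast, here's a toy machine-learning sketch using Python's scikit-learn library, with made-up spam-filter data. No one writes the rules; the model infers its own parameters from labeled examples:

```python
# A toy non-symbolic (machine learning) example.
# The training data is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Features: [message length, number of links]; labels: 1 = spam, 0 = not spam
X = [[120, 0], [30, 4], [200, 1], [15, 6], [90, 0], [40, 5]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

print(model.predict([[25, 3]]))  # We can see WHAT it predicts...
print(model.coef_)               # ...but the "why" is just learned numbers,
                                 # not human-readable rules.
```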

Without realizing it, most people assume that all AI systems are symbolic. When ChatGPT answers questions with apparent logical coherence, it's easy to assume that it must have "thought" logically to construct the answer. This is an egocentric bias that creates outsized expectations for AI systems and then leaves us surprised when they get seemingly "easy" questions wrong. It's really hard for us to get our heads around the idea that a computer can derive an elegant, seemingly logical answer purely from statistical analysis of language or other data.

Neither of these broad approaches is universally better than the other. They can be used separately or together based on their strengths and weaknesses. For example, self-driving cars use both systems. They have ML models for computer vision to "see" where they are going. But they are also given a set of symbolic rules about the functions of the car, and maybe even about traffic laws. In combination, the symbolic side can provide boundaries for the non-symbolic side, which may not have a clear sense of what's "right or wrong" and can struggle in edge-case situations.
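
Here's a rough sketch of how that combination can work in code. The `ml_steering_model` function is a hypothetical stand-in for a trained neural network; the clamp around it is the symbolic boundary:

```python
# Sketch: an ML model proposes, a symbolic rule disposes.
# `ml_steering_model` is a hypothetical stand-in for a real trained model.

def ml_steering_model(camera_frame) -> float:
    """Pretend this is a neural network returning a steering angle (degrees)."""
    return 47.0  # imagine the network produced this

MAX_STEERING_DEG = 35.0  # a hard, symbolic safety rule

def safe_steering(camera_frame) -> float:
    proposed = ml_steering_model(camera_frame)
    # The symbolic boundary caps whatever the statistical model suggests.
    return max(-MAX_STEERING_DEG, min(MAX_STEERING_DEG, proposed))

print(safe_steering(camera_frame=None))  # 35.0, capped by the rule
```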

When a problem requires infallible reasoning, or simply has a clear set of rules to follow, symbolic AI may be a good choice. An expert system for banking is a good example: money MUST move appropriately and be traceable according to precise rules. But symbolic systems struggle or fail when they encounter situations that weren't considered in their programming, like when Alexa can't answer what seems like a simple question and decides to punt by reading a Wikipedia article on the topic.
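
In code, a banking-style rule check might look something like this sketch (the rules and limits are invented for illustration):

```python
# Sketch of an expert-system-style check: every transfer must pass
# explicit, auditable rules. Rules and limits invented for illustration.

def approve_transfer(amount: float, balance: float, daily_limit: float):
    if amount <= 0:
        return False, "Rejected by Rule 1: amount must be positive"
    if amount > balance:
        return False, "Rejected by Rule 2: insufficient funds"
    if amount > daily_limit:
        return False, "Rejected by Rule 3: exceeds daily limit"
    return True, "All rules passed"

print(approve_transfer(500.0, balance=1200.0, daily_limit=1000.0))
# (True, 'All rules passed'): the decision is fully traceable
```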

Non-symbolic AI is good when you have a lot of data, can't precisely describe a logical algorithm, and are okay with some amount of imperfect reasoning by the machine. Facial recognition is a good example. It's hard to encode exactly what makes two faces the same, and if a face is mislabeled on Facebook now and again, it's not the end of the world. Non-symbolic systems require a lot of data for training and can fail when they encounter things that weren't in that data set. For example, an algorithm trained to recognize and read license plate images will be able to read plates it's never seen before, but it won't be able to tell the difference between a panda and a raccoon, because it wasn't trained for that.
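
Here's a small illustration of that last point, again with made-up data. A classifier trained to distinguish only two categories has no concept of "none of the above," so an out-of-distribution input gets forced into one of them anyway:

```python
# Sketch of the panda-vs-raccoon problem: a model can only answer in
# terms of what it was trained on. Toy 2-D data invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Trained only to tell class "A" from class "B"
X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y = ["A", "A", "A", "B", "B", "B"]
model = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# A wildly out-of-range input still gets labeled "A" or "B";
# the model has no way to say "I don't know what this is."
print(model.predict([[100, -50]]))
```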

AI is an era-defining technology that allows us to leverage silicon in sometimes startlingly human ways. It can be a powerful way to sift, sort, search, and make meaning from an overwhelming amount of information. Finding the right applications for AI can unlock untapped productivity, efficiency, and revenue for businesses.

I see AI driving a paradigm shift in which we humans are giving up control for convenience. It's happening right now. We're letting AI perform tasks through reasoning that we can't understand or control. This non-deterministic approach tends to make programmers really uncomfortable! But for most people, the convenience of having an AI do "good enough" work outweighs the occasional error or hallucination. As time goes on, we'll hand over more autonomy to machines so that we can spend more time on the things we enjoy or care about. It's not good or bad; it's both. And the ramifications are largely unknown.

As a final thought, I want you to walk away from this article understanding that AI is not magical. No matter what form it takes, it's just a computer algorithm. AI systems, even the ones modeled on our brains, don't "think" the way we do. Despite the hype, we're nowhere near reaching AGI and being enslaved by our robot overlords. What's AGI? I'll save that for the next article.
