AI, Machine Learning, and Tacit Knowledge

Are we so different from our machines?

Dave Andrea
4 min read · Sep 6, 2022

I want to talk about how machine learning relates to consciousness. Is machine learning going to lead to Skynet? Is it only a matter of time before terminators from the future come to get us? Will machines lock us in pods, submersed in goo, to harvest our body heat for energy?

I think several misunderstandings lead us to jump to these conclusions. If we sort out and clarify some of the primary ways people learn, then perhaps we will better understand what type of learning is really happening when a machine is learning.

There are various ways of classifying types of knowledge and learning, but I want to focus on just two of them: conceptual and tacit.

Conceptual knowledge

Conceptual learning should be familiar because we do so much of it in school. First, you learn a few ideas and how they interrelate and interact with each other. Now you have a new bit of knowledge. Keep doing this, and eventually, you will have a great deal of expertise.

Just as importantly, conceptual knowledge is the kind you can easily explain to others, provided they have a necessary background understanding of the topic. Even if they don’t, they can, in principle, learn more stuff to eventually understand your explanation.

Tacit knowledge

Tacit learning is of a different sort. It’s something you learn by doing. More specifically, in the beginning, you do it terribly. Still, you eventually “know” how to do it through repeated trial and error, even though you probably will have difficulty explaining it.

Learning to ride a bike is a helpful analogy for understanding the kind of implicit learning involved in gaining tacit knowledge. At first, you climb on, filled with hope, only to just tip over and fall to the ground. Then you try with a running start or perhaps get someone to give you a shove, and you learn how moving makes it easier to keep your balance. But you need to coordinate balancing with pedalling and steering, which involves different muscle groups working in tandem.

Over time, your body internalizes the “feel” of riding a bike. The various little facets of riding that combine to form the skill of riding a bike become a part of you. You can’t pass this knowledge on to someone else by explaining it to them; you can only guide them as they learn it for themselves.

We all know what successfully riding a bike looks like. So every little mistake we make while learning is constantly compared to our idea of the “right way” to ride a bike. You pedal smoothly, stay balanced, and don’t tip over. Our brain is constantly making judgements about each little detail of how we move and whether it contributes to correctly riding a bike or to failing to ride a bike.

Some of these judgements are conscious: if I stay still, I’ll tip over; if I suddenly jerk the handlebars to the side, I’ll likely flip off the bike and crash. However, the vast majority of these comparisons are unconscious. And this is a good thing. There are way too many things happening at once to be able to hold it all in your mind and make sense of it. Doing it by feeling is much more efficient.

When I say that some judgements are unconscious, I don’t mean they are necessarily inaccessible to or hidden from our conscious thought. It’s more like they are transparent to us. Instead of seeing, thinking about, or feeling them directly, they are the lens through which we see and experience the world around us. If we choose to stop and analyze something, we can bring it into our consciousness, but we don’t really need to as long as nothing goes wrong.

Implications for artificial intelligence

How does this connect to artificial intelligence and machine learning? I want to suggest that what happens when a machine learns is more akin to learning to ride a bike than to learning things conceptually. This, of course, has implications for AI's relationship with consciousness.

Many people’s minds automatically jump to artificial consciousness when they hear the term AI, picturing something like the machines in The Matrix. Whether artificial consciousness is even possible is a question neuroscientists, computer scientists, and philosophers struggle to answer. I certainly don’t have an answer. I only seek to clarify the question and make some possible solutions more accessible.

However, I think we miss the point when comparing machine learning to seemingly sentient robots from science fiction. Instead, machine learning algorithms are more similar to our tacit knowledge gained through experience, much of which is entirely unconscious. This is likely obvious to anyone working in AI, especially if they have studied machine learning algorithms in any depth. But it is clear from reading sensationalized yet highly misleading headlines that this knowledge often doesn’t filter down to the general public.
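The analogy can be made concrete with a toy example. Below is a minimal sketch (the setup, numbers, and learning rate are all invented for illustration) of how a typical learning algorithm works: the model starts out "falling over" with random guesses, compares each attempt to the right answer, and makes many small corrections, without ever being given, or producing, an explicit rule it could explain.

```python
import random

# A tiny model "learns" the line y = 2x + 1 purely by trial and error.
# It begins with random parameters, measures how far off each attempt
# is, and nudges the parameters toward less error, much like the
# unconscious corrections made while learning to balance on a bike.

random.seed(0)
w, b = random.random(), random.random()  # initial guesses: it "tips over"
lr = 0.01  # how big each corrective nudge is

examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # the "right way"

for epoch in range(2000):
    for x, target in examples:
        pred = w * x + b
        error = pred - target  # compare this attempt to the goal
        w -= lr * error * x    # adjust, without stating any rule
        b -= lr * error

print(round(w, 2), round(b, 2))  # ends up close to 2 and 1
```

After training, the knowledge lives entirely in the numbers `w` and `b`. Nothing in the process resembles grasping a concept; it is repetition and feedback, the machine equivalent of "getting a feel" for something.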

So how does machine learning relate to consciousness? In short, I think it doesn’t. Or at least not in a direct way. There is an imprint, or perhaps an echo, of consciousness on trained algorithms. But this comes not from the machine or software but from the people who implement the algorithms, clean the data, and set up training sets.

(Part 2 is coming. Stay tuned.)
