The Missing Link of Next-Gen AI

Without it, you’re just building a computer

--

We now have the technology to build amazing computers that replicate many of the functions of the human brain. Massive machines are being assembled right now that offer orders of magnitude more computational power than any human can provide. These digital brains have millions of CPUs that process the latest machine learning applications across network clusters that are slowly approaching the complexity of the human brain.

We like to believe that one day we will interact with these supercomputers in a way that is identical to human interaction, providing us with the sense of surprise and wonder we get from communicating with other living beings. We call this computing aspiration artificial intelligence, and it's the holy grail of futuristic computing.

The question is — are we currently approaching artificial intelligence in the right way?

Is our goal to replicate the way humans think? Or is it to outperform humans at specific tasks? If it's the latter, we've already achieved it. The everyday PC is already capable of out-calculating most people. But if one thinks the goal of AI should be human imitation, then perhaps one would conclude that the latest generation of supercomputers might be falling short.

Douglas Hofstadter, a professor and cognitive scientist, said this: “Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, ‘read’ is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.”

Jaron Lanier, a prominent author, thinker, and cybersociologist, agrees that until we fundamentally understand that which we're trying to clone [the human brain and mind], everything else is an impressive attempt up Everest that never totally summits.

What the leading neurophilosophers are saying is that one cannot fully mimic human (or animal-like) intelligence without a fundamental theory describing the living mind.

Advanced artificially intelligent machines should have the ability to continually surprise us with a sense of creative thought. They should have personalities that aspire to grow and change as ours do.

Intelligence requires the ability to tap into the abstract reality that our minds intuitively experience, and we need to transfer that ability to a computer, somehow.

In my book, Theory of Thought, I go to great lengths to explain the difference between the brain and the mind. It's a philosophical exposé on the basic principles that come together to create the abstract world of every living organism. It focuses on explaining the additional layers of reality that support the existence of symbols and symbolism within abstract AND physical realms.

As Plato elaborated in his Theory of Forms, there might be a hidden (i.e., abstract) space that intersects our bodies. This abstract space contains Forms from which all things in nature are derived.

This ancient idea eventually led to dualism, which states that the brain resides within a physical space, while the mind exists somewhat separately within an abstract space.

The existence of a duality between the brain and mind is hotly debated today, with some scientists claiming that its acceptance might be crucial for reconciling general relativity and quantum mechanics (i.e., locality vs. non-locality).

You might have heard Roger Penrose refer to this abstract region as being capable of entangling the microtubules in a brain and thus of affecting our thoughts. Perhaps our brains are directly communicating with abstract structures in the universe through quantum mechanics, and we just don't know enough to understand the subtleties of the process.

And why shouldn’t this be possible for humans?

Why must we believe that the brain is a self-contained computer with no correlation between the thoughts it manages and the external world? I know that most empirical scientists don't want to jump to any conclusions before running tests, but I think it's important to have a vision and formulate a deep understanding of how nature is supposedly organized before we can even decide what tests to perform.

Other ideas, such as those proposed by string theorists, might refer to this hidden region as extra dimensions curled into each other at every point in space. We might never detect them physically, but they could still exist as a feature of mathematics.

So building real AI probably requires theories that go beyond our current understanding of space and time, forcing us to think dramatically outside our 4-dimensional box.

It seems to me, and to many others I have spoken with, that we're quickly approaching a crossroads that will soon unite various theories found in physics, neurology, psychology, philosophy, number theory, sacred geometry, and metaphysics.

I believe that this as-yet-undiscovered crossroads will be codified by a reformulation of pure mathematics that will unify our scientific cornerstones into a beautiful new worldview.

Some of you know what I'm generally talking about: a type of Unified Field Theory, a major math and physics theory that describes how everything works in space and time, including the brain, body, and mind. Of course, it's not so easy to piece together, since a great many ideas need to be re-explained or even re-thought!

For instance, can a thought be compared to a hyper-dimensional object?

If two people think about the same ‘thing’, are they having independent experiences, or are they each perceiving different angles of a single, dependent object?

What is the relationship between one’s thoughts and the external world she experiences, and will AI have a similar type of relationship between internal and external?

And perhaps most importantly, what would the architecture of an abstract space that contains thoughts and symbols look like?

This is important stuff to sort out!

Let me tell you something that I’m pretty sure about.

In my book I've illustrated an architecture of a 'mindspace' that is based on five relationship patterns. These five patterns are the most basic ways of describing everything, because any thought or object must be described in relation to another thought or object.

Relationship patterns are used in all branches of science to describe objects, events, and interactions in space and time.

Because every mind navigates these patterns, the key to AI is in knowing how to read, write, and organize the patterns.

My research has led me to believe that the patterns below are the key to unlocking the profound architecture of mindspace:

[Figure: Basic hierarchical relations between phi, pi, and e, overlaid with numbers and transformed into an equivalent system pattern that eventually relates to the Mandelbrot Set.]

What these patterns represent should astound us all, because they are the foundation of pure mathematics. They represent how numbers and types of numbers are essentially formed and interrelated within the fabric of space and time at the most basic levels.
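For readers who haven't played with the Mandelbrot Set before, here is a minimal Python sketch showing the standard definitions of phi, pi, and e and the standard escape-time test for membership in the set. It is only a familiarization aid for the objects named in the figure; it does not encode the hierarchical relations I describe in the book.

```python
import math

# Standard definitions of the three constants named in the figure above.
phi = (1 + math.sqrt(5)) / 2   # golden ratio, ~1.61803
pi = math.pi                   # ~3.14159
e = math.e                     # ~2.71828

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Escape-time test: iterate z -> z*z + c from z = 0 and report
    whether |z| stays within the escape radius of 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(phi, pi, e)                 # the three constants
print(in_mandelbrot(-1 + 0j))     # True: -1 cycles between -1 and 0
print(in_mandelbrot(1 + 0j))      # False: 1 escapes after a few steps
```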

Without knowing what the above patterns are, and how they work, there is no chance of reaching the peak of AI mountain any time soon, because these patterns ARE the peak.

In my next several posts, I'm going to dive into these patterns and illustrate how pure mathematics might give rise to every law of nature in the universe, including those that govern matter and mind.

Stay tuned! In the meantime, you can download my free ebook here.
