Summoning classical AI
If you can map a system as a network of its nodes and the interaction rules they follow, you understand that system.
If we build a standardized identification tool for the systems humans find meaningful, AI will find it much easier to solve ‘complex’ problems, which are often just simple problems following the rules of an unidentified system.
Teaching AI to hunt meaning
Computers are built to focus on details like mathematical accuracy, whereas humans focus on objects of mid-level abstraction, what I call communication objects: opinions, ideas, and stories (as opposed to high-level objects like systems or concepts).
So computers miss patterns & identities that would be obvious to us because we look for the higher-level semantic product of the details, not the details themselves.
In other words, humans tend to use insights based on their understanding of the underlying system to generate decisions from the top-down.
Computers use data to find patterns & build understanding from the reverse direction.
Which is why computers have trouble identifying objects that we care about, such as stories, solutions, and perspectives.
Once we have an algorithm to identify these semantic objects — stories, solutions, perspectives, and so on — AI will find it much easier not only to help us find meaningful output but also to generate it. Imagine the possible impact of AI comparing different systems to find shared patterns, a famously rich source of insight-mining, let alone the other vectors to insight.
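As a rough sketch of what identifying communication objects might look like, consider a toy rule-based tagger. The categories and marker phrases below are illustrative assumptions, not a real library or the author's actual algorithm:

```python
# Hypothetical sketch: tagging sentences as "communication objects".
# The categories and marker phrases are illustrative assumptions only.

MARKERS = {
    "opinion": ("i think", "i believe", "in my view", "should"),
    "story": ("once", "then", "one day", "finally"),
    "idea": ("what if", "imagine", "we could", "suppose"),
}

def classify(sentence: str) -> str:
    """Guess which communication object a sentence expresses."""
    lowered = sentence.lower()
    scores = {
        label: sum(marker in lowered for marker in markers)
        for label, markers in MARKERS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("I believe we should map systems first."))  # opinion
print(classify("What if we could compare two systems?"))   # idea
```

A real implementation would of course need far richer semantics than substring matching; the point is only that the target of classification is the mid-level object, not the word-level detail.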
System maps can optimize learning
If we want AI to be as smart as us, we need to teach it how to identify these mid-level communication objects.
If we want it to be smarter than us, we need to teach it how to build & fit insights into an abstract conceptual network, adjusting the network as new insights are learned.
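One minimal way to picture such a conceptual network is as a weighted graph of concepts, where each new insight reinforces or weakens a link. The class and concept names below are assumptions made for illustration:

```python
# Illustrative sketch of an abstract conceptual network: concepts as nodes,
# weighted edges as believed relationships, adjusted as insights arrive.
# All names here are assumptions for illustration.

from collections import defaultdict

class ConceptNetwork:
    def __init__(self):
        # edge weight encodes how strongly two concepts are linked
        self.edges = defaultdict(float)

    def add_insight(self, a: str, b: str, strength: float) -> None:
        """Fit a new insight into the network by adjusting a link's weight."""
        self.edges[frozenset((a, b))] += strength

    def relatedness(self, a: str, b: str) -> float:
        return self.edges[frozenset((a, b))]

net = ConceptNetwork()
net.add_insight("markets", "ecosystems", 0.5)   # two systems share patterns
net.add_insight("markets", "ecosystems", 0.25)  # a later insight reinforces the link
print(net.relatedness("markets", "ecosystems"))  # 0.75
```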
The key advantage human brains have is our semantic organizational capacity; we systematize the world around us into models that make sense for our local needs & interests, adjusting them as needed.
What we need to build
Which means AI needs the semantic code libraries that our brains use to understand systems. This includes relationship-analysis tools, as well as the ability to write tests for the system models it builds: to check whether a change is valid and consistent with the system, or whether it breaks the system, and whether that break is worth it.
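"Writing tests for a system model" might look something like the following sketch: the model declares invariants, and a proposed change is accepted only if those invariants still hold afterward. The model, invariants, and change used here are illustrative assumptions:

```python
# Minimal sketch: a system model with declared invariants, and a check
# for whether a proposed change is consistent with the system or breaks it.
# The toy supply/price model is an assumption for illustration.

def invariants_hold(state: dict) -> list[str]:
    """Return the names of invariants the state violates."""
    violated = []
    if state["supply"] < 0:
        violated.append("supply stays non-negative")
    if state["price"] <= 0:
        violated.append("price stays positive")
    return violated

def try_change(state: dict, change: dict) -> tuple[dict, list[str]]:
    """Apply a change, then test the resulting system state."""
    new_state = {**state, **{k: state[k] + v for k, v in change.items()}}
    return new_state, invariants_hold(new_state)

state = {"supply": 10, "price": 5.0}
_, broken = try_change(state, {"supply": -3})   # valid change
print(broken)                                   # []
_, broken = try_change(state, {"supply": -15})  # breaks the system
print(broken)                                   # ['supply stays non-negative']
```

Deciding whether a broken invariant is "worth breaking" is the harder, value-laden step that no simple check like this captures.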
It also needs to be able to evaluate priorities, leverage human-contributed thinking libraries, identify communication objects, evaluate a statement’s probability of truth, identify patterns & their consequences, store decision paths & their success rates, switch perspectives & abstraction levels, and use many other thinking tools.
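Of these tools, storing decision paths with their success rates is the most mechanical, and can be sketched briefly. The path names below are hypothetical examples:

```python
# Hedged sketch: recording decision paths and their success rates,
# so past outcomes can inform future choices. Path names are illustrative.

from collections import defaultdict

class DecisionMemory:
    def __init__(self):
        # path -> [successes, attempts]
        self.tally = defaultdict(lambda: [0, 0])

    def record(self, path: tuple, success: bool) -> None:
        stats = self.tally[path]
        stats[0] += success
        stats[1] += 1

    def success_rate(self, path: tuple) -> float:
        wins, tries = self.tally[path]
        return wins / tries if tries else 0.0

memory = DecisionMemory()
memory.record(("zoom-out", "compare-systems"), True)
memory.record(("zoom-out", "compare-systems"), True)
memory.record(("zoom-out", "compare-systems"), False)
print(memory.success_rate(("zoom-out", "compare-systems")))  # ~0.667
```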
Most importantly, it must also be able to configure all of these tools itself to adapt to its environment. Those who want to build classical AI are aiming at a perfect learning machine, one capable of infinite adaptation and evolution.
If we want AI to be truly capable of learning anything, we cannot constrain it with limits that our brains do not obey; nothing can be hard-coded in a perfect learning machine.
Human Error
Even with our best planning, we may accidentally hard-code limits into our learning machine in the form of bugs: unexpected, sub-optimal behavior.
We can build programs to identify bugs programmatically. But ultimately, the best way to avoid bugs is to build tightly planned code from the top down: first conceive of the rules governing the system & map it, then build it according to a well-conceived plan for handling the spectrum of possible input behaviors.
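One common approach to identifying bugs programmatically is property-based testing: sweep a function with random inputs and check that a stated property holds. The buggy function below is a deliberately contrived assumption, built to illustrate the technique:

```python
# One way to identify bugs programmatically: property-based testing.
# We feed a function random inputs and check that a stated property holds.
# The buggy absolute-value function is deliberately contrived.

import random

def buggy_abs(x: int) -> int:
    # hidden bug: wrong by one for x < -10
    return x if x > 0 else -x - 1 if x < -10 else -x

def check_property(fn, trials: int = 1000) -> list[int]:
    """Return inputs for which the 'agrees with abs' property fails."""
    random.seed(0)  # reproducible sweep
    failures = []
    for _ in range(trials):
        x = random.randint(-100, 100)
        if fn(x) != abs(x):  # the property under test
            failures.append(x)
    return failures

failures = check_property(buggy_abs)
print(len(failures) > 0)  # True: the random sweep surfaces the hidden bug
```

Random sweeps like this find the bug but cannot prove its absence, which is the article's point: planned, top-down design has to carry most of the weight.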
In order to destroy humans, AI would have to evaluate us for compliance with priorities like kindness or progress, which means we would need to have built it with hard-coded priorities or goals, or with the ability to acquire them. If we want to ensure AI never attacks us, we need to be careful about the goals — or the goal-creating capacity — we code into our algorithms.
