Cellular automaton Rule 153 by Cormullion

Is AGI that complicated?

Synthetic Intelligence
5 min read · Jun 14, 2020


To start with, here is a fairly simple problem that is surprisingly hard for many people to solve.

To figure out whether an AI is conscious, engineers created a new model specifically for this task. The AI has complete knowledge about the world and about itself, but you can communicate with it only through yes/no questions, which is obviously quite sufficient for determining whether it is conscious. However, to achieve this groundbreaking result, the engineers had to use a model with absolute accuracy, which has a side effect: the AI exists as a pair of agents, one of which always tells the truth while the other always lies. Each question is answered by one of the two, there is no way to know which, and the order is completely random.

What yes/no question should you ask to determine if the AI has consciousness?

Once you have found a solution to this problem (which, hopefully, you will sooner or later), a few things about it deserve notice.

  1. You don’t need to check your solution once you have found the right one: it just works (the simulation after this list demonstrates why).
  2. Even though there is essentially one right solution, it can be formulated in slightly different ways, and some formulations are more solid than others.
  3. The difference between a solution and a solid one is how you get there: by intuitively finding a question that happens to work, or by building a precise formula or schema that you then translate into words.
  4. And, once you have a solid solution, the most astonishing thing: how can such a simple solution be so hard to find? (If it wasn’t hard for you… well, that’s rare, but normal.)
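For readers who want to check a candidate answer, here is a minimal sketch, assuming the classic embedded-question trick applies: ask either agent “If I asked you whether X holds, would you say yes?” A liar is then forced to lie about its own lie, so both agents end up reporting the truth. The agent model and function names below are illustrative, not part of the original puzzle.

```python
import random

def make_agent(honest: bool):
    # An agent maps the truth value of a proposition to a yes/no answer:
    # the honest agent affirms true propositions; the liar negates everything.
    return (lambda p: p) if honest else (lambda p: not p)

def ask_embedded(agent, fact: bool) -> bool:
    # "If I asked you whether <fact> holds, would you say yes?"
    # The inner proposition is "you would answer yes to <fact>"; its truth
    # value is agent(fact), and the agent then applies its policy to that.
    return agent(agent(fact))

conscious = True  # the hidden fact we want to extract
for _ in range(1000):
    # Whichever agent is randomly chosen to respond, the answer is the same.
    responder = make_agent(honest=random.choice([True, False]))
    assert ask_embedded(responder, conscious) == conscious
print("Either agent reports the true answer to the embedded question.")
```

The double application of the agent’s policy is the whole trick: the honest agent is unaffected, and the liar’s two negations cancel. That is also why the right solution needs no checking: it works regardless of which agent responds.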

An obvious answer to the last question is that our brain simply isn’t optimized for solving such problems, and that we are not trained to do so either. Perhaps that’s not such a bad thing. At the end of the day, our mental performance is terrible, or at best mediocre, on most tasks that haven’t mattered for survival over at least the last hundreds of thousands of years.

Basically, there are two strategies for solving non-trivial problems with a small number of elements (a toy illustration of the second follows the list):

  1. Creating a schema of these elements with their properties and relations;
  2. Brute-forcing possible combinations of the elements.
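To make the contrast concrete, here is a toy brute-force search over a small constraint problem. The puzzle itself is invented purely for demonstration:

```python
from itertools import product

# Hypothetical puzzle: find digits x, y, z in 0..9 such that
#   x + y == z,   x * y == 2 * z,   and   x < y.
solutions = [
    (x, y, z)
    for x, y, z in product(range(10), repeat=3)
    if x + y == z and x * y == 2 * z and x < y
]
print(solutions)  # [(3, 6, 9)]
```

The schema strategy would instead work the constraints over algebraically: substituting z = x + y into x * y = 2 * z gives (x - 2)(y - 2) = 4, which narrows the field to a couple of candidates before any enumeration starts. Building and holding that schema is exactly the part our brains find expensive.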

In most cases, the first strategy is much harder than the second. Typically, every new attempt to find a combination has to re-establish the schema, which gradually becomes more stable over time, and this re-establishment accounts for the largest share of the mental effort required.

The key reason for such an unproductive process is the absence of stable neural connections that would support the needed manipulations. Evolution does not appear to have produced anything better so far, so the best we can do is be aware of this limitation and plan our mental activities accordingly.

It’s also important to understand that our way of thinking has blind spots: we can miss essential concepts not because they are complicated, but because they are inconvenient for how we think.

Imagine that the above problem had ten times as many elements. That isn’t many for a task whose elements can be handled one by one. But when non-trivial combinations of elements are possible, reaching a solution can demand tremendous effort from our brain. (Some luck helps too when brute force is involved.) Yet even if such a problem is hard for us to solve, the problem itself is not hard.
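A rough count shows why, assuming the worst case in which any subset of elements might interact:

```python
# If any subset of n elements might interact, there are 2**n subsets to consider.
for n in (4, 40):
    print(f"{n} elements -> {2 ** n:,} possible subsets")
# 4 elements -> 16 possible subsets (manageable in your head)
# 40 elements -> 1,099,511,627,776 possible subsets (hopeless without a schema)
```

The numbers are illustrative, but the exponential shape is the point: a tenfold increase in elements blows the subset count up from sixteen to over a trillion, even though checking any single combination stays trivial.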

Our brain just isn’t optimized for such tasks. What is even worse, this limitation is defined by our genetics: none of our predecessors needed to tackle anything like this, so how could the ability evolve? Our working memory is hardwired to operate on a very small number of objects simultaneously. The best we can do is temporarily represent several elements as a group and treat it as one element, then change the grouping to change perspective, while trying to retain the results of the previous configuration and somehow relate them to the new one. Yes, this is a trainable skill, but only to a quite modest extent.

Extended cognition, meaning the use of external crutches such as visualization on paper, can help, but again only up to a point: it assists with saving configurations and switching between them, while the maximum size of any single configuration our brain can hold is fixed and cannot be extended (at least not without genetic modification).

On the positive side, in many cases our mental abilities are sufficient to understand the solution to such a problem once it exists. So when the problem is completely solved, we start to see the solution as something simple and straightforward, sometimes even obvious and primitive.

The central claim here is the following:

Even though the principles needed to create an AGI are hard to discover, they are quite simple and, once discovered, easy to grasp.

How can we be sure of this? Well, it’s implausible that evolution is capable of anything else. When the best you can do is occasionally tweak something and check whether the result is useful, you had better look for something simple. At least until something completely new evolves, capable of using other principles of development.

Also, look at the structure of our neocortex: it is remarkably uniform, consists of the same elements everywhere, and can repurpose one part to perform the functions of another. The same operational principles must be at work throughout for that to be possible.

But is this idea even useful? If finding the solution is still hard, what is the value of knowing that the final result is simple? Actually, there is great value in it.

First, merely knowing some criteria the solution must satisfy can dramatically shrink the space of possible solutions.

Second, because such solutions are easy to confirm, we can benefit from one of the success mantras of lean startups, “fail fast”: move away from anything complicated with unclear prospects and switch to a more promising direction.
