Thoughts on “real” artificial intelligence

When it comes to strong artificial intelligence, countless points of view come down to the following metaphor: trying to create real intelligence in a computer is like trying to teach a submarine to swim, which means there is a tangible limit standing in the way of creating a truly conscious machine. But who knows, we may put ourselves in a position where we are able to “teach” a submarine to swim better than a dolphin.

How does a conscious being emerge and develop?

Let’s try to anthropomorphise nature.

Nature, in an effort to retain knowledge, started developing single-cell organisms. These organisms became more and more complicated, so the knowledge of previous generations could be maintained more effortlessly, and along the way we humans came up. We have a unique ability: we can save our experiences and knowledge and pass them on to the next generations to build upon. Was the Church–Turing thesis unavoidable, so that this process of knowledge could go up a level, and are we as a human race just intermediaries in the creation of one consciousness, one that, instead of needing millions of years to develop, will emerge very, very soon? Maybe, or most probably, we’ll become obsolete.

The creation of such an intelligence could set off a reaction in which it creates an even higher form of intelligence, and this may go on infinitely(?). Maybe there is a limit on how much higher this could go; are we the theoretical top of that limit? Are we prepared for the dive we are taking into the next level of intelligence with A.I.?

The successful creation of such an intelligence would have repercussions in many social, and maybe religious, aspects: how were we humans able to create a smarter form of intelligence? The most important question to consider is whether we have the maturity to go through this tunnel and come out at the other side.

Computational systems (based on silicon) are reaching a physical limit, and Moore’s law is starting to “slow down”, so it would be difficult for such an intelligence to be based on traditionally designed systems.

If we could somehow create a “copy/paste” clone of a human brain on a biological computer, maybe there is a chance we could understand the structural part of that system, how it works, the “maths” behind it.

How do we think?

Do we have a preset model of the world, or do we start with a blank canvas?

Can our math support the models behind our intelligence, and if it can, could we copy them to create something at least equally “smart”?

Let’s look at an example of a baby learning to speak (we’ll call the baby B). B takes inputs from the environment, and these inputs contain patterns; for example, B’s parents repeat the words “mom” and “dad” most often and most clearly. B doesn’t comprehend the meaning of the words, but slowly B will try to repeat them. While B tries to repeat the word “mom”, B’s parents give positive feedback for every attempt that is even a little close to the real word, and even bigger positive feedback when B actually repeats the word. This positive feedback is based on excitement, which is a feeling (important!), and in general feelings are important as a feedback mechanism, both positive and negative. So what would a form of intelligence that lacks feelings rely upon for its feedback? Could we invent a feedback mechanism different from feelings, and what would it be?
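To make the feedback idea a bit more concrete, here is a minimal toy sketch (my own illustration, not a claim about how babies actually learn): a “babbling” learner samples syllables, receives a reward proportional to how close its utterance is to the target word “mom”, and reinforces the syllables that earned that reward. The syllable list, the similarity measure, and the update rule are all assumptions chosen only to show the loop.

```python
import random
import difflib

TARGET = "mom"
SYLLABLES = ["ma", "mo", "m", "da", "do", "ba", "o"]  # assumed babbling inventory

def similarity(a, b):
    # Graded "parental excitement": 1.0 for an exact match,
    # partial credit for utterances close to the target word.
    return difflib.SequenceMatcher(None, a, b).ratio()

# Start with equal preference for every syllable (a "blank canvas").
weights = {s: 1.0 for s in SYLLABLES}

def babble():
    # Utter one or two syllables, sampled by current preference.
    pool, w = list(weights), list(weights.values())
    parts = random.choices(pool, weights=w, k=random.choice([1, 2]))
    return "".join(parts), parts

for step in range(2000):
    utterance, parts = babble()
    reward = similarity(utterance, TARGET)   # the feedback signal
    for p in parts:
        weights[p] += reward                 # reinforce what "worked"

print("most reinforced syllable:", max(weights, key=weights.get))
print("sample utterance:", babble()[0])
```

The point is only that some scalar signal has to play the role that parental excitement plays for B; replacing a feeling with a number is exactly the kind of substitute feedback mechanism the question above asks about.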

Has intelligence hit the ceiling with humans?