Thoughts | xAI | Neuromorphic Self-Driving Cars
The brain works with far smaller units of thought than the ‘smallest’ bits of data available to a computer. This difference adds up to factors that make AI unexplainable.
It may seem that splintering data into smaller units should be possible with filter sizes and pool sizes, but those, like hidden layers, the number of nodes and other hyperparameters, are more about data refinement than true splitting.
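A minimal sketch of this point, using NumPy and a hypothetical toy 8×8 feature map: pooling with a given pool size refines the data, producing fewer units than it received, rather than splitting it into more.

```python
import numpy as np

def max_pool(x, size=2):
    """Max pooling: each output cell summarizes a size x size patch."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "feature map"
pooled = max_pool(image)

print(image.size, "->", pooled.size)  # 64 -> 16: fewer units out, not more
```

Whatever the pool size, the output never has more units than the input, which is the sense in which these parameters refine rather than splinter.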
Data arrives as a beam: input, processed, then output. That makes AI more difficult to explain.
The brain, unlike computers, extraordinarily separates the inputs it receives, sometimes into hundreds of units smaller than what arrived, with each processed as an active or passive thought.
Some units of thought may stay in the area of conversion; others go on to memory, into different groups; some go for interpretation, to establish what a thing means or whether it is okay; some proceed to a destination where they produce a feeling effect.
The memory stores the smallest recognizable unit of thought, then smaller ones still.
The memory doesn’t just take in a vehicle as a whole: it takes the door, the handle, the mirror, the paint, the smoothness, the key area and lower units of each.
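A small sketch of that decomposition, with a hypothetical nested structure for "vehicle" and a helper that enumerates every unit, large and small, as the text describes memory storing them:

```python
# Hypothetical hierarchy: a vehicle split into ever-smaller stored units.
vehicle = {
    "door": {"handle": {}, "mirror": {}, "paint": {"smoothness": {}}},
    "key area": {},
}

def units(tree, prefix="vehicle"):
    """Flatten the hierarchy into every recognizable unit it contains."""
    found = [prefix]
    for name, sub in tree.items():
        found += units(sub, f"{prefix}/{name}")
    return found

print(units(vehicle))
```

The point of the sketch is that memory holds not one record, "vehicle", but a path of lower units under it, each individually retrievable.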
The memory knows what a sitting posture is from afar, or what the gait of a close person is like. That gait, seen anywhere else, may bring that person to mind; the form of the gait is stored in the memory.
What is natural for the memory to split, recognize and be reminded of takes excessive processing power for AI. An autonomous vehicle takes in a lot of data, yet processes and navigates it all less effectively than a human does, without splitting the data into the smallest of units.
For the brain, theoretically, multisensory integration is predicated on thought. Senses come in and get converted to thought or its form. What comes in {sight, smell, hearing, touch, taste} is different from the unit of process, thought. It is the thought version of whatever is physical that the brain uses to know and relate with the world.
It is known that the senses, except smell, are relayed through the thalamus before reaching the cerebral cortex. The reticular formation and the angular gyrus are also points of integration, but whatever is sensed that can be thought about is so because it was converted to thought.
Theoretically, there are ports that convert different functions, internal or external, to thought. It is this thought-enabled capacity that drives the use or function of parts of the body.
One hand is dominant and the other weak because the dominant hand has more thought ports in the brain than the other, making it dexterous.
It is because thought ports for most functions close during sleep that it takes a while, after waking, for parts of the body to feel ready for use.
Thoughts, or their form, are the basis of predictions, perception, imagination, dreams, memories, decisions, reflexes and so on. The memory stores things as thoughts or their form: if the memory knows anything, internal or external, it is the thought version of that thing it stores. The memory does a lot of splitting and grouping.
Active thoughts on something, say a project, may feel like a beam as several things come to mind at once, but there was a first thought on that project, and the others followed within that time. So while they may seem like a beam, they arrive in units.
How can data become like thought toward interpretability, explainability and far sharper AI?
How can data be split continuously into far smaller units than is currently possible? How can that split data pass through existing refining processes? How can there be two categories of split data, one active and one passive, that do not simply become output, as happens now, but go on to have a feeling effect or some parallel or perpendicular reaction?
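One way to picture those questions as code, as a hypothetical sketch rather than a working architecture: a splitter breaks a coarse input into many tagged units, and a router sends active units toward a "feeling effect" path and passive ones toward memory, instead of collapsing everything into one output. The threshold and the two destinations are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    value: float
    active: bool  # active units are routed onward; passive ones are stored

def split(signal, threshold=0.5):
    """Hypothetical splitter: break a coarse input into many small units,
    tagging each active or passive instead of emitting one output."""
    return [Unit(v, active=v >= threshold) for v in signal]

def route(units):
    """Active units head toward a 'feeling effect'; passive ones to memory."""
    feeling = [u for u in units if u.active]
    memory = [u for u in units if not u.active]
    return feeling, memory

feeling, memory = route(split([0.9, 0.2, 0.7, 0.1]))
print(len(feeling), len(memory))  # 2 2
```

The open problem the text raises is what a real version of `split` would be, at scales far below today's smallest data units.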
For AI, with smaller units and a new quantity of conversion from the input, there can be a memory for grouping and regrouping, as in neuroplasticity. That memory can then have special operators, a giver, a picker and a thrower, making it different from regular data processing but similar to the brain.
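The three operators can be sketched as a toy memory class. This is an illustrative reading of the text's terms, not a defined design: here a giver stores a unit into a group, a picker retrieves a group, a thrower discards a unit, and regrouping combines a throw with a give, loosely mirroring neuroplastic regrouping.

```python
class Memory:
    """Hypothetical memory with the three named operators:
    a giver (stores into a group), a picker (retrieves a group),
    and a thrower (discards from a group)."""

    def __init__(self):
        self.groups = {}

    def give(self, group, unit):
        self.groups.setdefault(group, []).append(unit)

    def pick(self, group):
        return self.groups.get(group, [])

    def throw(self, group, unit):
        if unit in self.groups.get(group, []):
            self.groups[group].remove(unit)

    def regroup(self, unit, old, new):
        """Move a unit between groups, as in neuroplastic regrouping."""
        self.throw(old, unit)
        self.give(new, unit)

m = Memory()
m.give("gaits", "close-person-gait")
m.regroup("close-person-gait", "gaits", "reminders")
print(m.pick("reminders"))  # ['close-person-gait']
```

The design choice worth noting is that groups are mutable and units move between them, whereas regular data processing fixes data to the pipeline stage it was produced in.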
It is possible to seek explanation and improve current autonomous vehicles, starting with an encompassing theory of thought.
The rules of thought in the brain aren’t seen by neuroimaging because what thoughts are to the mind differs from what neuroimaging shows.
Developing AI from where thought comes from, how it splits and how it is processed holds the key to improvement: for AI data, for self-awareness, transparency, better caution and adaptation.