What I learned about AI last week
On my way to understanding intelligence, I try to read and collect all the important information. Here is a new batch of theories and ideas.
The Evolution of Communication Systems tells us the following:
- Communication is a great tool for achieving a goal with the help of another agent in a shared universe.
- Communication appeared as a conceptual jump once a single organism hit the limits of its own capabilities.
- To learn a language, you need a teacher who infers your intentions.
- Graph theory. So many tasks can be converted to a graph representation. This makes me think that graph theory is one of the foundations on the way to true intelligence.
- Hilbert’s curve and the Sierpinski triangle. Just as examples of the beauty of the mathematical approach.
- Scientific theories have predictive power and explanatory power. These two essential properties help to compare competing ideas.
- Electrical stimulation of the human brain: perceptual and behavioral phenomena reported in the old and new literature. For me, as a developer, this “debugging” method feels familiar. Most of the experiments were performed on patients with epilepsy, but that does not make the results any less interesting.
- The Chomsky hierarchy of formal languages. Recursive languages. Extending my understanding of the Turing machine. We cannot make progress toward intelligence without a proper bias in the form of language.
- The memristor as a synapse analog and as the fourth basic element of electrical circuits. That is a real thing I have been thinking about a lot: a perfect analog for brain-like plasticity without inventing backpropagation techniques.
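The graph-theory point is easy to make concrete. Here is a tiny sketch (the toy task and all names are my own) that converts a “get from A to D” task into an adjacency list and runs breadth-first search over it:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an adjacency-list graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

# A toy task ("get from A to D") converted to a graph:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(shortest_path(graph, "A", "D"))  # → ['A', 'B', 'D']
```

Once the task lives in a graph, the whole toolbox (shortest paths, connectivity, flows) applies for free, which is exactly why the representation feels so fundamental.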
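And the Sierpinski triangle really is only a couple of lines away: by Lucas’ theorem, the binomial coefficient C(y, x) is odd exactly when (x & y) == x, so Pascal’s triangle mod 2 draws the fractal. A minimal sketch:

```python
# Sierpinski triangle from Pascal's triangle mod 2:
# by Lucas' theorem, C(y, x) is odd exactly when (x & y) == x.
N = 8  # powers of two give complete triangles
rows = ["".join("*" if (x & y) == x else " " for x in range(y + 1))
        for y in range(N)]
print("\n".join(rows))
```

That a bitwise AND hides a fractal is the kind of beauty I meant above.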
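To make the plasticity point concrete, here is a minimal software sketch of a memristor, assuming the simple linear ion-drift picture: conductance drifts with the charge that has flowed through the device, so it “remembers” its history. Every constant and function name here is illustrative, not taken from any datasheet.

```python
G_ON, G_OFF = 1.0, 0.01  # bounding conductances (arbitrary units)

def step(w, voltage, dt, mu=1.0):
    """Advance the internal state w in [0, 1] under an applied voltage.

    Simplified linear ion-drift model: the state moves in proportion
    to the current that flows, so the device accumulates history.
    """
    g = G_OFF + w * (G_ON - G_OFF)  # conductance interpolates between bounds
    current = g * voltage
    w += mu * current * dt
    return min(max(w, 0.0), 1.0)    # state stays bounded

w = 0.5
for _ in range(100):
    w = step(w, voltage=0.1, dt=0.01)  # sustained positive bias
print(w)  # the state drifted upward: the "synapse" was strengthened
```

The synapse analogy is exactly this: the same two-terminal element both carries the signal and stores the weight.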
The AHaH model is an application of memristors to machine learning. It is a fundamentally new idea, together with a hardware implementation.
There is no separation between memory and CPU, which increases power efficiency. This is a fresh idea of how Deep Learning should be implemented in hardware. As the author says, the idea was taken from the most common pattern in nature.
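I could not resist sketching the learning rule in software. This is my own toy caricature of an anti-Hebbian-plus-Hebbian weight update, not the circuit from the AHaH papers: I assume a Hebbian term that reinforces the node’s current decision and a decay term that keeps the weights bounded, and all constants are made up.

```python
def ahah_update(w, x, alpha=0.1, beta=0.02):
    """Toy anti-Hebbian/Hebbian update (illustrative constants only)."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    s = 1.0 if y >= 0 else -1.0
    # Hebbian term reinforces the current decision; the decay term
    # (playing the anti-Hebbian role here) keeps the weights bounded.
    return [wi + alpha * xi * s - beta * wi for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]
x = [1.0, -1.0, 1.0]        # one repeated input pattern
for _ in range(300):
    w = ahah_update(w, x)
y = sum(wi * xi for wi, xi in zip(w, x))
print(y)  # approaches (alpha / beta) * |x|^2 = 15: the node locked onto the pattern
```

The interesting part is that the update only uses locally available quantities (the input and the node’s own output), which is what makes a memristive, in-memory implementation plausible in the first place.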