Thoughts About Human and Artificial Intelligence
Do you have any idea how exactly scientists and researchers build artificial intelligence systems? What is artificial intelligence really made of? What are its real foundations?
Well, if we are to understand how artificial intelligence is created, we first need to understand how human intelligence works, because pretty much all artificial intelligence is modeled on our own cognitive, perceptual and physical abilities. We then need to understand how we humans became who we are through millions of years of evolution. In his 1999 book, The Cultural Origins of Human Cognition, Michael Tomasello offered a simple yet comprehensive way to look at this issue. In his own words, “If we are to attempt to really understand human cognition we need to consider its unfolding in three distinct time frames: in phylogenetic time, as the human primate evolved its unique ways of understanding conspecifics; in historical time, as this distinctive form of social understanding led to distinctive forms of cultural inheritance involving material and symbolic artifacts that accumulate modifications over time; and in ontogenetic time, as human children and adults absorb all that their cultures have to offer, developing unique modes of perspectivally based cognitive representation in the process.”
That is exactly what Artificial Intelligence researchers are doing to build their AI creatures: understanding human cognitive abilities, which were shaped by our phylogenetic, cultural and ontogenetic processes, artificially recreating them, and using them to empower machines. The current foundational paradigm of AI is Machine Learning: machines need to be able to learn and cognitively grow on their own, just as we do. This paradigm became the consensus once researchers realized that it would not be possible to create human-level artificial intelligence that was simply “programmed”. Machines needed to be able to keep developing and learning on their own. And so be it.
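To make the "learned, not programmed" distinction concrete, here is a minimal sketch (all numbers invented for illustration): instead of hard-coding a pass/fail rule, the program infers a cutoff from labeled examples.

```python
# A toy illustration of the machine learning paradigm: the rule is
# inferred from data rather than written by the programmer.

# Labeled examples: (hours of study, passed the exam?)
examples = [(1, False), (2, False), (3, False), (6, True), (7, True), (9, True)]

def learn_threshold(data):
    """Pick the cutoff that correctly classifies the most examples."""
    best_cut, best_score = None, -1
    for cut in range(0, 11):
        score = sum((hours >= cut) == passed for hours, passed in data)
        if score > best_score:
            best_cut, best_score = cut, score
    return best_cut

threshold = learn_threshold(examples)
print(threshold)       # the rule, learned from the data
print(5 >= threshold)  # prediction for a new, unseen case
```

Change the examples and the learned threshold changes with them; no line of the program had to be rewritten, which is the whole point of the paradigm.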
But not all AI researchers take the same approach. Generally speaking, Artificial Intelligence researchers currently fall into five major categories:
1- Symbolists: This approach rests on the assumption that many aspects of intelligence can be achieved by manipulating symbols. Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the late 1980s, and it still plays an important role in AI development, though now adapted to the machine learning paradigm.
2- Connectionists: This approach models mental or behavioral phenomena as processes that emerge from interconnected networks of simple units. Connectionism takes many forms, but the most common ones use neural network models. Essentially, this approach tries to reverse-engineer the human brain.
3- Evolutionaries: This approach creates evolutionary algorithms based on Darwinian principles, hence the name. Evolutionary AI works through iterative progress, such as growth or development in a population. The population is then selected in a guided random search, often run in parallel, to reach the desired end. These processes are inspired by the biological mechanisms of evolution and produce highly optimized processes and networks with many applications in AI.
4- Bayesians: Bayesian inference is a method of statistical inference in which Bayes’ theorem is used to update the probability of a hypothesis as more evidence or information becomes available. Bayesian inference techniques have been a fundamental part of computerized pattern recognition since the late 1950s. Recently, Bayesian inference has also gained popularity among the evolutionary community; a number of applications allow many demographic and evolutionary parameters to be estimated simultaneously.
5- Analogizers: This approach places analogy at the core of intelligence itself. An analogy is a cognitive process of transferring information or meaning from one subject (the analogue, or source) to another (the target). The analogizers’ master algorithm is the “nearest neighbour”, which can produce outcomes similar to those of neural networks. Analogies sit at the nucleus of some extremely effective machine learning results.
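The analogizers’ “nearest neighbour” idea can be sketched in a few lines: a new case is classified by copying the label of the most similar stored example. The data points and labels here are invented purely for illustration.

```python
import math

# Toy nearest-neighbour classifier: the "analogy" is distance in feature space.
# Stored examples (height_cm, weight_kg) with made-up labels.
points = [(150, 50), (160, 60), (180, 85), (190, 95)]
labels = ["small", "small", "large", "large"]

def nearest_neighbour(query):
    """Return the label of the stored example closest to `query`."""
    distances = [math.dist(query, p) for p in points]
    return labels[distances.index(min(distances))]

print(nearest_neighbour((155, 55)))  # nearest examples are labeled "small"
print(nearest_neighbour((185, 90)))  # nearest examples are labeled "large"
```

Nothing is "trained" here in the connectionist sense; the stored examples are the model, and similarity does all the work.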
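The Bayesians’ core move, updating a hypothesis as evidence arrives, can be shown with a small worked example. The scenario and all probabilities below are hypothetical, chosen only to make the arithmetic visible.

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: a test for a condition affecting 1% of a population.
prior = 0.01           # P(condition)
sensitivity = 0.95     # P(positive | condition)
false_positive = 0.05  # P(positive | no condition)

# Total probability of observing a positive result, P(E)
evidence = sensitivity * prior + false_positive * (1 - prior)

posterior = bayes_update(prior, sensitivity, evidence)
print(round(posterior, 3))  # about 0.161: one positive test is far from proof
```

Feed the posterior back in as the new prior and a second positive test raises the probability much further; that iterative updating is exactly what the Bayesian camp builds on.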
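The evolutionaries’ loop of variation and selection can likewise be sketched as a toy genetic algorithm. This one evolves bit-strings toward all ones (the classic "OneMax" exercise); the population size, mutation rate and other parameters are arbitrary choices for illustration.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def fitness(genome):
    """OneMax: count the 1s; more 1s means fitter."""
    return sum(genome)

def evolve(pop_size=20, length=16, generations=60, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Variation: single-point crossover plus random mutation.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near the maximum of 16
```

No individual is designed; a good solution emerges from repeated selection over random variation, which is the Darwinian principle the evolutionaries scale up.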
It is not difficult to see that human cognition is at the heart of each of these groups’ research. Although each has made its own important contributions to the AI field, the most powerful current AI systems actually combine contributions from many or all of them. A combined approach is what may one day produce an artificial intelligence at or above human level.
So there we have it: machines being highly empowered with capacities and abilities derived from humans. Nothing wrong with that. At Inova:Ingenuity, we are generally favorable to the development of Artificial Intelligence, but (and this is a big BUT) we also think this powerful knowledge should be turned back toward ourselves, toward our own education and training. In other words, research, development and knowledge from fields such as Cognitive Science, Neuroscience, Evolutionary Anthropology, Artificial Intelligence and others should be used not only to empower machines but also to revise and improve human education, and so to empower humanity.
If you are interested in an example of such an educational approach, you might want to explore Inova:Ingenuity’s patent-pending CIC program. You can find the CIC program online at www.inovaingenuity.com .
A previous version of this post first appeared at www.inovaingenuity.com