Human Level AI 2018: a Journey to Reach Human Intelligence by AI

Nermin Akel
Sep 7, 2018 · 6 min read

The future of AI, human-level AI, and the techniques and methodologies of AI research were discussed by more than 40 professionals at Human Level AI 2018, organized by GoodAI in Prague on 22–25 August.

Before sharing my observations, it is worth going over a few definitions related to AI.

Artificial Intelligence (AI): The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Today, three types are commonly defined:

Artificial Narrow Intelligence (ANI, Narrow AI): AI’s ability to mimic human intelligence skills and/or behaviour to perform predefined tasks or solve predefined problems within an isolated, narrow range of parameters and contexts.

Artificial General Intelligence (AGI, also called Human-Level AI, Strong AI, Deep AI): AI whose ability to mimic human intelligence skills and/or behaviour is indistinguishable from that of a human. AGI succeeds at complex cognitive tasks and solves unpredictable problems by using pre-learned skills or generating new ones.

Artificial Super Intelligence (ASI): AI that does not merely mimic human intelligence and/or behaviour but surpasses it.


Notes from the HLAI 2018 conference:

“How far are we from general AI?”: Marek Rosa defined narrow AI and AGI and the differences between them. AGI needs skills such as continuous lifelong learning, few-shot learning, working memory, long-term memory, disentanglement of skills, gradual accumulation of skills, transfer of skills, reuse of skills, and learning how to learn.

And there are some points which are still missing:

  • An architecture that can learn human-level skills in an incremental, gradual, lifelong manner, in ways similar to humans: one-shot learning, reusing skills, accumulating skills.
  • A curriculum for teaching human skills and moral values, and a consensus on which morals and skills an AGI needs to learn.
  • Testing procedures: robust quality assurance on value alignment and value propagation, adversarial testing, and generating moral edge cases and dilemmas.

Marek Rosa added that predictions about the future of AGI don’t make much sense, because they say more about the predictor’s emotional state than about the actual future of AGI.

“Ingredients of Intelligence”: Brenden Lake emphasized the significance of cognitive science and of understanding human intelligence for achieving AGI.

If we compare human intelligence and narrow AI, humans can extract more knowledge from less data and interpret it in light of its context and concept. Humans build rich, flexible models of the world, while current AI is largely driven by pattern recognition. He explained human learning in terms of two ingredients:

Causality means that humans can make consistent predictions about an object’s functions and use cases by understanding its concept and its parts. For instance, you might want to trim a tree in your garden that is growing too close to your house, while also taking care not to kill it. All of this judgement and reasoning is a skill of human intelligence.

Compositional learning is a holistic understanding of an object and its functions while separately perceiving its parts, their functions and the relationships between them. For example, when we imagine a bicycle, we can perceive its parts, such as the wheels, handlebars and saddle, and their functions separately. This understanding provides a much better basis for generalization, allowing us to recognize new examples or generalize in very rich and powerful ways.
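To make the idea a bit more concrete, here is a minimal illustrative sketch of my own (not code from the talk) of what a compositional, parts-based representation could look like; the `Part`, `ObjectConcept` and `part_overlap` names are purely hypothetical:

```python
# A toy, purely illustrative sketch (my own, not from the talk): represent an
# object compositionally as named parts plus relations between them, so that a
# never-seen variant can still be recognized by matching shared structure.
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    function: str

@dataclass
class ObjectConcept:
    name: str
    parts: list       # list of Part
    relations: list   # (part_a, relation, part_b) triples

bicycle = ObjectConcept(
    name="bicycle",
    parts=[Part("wheel", "rolls"), Part("handlebar", "steers"), Part("saddle", "supports the rider")],
    relations=[("wheel", "attached_to", "frame"), ("handlebar", "attached_to", "frame")],
)

def part_overlap(concept: ObjectConcept, observed_parts: set) -> float:
    """Fraction of the concept's parts that are present in a new observation."""
    known = {p.name for p in concept.parts}
    return len(known & observed_parts) / len(known)

# A cargo bike we have never seen before still matches "bicycle" through its parts.
print(part_overlap(bicycle, {"wheel", "handlebar", "saddle", "cargo_box"}))  # -> 1.0
```

The point of the sketch is only that structured part-and-relation representations give a basis for generalization that flat pattern matching over whole images does not.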

And there are many more ingredients of human intelligence, such as intuitive physics, intuitive psychology, intrinsic motivation, lifelong learning, language and culture.

[Images from the talk, showing how a current AI system interprets example pictures, are not included here.]

When we consider an autonomous car, we expect it to understand the past and present circumstances of a pedestrian running across a crosswalk and to predict what may happen next, taking into account all the elements of the scene the car is facing (other cars, pedestrians, the sidewalk, and the relationships between them, which matter for keeping everyone safe).

According to Brenden Lake, AI is certainly making a lot of progress, but there is still a huge gap between human intelligence and machine intelligence. If we want to build the next generation of algorithms, AI will need more and more ingredients from our own intelligence, and cognitive science and the study of our own minds will play a key role in developing the next generation of more powerful and more human-like AI.

“On Creativity, Objectives and Open-Endedness”: Kenneth O. Stanley argued that understanding creativity, and building AI that can be creative, is very significant.

First of all, understanding creativity is key to understanding human-level intelligence. Secondly, generating creative and open-ended processes is important for creating human-level AI. We should keep in mind that human intelligence evolved in exactly such an unconstrained, open-ended environment, which produced intelligence itself.

Current optimization-based solutions provide a level of AI that can solve predefined problems and accomplish predefined tasks. Creativity, on the other hand, helps us discover new ways to solve problems and develop new understanding. It is one of the most significant human skills, one that helps us change our perception.


Finally…

During the HLAI 2018 conference, many more technical subjects were discussed, such as research and development processes, their scope and methods, AI algorithms and examples. Combining problem-oriented thinking, design thinking, divergent thinking, creativity methods and intuitive approaches with a machine-oriented approach in all phases of the process is very important.

Even though AI development can make us feel like we are reading a science-fiction novel, it has not yet been defined which main goals of humanity we are trying to meet, as was mentioned in many sessions of the conference. Besides, no consensus has yet been reached on which human values and morals we need to protect, improve or change. Perhaps this uncertainty is the main cause of our fears about the results of future AI solutions. What is the real reason we feel insecure when we think about security, unemployment and so on? Is it AI itself, or do we have doubts about our own values and the humanity that may shape AI? All of these questions that we cannot yet answer are very valuable to ask.

On the other hand, our journey of developing human-level AI creates an opportunity to see that there are many details and skills of our intelligence that have not yet been discovered. We cannot predict whether we will reach human-level AI, but it is a fact that this pursuit strongly motivates us to research human intelligence and the limits of our cognitive skills. Perhaps one day we may call this journey “a journey to understand ourselves” as much as “a journey to human-level AI”, and all of these endeavors may move us forward in the evolution of our own intelligence.


This article was originally published on the SHERPA Blog in Turkish.


