AGI Checklist

Peter Voss
Jan 17, 2017 · 3 min read

Any AGI must, at a minimum, possess a core set of cognitive abilities — as a simple description of human intelligence will confirm. These skills must be implementable in a practical way, i.e., interacting with incomplete, potentially contradictory, and noisy environments using finite computing and time resources.

All of these abilities must be able to operate in real time on (at least) three-dimensional, dynamic (temporal) data, as well as on stimulus-response (causal) relationships. Operations must be scalar, not just binary: degrees of fit, and degrees of certainty.
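As a toy illustration of scalar rather than binary operation, the sketch below returns a graded degree of fit between two feature vectors instead of a yes/no answer. The function name and the distance-based scoring are illustrative assumptions, not part of the checklist itself:

```python
import math

def match_score(a, b):
    """Degree of fit between two feature vectors, in [0, 1].

    1.0 means an exact match; the score decays smoothly with
    Euclidean distance, so the system can express partial matches
    and graded certainty rather than a binary hit/miss.
    """
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return math.exp(-dist)

print(match_score([1.0, 2.0], [1.0, 2.0]))  # exact match: 1.0
print(match_score([1.0, 2.0], [1.1, 2.1]))  # near match: high but below 1.0
print(match_score([1.0, 2.0], [5.0, 9.0]))  # poor match: near 0.0
```

A binary matcher would collapse the second and third cases into a single "no match", losing exactly the gradation the checklist calls for.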

Here is a basic list:

While most examples here are language-based, these abilities must also operate in a purely perception-action mode.

Two other noteworthy requirements are understanding and being able to determine salience.


In addition to the positive checklist, we can also formulate a quick negative reality check:

What does not qualify as AGI?


Salience

Salience — selecting what is relevant and important to a given context and goal — is an important aspect of intelligent systems.

This comes into play at different levels of cognition:

Firstly, in autonomous data selection on input — which senses and features to process and/or ignore, and what level of importance to assign to them for processing. For example, most animals are wired to pay extra attention to fast-moving items in their visual field, and to loud sounds. For AGI we have to assume that much more sensory input will be available than can (or should) reasonably be processed. We must also assume that relevant feature extractors, such as edge or shape detectors, must be prioritized. It seems that some semi-automatic mechanism needs to do this pre-selection. This mechanism should be under overall high-level cognitive control, with parameters that can be preset; for example, to bias it toward changes in color or pitch.
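One way to sketch such a pre-selection stage in code — the function name, feature set, and weighting scheme are all illustrative assumptions:

```python
def select_inputs(stimuli, bias=None, top_k=2):
    """Score incoming stimuli by salience and keep only the top_k.

    Each stimulus carries a dict of scalar features (e.g. speed,
    loudness). `bias` stands in for top-down cognitive control:
    higher-level goals can preset per-feature weights, e.g. to
    focus attention on loudness or on color change.
    """
    bias = bias or {}

    def salience(s):
        # Default weight 1.0; top-down bias raises or lowers it.
        return sum(value * bias.get(feature, 1.0)
                   for feature, value in s["features"].items())

    return sorted(stimuli, key=salience, reverse=True)[:top_k]

stimuli = [
    {"id": "leaf",    "features": {"speed": 0.1, "loudness": 0.0}},
    {"id": "hawk",    "features": {"speed": 0.9, "loudness": 0.2}},
    {"id": "thunder", "features": {"speed": 0.0, "loudness": 1.0}},
]

# Fast movers and loud sounds win by default...
print([s["id"] for s in select_inputs(stimuli)])
# ...but a top-down bias toward loudness reorders attention.
print([s["id"] for s in select_inputs(stimuli, bias={"loudness": 5.0})])
```

The point of the sketch is the division of labour: the filter itself is cheap and semi-automatic, while the `bias` parameters are the hook through which high-level cognition steers it.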

Once input has been appropriately selected and prioritized, pattern matching, categorization, and conceptualization mechanisms need to be selected according to contextual requirements. What matters currently? For example, are we trying to match incoming patterns against each other, or against some internal reference; are we interested in shape or texture patterns; or are we just interested in object collisions?
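The contextual selection of matching mechanisms can be sketched as a simple dispatch — the matcher functions and context names below are hypothetical stand-ins for far richer mechanisms:

```python
def shape_matcher(a, b):
    """Compare coarse shape descriptors (illustrative stand-in)."""
    return 1.0 if a["shape"] == b["shape"] else 0.0

def texture_matcher(a, b):
    """Compare surface texture labels (illustrative stand-in)."""
    return 1.0 if a["texture"] == b["texture"] else 0.0

# The current context or goal determines which mechanism is applied.
MATCHERS = {
    "identify-object": shape_matcher,
    "grip-surface": texture_matcher,
}

def compare(a, b, context):
    return MATCHERS[context](a, b)

cup = {"shape": "cylinder", "texture": "smooth"}
can = {"shape": "cylinder", "texture": "ribbed"}

print(compare(cup, can, "identify-object"))  # same shape: match
print(compare(cup, can, "grip-surface"))     # different texture: no match
```

The same pair of inputs matches under one goal and fails under another, which is precisely the context-dependence the paragraph above describes.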

Higher level goals also need to be selected and prioritized according to salience. What are we trying to achieve right now? What dependencies are there? What is most important in the current context?

Finally, the overall architecture has to allow for consolidation and forgetting. What information or experience should be consolidated? What should be forgotten (or archived)?
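A minimal sketch of salience-driven consolidation and forgetting, assuming each memory item carries a scalar salience score (the data shape and threshold are illustrative, not prescribed by the article):

```python
def consolidate(memory, keep_threshold=0.5):
    """Split memory items into those to consolidate and those to archive.

    Items with salience at or above the threshold are kept for
    consolidation; the rest are archived rather than destroyed,
    mirroring forgetting as de-prioritisation.
    """
    kept = [m for m in memory if m["salience"] >= keep_threshold]
    archived = [m for m in memory if m["salience"] < keep_threshold]
    return kept, archived

memory = [
    {"event": "near-miss with a car",      "salience": 0.95},
    {"event": "color of a passing shirt",  "salience": 0.05},
    {"event": "new route to work",         "salience": 0.70},
]

kept, archived = consolidate(memory)
print([m["event"] for m in kept])      # salient experiences consolidated
print([m["event"] for m in archived])  # trivia archived, not deleted
```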

An AGI needs to have mechanisms in place at each of these levels (and probably some others) to evaluate salience and to adjust cognition accordingly.

Intuition Machine

Deep Learning Patterns, Methodology and Strategy
