The Spaces In-Between Artificial Intelligence and Artificial General Intelligence Development.

‘everything secondhand’ by Nueva Dimension (pen and ink)

When I started out, many years ago, as a trainee teacher, I was asked to decide which age group I wanted to practise with. I started with the little children, at nursery/kindergarten age, because I wanted to know how to teach literacy and communication skills; in particular, I wanted to be able to teach a child to read, at the very start. I figured I could always move up across the age range in practice, which I did, to include working with young people and adults in universities. You see, I thought at the time it would be tricky to move down if I had been trained at the outset with older children, as the basics were expected to be in place with regard to curriculum expectations. It was the right decision. Transitioning across different education systems, and transferring my knowledge and experience within those in-between spaces for adaptation and growth, were the greatest learning moments for me.

And so it is at the very beginning, with regard to the development of Artificial Intelligence (AI) and Artificial General Intelligence (AGI), that I start this piece.

With the aim of keeping the narrative as simple and accessible as possible, I will begin with the emergence and value of the Big Data movement. For an in-depth understanding of the history of the Big Data movement see here. In short, it can be suggested that big data storage took off because of digital information and the internet, specifically in 2005, when Web 2.0 yielded a user-generated web: the majority of content was provided by the users of services, rather than by the service providers themselves, achieved by integrating traditional HTML-style web pages with vast back-end databases built on SQL. It is estimated that 5.5 million people were using Facebook, launched a year earlier, to upload and share their own data with friends.

And now we find ourselves in 2017, which means we see advances in data storage and in computer processing for improved speed and scale, as well as more complex algorithms. All of this provided the next step forward in the development of AI, known as machine learning (algorithms that learn from data).

Remember, I said I chose the early years, the very beginning, to practise in my training as a school teacher; well, that is where we might suggest the application of AI began too: with language development, specifically Natural Language Processing (NLP) software and machine learning. According to Mordatch and Abbeel (2017), by capturing statistical patterns in large corpora, machine learning has enabled significant advances in NLP, for example:

  • machine translation
  • sentiment analysis
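To make "capturing statistical patterns" concrete, here is a minimal sketch of a sentiment analyser that does nothing but count words. The tiny corpus is invented for illustration; real systems learn from millions of examples and far richer statistics:

```python
from collections import Counter

# A toy labelled corpus; real systems learn from millions of examples.
corpus = [
    ("what a great wonderful film", "pos"),
    ("great acting and a wonderful story", "pos"),
    ("a terrible boring mess", "neg"),
    ("boring plot and terrible dialogue", "neg"),
]

# "Learning" here is just counting: how often each word appears per label.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in corpus:
    counts[label].update(text.split())

def sentiment(text):
    """Score a sentence by which label's words it shares more of."""
    words = text.split()
    pos = sum(counts["pos"][w] for w in words)
    neg = sum(counts["neg"][w] for w in words)
    return "pos" if pos >= neg else "neg"

print(sentiment("a wonderful film"))    # pos
print(sentiment("what a boring mess"))  # neg
```

Crude as it is, this is statistical pattern capture in miniature: no rules about meaning, just counts learned from data.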

Let’s stop for a moment to think about that, and return to the concepts of AI and AGI. I define AI as technology that is able to support goal-orientated task completion, such as machine translation and sentiment analysis. I define AGI as technology that uses AI, again to support goal-orientated task completion, but with more capabilities than the specific AI parts it is made up of. I could be wrong. And I might change my mind in the future; however, it is where I am at for the moment.

Over time, it has become clear that AI in its application, whether machine translation or bots that function in a specific way, is limited in terms of interacting with humans, and so simply capturing statistical patterns is insufficient. Perhaps we see such fast growth in bots across many sectors of society because AI capabilities are being pushed towards the development of AGI; in other words, bot development is a testing ground for eventual AGI creations.

To expand, it is worth recognising that there is a difference between AI-powered bots and AI-assisted human agents, the former including conversational computer programs that interact directly with humans, as in customer service. AI-powered bots use deep learning (more on that in a moment) and NLP, which means such chatbots can easily understand and answer consumer questions, handling low-level inquiries at first contact; see article here.

The limitation is that these bots cannot engage in ‘spontaneous’ question-posing activity outside of a given schema, though a question-posing/answering context variable attempts to push this forward; see IBM’s development using a Twilio chatbot and Watson Conversation here, with Tanmay Bakshi (January, 2017). Question-posing involves many dimensions of shared thinking and understanding during human-to-human social interaction. We use, for example, open, closed, rhetorical, tagged, supporting and direct question forms to guide our own and another’s understanding during our sense-making when thinking about what is being communicated. So, what next?

This brings us on to the concept of Deep Learning. Previously, I stated that my definition of AI and AGI is, to put it simply, about goal-directed activity. But, for both, the computational processing is different. It has to be, because AI in general works towards one identified goal end-point; a chatbot that provides answers, for example, has that as its overall goal. Yet, for an AGI agent, the word general adds the notion of processing more than one goal. It is, arguably, this exact difference between machine learning and Deep Learning. Deep Learning neural networks involve layers, and it is this architecture of a layered system that has spaces to afford an in-betweenness for activity.
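As a minimal sketch of that layered idea, the toy network below stacks three transformations, each re-describing the output of the layer beneath it. The weights here are random placeholders, not a trained model; training would adjust them from data:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: a weighted sum of inputs, passed through a non-linearity."""
    return np.tanh(x @ w + b)

# Three stacked layers: each re-represents the previous layer's output.
x  = rng.normal(size=(1, 4))                           # input features
h1 = layer(x,  rng.normal(size=(4, 8)), np.zeros(8))   # first hidden layer
h2 = layer(h1, rng.normal(size=(8, 8)), np.zeros(8))   # second hidden layer
y  = layer(h2, rng.normal(size=(8, 2)), np.zeros(2))   # output layer

print(y.shape)  # (1, 2)
```

The hidden layers h1 and h2 are exactly the in-between spaces the text describes: intermediate representations that are neither raw input nor final output.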

The most accessible way of demonstrating how such a layered network can function, I suggest, is in the following explanation and diagram taken from a source about visual recognition:

If you have not heard of AlphaGo, it is essentially an algorithm designed to play the game of Go, achieved by studying the moves of human experts and by playing against itself, and it ended up winning against the world’s best human players. In addition, recent research (19 Oct, 2017) introduces an iterative algorithm without human data, guidance or domain knowledge beyond the game rules. The research aim is to show how AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games.

In a recent interview (Oct, 2017), David Silver, project lead for AlphaGo at DeepMind, spoke about AlphaGo Zero and its progress, broadcast in a BBC podcast: Tech Tent. A summary of his answer to the question, ‘Is AlphaGo Zero an advance on the road to this vision of generalised artificial intelligence; an AI that can do all sorts of things rather than one task extremely well?’ included:

  1. Evidence
  2. Limitations
  3. Theoretical implications

David Silver stated, in terms of evidence, ‘what we have come up with in AlphaGo Zero is definitely a stepping stone towards a very general system…these systems are genuinely able to discover knowledge for themselves.’

When referring to limitations, he specified, ‘there are still big challenges to come in terms of generalising this to the real world, where the rules are not known, and to systems where there is luck or randomness, or where the exact state is not made available to the system.’

In regard to theoretical implications he stated, ‘in principle…can apply the system behind AlphaGo Zero to any other domain in which you have to plan ahead over a sequence of different actions so as to achieve some goal…applications could include problems in science; or any other problems where there is a simulator that tells us how to progress through a sequence of cognitive steps to achieve a goal’.

There is a common denominator between both iterations of the AlphaGo algorithm, and that is reinforcement learning (more about reinforcement learning later) from self-play. According to Dr. Christopher Berger’s excellent account,

AlphaGo uses an evaluation function to give a value estimate of a given state. AlphaGo uses a mixture of the output of the value network and the result of a self-play simulation of the fast policy network:

value of a state = value network output + simulation result.

This is interesting because it suggests a mixture of intuition and reflection. The value network provides the intuition, whereas the simulation result provides the reflection. The AlphaGo team also tried to use only the value network output, or only the simulation result, but those provided worse results than the combination of the two. It is also interesting that the value network output and the simulation result seem to be equally important.
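A sketch of that mixture, assuming the equal weighting described above (the AlphaGo paper expresses it as a mixing parameter lambda, with 0.5 weighting the two signals equally; the numbers below are hypothetical):

```python
# AlphaGo-style leaf evaluation: mix the value network's estimate
# ("intuition") with the result of a fast self-play rollout ("reflection").
# lam = 0.5 weights the two signals equally, as in the quoted account.
def leaf_value(value_net_output, rollout_result, lam=0.5):
    return (1 - lam) * value_net_output + lam * rollout_result

# Hypothetical numbers: the network thinks the position is only slightly
# good (+0.3), but the rollout ended in a win (+1.0).
print(leaf_value(0.3, 1.0))  # a midway blend of the two signals
```

Setting lam to 0 or 1 recovers the two ablations mentioned in the quote, using only the value network or only the simulation result, which the AlphaGo team found performed worse than the blend.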

Intuition and reflective activity: this is a huge step forward for AI development. Next, I want to consider the three items I have referred to so far in more depth:

  • Processing
  • Learning development
  • Learning Theories (e.g. reinforcement learning)

First, below is a diagram summarising those relationships, ending with a representative block that can be conceptualised as the in-between spaces for evolution:

Those spaces, the interstices, are further detailed in the next diagram, where the representative block is surrounded by the three items which, I suggest, are ripe for research and development.

Learning Theories

Reinforcement learning is best defined in comparison with other types of machine learning, and it is to Medha Agarwal’s work that I now turn. The following descriptions and table explain the detail:

It is notable that all types of learning rely on a variation of inductive and deductive processing. And it is computational learning theory, due to its inductive nature, that studies the design and analysis of machine learning algorithms.
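As a minimal sketch of reinforcement learning itself, here is tabular Q-learning on an invented five-state corridor. The agent is told nothing about the task; by trial, error and reward alone it learns that stepping right is better than stepping left:

```python
import random

random.seed(0)

# A tiny corridor: states 0..4; reaching state 4 earns reward 1.
N_STATES, ACTIONS = 5, [-1, +1]          # actions: step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                      # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise act on current estimates.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate towards reward plus
        # the discounted value of the best next action.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After learning, "right" should score higher than "left" in every state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N_STATES - 1)))
```

Nothing labelled the right answer in advance, which is the inductive flavour the paragraph above points to: the values are induced from experienced rewards.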

Learning Development Modes

In the diagram above, I show learning development modes to include:

  • Communication=Language
  • Kinaesthetic=Motion
  • Cognition=Reflection
  • Affective=Intuition

In brief, we can see these modes of development applied today, and research and development continues to push forward. ‘Hey Siri’ pushes forward with its use of both deep learning and machine learning approaches, including, for example, a variable threshold for deciding on activation and allowances for possible truncation in the way the detector is initialised.

Meanwhile, in July 2017 DeepMind released its work on Emergence of Locomotion Behaviours in Rich Environments: https://www.youtube.com/watch?time_continue=30&v=hx_bgoTF7bs

And, as I outlined above, AlphaGo appears to be pushing forward with both cognition and intuition as well as reflective activity.

Processing

I began this piece with reference to processing; it seems a good place to end, too.

What if the processing that takes place across neural networks could be aided by a chip? This is the proposal behind Intel’s latest design, a neuromorphic chip. It is proposed that such chips could be used to speed up complex decision-making, with the ability to autonomously solve “societal and industrial problems” using learned experiences adapted over time. The neuromorphic chip from Intel has inbuilt analog circuitry; it is unclear whether it is purely analog or mixed with digital in terms of weighting. Analog circuit design is about semiconductor technology, physics and device physics, and electrical circuit theory, control and feedback. Digital circuit design is about Boolean algebra, linear algebra, digital signal processing, and synchronous and asynchronous systems.

Current chips are predominantly of digital design, but current research and development, as outlined with the Intel chip, appears to be in the zone of hybridity, optimised for deep learning; see the research by Miyashita, Kousai, Deguchi and Suzuki (2017).

The inference is that chip development is moving steadily towards quantum computing. The fundamental for analog computation, simulated in quantum computing/mechanics, is Hamiltonian flow: time evolution such that energy is conserved as position and momentum trade off. Other conserved quantities result from alternative canonical pairs (thanks to Lauri Love for this point).
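As a toy illustration of that conservation, assuming a simple harmonic oscillator with H(q, p) = p²/2 + q²/2, a symplectic (leapfrog) integrator lets position q and momentum p trade off while the total energy barely drifts:

```python
# Hamiltonian flow in miniature: a harmonic oscillator,
# H(q, p) = p**2/2 + q**2/2, evolved with a symplectic leapfrog step.

def energy(q, p):
    return 0.5 * p * p + 0.5 * q * q

def leapfrog(q, p, dt):
    p -= 0.5 * dt * q        # half kick: dp/dt = -dH/dq = -q
    q += dt * p              # drift:     dq/dt =  dH/dp =  p
    p -= 0.5 * dt * q        # half kick
    return q, p

q, p = 1.0, 0.0
e0 = energy(q, p)
for _ in range(10_000):
    q, p = leapfrog(q, p, 0.01)

print(abs(energy(q, p) - e0))  # the energy drift stays tiny
```

Position and momentum here are the canonical pair; swapping in a different pair of conjugate variables would conserve a different quantity, which is the point attributed to Lauri Love above.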

And quantum simulation research has been furthered with regard to materials. In his book, Max Tegmark (2017) refers to high intelligence requiring both lots of hardware made of atoms and lots of software made of bits. He poses the question: ‘How can a bunch of dumb particles moving around according to the laws of physics exhibit behaviour that we’d call intelligent?’ I thought about this question for some time. Then I read about a perovskite being any material with the same type of crystal structure as calcium titanium oxide, known as the perovskite structure, or XIIA²⁺ⱽᴵB⁴⁺X²⁻₃. Indeed, recent research has shown that perovskite simulations, used as a quantum material that mimics the brain’s ability to forget, are inspiring whole new algorithms for training neural networks to learn.

To conclude, do you see the interdisciplinary flavour here? Remember, I started out teaching little children to read! Here we have chemistry, physics, computing, and me adding to the topic of AI/AGI. I suggest it is the interstices, the in-between spaces between the development of AI and AGI, that afford this coming together; transitioning across learning domains, as I hope I have demonstrated to some degree.
