Computational Introspection & Self-Awareness

On The Chase for Introspection in Robotics

Could robots ever achieve self-awareness via computational introspection?

--

While thinking about the topic of my PhD, I knew I wanted to work in Artificial Intelligence & Robotics. Self-awareness seemed like an exciting goal, albeit an extremely ambitious one. It is an important topic to investigate in robotics because it can give robots a sense of self, which in turn can increase their autonomy. In this article, I give an overview of the key papers that initially shaped my view and the content of my proposal. From these papers, my idea of introspection as a component of robotic self-awareness took shape.

Definition of Introspection

Let us build up to the definition of introspection that I aim to implement in my work. In very simple terms, introspection is:

The ability to look within.

The literature [1, 2] further shows that an introspective system monitors and optimises its own behaviour.

Introspection is the observation and optimisation of behaviour.

The Context in which Introspection Exists

The context in which introspection operates consists of three components. Dehaene et al.¹ define these components and describe their respective responsibilities. Specifically, the authors argue that current deep neural networks only perform unconscious operations; that is, operations the brain carries out without our awareness. For completeness, the following list comprises the components that Dehaene et al. define as necessary for machines to exhibit “conscious” operations:

  1. C0 — unconscious operations (what deep neural networks currently perform)
  2. C1 — global availability of information
  3. C2 — self-monitoring

Fig. 1 below displays the context within which introspection operates.

Fig 1. A self-aware context within which introspection operates.

Dehaene et al.’s¹ ideas are in line with those of Lewis et al.,² in the sense that introspection is the ability of a system to monitor itself. For instance, we are not aware of the specific operations involved when recognising a face. Our brain’s neural structures may in fact be computing edges, circles, and, in general, the features of a face; still, we are not conscious of this. What we are conscious of is that we see a person we know; the end result, if you will. As humans, we are only aware of the high-level representations that result from multiple transformations of information. Introspection operates above the level of global availability. Self-monitoring operates in a reflexive manner, meaning the system can refer back to itself to obtain information and monitor its ongoing operations.

I think that equipping robots with the components described above may be a step towards eliciting a form of self-awareness in them.

My research will utilise neural networks to a great extent. Although extremely fascinating, current deep learning architectures are not aware of the operations they perform, which means they cannot introspect on them. To introspect, we therefore need a mechanism that allows us to collect information from within the neural network and use it to improve the network’s performance. Multi-modality of information will also likely be critical to introspection. To understand why, let us examine Dehaene et al.’s C1 and C2 definitions.
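
To make this more concrete, here is a minimal sketch of such a collection mechanism, assuming PyTorch as the framework; the layer sizes and names are purely illustrative, not part of my actual architecture. Forward hooks expose a network’s intermediate activations, i.e. they make its internal states observable to an outside process:

```python
import torch
import torch.nn as nn

# A small task network; the layers here are purely illustrative.
task_net = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 4),
)

# Collect each layer's output as it is computed. This is the
# "look within" step: the activations are the internal states
# we want to make available for introspection.
internal_states = {}

def make_hook(name):
    def hook(module, inputs, output):
        internal_states[name] = output.detach()
    return hook

for name, module in task_net.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 16)
y = task_net(x)
# internal_states now maps layer names to their activations,
# ready to be consumed by a separate introspection module.
```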

C1 Consciousness — Global Availability

“The selection of information for global broadcasting which becomes flexibly available for computation and report.”¹

Dehaene et al. argue that there exists a relationship between the cognitive system and a specific object of thought, whereby one can recall, act, and speak about that object. For example, when a car’s fuel tank is empty, a light immediately alerts us to it. We understand the implications of this and act so as to fill up the car; the car itself, by contrast, is not aware of this and thus cannot act upon it.

C1 is where information is pooled from multiple sensory sources, and memory cues are drawn upon, to make a decision. A choice is then made based on the globally available information, and the organism needs to stick to that choice over time, coordinating all its processes to achieve its goal. Such a decision may inhibit the organism’s short-term tendencies with the promise of a long-term reward. In the case of the fuel tank light, humans perceive the light and understand the steps necessary to fill the car up. Machines currently have no such mechanism and are thus unable to respond to changes that will likely affect them.

C1-type consciousness can be described as a global workspace where information is pooled. This workspace, however, has limited capacity: only one item can access it at any point in time. Given that the next component, self-monitoring, corresponds closely to our definition of introspection so far, C1 is an important mechanism for a system to have if introspection is to take place. A globally available workspace will likely enable a system to maintain a form of awareness over the system as a whole.
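
As a toy illustration of this limited-capacity idea, and not a faithful model of Dehaene et al.’s theory, the following Python sketch has several hypothetical processes compete for a workspace that admits only one item at a time:

```python
from dataclasses import dataclass

@dataclass
class Item:
    source: str      # which process produced the item
    content: str     # the information itself
    salience: float  # how strongly it competes for access

class GlobalWorkspace:
    """Limited-capacity workspace: a single item at a time is
    selected and made globally available for broadcast."""

    def __init__(self):
        self.broadcast = None  # the one globally available item

    def compete(self, candidates):
        # Only the most salient candidate gains access; everything
        # else remains unconscious (C0) processing.
        self.broadcast = max(candidates, key=lambda c: c.salience)
        return self.broadcast

workspace = GlobalWorkspace()
winner = workspace.compete([
    Item("vision", "fuel light is on", salience=0.9),
    Item("audio", "radio is playing", salience=0.3),
])
print(winner.content)  # -> "fuel light is on"
```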

C2 Consciousness — Self-Monitoring

“This constitutes a self-referential relationship whereby the cognitive system is able to monitor its own processing and obtain information about itself.”¹

This is where a cognitive system can successfully convey whether it has made a mistake. Humans, for example, attach a sense of confidence, a probability of correctness, or, more generally, an uncertainty to the decisions they make. The analogy I use to explain this is those verbal jokes where someone tricks you into a swift answer, and as soon as you answer wrongly, you realise you have made a mistake. The authors attribute this ability to specific neural circuits in the prefrontal cortex that may have evolved to monitor the performance of others.
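
A crude computational analogue of this confidence signal, and only a common proxy rather than the mechanism Dehaene et al. describe, is the entropy of a classifier’s output distribution: peaked outputs suggest confidence, flat outputs suggest the kind of uncertainty that should trigger self-monitoring. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    """Entropy of the softmax distribution: a rough proxy for
    how uncertain the network is about its own decision."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

confident = torch.tensor([[8.0, 0.1, 0.1]])  # peaked -> low entropy
unsure = torch.tensor([[1.0, 0.9, 1.1]])     # flat -> high entropy
print(predictive_entropy(confident))  # ~0.0
print(predictive_entropy(unsure))     # ~1.1 (close to log 3)
```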

Conclusion

Well, how can we perform introspection in the case of Artificial Intelligence? The aim of my PhD is to answer this question by adopting the literal meaning of the word introspection. The definition I use in my research, synthesised from the papers I have read, is the following:

Introspection is the observation of a system’s internal states, about which the system gathers information to improve and optimise its behaviour.

What I learned from reading these papers, among others, is that a self-monitoring approach can help improve robots’ autonomy. I therefore aim to develop separate neural network architectures that utilise information from the neural networks we aim to improve. This may be an exciting direction to follow while on the chase for introspection in robotics and in Artificial Intelligence.
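
To sketch what such a separate architecture might look like, and this is only how I currently picture the direction rather than a finished method, an introspection network could take the task network’s internal states (collected, for instance, with hooks like those shown earlier) and learn to predict whether the task network’s answer is correct. The dimensions and training data below are hypothetical placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions: the task network exposes a 64-dim
# internal state; the introspection net maps it to a single
# "will the task network be correct?" logit.
introspection_net = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

def introspection_loss(hidden_state, task_was_correct):
    """Train the monitor to predict the task network's mistakes
    from its internal state alone."""
    logit = introspection_net(hidden_state)
    target = task_was_correct.float().unsqueeze(-1)
    return F.binary_cross_entropy_with_logits(logit, target)

# Example step with dummy data standing in for real task runs.
hidden = torch.randn(8, 64)                 # batch of internal states
correct = torch.randint(0, 2, (8,)).bool()  # did the task net succeed?
loss = introspection_loss(hidden, correct)
loss.backward()
```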

This is me.

My name is Nikolas Pitsillos and I am currently a member of the Computer Vision & Autonomous Systems Group at the University of Glasgow. I am a PhD student investigating how to introduce computational introspection in robotic systems.

References

[1]: Stanislas Dehaene, Hakwan Lau, and Sid Kouider. What is consciousness, and could machines have it? Science, 358(6362):486–492, 2017. https://science.sciencemag.org/content/sci/358/6362/486.full.pdf

[2]: P. R. Lewis, A. Chandra, S. Parsons, E. Robinson, K. Glette, R. Bahsoon, J. Torresen, and X. Yao. A survey of self-awareness and its application in computing systems. In 2011 Fifth IEEE Conference on Self-Adaptive and Self-Organizing Systems Workshops, pages 102–107, Oct 2011. https://www.cs.bham.ac.uk/~xin/papers/LewisAWARE2011.pdf
