Artificial Intelligence

A brief consideration of variables

The aim of this paper is to briefly outline a few variables that can be linked to the study of artificial intelligence (AI). First, I refer to the Human Genome Project (HGP) and its limitations. Second, I address some thoughts of an AI researcher. Third, I funnel down and unpick the concepts of cognition and meta-cognition. Finally, all three strands are brought together to consider a way forward.

In very simple terms, the HGP set out to map and sequence the entire human genome.

According to Coulter (2001), the most damaging effect of the HGP was a lack of multi- and interdisciplinary research: an insularity of domains that cut the project off from cross-paradigm, cross-disciplinary science.

Why is this important?

When innovative models are applied in practice, there is always a reliance on old skill sets and knowledge made to fit within an existing framework; that framing can stifle development, or produce outcomes constrained by prior predictions and hypotheses. Let me elaborate. Writing in 2001, Coulter acknowledged a very specific limitation of the HGP:

‘the project has enjoyed good press coverage and favorable reporting, including considerable exaggeration of its scientific achievements. Even among medical practitioners, the response has largely been enthusiastic despite the obviously inadequate training physicians have in genetics’

This makes sense: a totally new application of a model in science demands totally new knowledge and skills if something new is to emerge. I would wager that the primary constraint holding back the science then, as it does the majority of scientific inquiry today, is the way in which quantitative data, and now big data, determines the validity of outcomes. Coulter, writing sixteen years ago, outlined the need for a qualitative approach:

‘medicine should involve the patients’ understanding and experience of their illness. Biomedicine omits the person to whom the body belongs, the person whose body it is.’

The fundamental point being made here is the need for more voices to inform scientific research, whether through cross-disciplinary work and/or through participant and public agency informing studies.

The Thoughts of an AI Researcher

When we think about computer science, that is, the development of software, the design, the building of infrastructure and so on, a common way of understanding what goes on is the layered approach. And a common pitfall often arrives in the guise of what some might term ‘lost in translation’. Hintze (2017), an AI researcher, pitches this perfectly:

‘engineers layer many different components together. The designers may have known well how each element worked individually, but didn’t know enough about how they all worked together.’

Hintze goes on to cite the RMS Titanic, NASA’s space shuttle and the Chernobyl nuclear power plant as examples of system failure with unintended consequences. Again, these were systems built on already established skills and knowledge, when the build itself demanded new kinds of knowledge and skill, the kind we only acquire once an application goes wrong. This is a dilemma, yes?

One way of considering this dilemma, whether we think it matters or not, is through a general and deeper understanding of human cognition and meta-cognition. Why? Because, as I have outlined, working with old paradigms, methods and ways of seeing is a constraint. In terms of AI development, Hintze proposes that AI research is heading for exactly the same kind of paradigmatic system failure:

‘I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.’

Cognition & Meta-Cognition

I have read for a few degrees involving in-depth study of cognition and meta-cognition, and I have applied my own models across education research and in practice. There is some great literature on these concepts with regard to human endeavour, stemming back to the sixties with the likes of Flavell (e.g. 1963/71), Piaget (1970s/2001), Pea & Perkins (1993), Hacker et al. (2000), Kluwe (1993) and Kennewell (2000). All focus on the development of thought and how it proceeds: specifically, how cognition works and operates so that higher-order thinking can take place.

Notwithstanding the inclusion of our surroundings and the objects we engage with, including technology (Pea, 1993; Perkins, 1993), we develop cognition, it is suggested, through deliberate, planned, goal-directed thinking, as set out in Piaget’s (2001) conceptualisation of formal operations. This enables higher-order levels of thought to operate on lower-order levels (Hacker & Niederhauser, 2000). And so it goes: formal operations ‘are operations performed upon the results of prior engagement with organised concrete operations’ (Flavell, 1963). It is the processes that ‘monitor the selection and application (declarative knowledge), as well as the effects of solution processes to regulate the stream of solution activity, which represent meta-cognitive procedural knowledge’ (Kluwe, 1982), leading to formal operations that constitute a kind of meta-thinking, i.e. thinking about thinking itself (Flavell, 1971).
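One way to make that monitor-and-regulate relationship concrete is a toy control loop: an object-level process does the solving, while a meta-level process watches the stream of solution activity and adjusts the strategy. The Python sketch below is purely my own illustration of the relationship, with invented names and numbers; it is not drawn from Kluwe, Flavell or any of the works cited above.

```python
# A toy sketch, not a model from the cited literature: it caricatures the
# distinction between lower-order "solution activity" and the higher-order
# processes that monitor and regulate it. All names and numbers are invented.

def object_level_step(state):
    """Lower-order process: one concrete step toward a goal
    (here, moving a value a fraction of the way to a target)."""
    state["value"] += (state["target"] - state["value"]) * state["step_size"]
    return state

def meta_level_step(state, error_history):
    """Higher-order process: monitors the stream of solution activity
    and regulates it, here by enlarging the step when progress is slow."""
    error_history.append(abs(state["target"] - state["value"]))
    if len(error_history) >= 2 and error_history[-1] > 0.5 * error_history[-2]:
        state["step_size"] = min(1.0, state["step_size"] * 2)  # regulate the strategy
    return state

state = {"value": 0.0, "target": 100.0, "step_size": 0.1}
error_history = []
for _ in range(50):
    state = object_level_step(state)               # cognition: doing the thinking
    state = meta_level_step(state, error_history)  # meta-cognition: thinking about that thinking

print(round(state["value"], 2))
```

The point of the sketch is only the division of labour: the object level never inspects itself, while the meta level never touches the problem directly, it only observes and steers.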

Movement Forward

In drawing together the literature, the voices of others and a cross-disciplinary approach in my writing, I hope I have modelled the possibilities that exist for a new wave in our approach to the development, and the ethical consideration, of future AI research. Simply put, there has never been a greater need for reflective activity, for thinking about our own thinking, if we wish to develop AI safely and humanely.
