Should We Program Learners Like Robots?

Is learning simply programming? Can you plug in a certain set of inputs and get out the learning outcomes you want?

Last time around I wrote about emergent learning, or emergent curriculum as it’s also called. If learning is an active process, then emergent learning is a reasonable description of what that action can look like. I also spoke about affordances: the opportunities available for a learner to act within the learning environment. The more affordances a learner has, the more opportunities there are for learning to happen (assuming the learner is aware of them and can access them).

A popular view of human cognition treats the mind as a computer, separate from the body. In this view, the brain is hardware into which inputs can be programmed to achieve a specific set of outputs; it is rule-based and logic-driven. To understand cognition, say cognitivists, you need only focus on internal processes.

Contrast that with an embodied approach to cognition, which says that not only is the mind connected to the body, but the body also influences the mind. That is, cognition is not limited to what our eyes can see and our brains can process. To be embodied means that our cognition arises from our physical interactions with the world.

The classic cognitivist view sees thought as the ability to grasp the meaning of symbols by using a set of rules that ensure those symbols appropriately represent the world. On the other hand, embodied cognition sees thought as a result of our interaction with our environment. Action not only grows from thought but thought can also grow from action.

With embodiment, each environment presents a different set of opportunities (i.e., affordances) and information for interpreting those opportunities. And each of us possesses a different set of capabilities (i.e., skills) for learning. When environmental affordances and learner capabilities interact within a group of learners, the result can be a variety of learner responses. It may be that no two learners have exactly the same response or answer to a question or problem. And it’s not that some of those responses are ‘completely wrong’ and others are ‘exactly right’. It’s more that there is a latitude of acceptable answers, and learners will fall somewhere within that spectrum. Where each learner falls depends on how their learning capabilities interacted with the affordances of the learning environment.

And that is difficult to predict. This could be one reason why setting up a learning environment doesn’t always achieve the learning outcomes you thought it would.

In a February 2012 post in Psychology Today, Dr. Jeff Thompson provides a great way of comparing the traditional non-embodied approach with the embodied approach, using two different types of artificially intelligent robots. One type of AI-based robot is programmed using what the Internet Encyclopedia of Philosophy calls a stored-description model. This requires programmers to anticipate what the robot will encounter and then spell out a specific routine for each of those scenarios. The result is a robot that executes quite nicely as long as it can easily recognize, from environmental cues, which routines are needed. If there is ambiguity, or if someone or something purposefully tries to alter its course, it becomes slow or even unresponsive.

A significant problem is that these robots cannot adapt quickly or efficiently to emerging changes in their environment. Contrast that with a second type of robot, built on a generic set of overarching principles rather than specific routines. It can respond much more quickly in real time because it is programmed to recognize general phenomena (e.g., walls) rather than specific targets (e.g., the wall with the door in it).
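The contrast between the two programming styles can be sketched in a few lines of code. This is purely illustrative, not drawn from any real robotics system: the first robot keeps a lookup table of pre-scripted routines and stalls on anything its programmers didn’t anticipate, while the second applies one general principle to whatever it senses. All function and scenario names here are hypothetical.

```python
# Illustrative sketch only: two ways to 'program' a robot.

def stored_description_robot(scene: str) -> str:
    """Stored-description style: one pre-scripted routine per anticipated scenario."""
    routines = {
        "wall with a door": "steer through the door",
        "open corridor": "drive straight ahead",
    }
    # Any scene the programmers didn't anticipate leaves the robot stuck.
    return routines.get(scene, "halt: no routine for this scene")

def principle_based_robot(obstacle_distance: float) -> str:
    """Principle-based style: one overarching rule, applied to anything sensed."""
    # Works for walls, doors, people -- any obstacle, whether or not it was named in advance.
    if obstacle_distance < 1.0:
        return "turn away"
    return "drive straight ahead"

print(stored_description_robot("wall with a door"))  # steer through the door
print(stored_description_robot("crowd of people"))   # halt: no routine for this scene
print(principle_based_robot(0.5))                    # turn away
```

The first robot is only as capable as its programmers’ predictions; the second trades scenario-specific precision for the ability to keep moving in situations no one scripted.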

I can’t help but want to relate this to teaching and learning. I picture traditional modes of instruction, like the transmission model, as similar to the stored-description model of programming robots: you provide all the specific definitions and facts, and as long as the inputs request those definitions or facts, the outputs should reflect an understanding of the information presented.

However, ask learners to demonstrate a deeper understanding of that information and, just like the slow or unresponsive robots, they struggle to respond in appropriate ways. How we ‘program’ learners, then, should match how responsive and adaptable we want them to be in the environments they will encounter in their day-to-day lives. Since the living and working environments of most of us are complex and constantly changing, it seems logical that our learning environments should prepare learners for that.