“Against the natural order of things”: why eLearning refuses to take off
I’ve come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.
Douglas Adams, The Salmon of Doubt
The pace of change
I recently attended a talk by someone who enthusiastically told the audience about all the “new” things technology would soon allow us to do, and how it would transform education. He pitched it not just as disruption, which can be a positive, but (rather gleefully) as destruction; eLearning, he told us, sounded the death knell for universities as we know them.
There were distinct groups in the audience. When he predicted we would soon be embracing tools such as Twitter in our teaching I could tell many in the room had, like me, been using it as a teaching tool since it first appeared and were somewhat surprised that it was still worthy of being called “new”. However, there were some in the room who had clearly never heard of Twitter, never mind used it.
I have been to several talks like this over the past 15 years or so, and they never seem to get much further than predicting how technology will make everything we are doing now look silly. The message is the same; only the technology they use to project their predictions on the screen changes: in the early days they used slides on a carousel, then OHPs, then data projectors dragged around in a suitcase on wheels, and more recently wirelessly transmitting images to a large screen.
I find myself wondering why eLearning is still seen as new, despite the fact that I and many others have been doing it for two decades or more. I couldn’t quite believe that one audience member admitted at the end of the talk that he had never heard of MOOCs, and I was shocked that the idea of using an iPad to hold a video call with a student on the other side of the planet was still seen as revolutionary. There is a large group of educators who remain outside the bubble and who are almost blissfully unaware of what is going on. When told about it, they respond as if they had been told that their favourite brand of washing detergent had a new improved formula: with polite but mild annoyance.

But there is another group for whom the basic concept of eLearning is still the subject of intense debate and a large amount of “FUD” (fear, uncertainty and doubt) gleaned from articles about plagiarism (students just copy their essays off Wikipedia!), high drop-out rates (nobody finishes an online course!), heavy workloads (I’ll have to assess 3,000 students!) and plots to do away with academics altogether (if I put my course online the university won’t need me any more!). All of these arguments, in one form or another, have followed me throughout my career as a lecturer, manager and advocate of technology-assisted or -facilitated learning.
Technology has changed drastically since I started teaching. In 1999 there was no broadband, most monitors were 256 colours at 640x480 pixels, Apple was about to go out of business, AOL was most people’s idea of what the internet was, and getting your email meant dialing up on a very loud modem, grabbing your messages as quickly as possible and then disconnecting before your phone bill mimicked the national debt. But the debates surrounding eLearning have hardly changed at all. We are still experimenting and wondering if any of this will ever catch on when, by Douglas Adams’s maxim, the vast majority of academics should be more than comfortable with technology in teaching and learning.
In this article I want to explore why it is that eLearning hasn’t taken off in quite the way many have predicted at various times over the last century. Is it because of an inbuilt ludditism among academics? Or overpromising on the part of enthusiasts? Or is there, as I suspect, a fundamental problem with the way technology is talked about?
Two anecdotes about the future from the past
A former colleague worked in computing in the pre-desktop days, back when a computer was expected to take up a large part of a room. He worked for a firm that was producing computers that could in theory sit on the corner of a desk. That was quite radical. But in order to sell them, the firm discovered that it had to put a concrete block inside the casing to make each machine extraordinarily heavy.
In 2013 I sat in a packed cinema to watch the 1966 science fiction film Daleks’ Invasion Earth 2150 A.D. The hero takes apart a mind-controlling device and, poking around among wires and large transistors, describes it as “highly advanced”. The audience burst out laughing. But I remember watching that same scene on TV as a child in the 1970s and being rather horrified by it. At some point between 1966 and 2013 that scene went from terrifying to laugh-out-loud funny.
Why did early desktop PCs need to be heavy to be accepted, and why did an electronic mind-controlling device go from being horrific to being comical?
I’ll return to these questions later.
Innovators versus luddites
Looking back, I was both cursed and blessed to be born when I was. Blessed because of all the new inventions coming out, but cursed because I seemed to work with people who could not see the potential.
For my first job interview, as a layout artist, I arrived with a portfolio full of leaflets I had created in the then-new Aldus PageMaker on an Apple Macintosh SE. The man interviewing me launched into a long lecture on how I had wasted my time, and on how what he needed was someone who could use a scalpel and cow gum. As far as I know he was out of business eighteen months later.
A few years later, in an attempt to persuade the FTSE 100 company I worked for to take this new thing called the World Wide Web seriously, I demonstrated a site I had created in my spare time. Our finance director dismissed the idea, stating that the money we would need to spend on it (a few thousand pounds) would never be recouped, and that the web was a fad that would never replace traditional stores. Today that company’s website makes more money than all its stores put together.
It was shortly after this that I made the move into education, to help set up an online course. The year was 1999 — in technological terms it might as well be a hundred years ago — and the idea of online learning had the whiff of science fiction about it. The stumbling blocks I ran into have remained the same to this day: fears that this was a way of getting rid of teachers, protestations that people could not learn without face-to-face interaction, and suggestions that while it might work for other disciplines, it would never work for (insert any discipline here).
This reaction to technology, particularly among people whose way of life or jobs are potentially disrupted, is nothing new. In 19th-century Britain, textile workers reacted strongly to new machinery that threatened to hand their skilled labour over to far cheaper unskilled workers, and to increase the supply of cloth, making it easier and therefore cheaper to obtain. This group gave its name to the largely pejorative term “luddite”, which has come to mean anyone resistant to change.
The five stages of grief
Reactions to technology could be compared with Elisabeth Kübler-Ross’s famous “five stages of grief”: denial and isolation, anger, bargaining, depression and, finally, acceptance. However, the idea behind the five stages is that people move through them, meaning that, when it comes to eLearning, the deniers should eventually come to accept change. But while I have witnessed a great deal of denial and anger during both my careers, in design and in education, in my experience the educators who display those responses do not eventually accept or even embrace it — the anti-eLearning, and the indifferent, are quite distinct groups from the enthusiasts.
The diffusion of innovation
Responses to eLearning are not following what the textbooks tell us to expect. Everett Rogers’s conceptualization of the “diffusion of innovation” (Rogers, 2003) will be familiar to many. It breaks down the adoption of new ideas, products or services into a process that moves through the population via distinctive groups:
* Innovators (2.5%)
* Early adopters (13.5%)
* Early majority (34%)
* Late majority (34%)
* Laggards (16%)
In this model, a new product first has to be taken up by the innovators, who give it something of a “shakedown” and, hopefully, evangelise it to their friends, who become the early adopters once it is more widely available and cheaper. The laggards are the last group to catch up and are, in the words of Simon Sinek, author of Start With Why (Sinek, 2011), the kind of people who only bought touchtone phones because manufacturers stopped making ones with dials. By the time the laggards have adopted something, everybody else has already moved on to something else.
While the diffusion model is useful, something odd seems to be happening when it comes to eLearning. Despite the fact that some of us have been involved in eLearning for 20 years or more, we don’t seem to have moved much beyond the early adopters. My guess is that only around 16% of educators (the innovators plus the early adopters, in Rogers’s terms) are actively embracing eLearning.
Predicting the future of technology
At the time of writing, YouTube has a growing collection of “predictions of the future” videos going back to the 1920s (see the General Motors ride at the 1964 World’s Fair for one example). One such video, from Microsoft (Microsoft, 1999), gives an end-of-century view of the smart home of the future. Time has not been kind, and the things it got right are lost among the things it got wrong (wait for the scene with the “pocket” PC).
A more recent video is a live presentation of Samsung’s home of the future at CES 2014 (Samsung, 2014). It shows a number of devices that at the time were close to being available but, rather than using a family setting as Microsoft did, it focuses on a single professional woman in her 30s.
Another video, produced by NTT (Nippon Telegraph and Telephone), is well worth a few minutes of your time (NTT, 2013). It shows how NTT thinks technology will be used in a variety of situations such as remote conferencing, education, disaster relief and medicine. It is a typical example of the corporate prediction genre, which essentially groups quite separate ideas together under a (very) loose narrative.
A fourth video is rather more famous and dates from 1987: Apple’s Knowledge Navigator (Apple, 2011), created for a presentation to higher education managers.
Apple’s video directly addresses the future of education and the way technology would change it. Given the audience for the video, it focuses not on teaching but on an academic checking his email, doing research, and video conferencing with a colleague in another country.
The video gradually attracted a great deal of attention, leading Apple to produce another in 1988, this time focusing on students, with various speakers predicting the future of computers in the classroom (Apple, 2011).
Two minutes into the video, a child is giving a presentation about volcanoes to his peers using what we would now recognize as an iPad. The video is full of predictions that turned out to be accurate, but also a few misses.
Why technologists are the wrong people to predict future technology
For the first seven minutes, Apple’s 1988 video seems rather prescient. The things it shows are things we now take for granted. But then something odd happens. A woman is designing a new aircraft engine on screen, using computer visualisations to model the effect of different nozzle shapes. This kind of thing certainly happens today. But she is talking to the computer, asking it to make the changes, instead of directly manipulating the designs with a pen or mouse. For this viewer at least, there is something of a problem here, and it’s not because I’m a luddite; it’s because this prediction goes against the way design is done.
Microsoft’s “home of the future” video also contains things we either accept today or are looking forward to in the near future. But the video leaves me cold, not because of the technology, but because of the relationships. Microsoft’s video is a prediction of the home of the future, when what it really needed to be was a prediction of the family of the future. Technology should not be the thing that defines family interaction; it should be the thing that enables it. That requires social scientists, not computer scientists.
Samsung’s presentation eschews relationships entirely and focuses on the idea of saving time. All the home’s gadgets are automated and controlled remotely. But the ideas themselves are unappealing because, at the end of the day, all that happens is that the user gets home and falls asleep alone on the sofa. That is not a life to which many would aspire. Like Microsoft, they are selling the technology, not the life.
To explain why the Apple video stands out for the wrong reasons, it is worth thinking about the success and failure of educational games. Gabe Zichermann, in a potted history of gamification (Zichermann, 2010), points to the 1980s educational game Where in the World Is Carmen Sandiego? and calls it “the first and last time that parents, teachers and children all agreed that a game was good for them”. But it is also, argues Zichermann, “the first and last time that an educational game was a good game”. Why? “Because parents and teachers got involved in the design of edutainment titles. Kids can smell that shit a mile away. It’s not fun anymore. It’s work.” Carmen Sandiego was a good game because it was made by people who understood games. And that’s key: let family specialists explain how families work, let gaming specialists create games, and let educators figure out what to teach and how, adapting tools to help as they see fit.
In the Apple video, up until the point where the designer starts telling the computer to change her concept, the people whose ideas were being realized on screen were educators expressing ideas of which they had direct experience. Those parts worked: they understood how people learn and interact. But the section on how computers would revolutionize design was a non sequitur, arising from one contributor’s belief that voice input was far superior to keyboard input. He was talking about words — dictating text; for some reason the video’s director applied the idea to a field of which, I would wager, he knew little: design. Imagine showing an artist of the future creating a portrait by instructing the computer to “add hair, make it shorter, more wavy, make it flick across the left eye”. Or a writer creating a novel not by dictating the words she wants it to transcribe, but by telling the computer to “add more suspense”.
Designers think with their hands, articulating their thoughts through visualization and physical prototypes, not through speech. This is why my interviewer was wrong: he rejected a new tool in the belief that it was an attack on his craft. But tools don’t do anything on their own — they still need to be mastered and applied in appropriate situations.
This helps to explain why many predictions of the future fail: not because the technology itself will not materialize, but because the people doing the predictions are not experts in the situations or domains they are aiming to affect. They develop tools without watching the way people work. This is why their visions strike us as funny, odd, or even offensive. And it’s why when a technologist tries to tell a designer, a doctor or a teacher “you will work like this in the future” they laugh.
To use Douglas Adams’s explanation of why some people refuse to accept technology in their lives, it is “against the natural order of things”. But while Adams focused humorously and self-deprecatingly on age, the slow adoption of technology in teaching is less about how old the teachers are (I witness acceptance and rejection equally across all age groups, some of it ludditism but much of it not), and more about the nature of the thing being changed.
For those who are focused on teaching in universities, it is often the human interaction that matters most. But many people working in universities did not become academics in order to teach: they are focused on research. As Terry Pratchett puts it (Pratchett, 1994):
Many things went on at … University and, regrettably, teaching had to be one of them. The faculty had long ago confronted this fact and had perfected various devices for avoiding it. But this was perfectly all right because, to be fair, so had the students.
So telling someone that technology can replace the need actually to talk to students may be greeted more enthusiastically by those who are not focused on teaching than by the teachers themselves.
But showing the teaching enthusiasts how technology can enhance, rather than replace, the things they value is a far better approach than effectively insulting and threatening them, which is how many evangelists come across, because they are evangelizing the technology, not the teaching (the “e”, not the “learning”). So long as eLearning resides within the purview of school and university IT departments, or of technology companies, it will never get past the educational innovators and early adopters, who are operating largely independently anyway.
The problem with hype
All of the above demonstrates why visions of the future fail to engage when they are proposed by technologists rather than by actual users. Apple’s video is the only one that appears to draw on experts, but even it fails to include teachers or students except as characters. Students and teachers are, to borrow from sociology, “actors”, not “characters”. Technologists who ignore the difference are doomed forever to predict things but never meaningfully to change anything.
The videos cited above are somebody else’s vision, and the further removed that vision is from reality, the less enticing it becomes. A kind of “uncanny valley” (Eveleth, 2013) is in effect. Microsoft’s family of the future seems not to have any fights; it has a family room, an entertainment room and a music room: almost, but not completely, unlike my home or that of anyone I know. The kitchen is spotless. Is yours? And because I reject the scenario, I reject the things that apparently create it. Similarly with Samsung’s presentation: if the future means I am rushing to work in the morning so desperately that I remember to put a single shirt in the wash but forget to turn on the machine or to turn off the air conditioning, and then get home so late that I don’t even have time to eat before falling asleep drooling onto the sofa, that is not a great vision. I reject it. As for NTT’s video — it seems to promise a glorious future in which we all get to attend extraordinarily dull meetings while sitting isolated in our cubicles. Reject.
The hype cycle
This rejection is predicted in another model, Gartner’s “hype cycle” (Gartner Inc., 2013), which tracks a technology from an initial trigger, up to a “peak of inflated expectations”, down into a “trough of disillusionment”, then up a “slope of enlightenment” towards a final “plateau of productivity”.
Gartner have applied this cycle to many sectors, including education (Lowendahl, 2013). In that report, Gartner show technologies such as education tablets, mashware and affective computing as being “on the rise”; gamification, MOOCs and adaptive learning as being “at the peak”; and e-textbooks, cloud email and virtual environments as “sliding into the trough”. Meanwhile, lecture capture, retrieval tools and open-source repositories are “climbing the slope”, with eBook readers and self-publishing “entering the plateau”.
There is something about the idea of the hype cycle that is instantly recognizable. In particular, the notion that technology hits a “peak of inflated expectations” rings uncomfortably true. But how does it explain why eLearning is not following the traditional diffusion of innovation? Why is it stuck with the early adopters?
If you overlay the hype cycle on the diffusion of innovation, the trough of disillusionment falls right at the point where the early adopters hand over to the early majority. And the “slope of enlightenment” never arrives, because the innovators have already moved on to the next big thing.
In other words, the reason why eLearning in all its various guises has failed to get beyond that initial 16% is because of the hype. There’s a disconnect between the promise and the reality, largely because the people doing the promising are unfamiliar with the practical realities of the situation they are seeking to change.
I think there’s a simple explanation for that.
Think back to the two stories that opened this article: the desktop computers that had to have concrete inside them, and the mind-controlling device that was scary in the 1960s and 1970s but funny in 2013.
The desktop computers my colleague sold in the early 1980s had to be heavy because, while our minds could comprehend something the size of a room being miniaturized, we had a harder time imagining that it could also be portable. Particularly at those prices. “Always be wary of any helpful item that weighs less than its operating manual.” (Pratchett, 2006).
The mind-controlling device in the 1960s film could have been depicted as a box of pulsing lights (as it might be today) with the same narrative effect, but with no emotional effect. Instead it was depicted using exactly the same kind of components you would have seen on opening any radio or TV of the day. That is what made it believable and, as a result, scary. My radio can control my mind? That’s terrifying. By 2013, of course, the “sophisticated” electronics looked dated, and so the idea the audience was supposed to focus on — mind control — was smothered by the way the message was communicated. But the point is that, at the time, the idea made sense because it was in the hands of a storyteller telling a story in a way that made sense to contemporary audiences, not a technologist pitching an idea about a future to which nobody could relate.
And this is what happens when we talk about the future of technology in education — good, potentially revolutionary ideas are lost because the people with the message cannot tell good stories, and often know little about their audience.
We have certain expectations in life, and we hold certain values and beliefs. To convey a complex or new idea, it is usually best to position it within those expectations, or to connect it to those values. It may come across as dated in the future, but that is not a problem if your task is to make a difference today. If you want eLearning to take off, don’t tell someone a story about somebody else in the future; tell a story about them and their students today. The innovators and early adopters occupy two overlapping camps: those who love technology (and so will give anything a go), and those who enjoy an adventure and a bit of risk. The next group, the early majority, like a good story too — but they want somebody else to write it for them, and they want no risk.
Revolutions do not start with a PowerPoint presentation or a slick video and a ridiculing of the audience; they start with a belief held by that audience, and an urge to use that belief to enhance or change something. eLearning will only get beyond the innovators and early adopters when it stops being pitched as revolutionary in itself. Technology is not the revolution; education is. And we need to get excited again about what we want to achieve in our teaching and in our students’ learning before we get excited about the technology.
This will cheer you up
I wanted this article to be positive. eLearning offers us so much, and I consider myself an advocate of its use not just in enhancing existing provision but in widening access to excellent educational opportunities for those who currently do not, or cannot, access them. That means the shop worker in any town in the USA or the UK as much as the stereotypical teenager in a developing country.
I cited a few videos about the future — now let me cite a couple about the recent past that will simultaneously amuse and distress you.
Douglas Adams missed a line about anything invented before you were born.
Adams, D. (2012). The Salmon of Doubt: Hitchhiking the Galaxy One Last Time. London: Pan.
Anon. (2006). learndirect. Retrieved June 2, 2014, from Wikipedia: http://en.wikipedia.org/wiki/Learndirect
Apple. (2011, April 13). Apple’s Future Computer: The Knowledge Navigator. Retrieved July 17, 2014, from YouTube: https://www.youtube.com/watch?v=9bjve67p33E
Apple. (2011, October 18). Knowledge Navigator Implications — Apple 1988. Retrieved July 17, 2014, from YouTube: https://www.youtube.com/watch?v=VWlA_cDE5RU&feature=youtu.be
Colligan, B. (2011, November 20). How the Knowledge Navigator video came about. Retrieved July 17, 2014, from Dubberly Design Office: http://www.dubberly.com/articles/how-the-knowledge-navigator-video-came-about.html
D’Orazio, D. (2014, June 8). Microsoft’s 90s vision for the smart home looks a lot like today. Retrieved June 8, 2014, from The Verge: http://www.theverge.com/2014/6/8/5790798/microsoft-1999-vision-of-the-smart-home
Eveleth, R. (2013, September 2). Robots: Is the uncanny valley real? Retrieved May 5, 2014, from BBC Future: http://www.bbc.com/future/story/20130901-is-the-uncanny-valley-real
Gartner Inc. (2013). Hype Cycles. Retrieved July 17, 2014, from Hype Cycles: http://www.gartner.com/technology/research/methodologies/hype-cycle.jsp
Lowendahl, J.-M. (2013, July 25). Hype Cycle for Education, 2013. Retrieved July 17, 2014, from Gartner: https://www.gartner.com/doc/2559615
Microsoft. (1999). Microsoft Smart Home. Retrieved July 17, 2014, from YouTube: http://youtu.be/9V_0xDUg0h0
Microsoft. (2013, July 14). Microsoft’s Concept — Future Vision 2020. Retrieved July 17, 2014, from YouTube: https://www.youtube.com/watch?v=ozLaklIFWUI
NTT. (2013, June 12). The Future of ICT. Retrieved July 18, 2014, from YouTube: https://www.youtube.com/watch?v=GpJ36KzHJG4
Pratchett, T. (1994). Interesting Times. London: Corgi.
Pratchett, T. (2006). Jingo. London: Corgi.
Rogers, E. (2003). Diffusion of Innovations. Free Press.
Samsung. (2014, January 9). Samsung Smart Home at CES 2014. Retrieved July 18, 2014, from YouTube: https://www.youtube.com/watch?v=mEzSF29EBgI
Sinek, S. (2011). Start With Why. Portfolio Trade.
Zichermann, G. (2010, November 1). Fun is the Future: Mastering Gamification (Google Tech Talk). Retrieved July 17, 2014, from YouTube: http://youtu.be/6O1gNVeaE4g