Mobile Learning and Contested Digital Spaces: deception, demonstration, debate.


Authors: David Longman and Professor Sarah Younie

May 2018

Abstract: This paper is a critical overview of some current perspectives on educational technology, with specific reference to the use of mobile digital devices as a platform, and outlines a model of critical method, the ‘Deception, Demonstration, Debate’ framework. The role of digital technologies in education must be routinely subjected to searching critique, particularly in the context of a turbulent digital landscape and, in the UK at least, an increasingly fragile policy framework for publicly funded education.

  1. Introduction: deception, demonstration, debate

Our recent paper “A critical review of pedagogical perspectives on mobile learning”, the result of an ITTE Research Fellowship project, presented a critical analysis of an emerging pedagogy of mobile learning. In that analysis we found aspects that were either not strongly based on evidence or that overlooked key issues arising from the wider context of online socialisation. As yet there is no convincing case for the characterisation of mobile technologies (in particular tablets and smartphones) as offering a pedagogical freedom and richness of educational experience hitherto unrealised. Much depends on the situated use of these, and indeed all, digital technologies.

In this paper we widen the scope of that critique to illustrate the context of activity for educational technology (edtech). A key observation is that, today, it is impossible to account for the value and effectiveness of edtech in isolation from this wider context of socially motivated, technologically mediated interactions. While it is trivial to observe that in a digitally networked society all things are connected, it is nevertheless a fundamental point. We cannot easily insulate ourselves (individually or as groups) from the myriad external influences bearing on the specialised spaces created by educational institutions (i.e. physical or virtual classrooms, gyms, halls, fields, laboratories, etc.).

Indeed, a fundamental and broad claim for mobile learning as depicted in the literature is that while all computer-based activity can contribute directly to teaching and learning, mobile devices of all kinds are the best tool for the job. A recent libertarian perspective, such as that of Lee and Broadie (2018, Digitally Connected Families and the Digital Education of the World’s Young, 1993–2016, Armidale, Australia, Douglas and Brown), holds that there is nothing but ‘the digital’, which is gradually (and perhaps for them not quickly enough) supplanting established models of schooling and education. As ‘the digital’ becomes more widespread and more deeply embedded, so ‘thinking digitally’ is acquired earlier and more naturally, such that by age three children are already fully equipped to think digitally, learning on the go as they grow.

The headline issue for us, however, is that the wider context of online activity in which nearly all of us now live to some degree, and from which edtech cannot be isolated, has become increasingly toxic. It is as if a tipping point has been reached in which networked socialisation has become a powerful tool of deception, one which can be used to manipulate, en masse, our thinking and actions.

Since mobile learning rests on and presupposes an infrastructure of socialised interactions through software, we cannot ignore the implications of this heightened awareness of deceptions perpetrated, explicitly or covertly, en masse by self-interested agents. These deceptions must be demonstrated and subsequently debated if edtech is to move forward to create a trusted and safe environment in which education can flourish with minimal risk.

Originally developed in the context of how pre-Internet media (mainly the printed press and television) reported on issues and concerns about global environment and development, the ‘Deception, Demonstration and Debate’ model (DDD) (see Lacey et al (eds), 1990, Deception, Demonstration and Debate: Toward a critical environment and development education, Kogan Page) may yet have value as a description of the way in which, post-Internet, network corporations such as Google, Facebook, Apple or Microsoft represent themselves, directly or indirectly, in the ‘public sphere’.

The editors and authors of ‘Deception, Demonstration and Debate’ were concerned about the mounting evidence of severe problems of environmental and climatic change, evidence that was rarely reported in the mass media of the day in a non-partisan, evidence-based manner. Today, although those old-time, pre-Internet forms of ‘broadcast’ mass media have evolved into a more complex system encompassing many forms of participative and interconnected reportage built around social media models of information sharing and exchange, we have certainly not made it any easier to arrive at well articulated, accurate content; if anything, the problem has become more complex.

Thus, in spite of the ideology that accompanied this new media environment, an ideology that emerged early with its declarations of intellectual freedom, openness and real democracy (see Barbrook, R. and Cameron, A. 1996. The Californian Ideology), it is clear that what J.K. Galbraith called ‘institutional truth’ continues to obscure our thinking:

Institutional truth in our times bears no necessary relationship to simple truth. It is, instead, what serves the needs or purposes of the large and socially pervasive institutions that increasingly dominate modern life. (J.K. Galbraith. 1989. ‘In pursuit of the simple truth.’ Guardian, 28 July 1989. See also the discussion in Ennals, R. 1990. Artificial Intelligence and Human Institutions, pp. 98–101.)

Exposing institutional truth is more than revealing the ‘tricks of the trade’ that have long been known to us from the history of advertising and propaganda (see BBC News, 14 April 2018. The design tricks that get you hooked on your phone). Lacey et al (1990) argue that education is one pathway to exposing the deceptions wrought by institutional truth, in their case in relation to the contradictions and blight of environmental problems. In 1990, arguing for a more critical approach to understanding the causes of global environmental change, they wrote:

“… education should play a transforming role in the creation of a new generation, equipped with new skills and prepared to assimilate and act on the emerging information about a rapidly changing future. In short, they must be equipped with a new intelligence. It is an intelligence that we can only just begin to define and create. The process of creation must start now by accurately and openly diagnosing the problems. This will mean breaking open the complex interests and deceptions within institutional truth.” (Abraham et al., p. 17)

This paper adopts a similar position with respect to the crisis of confidence in the online environment, particularly with respect to the exploitation of personal information and, worse, the manipulation of democratic outcomes. Education is the key, although the DDD model is more than simply teaching about e-safety. Instead, as should become clear, it can be an approach to understanding how educational technology specifically is embedded in social action.

Deception, of course, may or may not be intentional, although for the purposes of this discussion that distinction is not too important. Only brief descriptions of ‘digital deceptions’ are given here, with some consideration of their intended effects. In particular, the exercise is to identify the intended benefits of some physical, social or digital process and then look for its intended or unintended adverse effects. At least some of the time these adverse effects will interfere with the process directly (e.g. the combustion engine makes air poisonous; search engines reveal the wrong things) or indirectly (e.g. the web cloud relies on high energy consumption; social media spread falsehoods). In other words, to what extent is the process itself causing the problem? And, more importantly, to what extent is the process, the deception, capable of responsive change towards reducing those adverse effects?

Of course, demonstrating a deception and in some sense offering a diagnosis and remedy is not straightforward. The purpose of demonstration is to provide “authoritative” (Lacey et al, p. 19) reasons why and how there should be some change so as to offset or modify the adverse effects that have been demonstrated. Quite what might be understood by “authoritative” may itself be open to debate, although it is, for example, entirely consistent with the explicitly expressed principle in all teacher training standards that knowledge about teaching and learning should be based on evidence. This is the same thing as saying it should be authoritative.

The educational technology environment, like the global environment generally, is certainly one of constant, frequently turbulent change. For this reason, addressing some of the adverse effects of our socialised media should be a core concern in the design, creation, training and evaluation of edtech. Managing change is itself a core challenge for everyone in education, but it is perhaps most keenly felt at the level of the practitioner. For some stability we should remember that educational technology now has a long history and there are very many well-founded case studies, catalogues of numerical data and theoretical arguments that can guide our analysis. This is the topic of a separate paper (a key source of excellent critical studies of the roots and practices of education technology is Audrey Watters).

Demonstration leads on to debate, but it does not follow that the analysis, conclusions or solutions are automatically accepted, even if they are ‘authoritative’. It is possible too that in an environment of ‘always on’ social media, where observations and refutations are in ceaseless flow, this part of the model, and especially debate, may be the weakest if it relies entirely on an ill-matched tradition of reason and analysis. Can such tools and techniques survive the tumult of a Twitter storm? This too is the subject of a separate paper.

AI Illusions

The first and perhaps the most obvious deception now prevalent in our computational culture is that of Artificial Intelligence, or AI, and its sibling Machine Learning. In part this deception has evolved because, in decades past, the idea that Artificial Intelligence is about making computers do the sort of things that minds can do was a remote and academic pursuit, concerned more with understanding the human mind than with building machines that can perform ‘intelligently’. As recently as 2016, books such as Boden’s “AI: Its Nature and Future” continued this cognitive science perspective, regarding AI as a tool for understanding minds and thought as an elaborate information processing system.

The roots of this scientific paradigm reach back into the early twentieth century, and perhaps its most famous expression comes down to us from Alan Turing and his formulation of the Turing Test (Turing, 1950): can a person interacting with a computer for a short period of time (say five minutes) reliably tell whether they are conversing with a machine or with another person? Turing predicted that a machine would eventually be able to fool the average interrogator at least 30% of the time. Perhaps Turing himself did not regard this test as being as definitive as it has come to be seen (Boden, 2016, p. 120), but nevertheless it has come to provide a useful cultural yardstick against which to measure the performance of some kinds of computer software.

Let’s emphasise that last point. AI is nothing more than software, an expression of algorithms in code. In that respect it is the same kind of thing as the spell checker in a word processor. The algorithms and associated hardware (e.g. ‘AI chips’) involved are both sophisticated and highly complex, perhaps some of the greatest collective achievements of the human mind to date, but there is nothing ‘magical’ about AI, no mysterious ingredient that somehow introduces ‘intelligence’. It is, like all computing, reliant both on elegant software-hardware constructions and, ultimately, on increases in brute force processing power of the exponential kind described by Moore’s Law.
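
To make the ‘nothing more than software’ point concrete, the sketch below (our own, not drawn from any particular product) shows the kind of algorithm a simple spell checker might use: a classic edit-distance calculation over a small word list. The function names and the tiny dictionary are illustrative assumptions only; the point is that behaviour which can look ‘clever’ is, underneath, systematic comparison expressed in code.

```python
# A minimal sketch of spell-checker-style suggestion: edit distance plus a
# small dictionary. Nothing 'intelligent' happens here, only systematic
# comparison of strings.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming (Levenshtein) distance between two words."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def suggest(word: str, dictionary: list[str]) -> str:
    """Return the dictionary word closest to the (possibly misspelled) input."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

if __name__ == "__main__":
    words = ["intelligence", "education", "technology", "pedagogy"]
    print(suggest("pedagoggy", words))  # -> 'pedagogy'
```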

So an overarching deception associated with AI is built in, for the very idea that it represents ‘intelligence’ is a kind of mystification of what is really going on. No doubt our conceptions of ‘intelligence’ and ‘intelligent behaviour’ must change — we cannot be too anthropocentric about this — but at the same time we should beware the tendency to ascribe too much agency to the machine, to software. For educational technology this is a key teaching point: the machine merely simulates intelligent behaviour, at best. Machine Learning, for example, rests on associative rather than causal thinking: magnificent systems for establishing patterns of correlation beyond the capacity of human thought, patterns which can look like causal relationships but which remain, in the end, probabilistic. Given the ‘traditional’ scepticism associated with the phrase “correlation does not equal causation”, why should we trust the outputs of Machine Learning or AI? (For an important analysis of this issue by a pioneer of machine learning algorithms see Pearl, J. and Mackenzie, D. 2018. The Book of Why: The New Science of Cause and Effect. Allen Lane.)
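
A small numerical illustration (our own, using invented series rather than real data) of why correlation alone is a weak basis for trust: two completely independent random walks will often show a strong correlation over a short window, despite there being no causal link between them.

```python
# Two independent random walks routinely appear 'correlated' over short
# windows -- a toy demonstration of spurious correlation.
import numpy as np

rng = np.random.default_rng(seed=42)

def random_walk(n: int) -> np.ndarray:
    """Cumulative sum of independent +/-1 steps."""
    return np.cumsum(rng.choice([-1, 1], size=n))

a = random_walk(200)   # imagine this is 'tablet sales'
b = random_walk(200)   # imagine this is 'exam results'

# Pearson correlation coefficient between the two unrelated series.
r = np.corrcoef(a, b)[0, 1]
print(f"Correlation between two unrelated series: r = {r:.2f}")
# Large |r| values are common here even though nothing causes anything.
```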

This is not intended to be a conclusive rejection of the value of ‘AI-machines’ but a comment on the fact that we are not yet quite sure where the boundary is that separates me from the machine. In everyday life this might or might not be important. For education practitioners it feels like a necessary debate. After all, it was Papert and the entire MIT-led culture of the 1970s that promoted the idea that a computer can be made to behave a little bit like me — it can write poetry, make music, steer a wheelie machine around some skittles — and I am a bit like the machine (if I can figure out how to make the Turtle do it, then I can do it too). This anthropomorphising of the computer did not sit well with everyone in education at the time of Logo’s first incarnation in classrooms. It may be a small part of the explanation as to why Logo never really caught on. At that time, it was still a radical notion to think of the computer as a machine you could have a relationship with. Today we seem more comfortable with the basic idea and, although machine dependency among children of course remains a widespread concern, this is less and less universal. We are more accepting of the computing machine as providing something of value even if we are still not quite sure where the control lies. This is why education is key, and why educational technologists in particular must engage critically with the debate.

In the early period of their invention and use, both the microscope and the telescope were often regarded as deceptive devices, constructing the phenomena that observers claimed to see. Today, ‘intelligent’ machines may be more like microscopes, devices that help us see minutiae more clearly. But on their own they do not create the images they present, they do not explain them, they do not ‘see’ anything. Yet computing machines do introduce differences into what we can see. Looking at the recently published photographic map of our galaxy by the European Space Agency Gaia project, we must appreciate that we are not looking at a photograph at all — something made using chemical reactions to light itself — but at a software representation of a vast database of datapoints. There is no original photograph. The only stage at which light is recorded is the moment at which, after optical lensing, some photons are converted into a digital value on a huge sensor. That value can represent several spectrographic features (e.g. intensity and colour). One immediate result of software transformation is the application of the mathematics of star physics to provide each source with a position in three-dimensional Cartesian space. A 3D map of the heavens!
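
As a hedged illustration of the kind of transformation involved (a drastically simplified sketch of our own, not the Gaia pipeline itself), the snippet below converts a star’s measured sky angles and parallax into Cartesian coordinates. The example figures are rough approximations for a nearby star and are illustrative only.

```python
# Turning measured angles and parallax into a position in 3D space:
# the basic geometry behind a 'map of the heavens'.
import math

def to_cartesian(ra_deg: float, dec_deg: float, parallax_mas: float):
    """Convert right ascension, declination (degrees) and parallax
    (milliarcseconds) into x, y, z coordinates in parsecs."""
    distance_pc = 1000.0 / parallax_mas          # 1 mas parallax ~ 1000 parsecs
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    x = distance_pc * math.cos(dec) * math.cos(ra)
    y = distance_pc * math.cos(dec) * math.sin(ra)
    z = distance_pc * math.sin(dec)
    return x, y, z

# Illustrative values only (roughly those of Proxima Centauri):
print(to_cartesian(217.4, -62.7, 768.5))
```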

More obviously perhaps, this debate is to be found in the history of software for playing chess and, latterly, Go. Once upon a time chess was regarded as the ultimate in intelligent behaviour for humans. If we could just crack that, it was thought, then we would have made an important advance towards, if not a solution for, an understanding of general intelligence. It turned out, quite early on, that by today’s standards chess is a fairly simple affair, and chess machines that will defeat you time and again can now be bought off the shelf in any stationery shop. These early experiments in AI sat alongside less successful, more intractable problems: how to write a decent analytic essay about limpets, or a poem about butterflies, or how to recognise a face in a large, moving crowd of people, and so on. It turned out that what was hard for humans was easy for a computer, and what was easy for a human was much harder for a computer. This line of development has continued, so that Go, once regarded as an impossible challenge, is now embodied in a machine that can defeat even the greatest human player.

Maybe we have become inured to the reality of this. At the ‘mechanical’, procedural level, machines can outperform us all the time … and it was ever thus. But we are moving into a different kind of environment in which those more intractable aspects that could not so easily be solved by mechanical means are now rendered routinely operational. We now have machines that can defeat human competitors in general knowledge quiz games by parsing questions expressed in natural language and, of greater concern perhaps, we now have machines that can take a live feed from cameras that have a view of a shopping mall, or the streets during a carnival, and isolate faces from that ‘blooming buzzing confusion’, such that they can be identified as the faces of known individuals. Such systems can also make attempts to read emotional states from facial expressions, or detect the subtlest tics of body language to infer all kinds of inner, mental states and intentions.

All this, we must keep in mind, is based on probabilistic programming, relying not merely on a database of propositional information atoms but on algorithms that have ‘learned’ to filter inputs — this is a face, this is a cat, this is a man with a guilty expression — before any databases are interrogated. Of course, these systems are not yet totally reliable (witness the number of accidents arising from automated cars, or the high number of false positives produced by facial recognition systems). But the assumption seems to be that through the somewhat mysterious process called ‘machine learning’ they can only improve.

But what are the criteria we should apply to this improvement? Turing’s benchmark asked only that the machine fool its human interrogator around 30% of the time. What should we require when it comes to facial recognition? Or, more extreme perhaps (although this is a real area of investigation), what should we require when it comes to inferring from a facial tic whether a witness in a court proceeding is lying? (Note here the relevant but problematic science of polygraphs as a tool for determining such issues.) To bring it closer to our interests in educational technology, such questions apply, for example, to any use of such technology for determining the difference between honesty and fraud in any form of educational assessment.
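
One reason the criteria matter so much is the arithmetic of base rates. The back-of-the-envelope sketch below (our own, with purely illustrative accuracy figures) shows how a facial recognition system that sounds highly accurate can still produce mostly false alerts when it scans a large crowd for a handful of targets.

```python
# Why 'highly accurate' recognition still floods operators with false alerts
# when targets are rare in a large crowd (the base-rate problem).

def alerts(crowd_size: int, targets: int, true_positive_rate: float,
           false_positive_rate: float) -> tuple[float, float]:
    """Expected true and false alerts when scanning a crowd."""
    true_alerts = targets * true_positive_rate
    false_alerts = (crowd_size - targets) * false_positive_rate
    return true_alerts, false_alerts

# Illustrative assumptions: 100,000 faces scanned, 10 genuine targets,
# 99% chance of flagging a target, 1% chance of flagging an innocent face.
tp, fp = alerts(100_000, 10, 0.99, 0.01)
print(f"Expected true alerts:  {tp:.0f}")    # roughly 10
print(f"Expected false alerts: {fp:.0f}")    # roughly 1,000
# On these figures, around 99% of all alerts point at the wrong person.
```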

AI Illusions: Case 1 — Google Duplex: the Turing Test undone?

Taking a very recent case in point, let’s consider the impressive demonstration of Google Duplex (see here for the YouTube video and here for a transcript), a new component in the Google Home Assistant technology. Here is a natural language system that can make voice telephone calls to a designated recipient number and carry out relatively constrained tasks such as making appointments. The examples demonstrated included setting up a hairdressing appointment and booking a restaurant table for four people. One immediate observation on this startling demonstration is the apparent banality of the tasks, which seem to echo themes in Douglas Adams’ Hitchhiker’s Guide to the Galaxy (where the first people on Earth were exiles from an advanced civilisation that had no further use for telephone sanitisers, hairdressers or estate agents). More serious, however, is the eerie naturalness of the conversation, involving many features of a ‘real’ conversation between people — hesitations and speech gestures, ad hoc queries and responses (e.g. the Assistant must know how to calculate the appropriateness of an offered appointment time based on how long it might take to get to the destination), requests for clarification, and so on. This is no ordinary text-to-speech system, and while it remains a Google experiment at the present time there seems little doubt that such tools will eventually find their way into the online ecosystem we inhabit.

While impressive, it did not take long for more thoughtful, concerned responses to emerge. Was it a staged demonstration? Almost certainly, for after all this was a marketing event. In a news segment the next day, Duplex was contextualised as a privacy and trust issue. Perhaps this is merely a problem of presentation — in the demonstration the recipients did not seem to realise that they were talking to an AI. It would seem that the Turing Test has been passed and surpassed. Such concerns seem to have obliged Google to respond that such tools would be constructed to identify themselves as an AI (well, not so much an AI as a Google product), to emphasise that they would not be allowed to act autonomously, and to point out that at the present stage of development they can only operate in a constrained context.

These are early days, but the magician has demonstrated that the trick can be performed. It throws us back, however, onto the need for socio-political oversight of such activities if we are to avoid the increasingly real possibility of feral AI, which may be much less susceptible to detection than current generations of bots that already masquerade as legitimate sources of social media information across the spectrum.

While the deception in this case is obvious, i.e. quickly detected and demonstrated, this does not guarantee that the exposure will ensure our trust. Without external policy-based regulation and statute, we can never be completely sure about any machine-based conversation (of which we already have many). Interestingly too, the deceptive potential of such online interactions is perhaps made more problematic by the idea that both Google and Amazon will add a politeness requirement to their smart speakers. Seemingly, in response to user feedback, the word ‘please’ will be expected in spoken requests to answer a question or carry out a task, because it is rude not to (and sets a bad precedent for children to follow)! Presumably, machine responses could also be made to vary depending on the perceived ‘politeness’ of the human.

AI Illusions: Case 2 — Facebook: who’s the dumbf**k now?

Purportedly, in the very early days of Facebook, when it was still ‘thefacebook.com’, Mark Zuckerberg wrote the following in reply to someone asking about his project:

“Yeah so if you ever need info about anyone at Harvard just ask. I have over 4,000 emails, pictures, addresses, SNS

[Redacted]: What? How’d you manage that one?

Zuck: People just submitted it. I don’t know why. They “trust me” Dumb fucks.

For commentators this just indicates that there has always been a rather ‘relaxed’ respect for the personal information that people willingly surrender online. So we should not be too surprised, perhaps, that Facebook has recently been at the centre of a major scandal about the irresponsible use of personal data, in which the trading of user data, along with the hiring of specialised data services, has taken place in pursuit of winning an election ballot for one group against another. In this case both Brexit and the US Presidential Election are implicated in the outright manipulation of information systems to expose groups of users to different but similarly persuasive ‘messages’.

These events, this data engineering, went on even as the targets of the manipulations (the audiences) were participating in their own reinforcement and manipulation. The revelation of this major and systematic digital deception has been won largely, it is worth noting, by the persistent enquiry of a professional journalist, Carole Cadwalladr. The deception here is of great interest to practitioners of educational technology, for it involves a willing collaboration between the ‘client’ (Cambridge Analytica) and the service provider (Facebook) in which the service provider is selling a commodity, a huge database of personal information, and also undertaking commissioned analysis of that data. This is beyond what anyone might have expected, even with the terms and conditions the ordinary user might have signed up to.

Facebook engaged in willing deceptions and manipulations to provide its client (and the client’s partners and subsidiaries) with further data obtained under false pretences (e.g. enabling the use of survey apps to collect personal data for purposes other than those stated) or deceptively (e.g. you are made a member of a segmented test cohort without knowing it).

“… consider the present situation. We know that social media has been successfully deployed to disrupt societies, and we know that the price to do so is remarkably low. We know that relevant companies take in an astounding amount of money and that they don’t always know who their customers are. Therefore, there are likely to be actors manipulating us — manipulating you — who have not been revealed.” (Lanier, J., 2018. Ten Arguments For Deleting Your Social Media Accounts Right Now. Kindle location 172)

Lanier’s argument is that this manipulation is built into the business model of the social internet. It thrives by incentivising ‘bad emotions’ (these get a reaction in the constant loop of algorithmic feedback) and thus tends to corrupt. It is a moral argument more than an explanation. Lanier’s response is to shut off our use of social media and thus send a clear consumer signal to the algorithms: when the revenue drops, the corporations will react. (See also Who Owns the Future?, 2013, for a more detailed critique.) Fundamentally, the ills of the present situation with regard to the licentious use of user data (and Facebook is not alone in this) are system-genic (or ‘sysgenic’?): they are created by the system itself. Lanier would claim to have seen this happening at an early stage in the emergence of the Web, as did many other observers, but we may have reached some sort of tipping point where the scale of the deception should leave many of us reeling in confusion if not awe.

In some sense the situation we are currently experiencing — the deep questioning of the economy of attention — needs to be a turning point. It is a dilemma, though. Education, and edtech in particular, is part of the attention business, but we are losing out. If education is to succeed it cannot simply join in with the economy as it is and expose our new generations to the kinds of risks that commentators ask us to address. That would be to break the fundamental pledge of education’s social contract, “in loco parentis”.

From the viewpoint of educational technology the issue is clear: how do we ensure a protective demarcation between the ‘commercial web’ and the ‘learning web’, our online activity environment, classroom or learning space? The Scratch Community is probably one of the best current examples of how this can be done. Transparent and well articulated terms and conditions are essential wherever public-private partnerships occur.

AI Illusions: Case 3 — Precision education

It may be that to understand the risks of deceptive software we need look no further than the growing literature on the bias and blindness of algorithms, those celebrated abstractions that underlie the design and behaviour of all code (and, since hardware is only software made physical, hardware too). This literature exposes the issue of algorithmic bias arising both from technical design and from human influence. Virginia Eubanks (2018) has shown how the welfare system in the US is built on a machine that does too little to alleviate poverty and hardship. Instead (as in the UK, we would add), for those in need of welfare support the systems can make the situation worse and more difficult.

Of course, computer-based systems are not the creators of this problem, for “…digital tools are embedded in old systems of power and privilege …” (Eubanks, p. 178). The “panoptic society” has been with us much longer (refs?), but the new kinds of tools that make algorithmic oversight of human activity possible speak of a ‘third’ industrial revolution already emergent in the 1990s:

“… complex technology that involves the collection, processing, and sharing of information about individuals and groups that is generated through their daily lives as citizens, employees, and consumers and is used to co-ordinate and control their access to the goods and services that define life in the modern capitalist economy.” (Gandy, 1993, p. 15)

Gandy talks about the ‘panoptic sort’, the power of these social technologies to differentiate and classify people. Indeed, the possibilities continually open up yet more intensive and detailed bureaucratic observation of our actions. How do we, a Western European democracy, react to the concept of ‘social credit’, now rolling out in China? What similarities does it have to the UK Home Office digital visa system? Nowhere perhaps should a professional community be more mindful of this than in education, where every connected digital device we carry or use is by design a logging device, potentially usable for surveillance or manipulation if required by a covert actor (such as a secret service) or a policy decision (such as by an elected government).

Perhaps one obvious area of education where data-rich, panoptic approaches might help is assessment. And within that we might point to the kind of real-time assessment that most teachers believe in: assessment that guides the learner constructively, with insight into their specific learning, and backed up by clear-sighted advice about what to do next. Broadly this might be known as formative assessment or, more precisely, diagnostic assessment. It is also personalised to the individual.

Personalised education on the face of it sounds like a good idea, although it clearly raises some immediate problems about how schooling is organised (for example, Larry Cuban argues that it can never arise as long as schools continue to be age-graded). As this paper comes to a conclusion we need only point out that the issues identified by Eubanks (2018) apply also to the idea of personalised learning. It is the same kind of process — a panoptic logging of all available online activity.

But who owns it? What else is it used for? How does it change? Can it be changed? Can we be sure it does not reproduce educational inequalities already present? Is it ‘neutral’? (And here we should invoke Eubanks’ reference to Stanley Cohen’s concept of ‘cultural denial’: we must be careful to see bias where we do not want there to be any!)

Concluding remarks

The main point of this long discussion concerns ‘digital pedagogy’, and in particular it is an attempt to highlight problematic aspects of the social and political context into which our everyday mobile computing is tightly woven. It is no longer so easy to determine the ‘boundary’ between the technology and the activities it makes possible.

Does this matter? In general these issues of values and consequences could be said to belong to the realm of commentary. But from the perspective of what education should mean for a society in which computational tools are ubiquitous and often nearly invisible, the boundary should be a constant focus. Although we have, as a community and on the whole, never been convinced by such stereotypes as the ‘digital native’, these sorts of ideas are still out there. For example, Lee and Broadie (2018) put the threshold at three years old for a nearly complete development of a ‘digital mind-set’ (or digital literacy, as we might call it in our curriculum terms). It is not clear how anyone over the age of three is supposed to respond to such a claim, because everyone has ‘it’! Does that mean there is nothing left for us to teach our children?

As the argument of this paper hopes to show, the very openness of a digital infrastructure is also its weakest link, and perhaps nowhere more so than when we rely on ideas about mobile pedagogies, where the individual, not the community, is the ultimate ‘learning unit’, and the individual is also the point at which the network is most easily broken or reshaped.

The digital environment changes fast. In 2007 Traxler presented an enthusiastic, sometimes breathless portrait of the pedagogical potential of mobile devices. However, we can see today that mobile learning cannot be so simple. For example, every dimension of the Kearney et al (2012) framework is highly problematic, even though the framework is derived from observations of practice. Just a few years further on we can begin to see that:

  • personalisation leaves individuals open to various forms of psycho-social manipulation, both explicit and unseen;
  • collaboration has not thrived in formal educational settings; this can present problems of inclusion, and assessment practices remain unchanged;
  • authenticity is seriously damaged by the extent of digital pollution (fake news, fake emails, hacking, extreme social media), arguably the dominant issue at the present time.

The theoretical contribution put forward in this paper, therefore, is a model of a ‘critical curriculum’ for teacher training built around the deception, demonstration, debate model. In itself this is nothing new — over generations there have been calls for education to be a means to open up debate, not to close it down, and for education to be adversarial rather than doctrinaire in its treatment of curriculum content.

One way to see the present moment in the politics of personal data, therefore, is as an opportunity for the TPEA to build a more reliable foundation for thinking about teacher professionalism and the digitisation of schooling. The dilemma is this: there are many ways in which machine learning can bring benefits to professional work with people. In medicine and health care the applications can be profound but machine learning cannot work without data about human subjects, lots of it. As we have noted in this paper this carries risks.

Bibliography

  • Abraham, J., Lacey, C. & Williams, R. (eds), 1990. Deception, Demonstration and Debate: Toward a critical environment and development education. World Wide Fund for Nature and Kogan Page.
  • Barbrook, R. & Cameron, A., 1996. The Californian Ideology. Science as Culture, 6(1), pp. 44–72.
  • Bartlett, J., 2018. The People vs Tech: How the Internet Is Killing Democracy (and How We Save It). Ebury Press.
  • Eubanks, V., 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, USA: St. Martin’s Press.
  • Keen, A., 2018. How to Fix the Future: Staying Human in the Digital Age. Atlantic Books.
  • Lanier, J., 2018. Ten Arguments for Deleting Your Social Media Accounts Right Now. The Bodley Head.
  • Lanier, J., 2014. Who Owns the Future? Penguin.
  • Luckin, R. (ed), 2018. Enhancing Learning and Teaching with Technology: What the Research Says. UCL Institute of Education Press.
  • Sentance, S., Barendsen, E. & Schulte, C. (eds), 2018. Computer Science Education: Perspectives on Teaching and Learning in School. Bloomsbury Academic.
  • Williamson, B., 2018. ‘Personalized Precision Education and Intimate Data Analytics’. Code Acts in Education, 16 April 2018. https://codeactsineducation.wordpress.com/2018/04/16/personalized-precision-education [accessed 5 June 2018]
  • Watters, A., 2014. The Monsters of Education Technology. HackEducation.

Books on Algorithm bias

  • Christian, B. & Griffiths, T., 2017. Algorithms to Live By. William Collins.
  • Dormehl, L. The Formula: How Algorithms Solve All Our Problems, and Create More.
  • Eubanks, V., 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, USA: St. Martin’s Press.
  • Gandy, O.H.J., 1993. The Panoptic Sort: A Political Economy of Personal Information. Westview Press.
  • MacCormick, J. Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today’s Computers. Princeton University Press.
  • Noble, S.U., 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
  • O’Neil, C., 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
  • Snow, J., 2018. ‘Algorithms are making American inequality worse’. MIT Technology Review.
  • Wachter-Boettcher, S. Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. W. W. Norton & Company.

Visit our website: mirandanet.ac.uk
