
In this paper I focus on some methods of computer-aided data analysis, and show how hypertext descriptions of particular learning events might generate explanations of those events. I argue that the validity of this approach resides in its creativity.
ASK
The work reported here was funded by ESRC project Children’s Application of Subject Knowledge (ASK) [*2]. The central focus of this project is on how children — in the age range 7–11 years — use their knowledge to learn. The research is informed by an eclectic view of learning which encompasses information processing theory and theories which implicate the role of social practices in knowledge acquisition.
A sample of 24 seven-year-olds and 24 nine-year-olds was recruited to act as collaborators in this longitudinal research project — focusing on the subject domains English, Science and Mathematics. Data were collected by observation — a descriptive record, usually hand-written, sometimes documented on video — and by clinical interviews recorded on audio tape.
Interviews focused on elucidating aspects of children’s on-task thinking during a learning activity — using talk-aloud protocols. Children were encouraged to say everything that came into their heads — to espouse a stream-of-consciousness narrative — whilst engaging with our learning tasks with a researcher present. Any intervention the researcher makes is merely to clarify reasoning.
Sometimes children are left to operate the tape recorder themselves — to talk about the task aloud with each other as they collectively work through to find solutions. The researcher re-caps with the children once they have finished the task.
I’ve included reference to a collective task which centres around a television programme — and one child’s extraordinary and skilful strategic application of knowledge of classroom practice to take control, move the task on, and re-focus the group on learning. An interpretation of the task protocols is briefly reported later in this paper — and shows why undertaking research away from the normal classroom learning environment, and away from teacher direction, is important.
These sorts of activities are designed to establish the children’s relevant subject knowledge — including facts and skills; knowledge of classroom routines and working practices; and personal knowledge of self as-learner — then move the children to points of difficulty in order to expose how they use their knowledge to learn.

The specific aims of the project are to describe how children go about the business of organising and monitoring their cognitive resources in learning situations — how these strategies interact with their subject knowledge — and how social practices in the management of classroom work influence the application of strategies to learn.
The research mainly takes the form of case study and emphasises the positive aspects of how children bring their knowledge to bear on difficult tasks. It seeks to better understand knowledge acquisition through application — and make a contribution to the management repertoires of classroom learning.
computation
In this paper I show how hypertext descriptions of particular learning events help generate explanations of those events. I also mention two other methods of computer-aided data analysis — and show how their reflective-transformative aspects have informed my own learning. I argue that the validity of this approach to qualitative data analysis and interpretation resides in its creativity — and in its serendipity.
My research purpose is to explore and expose how the computer helps me learn about learning. But first, I want to steer an historical path to where we’re at — computationally — with a brief critique around crunching numbers and machine interpretation of text.
In the nineteenth century, the word computer referred to people who performed calculations. Only since the 1950s has it referred to machines. In the fifties and sixties, the computer was a big unreliable number-crunching research tool — a brain for solving complex problems involving numbers (recalculating pi; computing the flight path of a missile).
In the seventies, this number cruncher cornered the routine markets in business, banking, and billing. In the eighties, computers became cheaper and faster. They surpassed television for teenage entertainment. They were hailed as do-anything black boxes. Only, they couldn’t. And they can’t.
By the nineties, some of these problems had been researched to death, dumped or shelved: others had been solved. Computers have always had a bad press: the human kind were slow and made mistakes; the machine kind were, and are, expensive, riddled with bugs, inaccessible, user-unfriendly, time-consuming, and frequently the subject of excessive hype and exaggerated claims — but they are rather fast.
machine processing
Machines that crunch numbers have a good track record for crunching numbers — but they’re pretty useless at crunching words. Much of what researchers do quantitatively doesn’t have its qualitative computational equivalent.
Statistics software routinely batch-processes data by the binful. Once programmed, it searches and sorts, collates and catalogues by applying user-defined rules (eg. if…then; and, or, xor). Numbers are analysed and processed and end up as pictures — output as graphs and tables — or more numbers. There is, as yet, no equivalent batch-processing paradigm for transforming qualitative data into pictures and new words.
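The rule-based batch processing described above can be sketched in a few lines of modern code. The records, fields and rule here are hypothetical, chosen only to illustrate how user-defined if…then rules are applied mechanically to every item in the bin, with more numbers as the output.

```python
# A minimal sketch of rule-based batch processing: a user-defined
# if...then rule (built from and/or tests) is applied mechanically
# to every record. Record fields and values are invented.
records = [
    {"age": 7, "score": 62},
    {"age": 9, "score": 48},
    {"age": 9, "score": 71},
]

# Rule: select records where (age == 9) and (score > 50).
selected = [r for r in records if r["age"] == 9 and r["score"] > 50]

# The output is more numbers: a count and a mean.
mean_score = sum(r["score"] for r in selected) / len(selected)
print(len(selected), mean_score)  # prints: 1 71.0
```

No such mechanical rule could pick the "salient chunk" out of an interview transcript, which is the asymmetry the paragraph above points to.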
Texts cannot be machine-analysed. Qualitative research is labour-intensive, manual, and only marginally supported by tailor-made computer software. Here, probably the best-known research-support software is the word-processor. The expert shell, by contrast, is probably the least well known.
Outside these extremes is a range of unintelligent wares which adequately enable researchers to wallow in data world-wide (eg. on-line databases); keep in touch (e-mail; electronic conferencing); and get the best presentational layout on paper (eg. PageMaker™), on screen (eg. HyperCard™) or on video (eg. MacroMind Director™).
Computer Scientists promise that a machine might some day fool us into thinking it is intelligent — or believing it is an expert in its field. A computer is not an expert at anything. Nor is it intelligent. I want to say — I am intelligent — I am an expert.
qualitative research
A computer is a hypertext-device: it has multi-media capabilities. This provides the possibility of a gain in creativity at the level of analysis and interpretation — and at the level of learning-materials production. A computer in qualitative research is at least three devices in one machine — and computer scientists promise it will be many more.
It is a labour-saving input and output device: it helps us store and print text and pictures. More than this: it helps us manage our data. It has a do once — done always advantage. It emancipates the researcher. If this is a saving, then it is a saving in time — not a gain in learning about learning.
A computer is also a reflective device. It stores how I get some place. This is a gain in learning because it provides the possibility of reflecting on how I got there — I can backward-chain through the moves I’ve made and see where an idea led to a dead end or where I got lucky.
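The reflective device idea can be illustrated with a minimal sketch: each analytic move is logged as it is made, and the log can then be replayed backwards. The moves themselves are invented for illustration; no particular package is implied.

```python
# A minimal sketch of the "reflective device": each analytic move
# is logged, so I can backward-chain through the moves I've made.
# The moves listed here are hypothetical examples.
history = []

def move(description):
    history.append(description)

move("tag transcript chunk: 'headlines can't be funny'")
move("link chunk to strategy column")
move("try grouping by morphological change")  # a dead end
move("regroup by keyword selection")          # where I got lucky

# Backward-chain: replay the moves, most recent first.
for step in reversed(history):
    print(step)
```

The point is not the code but the record: the machine keeps the route, so the dead ends stay visible instead of being thrown away.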
I want to say: Hypertext is a special kind of scratch-surface. It gives me a theoretical leg-up — but it doesn’t offer answers. It is a better bet than the promise of software intelligently capable of automatically interrogating qualitative texts. Even if realisable, such software is not what qualitative researchers want.
learning about learning
How does the computer help me learn about learning? To see how, we first must see what. I shall outline the history of an ASK file.
An ASK file has a life. It is born as a structured text — as data — out of an audio and video taped question-response type interview protocol, which includes field notes, and pupil texts — writing and drawings. Its parentage is our current theoretical model and our fixed research aims.
Different things happen to it. Each file is analysed by the field-researcher who got it. This involves selecting, ordering and tagging salient text chunks, and writing summarising links. Its purpose is as a check-list — have we got everything we set out to get? — and as a means of indexing what we have got. Its product is an inert list of data under three broad-brush headings: subject knowledge, strategy, and working practices.
The corollary of analysis is interpretation. In my view, an interpretation is a kiss of life — it provides a route for the field researcher to tell a story of this child’s learning event — to describe the interaction of subject knowledge, strategy, and practice in this child’s attempts in learning. It is here, that description becomes explanation. I’m not claiming to know the definitive or fundamental answer to how this happens — I’m just telling you what I do.
My focus on hypertext as a method of data analysis is grounded in a collection of heuristics drawn from innovative — largely ad hoc and idiosyncratic — use of relational and object-oriented software. A familiar example of a relational database is a spreadsheet — where data, and the formulae that process it, are organised as a grid of horizontal and vertical cells. A relational database aids analysis and storage of data because formulae are applied automatically and their output is used to model change.
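The spreadsheet idea above can be given a minimal sketch: a formula attached to the grid is re-applied when the data change, and the formula's output models the change. Cell names and values here are hypothetical.

```python
# A minimal sketch of the spreadsheet-as-grid idea: a formula is
# applied automatically to data cells, and its output models change.
# Cell names and values are invented for illustration.
grid = {"A1": 3, "A2": 5, "A3": 7}

def recalc(grid):
    # Cell B1 holds a formula: the sum of column A.
    grid["B1"] = grid["A1"] + grid["A2"] + grid["A3"]

recalc(grid)
print(grid["B1"])  # prints: 15

grid["A2"] = 10    # change the data...
recalc(grid)       # ...and the formula output models the change
print(grid["B1"])  # prints: 20
```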
Object-oriented software is less well known. It uses a method that enables data to be modelled as a set of objects with attributes which can be modified and manipulated — and linked together — on a virtual page. Objects can interact with each other. This focus is part of a broader approach to object-oriented text processing — particularly use of SuperPaint™ and HyperCard™ [*4].
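As a rough modern illustration of this object-oriented modelling — not of SuperPaint™ or HyperCard™ themselves — text chunks can be treated as objects with attributes and links. The class and attribute names below are invented for the sketch.

```python
# A minimal sketch of object-oriented data modelling: a text chunk
# becomes an object with attributes, positioned and linked on a
# virtual page. Names are illustrative, not from any real package.
from dataclasses import dataclass, field

@dataclass
class TextObject:
    text: str
    x: int = 0          # position on the virtual page
    y: int = 0
    tags: list = field(default_factory=list)
    links: list = field(default_factory=list)  # linked objects

quote = TextObject("'speed' into 'speeding'", tags=["morphology"])
note = TextObject("applies knowledge of morphological change")

# Objects interact: the quote is linked to its interpretation.
quote.links.append(note)

print(quote.tags, len(quote.links))  # prints: ['morphology'] 1
```

The gain is that the quote, its position, its tags and its interpretive links travel together as one manipulable object rather than as scattered marks on paper.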
text analysis
Our model of learning says there is no single verb to learn. One activity in learning is collecting, categorising and looking for familiar patterns. I experiment with hypertext presentation in the belief that it helps improve the quality of my intellectual move from transcript, through analysis, to interpretation.
The assumption is that the research process itself is a learning activity. The justification for computer-situated hypertext document creation is nested in the assumption that this activity affords greater insights into our data and our organisation of it, than otherwise. I want to say: exploiting relational and object-oriented software for its virtual representational capabilities is beneficial in producing richer interpretations — and electronically avoids the paper-spread across sitting-room floor syndrome.
These examples of hypertext-documents are based on transcriptions of audio-taped interviews conducted with pupils engaged in learning to generate headlines. Here, pupils work on our tasks and attempt to learn more about headlining.

This is a typical hypertext document. It describes this child’s attempts to create headlines for short pieces of newspaper-based text printed on flash-cards without headlines. To the right of centre is what he said; and far right is what he did. Below, is strategy — my interpretation of how he did it. This flow-chart models what I think Glen does in his head in order to make the moves he makes. I want to say: he doesn’t do precisely this — this is my description of what he does.
Left of centre is the flash-card text on which Glen worked. Left of that is my running interpretation of this learning event. Far left is a list of what Glen knows about headlines. And this column extends downwards — like a pull-down menu — as he recalls, applies knowledge — and learns. New knowledge is aggregated with old knowledge.

Part of the power of this hypertext document resides in its usefulness as a template for other target pupil-groups in the sample. And, I’m interested in charting a child’s knowledge application strategies — where the purpose of engaging with a task is acquisition of new knowledge. I’m interested in a child’s learning — and making comparative data interpretations. I’m not interested in routine or rehearsed performance.
When I run one child’s hypertext documents alongside another, I notice differences — at the level of eyeball. Barry’s headlines are generated in fewer moves — he articulates more knowledge. And, I notice similarities. I describe Glen and Barry as if they operate routinely — only that Barry operates more metacognitive functions. Here, when I ask, he says more about what he’s up to. I want to say: he does more.
Each map does not speak for itself — each is a working drawing. They help me as I shift things around in my mind. As they stand, these maps are extremely important once they are explained — because we are interested in context and how the context changes cognition.

strategy
Another kind of learning is seeing things in a new light — and asking new questions. And this is probably the key way of making theoretical gains. When I read one of our ASK files, rather than take away some result, or analysis, I often come away with a new question I want to ask another file. I want to say: the hypertext approach helps generate new questions.
Imagination is a fluid process — once the mind has an image, imagination stops. So, by having several images I keep imagination in play. The process of creating graphical models of learning events not only helps me better describe them, but also aids my interpretation of the data at the level of explanation. I describe what target-pupils do in a number of ways. And I say: this description is dynamic because it shows data in relation — an interpretation file.

The headlining task was set to establish the children’s relevant subject knowledge; their knowledge of classroom working practices and their knowledge of self-as-learner. This hypertext chart describes and compares how these children endeavoured to organise and monitor their cognitive resources — and exposes their use of strategies to learn. Comparison helps me generate an explanation.
We know teachers routinely help children apply their knowledge implicitly in project and topic work — or explicitly through teaching problem solving or study skills or learning to learn skills. Here, we want to evidence the extent to which these children endeavour to organise and monitor their cognitive resources — and apply their knowledge strategically to learn.
The flowchart shows how Barry applies relevant knowledge of headlines to generate and simultaneously evaluate his headlining decisions. He asks himself: Does my headline make sense? Neither Holly nor Glen deploys intellectual learning resources in this way.
Barry’s explicit headline generation strategy is: pick key words (traps, set, catch, speed, boats) and jigsaw with a new word, or apply knowledge of morphological change (‘speed’ into ‘speeding’). Neither Glen nor Holly applies knowledge of morphological change.
Barry evaluates within a perceived dichotomy: less obvious — obvious. Making sense is “a really important issue”. A less obvious headline “doesn’t say it all… so you have to read the actual story to find out what it’s about” whereas an obvious headline “says it all — nobody’s going to read the story”. But there’s a caveat — “some headlines are better for newspapers not television or radio — they’d be complicated if you were listening… what you hear has to be more obvious.”
Barry knows more than Glen and Holly. He applies an explicit theory of headlining to check the validity of his own reasoning but strategically makes fewer moves to finish the task. Barry works in performance mode — within routine classroom management practice. His purpose is to get the task done. He learns little new. Consolidation comes later.
When prompted, Glen espouses a restricted theory of headlining — “headlines can’t be funny” — but this implicit theory is a concurrent outcome of learning — not of performance. He adds a powerful keyword element to his theory but makes no morphological changes. Glen does more with less knowledge. Glen applies partial strategy — he operates no validity check.
Holly knows that a fictional story has a title and surmises that a headline is “like a title … but longer — a good one says half of it”. Headlines are “probably used to tell the audience what sort of story it is”. She knows to “pick a word from the story” to generate its title but struggled to apply this strategy in practice to generate a headline after reading a short factual text shown on a flash-card.
Holly shows minimal cognitive capacity to apply knowledge to learn — in this context. Her linear strategy is culled from limited knowledge of titles and is inadequate for generating headlines — “I can think up lots of stupid ones but it’s usually hard to think up a serious one”. I want to say: learning here is acquisition not application. Does Holly have relevant applicable knowledge — is it forgotten or just missing?
Opportunities for learning are normally created in the classroom through teacher intervention. Holly, like Andrew, relies on explicit teacher direction guiding how she thinks, what she learns and what she does. She is unable to direct her own learning, yet. This is managed by her teachers.
Andrew is not shown on the chart — yet he demonstrates knowledge of himself as a learner and makes strides in subject learning — on the fly. How he does this falls at the margin of our project remit. I cannot generate a strategic map because Andrew’s strides in learning occur when he is taught, shown, prompted, guided or nudged. Like Holly, Andrew relies on explicit teacher intervention around how to think and what to do. Here I remind myself — my purpose is research, not teaching.
Andrew comes to know new facts — “headlining can be fun” — but falls short of application to develop practical headlining skill. And he knows about morphological change but doesn’t apply this knowledge either. Prompted to use part of a story’s first sentence as a headline, Andrew was motivated to talk about particular words but remained unsure which to choose. This child struggled to produce a headline — “I’m not very good at headlines” and “that is a hard one”. “If I was older I expect I could do that… because I’d know more things.”
This child demonstrated limited learning management practices. Andrew is not yet able to direct his own learning. The power of teacher working practices is implicit and essential to his learning — he adds new knowledge to his initially restricted theory of headlining as advice to himself — “try and make it as tricky as you can to make people read the story” — but doesn’t apply it.
Unlike Barry and Glen, Holly and Andrew demonstrated limited cognitive capacity to apply knowledge to learn. I want to say: for Holly and Andrew learning is, by default, acquisition of facts — signalled by: “I’ve never heard of”. The role of self-as-learner is implicit — but plays little part in learning. Knowledge is not generally gained through strategic application.
How do we explain this difference — between Barry and Glen, and Holly and Andrew? Is it simply maturation — a developmental longitudinal difference? Do teachers know how these processes — of knowledge application and acquisition — operate in practice? I think this is unlikely — because there is a big gap in learning theory explanations of how old learning is linked to new. There is no good account of how knowledge application works.
working practices
Earlier, Barry and Andrew worked with Adam on a television-based headlining task — Newsround. Here, Andrew’s lack of self-direction impacts on peers where teacher direction and intervention are minimal.
Newsround is a children’s programme broadcast nationally on BBC television weekdays around late afternoon. Most children are aware of the show — but few regularly watch at home. The particular episode we used was chosen arbitrarily. The broadcast format runs several news stories in sequence — read aloud by a presenter. Each story has a headline. There is no text shown on-screen.

In this hypertext summary document, the boxes — left of centre — provide short summaries from the interview transcripts — right, Barry; far right, Andrew. Arrows are tagged to names — in code — and show, running upward, how Andrew attempts to impede Barry. Arrows running downward indicate how Barry moves the group on to finish the task.
Forensic analysis about how these children worked together or separately is not relevant here. My purpose is to show how a single hypertext document can capture a mass of data in a structured and visual way — to expose Barry’s knowledge of classroom working practices in strategic application.
Why does Barry take control? He perceives that Andrew needs explicit direction and invokes an implicit classroom working practice. I want to say: Barry is working within an old goal structure but with two added pests. We can see him manage his partners and manage the task. And we read how he does this. When he perceives disagreement, he takes hold of the remote and presses the play video button. Barry applies classroom working practice to move the task on — and re-focus on learning.
By analysing and interpreting text data in this way, I’m trying to describe what this child is up to, and I’m saying: here is one version; there, another. And as I do this, I’m opening up possibilities of explanation — and more questions. And all the time, my original data remains intact. This is crucial.
Can a machine ever do this? If it were realised, then its power would reside in taking seconds to see what I might take years to see — and only then because I got lucky. And it won’t be a computational device working on a-priori embedded questions and binary logic. It will be more like my brain.
The vision of intelligent qualitative software that strategically applies qualitative-researcher expertise right at the level of transcript to learn — what we might call the text-crunching equivalent of statistics-type number crunching — is surely a dream.
data management
Hypertext is a multi-representational mill. A hypertext document involves collecting and categorising — it creates possibility for seeing patterns. By taking data and racking it through various modes of representation, first of all, I make explicit what I’m up to. Secondly, I soften intellectual traps. I want to say: looking at something in one form of representation then in others, leads to creativity.
The computer records my routes through the ASK data. Different versions of hypertext maps build into a vast date-and-time indexed database of intellectual development — they are selective staging posts and snapshots of where I was. Run in sequence, they chart my attempts at navigating learning, and show how I got here.
One research advantage is being able to see immediately whereabouts in a transcript a quote and its interpretational tag are located. I can check back when I want to know why I’m going down this route or that. I can watch as I trail a question through files to test its validity. If it’s not productive, it isn’t kept. If it is, it becomes another object of interpretation.
What a researcher creates this way is a user model — an audit trail of his path through data. I want to say: this audit-trail is part of our data on this child’s learning event.
There’s no doubt machines will become powerful pattern-recognition devices. Here, artificial intelligence and knowledge engineering — together with the unparalleled power of neural networks — appear to offer the promise of machine-dominated text-interrogation, and bring nearer the possibility of computer-generated models of pupil learning directly derived from machine-analysed text.
indexing
More recently, I have experimented with PageMaker™ because this software provides a means of running physically separate yet related files in parallel on the same page — eg. transcript, and its interpretation. PageMaker™ also has a powerful hierarchical indexing facility. SuperPaint™ has the advantage of image-scaling: maps can be output as A3 or A4 hard copy; print size can be increased or decreased.

At a more practical level, HyperQual™ — and Nudist™ — look promising. Both are text-analysis packages although they don’t automatically machine-analyse text as the hype suggests — but they do allow a researcher to read a text on-screen and endlessly tag and bin it.
Nudist™ assumes analysis is a process of ongoing exploration of emerging ideas — thus, at one level, it supports searches for words or word-patterns occurring in a text, and at another, allows an index to be browsed, explored and changed …pruned, re-organised and simplified [*4].
Both packages help a researcher catch and interrogate meanings emerging from data — to tag and record emerging ideas and link these to an interview text. Both include an integrated notebook.
Tagging is electronic indexing: it helps with data analysis, and assists data management and storage for better retrieval — eg. as linear and serially sequenced text-chunks; or as organised and indexed structures — eg. hierarchical, nested, or networked. Binning is electronic filing.
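Tagging and binning, as described here, amount to building an index over text chunks. A minimal sketch, with invented chunk identifiers and tag names, shows the two retrieval forms side by side: the linear sequence is the list itself, and the binned index is built from the tags.

```python
# A minimal sketch of tagging as electronic indexing. Chunk ids
# and tag names are invented for illustration.
chunks = [
    ("c1", ["subject-knowledge"]),
    ("c2", ["strategy"]),
    ("c3", ["strategy", "working-practices"]),
]

# Linear, serially sequenced retrieval is just the list in order.
# Binning builds an index: chunk ids filed under each tag.
index = {}
for chunk_id, tags in chunks:
    for tag in tags:
        index.setdefault(tag, []).append(chunk_id)

print(index["strategy"])  # prints: ['c2', 'c3']
```

Each chunk keeps its identifier, so however the index is pruned or re-organised, a tag can always be traced back to its place in the original transcript — the file never loses its ancestry.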
Tagging — by pointing, clicking and dragging — is no less time-consuming than coding manually. Its purpose is to ensure quotes, linking-text, and interpretation get anchored in data. This rooted-in-data approach to learning research is what I want — each file has a life and it never loses its ancestry.
Once tagging is done, researchers working with pen and paper have a massive practical disadvantage because every time I want to shift text about, I tell the machine. They must reach for their scissors. Again.
This is the single biggest advantage of HyperCard-based hypertext. Once-tagged, a text can be output — almost in first draft form. This is a gain in time — not creativity. Its single biggest drawback is the temptation to endlessly tag.
These hypertext documents and HyperCard™ Stacks are maps which operate both at the level of generalisation and at the level of the particulars. On screen — as part of a HyperCard™ Stack — clicking a spot of the map opens one of several windows of interpretation.
Since much of what I do involves text-input, text-management and text manipulation (commonly known as cut and paste editing), voice-to-text transformation software — which quickly scans an audio tape and slavishly punches its verbal contents on screen and paper — would reduce research time and energy.
I want to say: computers help in qualitative research not because they emancipate researchers. If they do this it’s because they save time by dealing with these tedious and time-consuming chores. But it’s what the researcher does with the extra time that creates the possibility of more creative data interrogation. Deciding what counts as chore and what counts as creative is part of the Catch-22 of computational research.
interface
If a picture is worth a thousand words, then qualitatively-oriented research software ought to be multimedia. It should — amongst other things — include an intellectually-sound interface. It would include an object-oriented paintbox alongside word-processor and note-pad, and also include a means of reflecting on — and annotating — how I got here.
But there’s a fundamental problem with the man-machine interface — at the level of intellect — not at the level of eyeballs and mouse-clicks. It discourages reflection and promotes conformity. This hegemonic requires vigorous challenge. Qualitative researchers should to be more assertive in promoting research about computer-aided qualitative application — and this research ought to inform the critique around computer-assisted learning.
Normal word-processing software has poor man-machine interfacing — unlike the technology of pen and paper. The computer-situated researcher doesn’t have full control over either whereabouts on the page to write or what kind of marks to make — whether text, drawn, or painted. A big-screened Macintosh running MultiFinder™ provides simultaneous access to different software — but that is very expensive.
What we need is software grounded in the professional needs of qualitative researchers. The nineteen nineties will be characterised by the growth in the infrastructure which will enable researchers to participate in research about research using computers.
learning materials
Desktop publishing gave us a quick customised job to a high standard. it provided the possibility of quickly moving from task idea through piloting materials to data collection.
At the level of learning materials production I have used paintbox software — SuperPaint™ and MacPaint™ — for their desktop publishing and presentational capabilities. The Headline task materials (English) were created using SuperPaint™ — and for the Light task learning materials (Science). MacPaint™ — was used for creating Area, Fractions, Ratio learning materials (Maths).
The shape and design on the page makes a difference to the quality of data gathered. It was important that our task learning materials had perceived validity. But this difference ls trivial and light-weight when compared to the difference created by big decisions — such as: What’s going on the cards?; Do we do refraction or gravity?; or Do we given them statements or do we talk? Media people have to believe that presentation makes a difference: but there’s little evidence it makes the sort of difference they think it makes.
And, desktop publishing didn’t much influence our shaping and planning of the way we thought about gathering data. The shape and design on the page doesn’t help qualitative researchers address fundamental decisions about the shape and design of data-collection — unless they make data-gathering decisions based on machine-analysis capabilities. But then, what the machine can read limits what is asked.
conclusion
I’ve given a potted history of computers in qualitative research, briefly tangled with the promise of expert and neural systems, and outlined some of the digital benefits of several text-based research-oriented software packages — which help me with analysis and interpretation. I have attempted to demonstrate the validity of these approaches.
I conclude that the promise of software tools intelligently capable of automatically interrogating qualitative texts — as I do — is a promise. Even if realised, it won’t be what qualitative researchers will want. Greater gains can be made by creatively exploiting existing relational and object-orientated software — as I show here.
Emerging tentative results suggest little progress has been made by educators instrumental in promoting strategic knowledge transfer — and this raises important questions about how children are taught. This applications gap in pedagogy has important implications for teacher practice and training.
written by popadog
* This paper was written for a symposium at the BERA Conference, Stirling University, in 1992 — entitled Learning Through Knowledge Use. The opinions expressed are those of the author. ® 23.08.92. Some editorial changes have been made to incorporate additional explanatory text — and two additional slides are included which show graphically our research remit.
*2 This research project — Children’s Application of Subject knowledge in Learning — was undertaken at the University of Exeter UK under the direction of the late Professor Charles Desforges — between September 1989 and December 1993.
*3 The interested reader is urged to refer to earlier publications describing relevant Action Research undertaken in mainstream classrooms which deploys experimental application of relational and object-oriented software. See — Instant Publishing — Getting Into Print — Cut And Paste. See also Tint Panels — Hints For More Tints — Quick On The Draw — and Cartooning. These are re-published on the Medium platform. For related published Action Research undertaken at the same time — and also re-published on the Medium platform see — Full Proof — first published in the Times Educational Supplement UK, September 1988; and — Anatomy Of Young Viewers — first published in the Observer newspaper UK, in June 1987.
*4 Introducing Nudist 2.2 — Richards; p.3; and Qualitative Analysis with HyperQual — Raymond V. Padilla, 1989.
— —
