Between the idea
And the reality
Between the motion
And the act
Falls the shadow
-- T.S. Eliot, The Hollow Men
Do you think it is drawing too long a semiotic bow to suggest that "the shadow" signifies the outer darkness into which academic "hollow men" are cast by the gatekeepers of academic life if they do not produce "the right kind of research?" You think so? Just close your eyes, lie back and relax and maybe it will all make sense in the end.
The pretentious literary reference wasn't the original title for this piece. I first thought of calling it "Exploring the slimy swamp" after a memorable ecological metaphor from Schon (1987) writing about what he called the "real-life problems of the classroom eco-system." Then I thought of "Opening the black box" which is an equally significant metaphor because of its historical association with the operant conditioning model of learning which disavowed the thought process because it cannot be observed. An article by Robert Kozma (1994) also suggested a sexy post-modern title, "Photographing the whirlwind," when he characterized the positivist approach to the study of learning as similar to examining the effects of a tornado by taking photographs before and after the event: "the photographs enable us to assess the extent of the damage but not the process by which the damage was wrought" (p.10).
But in the end I plumped for Eliot's famous lines, because poetic insight is the highest achievement of any research.
This is not a polemic in praise of humanistic, ethnographic, holistic, phenomenographic, artistic, warm and fuzzy, trendy, "soft," or postmodern research methods. My purpose here is to make a case for considering non-quantitative methods an indispensable monkey-wrench in the research toolbox, and to demonstrate that the outcomes of qualitative investigation can be just as productive as, though often differently valid from, the selective, positivist, analytico-deductive, scientific, statistical research methods sanctioned by the gatekeepers of our discipline.
My grandmother sent me a newspaper clipping in which it is reported that a gentleman from California taught half his class using lectures and tutorials and the other half using the Internet: on the final test the students on the Internet course did better than the classroom group. I think there was a number somewhere which proved this. "You must be so pleased, dear," wrote grandma, "to know that you have finally been proved right." OK, I confess, I've been delivering course material via the Internet since 1994 but until now it has been a guilty secret between me and the students, and because they said they liked it I kept going, in spite of the lack of research evidence. (By the way, granny asked me to thank the nice California man for his Christmas present of a laser-driven virtual egg-sucker, "The piano doesn't wobble now," she says.)
Let us agree, first of all, that the terms "quantitative" and "qualitative" are ambiguous: they are commonly used both for the contrasting paradigms and for the methods associated with them. However, either paradigm could equally well employ quantitative or qualitative methods, or both, although adherents of the quantitative paradigm are more likely to use experimental and quasi-experimental tools, while qualitative researchers tend to employ more descriptive techniques (Fetterman, 1988).
Salomon (1993) contrasts "analytic" and "systemic" approaches to research design. The goal of analytic research is to manipulate and control situations so as to increase internal validity and isolate specific causal mechanisms and processes; whereas systemic research is based on the assumption that "each event, component, or action in the classroom has the potential of affecting the classroom as a whole." He proposed that ethnographic or naturalistic methods, such as long-term observation, interviews, and artifact analysis provide a richness of detail about the social processes within which cognition is embedded.
The two approaches are by no means exclusive: they can co-exist perfectly well and the methodologies can complement one-another.
An article by Kearins (1986) in the Australian Journal of Psychology reports an investigation of Lockard's (1971) hypothesis that human populations are shaped by natural selection to fit a particular ecological niche. Why is it, asks Lockard, that Aboriginal people do so poorly on IQ tests, yet possess such remarkable survival skills? Perhaps there are different, genetically determined patterns of intelligence.
Kearins first conducted a series of carefully controlled experiments to establish whether the IQ hypothesis was valid. She used a standardized "visual memory test" in which groups of suburban European and Western Desert Aboriginal children were given 30 seconds to memorize a set of objects presented on a tray, then to recall them: the Aboriginal children scored significantly worse on this test than the Europeans. She then administered a set of "spatial relocation tests" in which objects which could not always be differentiated by name (rocks, leaves, twigs) were arranged on a grid and presented for 30 seconds; they were then disarrayed and the children were asked to replace them in the original positions. The Aboriginal children scored significantly better than the Europeans at these tasks. Her studies confirmed Lockard's hypothesis that the Aboriginal children had differently developed visual memory skills, but the statistically significant results provided no clues to the underlying causes of the difference.
Kearins notes Rowe's (1985) observation that, "Intelligence does not operate in a vacuum... if our assessment of intelligence is to increase in validity... we shall have to observe the individual's functioning in real life, rather than in a laboratory or in a standardized testing situation" (p.10).
Kearins recognized that there are serious methodological problems in trying to conduct experimental studies in a cross-cultural situation. Her follow-up research used a more ethnographic methodology, consisting of long-term observation, interviews with teachers, artifact collection, etc. The outcomes of these studies were even more fascinating: e.g., she observed that Aboriginal babies' heads are not supported when they are carried, forcing them to develop their neck muscles, and consequently their visual acuity, much earlier than European children; additionally, Aboriginal value systems have little or no sense of "ownership," and the memorization of objects by name ("I want that") is of far less importance to Aboriginal than to European children, whereas the need to observe and memorize spatial relationships in a desert landscape is an essential navigation skill.
Most fascinating of all, "White children who performed well on the visual spatial memory test were... derogated by teachers, who considered them lazy, inattentive underachievers--views not supported by school records. It is possible that their cognitive strategies did not fit teacher expectations..." (p.212).
Remaining open to the possibility of unpredicted outcomes is a central tenet of qualitative research.
Kearins' work is an example of the complexity which arises once you begin to delve into Schon's "slimy swamp" or to peep beneath the lid of the Pandora's box opened by qualitative research. And complexity does not sit comfortably with those who search for simple, definitive answers within what Biggs (1995) terms "the whistle-clean, four-square symmetry of the psycho-lab" (p.50).
The "laboratory versus life" debate is controversial in the social sciences, particularly psychology, a discipline which has long craved scientific respectability. Neisser (1978), writing in Practical Aspects of Memory (a precursor to the educational technology debate provoked by Clark (1983)), attacked the laboratory approach that emphasizes internal validity over external validity, charging that nothing interesting or important had resulted from roughly 100 years of effort in the laboratory. Ten years later, Banaji & Crowder (1989) were still arguing that "the more complex a phenomenon, the greater the need to study it under controlled conditions, and the less it ought to be studied in its natural complexity" (p.1192).
Typically, positivists search for social facts apart from the subjective perceptions of individuals; by contrast, phenomenologically oriented researchers seek to understand human behavior from the "insider's" perspective. A qualitative researcher argues that what people believe to be true is more important than any objective reality.
My colleague Dr. Daniel Lam Tai Pong provides an example of the two approaches working in tandem.
Epidemiological studies have clearly demonstrated a link between the consumption of dried fish and nasopharyngeal cancer, and Hong Kong has abnormally high levels of this form of cancer due to the popularity of the food in the southern, coastal diet. This link could only have been established by using large statistical samples and sophisticated quantitative methods. Since the link became widely known, dried fish consumption and nasopharyngeal cancer have begun to fall--everywhere but in the tightly knit fishing community.
In his harbourside clinic Daniel is conducting open-ended interviews with fishing families to see how lifestyle factors contribute to health problems. He has found that the Seui Seung Yan are quite aware of the link between their traditional food and cancer, but they attribute it to the commercial drying process, "which uses chemicals," in contrast to their own "natural" product, which they continue to consume with confidence. (Not true, unfortunately--the carcinogen is related to the drying process.) They justify their "traditional" lifestyle choice by quoting the slogans of the health food industry.
People act on what they believe.
My approach to research could also be described as a "lifestyle choice" based on my own background and beliefs. I have worked for over 20 years as a documentary film maker: the highly subjective and most unscientific art of recording, structuring, and interpreting other people's lives on film and videotape. My heroes were Dziga Vertov, Flaherty, Rouch, Wiseman, the Maysles brothers, Lévi-Strauss...
When I gave up full-time film-making in the late 1970s to work in a university, which required me to do research into educational technology, I stood aghast before the body of media comparison studies which uniformly regard the central participant in the learning process, the learner, as no more than a "black box." I wrote a rather bitter polemic against it (Hart, 1981) in the Journal of Educational Television (still quoted back to me at conferences as a source of embarrassment). But at the time, I recall seeing Richard Clark's notorious "trucks and nutrition" metaphor (1983) as a personal vindication of my position.
There's a statistical theory that if you gave a million monkeys typewriters and set them to work, they'd eventually come up with the complete works of Shakespeare. Thanks to the Internet, we now know this isn't true.
Qualitative research involves the gradual development of ideas about data and the exploration of these ideas. Sometimes the project begins with descriptive categories, derived from research or intuition; more often, categories are derived from the data during the project and linked in ways that describe the data. New theories are constructed and tested by exploring their links with data.
Researchers normally use some or all of the following methods: long-term observation, open-ended interviews (individual and group), artifact collection and analysis, and the systematic indexing of text materials (Fielding & Lee, 1991; Miles & Huberman, 1994).
Qualitative researchers typically collect text material, divide it up by content and file it under descriptive headings. The same piece of information may need to be cross-indexed under a number of headings and before computer programs enabled this to be accomplished electronically it was done with the aid of photocopier, scissors, adhesive tape, manila folders, and a cataloguing system based on library cards and memo paper.
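The cut-and-paste filing described above is, in effect, a many-to-many index between text segments and descriptive headings. As a toy illustration only (the segment and heading names below are invented, not drawn from any real study), the electronic equivalent of the scissors-and-manila-folders system can be sketched in a few lines of Python:

```python
from collections import defaultdict

class CodeIndex:
    """A minimal qualitative coding index: each text segment can be
    filed (cross-indexed) under any number of descriptive headings."""

    def __init__(self):
        self._by_code = defaultdict(list)    # heading -> segments
        self._by_segment = defaultdict(set)  # segment -> headings

    def file(self, segment, *codes):
        """File one segment under several headings at once."""
        for code in codes:
            self._by_code[code].append(segment)
            self._by_segment[segment].add(code)

    def segments(self, code):
        """All segments filed under a given heading."""
        return list(self._by_code[code])

    def codes(self, segment):
        """All headings a given segment is filed under."""
        return sorted(self._by_segment[segment])

# A hypothetical interview excerpt, filed under two invented headings:
index = CodeIndex()
index.file("We only eat the fish we dry ourselves.",
           "traditional practice", "health beliefs")
print(index.codes("We only eat the fish we dry ourselves."))
# -> ['health beliefs', 'traditional practice']
```

The point of the two-way index is exactly the cross-referencing problem the photocopier-and-scissors method solved by duplication: the same excerpt is retrievable from every heading it belongs to, without copying it.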
The test of the ultimate conclusion is to see how elegantly and methodically the evidence was shaped into the conclusion, how the conclusion was coaxed (never forced) to "emerge" from the data, "how evidence and grand account form a well-connected, seamless web of belief that illuminates and enriches our perceptions and understanding of phenomena we see every day. To be credible, the report must show these processes in action, and demonstrate how the conclusions were reached" (Richards & Richards, 1992).
There are many texts that describe and justify versions of the methodology in detail, e.g., Burgess (1984), Strauss (1987), and Strauss & Corbin (1990). The point to note here is that data-driven research is widely practiced in many fields, including educational research, is essential to many research problems, and is not hypothesis-driven.
In 1993 I was invited to collaborate on a research project to establish whether a constructivist, problem-based approach to the teaching of Architecture was as valid as the conventional lecture-assignment method. Validation of teaching methods is important for professional accreditation and funds were available for a study.
It was proposed that we divide the class into two: one half would take the lectures, the other half would be thrown into the deep end of the computer lab to fend for themselves... (sound familiar?). I managed to persuade the Architecture Department that this neo-scientific paradigm had been discredited for over ten years and that there were other, equally sound, methods of validating the new curriculum. Why not use Action Research, I suggested, which is a perfectly acceptable model? (I liked Rob Foshay's distinction between "medical research" and "clinical practice" in last month's paper.) So, for my pains, I was given the direction of the project.
The "Building Systems" course covers issues of construction, materials, maintenance, and management. Students worked collaboratively on real-life problems which required them to build three-dimensional computer models of buildings. They had to master very complex modeling software (on SGI workstations) and to apply it to existing structures in Hong Kong. Their final presentation was to be a multimedia report explaining these issues to others. (URL below)
My job was to track their learning and produce a document at the end of the term which could be used in the accreditation process.
I instinctively began with the methodology of the documentary film maker: record everything without discrimination, then try to make sense of it through reflection at the "editing" stage. We rigged up a video splitting device which enabled us to record both the student and the computer screen as they worked; we conducted regular interviews with the students, individually and in groups; we gave them standardized tests; we asked them to draw concept maps and we generated Pathfinder Nets; and we systematically collected their work. We also provided feedback--both to the teaching staff and to the students themselves--according to the action research "spiral" (act - observe - reflect). For example, when our concept maps clearly demonstrated that the reason the students were losing data was that they had inaccurate mental models of the network file structure, we recommended both adding a self-paced module on network computing to the resource base for the course and simplifying the file system.
The data was indexed and analyzed using the NUD*IST software package*. This too was a cyclic process: the indexing classifications were first developed using a grounded theory approach and were then fed back into our research methodology and further refined by an iterative process. This is not as difficult as it sounds with NUD*IST's flexibility and memo facility.
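The cyclic refinement described above--categories developed from the data, then fed back and split or reorganized on the next pass--can be caricatured in a few lines. This is only a toy sketch of the idea, with invented data and category names, and nothing like NUD*IST's actual facilities:

```python
def refine(index, code, subcodes, classify):
    """Split one broad heading into finer ones: every segment filed
    under `code` is re-filed under the subheading chosen for it by a
    researcher-supplied judgement function `classify`."""
    refined = {sub: [] for sub in subcodes}
    for segment in index.pop(code, []):
        refined[classify(segment)].append(segment)
    index.update(refined)
    return index

# First pass: one broad, grounded category (invented example data).
index = {"beliefs about dried fish": [
    "Commercial drying uses chemicals.",
    "Our own product is natural.",
]}

# Second pass through the data: the broad category proves too
# coarse and is split into two finer ones.
index = refine(
    index,
    "beliefs about dried fish",
    ["distrust of industry", "trust in tradition"],
    classify=lambda s: ("distrust of industry"
                        if "Commercial" in s else "trust in tradition"),
)
```

The essential feature is that the index structure itself is revisable: each pass through the data can reshape the categories that organize the next pass, which is the "iterative process" referred to above.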
The project ran for 18 months. Unsurprisingly, we did not feel able to conclude there was "no significant difference" between the old lecture method and the project method--although this was what the accreditation body wanted to hear. We concluded that the course was a substantial improvement on the lecture method. We were able to demonstrate that:
We were also able to point to aspects of learning which needed to be more strongly reinforced, such as:
You can check out two of the projects at the following URL:
http://arch.hku.hk/projects/bsys1/buildingServices/servicemenu/mainmenu.html (make sure there is no line break)
Isaac Asimov, in his great Foundation series of novels, devised the ultimate quantitative science of "psychohistory" by which Hari Seldon was able to use statistical methods to predict the future of the galaxy for a thousand years and build a "Foundation" on a distant planet which would eventually be ready to step in and save humanity from itself. Throughout the first two novels it appears to work remarkably well, but in the final episode it is revealed that a secret "Second Foundation" of powerful thinkers and philosophers has been in the background all the time to nudge things back on track when the mathematics of psychohistory went astray.
This is a complex argument and I've tried to tackle it by providing you with some unstructured data, making the optimistic assumption that insights may emerge from next week's discussion. I've also carefully avoided committing myself on contentious issues such as internal versus external validity, the merits and restrictions of comparative methodologies, empiricism versus connoisseurship, etc. But I've suggested some questions (below) which we might pursue, and which might tempt someone to make an unequivocal statement.
So let us conclude with this thought: if it is the purpose of Science to explain what is happening, it is the job of poets and philosophers and qualitative researchers to explain why.
Asimov, I.: The "Foundation" series of novels includes Foundation (1951), Foundation and Empire (1952), and Second Foundation (1953). The "Foundation" and "Robot" novels were eventually brought together in Foundation's Edge (1982) and Foundation and Earth (1986).
Banaji, M.R., & Crowder, R.G. (1989). The bankruptcy of everyday memory. American Psychologist, 44, 1185-1193.
Biggs, J.B. (1995). Quality in education: A perspective from learning research and theory. In P.K. Siu & T.K.P. Tam (Eds.), Quality in education: Insights from different perspectives (pp. 50-69). Hong Kong: Hong Kong Educational Research Association.
Burgess, R.G. (1984). In the field: An introduction to field research. London: Allen & Unwin.
Clark, R.E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.
Fetterman, D.M. (Ed.) (1988). Qualitative approaches to evaluation in education. New York: Praeger.
Fielding, N.G., & Lee, R. (Eds.). (1991). Using computers in qualitative analysis. Berkeley: Sage.
Hart, I. (1981). Educational television--The gulf between researchers and producers. Journal of Educational Television, 8, 91-98.
Kearins, J. (1986). Visual spatial memory in Aboriginal and white Australian children. Australian Journal of Psychology, 38(3), 203-214.
Kozma, R.B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research & Development, 42(2), 7-19.
Lockard, R.B. (1971). Reflections on the fall of comparative psychology--Is there a message for us all? American Psychologist, 26, 168-179.
Miles, M.B., & Huberman, A.M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.
Neisser, U. (1978). Memory: What are the important questions? In M. M. Gruneberg, P.E. Morris, & R.N. Sykes (Eds.), Practical aspects of memory (pp. 3-24). San Diego, CA: Academic Press.
Richards, T., & Richards, L. (1992). Qualitative computing: Making data work. Paper presented at the International Conference of the Australian Evaluation Society, Melbourne.
Rowe, H. (1985). So intelligence tests don't work. Paper presented at the First Australian Conference on Testing and Assessment of Ethnic Minority Groups, Darwin, 17-18 October 1985.
Salomon, G. (1993). No distribution without individuals' cognition: A dynamic interactional view. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 111-138). New York: Cambridge University Press.
Schon, D.A. (1987). Educating the reflective practitioner: Toward a new design for teaching and learning in the professions. San Francisco, CA: Jossey-Bass.
Strauss, A.L. (1987). Qualitative data analysis for social scientists. New York: Cambridge University Press.
Strauss, A.L., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage Publications Inc.
NUD*IST (Non-numeric, Unstructured Data--Indexing, Searching, & Theorizing), is a flexible computer tool for Mac and Windows published by Qualitative Solutions & Research Pty. Ltd., 2 Research Avenue, La Trobe University, Victoria, Australia 3083. Version 4.0 has just been released.
In developing this issue further, you may like to consider some of the following questions:
1. Clark tells us that the media (trucks) are only the vehicles which deliver instruction (nutrition), but many correspondents to this forum believe that there is a substantial difference in the quality of the instruction (e.g., McDonald's versus vegibranburgers) delivered by different media (e.g., horse & cart vs. refrigerated trucks). How might we investigate this now that we know that media comparison studies are a dead-end street?
2. Do Tom Reeves' well-known arguments about pseudoscience apply equally to qualitative research? What kinds of qualitative outcomes could be branded as pseudo- ? (and what should be the root of this compound noun?)
3. How might we investigate the quality of learning experiences available from web-delivered instruction?
4. What kind of validity is appropriate to studies of media and learning? (And what does external validity really mean anyway?)