ITForum Paper #5

Questioning the Questions
of Instructional Technology Research

Thomas C. Reeves
The University of Georgia

[This is an abbreviated version of the 1995 Peter Dean Lecture presented for the Division of Learning and Performance Environments (DLPE) at the 1995 National Convention of the Association for Educational Communications and Technology (AECT), Anaheim, CA, USA, February 8-12, 1995.]

My First Flame

Flaming is a '90s phenomenon whereby someone verbally attacks another person on the Internet (Seabrook, 1994). I vividly recall the feelings of shock and anger that swept through me when I received a "flame" note calling me a "jerk" on a "listserv" shared by hundreds of members around the world.

It all began last spring when I read two queries from doctoral students on the Qualitative Research for the Human Sciences listserv. Both students came from large public institutions of higher education in North America. The first student wrote that she intended to focus her dissertation research on the quality of "discourse" that takes place in cafes located within bookstores. She complained that she had found no "literature" on this topic and asked the listserv participants for some guidance. The second student announced that he was preparing a dissertation prospectus centered on the question of how people learned about opportunities to take SCUBA diving lessons and what motivated them to register for such courses. He also sought directions to relevant literature and advice from the listserv membership.

After pondering these queries, I posted a message asking whether faculty members at taxpayer-supported universities have a moral responsibility to guide their students toward "socially responsible" research questions. In my posting, I suggested that in the face of problems such as adult illiteracy, attacks on public education, "at-risk" students, homelessness, AIDS, and the like, faculty members should attempt to inspire in students a dedication to research that would "make a difference."

Shortly after posting my note, the graduate student who had sought help with his SCUBA query "flamed" me with his "jerk" note. He went on to criticize my "attack" on his freedom to address whatever research questions interested him, especially given that he was a taxpayer as well. A small grass fire of flames then erupted as several listserv members castigated the student for calling me a jerk, some agreed with my critique, and others defended the position that the social relevance of educational research simply does not matter. No resolution of this issue was reached on the listserv, but I was impressed by the response of an education professor from another university in the USA who agreed with my criticism, but went on to suggest that much of the research he had read in the field of instructional technology was equally irrelevant. This prompted me to ponder the social relevance of research in our field.

Is Instructional Technology Research Socially Relevant?

Social relevance is an issue that is obviously subject to much debate, but I'll attempt to define social relevance with respect to scientific inquiry. My definition includes the following principles that guide scientific research (derived from Casti, 1989):

1. Science is an ideology that consists of a cognitive structure concerning the nature of reality together with processes of inquiry, verification, and peer review.

2. Views of reality differ according to one's philosophy of science, e.g., realism maintains that an objective reality exists, instrumentalism asserts that reality is the readings noted on measuring instruments, and relativism claims that reality is what the community says it is.

3. Scientific research is a social activity that has certain standards and norms, e.g., it should not intentionally harm humans and it must be able to be replicated by other researchers.

4. Socially responsible research in education adheres to the basic principles listed above while at the same time it addresses problems that detract from the quality of life for individuals and groups in society, especially those problems related to learning and human development.

At first glance, instructional technology (IT) research might seem automatically "socially responsible" in that, at some level, all IT research can be said to focus on questions of how people learn and perform, especially with respect to how learning and performance are influenced, supported, or perhaps even caused by technology. As long as research is focused on learning and performance problems, and adheres to the principles listed above, it would seem to be socially responsible.

At the same time, many in the research community argue that concern for the social responsibility of research in IT or any other field is ludicrous. They maintain that the goal of research is knowledge in and of itself, and that whether research is socially responsible is a question that lies outside the bounds of science (cf., Carroll, 1973). For example, as reported by Farley (1982), Nate Gage, a past president of the American Educational Research Association (AERA), has been a staunch defender of the notion that the goal of basic research in education is simply "more valid and more positive conclusions" (p. 12) regardless of their applicability or relevance.

Not everyone in education or society as a whole agrees with the "pursuit of knowledge for its own sake" position of empiricists such as Gage. Another past president of AERA, Robert Ebel, proclaimed:

"....the value of basic research in education is severely limited, and here is the reason. The process of education is not a natural phenomenon of the kind that has sometimes rewarded scientific investigation. It is not one of the givens in our universe. It is man-made, designed to serve our needs. It is not governed by any natural laws. It is not in need of research to find out how it works. It is in need of creative invention to make it work better." (Farley, 1982, p. 18).

In my opinion, Ebel's stance is directly relevant to the issue of socially responsible research in IT. There is little social relevance in research studies that are largely focused on understanding "how" instructional technology works without substantial concern for how this understanding makes education better. On the other hand, there is considerable social relevance in IT research studies that are largely focused on making education better (and which in the process may also help us understand more about how instructional technology works).

Much of the research in IT is grounded in a "realist" philosophy of science, i.e., conducted under the assumption that education is part of an objective reality governed by natural laws and therefore can be studied in a manner similar to other natural sciences such as chemistry and biology. If this assumption about the nature of the phenomena we study is erroneous (and I believe it is), then we inevitably ask the wrong questions in our research. Further, even if there are underlying laws that influence learning, the complexity inherent in these laws may defy our ability to perceive, much less control, them (Casti, 1994). As Cronbach (1975) pointed out two decades ago, our empirical research may be doomed to failure because we simply cannot pile up generalizations fast enough to adapt our instructional treatments to the myriad of variables inherent in any given instance of instruction.

Of course, there have been many critiques of research in our field. I adopted the title of this paper from one published twenty-seven years ago by Keith Mielke (1968) titled "Questioning the Questions of ETV Research." Other critics of the questions and methods of IT research include Lumsdaine (1963), Schramm (1977), Clark (1983), and Salomon (1991). However, few critics have dealt directly with questions of whether instructional technology research is, can be, or should be socially responsible. I hope to do so in this paper.

The State of Instructional Technology Research

Before returning to the issue of the social relevance of IT research, it is necessary to examine the state of research in the field today. I reviewed the contents of two of the primary research journals in the field, Educational Technology Research and Development (ETR&D) and the Journal of Computer-Based Instruction (JCBI), over the periods 1989-94 for ETR&D and 1988-93 for JCBI. (Although I would have preferred to examine research publications in both journals during an identical six-year period, this was not possible. ETR&D began its new format in 1989, and JCBI ceased publication at the end of 1993.)

After reflection and consultations with several experts (see Note 1 below), I developed a research classification scheme. This classification scheme represents an effort to distinguish between the "goals" and the "methods" of research. First, I propose that most research studies in instructional technology can be classified according to the six research goals represented in Figure 1.

Theoretical--research focused on explaining phenomena through the logical analysis and synthesis of theories and principles from multiple fields with the results of other forms of research such as empirical studies.

Empirical--research focused on determining how education works by testing hypotheses related to theories of communication, learning, performance, and technology.

Interpretivist--research focused on portraying how education works by describing and interpreting phenomena related to human communication, learning, performance, and the use of technology.

Postmodern--research focused on examining the assumptions underlying applications of technology in human communication, learning, and performance with the ultimate goal of revealing hidden agendas and empowering disenfranchised minorities.

Developmental--research focused on the invention and improvement of creative approaches to enhancing human communication, learning, and performance through the use of theory and technology.

Evaluation--research focused on a particular program, product, or method, usually in an applied setting, for the purpose of describing it, improving it, or estimating its effectiveness and worth.

Figure 1. Research goal classification scheme.

Second, given the aforementioned desire to separate the goals of research studies from the methodologies employed in them, I propose the methodology classification scheme represented in Figure 2. Of course, there are numerous methods available to researchers in instructional technology (cf., Driscoll, 1995), but for the sake of simplicity, these five methodological groupings provide sufficient discrimination to allow the analysis represented below.

Quantitative--experimental, quasi-experimental, correlational, and other methods that primarily involve the collection of quantitative data and its analysis using inferential statistics.

Qualitative--observation, case-studies, diaries, interviews, and other methods that primarily involve the collection of qualitative data and its analysis using grounded theory and ethnographic approaches.

Critical Theory--deconstruction of "texts" and the technologies that deliver them through the search for binary oppositions, hidden agendas, and the disenfranchisement of minorities.

Literature Review--various forms of research synthesis that primarily involve the analysis and integration of other forms of research, e.g., frequency counts and meta-analyses.

Mixed-methods--research approaches that combine a mixture of methods, usually quantitative and qualitative, to triangulate findings.

Figure 2. Research methods classification scheme.

The combination of the goal classification and the methods classification schemes yields a matrix of research goals by research methods. Unfortunately, a matrix is difficult to represent within the plain-text limitations of this listserv. (See Note 2.) Table 1 presents my analysis of the research goals of the 104 articles published in ETR&D (1989-1994). (See Note 3.) Not every article could be classified according to the proposed classification scheme. Six "methodological articles" (presenting a new method or procedure for carrying out research) and three "professional articles" (analyzing the state of the profession of instructional technology) are in the "Other" category in Table 1. Table 2 presents an analysis of the same 104 articles according to research methods.
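The cross-tabulation described above can be sketched as a simple tally of (goal, method) pairs. The article classifications below are hypothetical examples invented for illustration, not the actual ETR&D or JCBI data.

```python
from collections import Counter

# The six research goals and five research methods from Figures 1 and 2.
GOALS = ["Theoretical", "Empirical", "Interpretivist",
         "Postmodern", "Developmental", "Evaluation"]
METHODS = ["Quantitative", "Qualitative", "Critical Theory",
           "Literature Review", "Mixed-methods"]

# Hypothetical classifications of a handful of articles, each a
# (goal, method) pair -- NOT the actual journal data analyzed in the paper.
articles = [
    ("Empirical", "Quantitative"),
    ("Empirical", "Quantitative"),
    ("Theoretical", "Literature Review"),
    ("Evaluation", "Mixed-methods"),
]

# Tally the articles into the goals-by-methods matrix.
matrix = Counter(articles)

# Print the matrix with goals as rows and methods as columns.
print("{:<15}".format("") + "".join("{:>18}".format(m) for m in METHODS))
for g in GOALS:
    row = "{:<15}".format(g)
    row += "".join("{:>18}".format(matrix.get((g, m), 0)) for m in METHODS)
    print(row)
```

Each cell of the printed matrix gives the count of articles classified with that goal and that method, which is the form of analysis summarized in Tables 1 through 4.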

Table 1. Research goals of 104 ETR&D articles (1989-94)

[Table 1 data not recoverable from the plain-text conversion]

Table 2. Research methods of 104 ETR&D articles (1989-94)

Critical Theory: 0
Literature Review: 38
[remaining rows not recoverable from the plain-text conversion]

Table 3 presents my analysis of goals of the 129 research articles published in JCBI (1988-1993). Five "methodological articles" and one "professional article" are included in the "Other" category in Table 3. Table 4 presents an analysis of the same 129 articles according to research methods.

Table 3. Research goals of 129 JCBI articles (1988-93)

[Table 3 data not recoverable from the plain-text conversion]

Table 4. Research methods of 129 JCBI articles (1988-93)

Critical Theory: 0
Literature Review: 24
[remaining rows not recoverable from the plain-text conversion]

There are some obvious trends in the articles that appeared in ETR&D and JCBI during the respective review periods. First, the most common type of article in either publication is empirical in intent and quantitative in method. Thirty-nine articles (38% of the total 104) in ETR&D and fifty-six articles (43% of the total 129) in JCBI fall into the "empirical-quantitative" cell of a "goals by methods" matrix.

The next largest subset of articles in these publications can be classified as theoretical in intent and employing literature review as the primary method. I was liberal in my classification of articles into this category. For example, I assigned all of the "influence of media debate" articles from ETR&D into this classification (cf., Clark, 1994; Kozma, 1994). The extent to which literature review methods were actually used in these articles varies greatly.

Another trend that stands out is the paucity of interpretivist articles (one in ETR&D and three in JCBI) during this time. This seems surprising given the numerous applications of the "Constructivist-Hermeneutic-Interpretivist-Qualitative Paradigm" in other fields of education (cf., Eisner, 1991). Although Neuman (1989), Driscoll (1995), Robinson (1995), and others promote interpretivist approaches to research in IT, interpretivist research reports seem rarely to find their way into our publications.

Developmental research studies are also scarce in each of these publications. (With respect to ETR&D, it may be that most developmental research studies appear in the development section of the journal, but this is a hypothesis that has not been investigated.) Other possible explanations are that instructional technologists rarely conduct developmental research, those that do have too little time to report it, or the review panels for the journals do not recognize this approach as legitimate research.

The complete absence of any articles in these journals that are postmodern in intent or that employ critical theory as a methodology is disappointing, but not too surprising. First, Hlynka and Belland's (1991) volume on the application of postmodern criticism to instructional technology may not be widely known. Second, the gatekeepers of ETR&D and JCBI appear to have strong preferences for empirical research employing quantitative methods. They may be unwilling or unable to entertain such radical departures from standard research methods as have been proposed by Yeaman (1994) and other critical theorists.
