It's (Beyond) Time to Drop the Terms
Causal-Comparative and Correlational Research in Education

Burke Johnson
University of South Alabama
Instructional Design & Development Program


Presentations of causal-comparative and correlational research methods in educational research textbooks are critiqued. The first major contention in this paper is that, ceteris paribus, causal-comparative research is neither better nor worse than correlational research in establishing evidence of causality. The second contention is that the terms causal-comparative research and correlational research are outdated, and researchers should use more current and accurate terminology. The two most effective ways to classify quantitative, nonexperimental research are, first, based on the primary research objective (i.e., description, prediction, and explanation) and, second, based on the time dimension (i.e., cross-sectional, longitudinal, and retrospective). Another classification dimension is based on the scaling of the independent, dependent, and control variables. Because of the importance of nonexperimental research in education, it is essential that we work toward the development of defensible nonexperimental classifications that make sense in an interdisciplinary research environment.

What is the Issue?

Authors of several popular educational research methods books make a distinction between two nonexperimental methods called causal-comparative research and correlational research (e.g., Charles, 1995; Fraenkel & Wallen, 1996; Gay, 1996; Martella, Nelson, & Marchand-Martella, 1999). According to these authors, a primary distinction between these two methods is that causal-comparative research includes a categorical independent and/or dependent variable (hence the word "comparative," implying a group comparison), whereas correlational research includes only quantitative variables. Most of these authors also suggest that causal-comparative research provides better evidence of cause and effect relationships than correlational research. The following quotes from Gay's (1996) popular text demonstrate these points:

Charles (1998) says, "Causal-comparative research strongly suggests cause and effect..." (p. 305) but that correlational research may sometimes be used to "examine the possible existence of causation" (emphasis added) (p. 260). In one of the newer educational research methods books on the market, Martella, Nelson, and Marchand-Martella (1999) contend that "correlational research has a lower constraint level than causal-comparative research. There is not an active attempt to determine the effects of the independent variable in any direct way" (emphasis added) (p. 20).

The next quote is from Fraenkel, the first author of the best-selling educational research text (Fraenkel & Wallen, 1996). Fraenkel appears to agree with Gay and the others quoted above.1 In the following message (available on the AERA Division-D archives, February 11, 1998), Fraenkel answers the question, "Why do educational researchers ... seem to believe that evidence for cause and effect will be any stronger in causal-comparative research than in correlational research?":

Based on the above quotations, and the space allocated to these two methods in popular textbooks, it should not be surprising that almost 80 percent (n=330) of the participants in an Allyn and Bacon (Fall, 1996) survey of teachers of educational research said that the distinction between causal-comparative research and correlational research should be retained. Although the majority supported the distinction, only a minority of those replying to a follow-up open-ended question (asking why they felt the distinction was needed) indicated a clearly faulty view of causality in nonexperimental research. Nonetheless, it is a problem when any teachers of educational research have a faulty view of causality!

If the primary distinction, in design, between a causal-comparative and a correlational study is the scaling of the independent variable (and not the manipulation of this variable), then the obvious question is "Why can one supposedly make a superior causal attribution from a causal-comparative study?" The answer is that the contention is completely without basis. Fraenkel's point that one approach compares groups but the other only looks at one group has absolutely nothing to do with establishing evidence of causality using nonexperimental research. Likewise, the fact that some writers choose to label the independent variable a "predictor" variable in correlational research but not in causal-comparative research has nothing to do with establishing evidence of causality.

To illustrate the point about variable scaling, consider the following example. Suppose one is interested in learning whether two variables, "time spent studying per day" during the past week before a test (based on a self-reported, retrospective question) and actual "test scores" (as recorded by the teachers), are associated. If time spent studying is measured in minutes per day, then a "correlational" study results. If, however, time is artificially dichotomized into two groups--10 minutes or less per day and more than 10 minutes per day--a "causal-comparative" study results. The only true distinction between these two studies is the scaling of the variables. This is a trivial distinction and does not warrant the claim that the causal-comparative study will produce more meaningful evidence of causality. Third variables (i.e., confounding extraneous variables) such as ability, age, and parental guidance could be affecting study time and exam scores in both of the above studies. For another example, there is no reason to believe a stronger causal claim can be made from a study measuring the relationship between gender and test grades (a causal-comparative study) than from a study measuring the relationship between time spent studying for a test and test grades (a correlational study). (Note that the researcher could calculate a simple correlation coefficient in both of these cases.) The point is that only a relationship between two variables would have been demonstrated.
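The triviality of the scaling distinction can be shown with a small simulation. The sketch below is hypothetical (the variable names, effect sizes, and the 10-minute cutoff are invented for illustration, not taken from any actual study): a third variable drives both study time and test scores, so the observed relationship is spurious by construction, and both the "correlational" and the "causal-comparative" analysis detect the same spurious relationship.

```python
# Hypothetical simulation: the same two variables analyzed as a
# "correlational" study (continuous study time) and as a
# "causal-comparative" study (study time dichotomized at 10 minutes/day).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# A third variable ("ability") drives both study time and test scores,
# so any observed relationship between them is spurious by construction.
ability = rng.normal(0, 1, n)
minutes = 10 + 2 * ability + rng.normal(0, 2, n)   # minutes studied per day
score = 70 + 5 * ability + rng.normal(0, 5, n)     # test score

# "Correlational" analysis: Pearson r between the two quantitative variables.
r_continuous = np.corrcoef(minutes, score)[0, 1]

# "Causal-comparative" analysis: dichotomize minutes (<= 10 vs. > 10) and
# correlate the group indicator with score (a point-biserial r).
group = (minutes > 10).astype(float)
r_dichotomized = np.corrcoef(group, score)[0, 1]

# Both analyses detect the same (spurious) relationship; dichotomizing the
# independent variable changes the label of the study, not the evidence.
print(round(r_continuous, 2), round(r_dichotomized, 2))
```

Note that the dichotomized analysis, far from being stronger, typically yields a somewhat smaller correlation because categorization discards information about amount.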

The first contention of this paper is that, ceteris paribus, causal-comparative research is neither better nor worse in establishing evidence of causality than correlational research. When you compare apples to apples (e.g., the simple cases of causal-comparative and correlational research) and oranges to oranges (e.g., the more advanced cases), you simply cannot conclude that causal-comparative research is any better than correlational research for making causal attributions. I am defining the simple cases of causal-comparative and correlational research here as studies that include two variables and no controls, and advanced cases as studies where controls are included. It is essential that we understand that what is always important when attempting to make causal attributions is the elimination of plausible rival explanations (Cook & Campbell, 1979; Huck & Sandler, 1979; Johnson & Christensen, 2000; Yin, 2000). It is also very important to understand that one can attempt to eliminate selected rival explanations in both causal-comparative and correlational research. The second contention of this paper is that the terms causal-comparative and correlational research are outdated and should be replaced by more current terminology. Suggestions for an improved nonexperimental quantitative research terminology are provided below.

What are the Similarities and Differences Between Causal-Comparative and Correlational Research Methods?

Causal-comparative and correlational methods (as defined in educational research textbooks) are similar in that both are nonexperimental methods: in both, the independent variable is not manipulated by the researcher, and random assignment of participants is not possible. This means, among other things, that the variables must be observed as they occur naturalistically. As a result, the key and omnipresent problem in nonexperimental research is that an observed relationship between an independent variable and a dependent variable may be spurious. That is, the relationship is not a causal relationship; it is the result of the operation of a third variable (see Figure 1). For an example of the third variable problem, note that self-reported "gender role identification" and high school "algebra performance" may be related. However, that relationship would probably be largely due to the joint influence of the third variable of "gender-role socialization."

Figure 1.  The problem of spuriousness.

Because of the lack of manipulation of the independent variable and the problem of spuriousness, neither causal-comparative nor correlational research can provide evidence for causality as strong as that from a study based on a randomized experiment or a strong quasi-experimental design (such as the regression discontinuity design or the time series design). Indeed, even the more sophisticated theory-testing or confirmatory approaches relying on structural equation modeling (which are "correlational") provide relatively weak evidence of causality (when based on nonexperimental data) as compared to the evidence obtained through randomized experiments.

Causal-comparative and correlational studies differ on the scaling of the independent and/or dependent variables. That is, according to popular textbooks, causal-comparative studies include at least one categorical variable and correlational studies include only quantitative variables. The most common type of independent variable used in causal-comparative and correlational studies (i.e., in nonexperimental research) is an attribute variable (Kerlinger, 1986). These rarely manipulable variables are called attribute variables because they represent characteristics or "attributes" of different people. The concept of attribute variable can easily be generalized to non-human units of analysis (e.g., schools, books, cultures, etc.). Some categorical independent variables that cannot be manipulated and might be used in a "causal-comparative" study are gender, parenting style, learning style, ethnic group, college major, party identification, type of school, marital status of parents, retention in grade, type of disability, presence or absence of an illness, drug or tobacco use, and any personality trait that is operationalized as a categorical variable (e.g., extrovert versus introvert). Some quantitative independent variables that cannot be manipulated and might be used in a "correlational study" are intelligence, aptitude, age, school size, income, job satisfaction, GPA, amount of exposure to violence in the media, and any personality trait that is operationalized as a quantitative variable (e.g., degree of extroversion). Again, the key characteristic of the independent variables used in causal-comparative and correlational studies is that they either cannot be manipulated or they were not manipulated for various reasons (e.g., because of ethical concerns or a lack of resources).

Causal-comparative and correlational studies are similar in that both are used to examine relationships among variables. The data from both of these approaches are typically analyzed using the general linear model (GLM), and it is well known that all special cases of the GLM are correlational in the sense that they model relations between variables (e.g., Kerlinger, 1986; Tatsuoka, 1993; Thompson, 1999). Given this, it is misleading to suggest, as is sometimes done in educational research texts, that only correlational research examines relationships.

Causal-comparative and correlational studies are similar in the techniques available for controlling confounding variables. For example, one can statistically control for confounding variables in both approaches by collecting data on the key confounding extraneous variables and including those variables in the GLM. Likewise, one can eliminate the relationship between selected confounding and independent variables (regardless of their scaling) using matching or quota sampling approaches. Today, statistical control is usually preferred over individual matching (Judd, Smith, & Kidder, 1991; Rossi, Freeman, & Lipsey, 1999).

According to some authors, causal-comparative and correlational research have different research purposes. In particular, Gay (1996) suggests that the purpose of causal-comparative research is to examine causality and the purpose of correlational research is to examine relationships and make predictions. This is misleading because, first, one can also examine relationships and make predictions in the presence of nonmanipulated categorical variables (i.e., in causal-comparative research), and, second, some evidence of causality can be obtained by controlling for confounding variables and ruling out plausible rival hypotheses in both causal-comparative and correlational research. However, the idea of making a distinction within nonexperimental quantitative research between approaches dealing with causality and those that do not deal with causality does have merit. This idea is explained below, but, first, I examine the origin of the belief that causal-comparative research provides stronger evidence for causality than correlational research.

Where Did the Idea that Causal-Comparative is Superior Come From?

The term causal-comparative appears to have originated in the early 20th century (see Good, Barr, & Scates, 1935). The early writers did not, however, contend that evidence for causality based on causal-comparative research was superior to evidence based on correlational research. For example, according to Good, Barr, & Scates (1935),

It was also known at the time that selected extraneous variables could be partialled out of relationships in correlational research.2 This idea is illustrated in the following quote (from Good et al.):

The fallacious idea that causal-comparative data are better than correlational data for drawing causal inferences appears to have emerged during the past several decades, and it has been popularized in some of the educational research texts during that time (e.g., Gay, 1996).

There may be several sources that have led to confusion. First, some may believe that causal-comparative research is superior to correlational research for studying causality because a causal-comparative study looks more like an experiment. Because categorical independent variables are common in experimental research, a study with a categorized independent variable may appear more experiment-like than one with a continuous independent variable.3

Second, perhaps the term causal-comparative suggests a strong design but the term correlational suggests a simple correlation (and hence a weak design). I sometimes ask my beginning research methods students which approach is stronger for studying cause and effect: causal-comparative or correlational research. Many respond that causal-comparative is stronger. When I ask them why they believe causal-comparative is stronger, they frequently point out that the word causal appears in the term causal-comparative research but not in the term correlational research.

Third, the term correlational research has sometimes been used as a synonym for nonexperimental research over the years in education and in the other social and behavioral sciences. Unfortunately, this use may lead some people to forget that causal-comparative also is a nonexperimental research method. Causal-comparative research is not experimental research; it is not even quasi-experimental research. Causal-comparative research, just like correlational research, is a nonexperimental research method.

Fourth, perhaps the confusion is linked to a faulty view about the difference between ANOVA (which is linked to causal-comparative research) and correlation/regression (which is linked to correlational research). For example, some writers appear to believe that ANOVA is only used for explanatory research and that correlation and regression are limited to predictive research. It is essential to understand that multiple regression can be used for explanatory research (and for the control of extraneous variables) as well as for descriptive and predictive research, and, likewise, ANOVA can be used for descriptive and predictive research as well as for explanatory research (Cohen & Cohen, 1983; Pedhazur, 1997). ANOVA and MRC (multiple regression and correlation) are both "special cases" of the general linear model, and they are nothing but approaches to statistical analysis. The general linear model "does not know" whether the data are being used for descriptive, predictive, or explanatory purposes because the general linear model is only a statistical algorithm.
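The unity of these "special cases" is easy to demonstrate numerically. The sketch below (simulated, hypothetical data) analyzes a two-group comparison both ways: as a pooled-variance t test ("causal-comparative" style) and as a point-biserial correlation ("correlational" style). The two results are linked by the exact identity t = r√(df/(1 − r²)), because both are the same general linear model.

```python
# Sketch: a two-group mean comparison and a correlation analysis of the
# same data are the same GLM (simulated, hypothetical data).
import numpy as np

rng = np.random.default_rng(1)
n_per_group = 50
g = np.repeat([0.0, 1.0], n_per_group)        # dummy-coded group membership
y = 3.0 + 1.0 * g + rng.normal(0, 1, 2 * n_per_group)

# "Correlational" analysis: point-biserial correlation of group dummy with y.
r = np.corrcoef(g, y)[0, 1]

# "Causal-comparative" analysis: pooled-variance independent-samples t test.
y0, y1 = y[g == 0], y[g == 1]
pooled_var = (y0.var(ddof=1) + y1.var(ddof=1)) / 2    # equal group sizes
t = (y1.mean() - y0.mean()) / np.sqrt(pooled_var * (2 / n_per_group))

# Exact identity linking the two analyses: t = r * sqrt(df / (1 - r^2)).
df = 2 * n_per_group - 2
t_from_r = r * np.sqrt(df / (1 - r**2))
```

The two values of t agree to floating-point precision, which is the point: the scaling of the independent variable selects a notation, not a stronger or weaker method.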

Fifth, perhaps some students and researchers believe causal-comparative research is superior because they were taught the mantra that "correlation does not imply causation." It is certainly true that correlation does not, by itself, imply causation (it is a necessary but not sufficient condition). It is equally true, however, that observing a difference between two or more means does not, by itself, imply causation. It is very unfortunate that this second point is not made with equal force in educational research textbooks. Another way of putting this is that evidence for causality in the simple case of causal-comparative research (two variables with no controls) and in the simple case of correlational research (two variables with no controls) is virtually nonexistent. One simply cannot draw causal conclusions from these simple cases. Some evidence of causality can be obtained by improving upon these simple cases by identifying potential confounding variables and attempting to control for them.

How Should Nonexperimental Quantitative Research Be Presented in Educational Research Methods Books?

First, the terms causal-comparative research and correlational research should be dropped. The use of these terms is no longer justifiable because too often they mislead rather than inform. The term causal-comparative suggests that it is a strong method for studying causality (Why else would it include the word "cause"?), and the term correlational places the focus on a statistical technique rather than on a research technique (correlational statistics are also used in causal-comparative and experimental research). Writers should have followed Thomas Cook and Donald Campbell's advice on this issue. Over twenty years ago, in their book on quasi-experimentation and field research, Cook and Campbell (1979) made the following point: "The term correlational-design occurs in older methodological literature...We find the term correlational misleading since the mode of statistical analysis is not the crucial issue" (p. 6, emphasis in original).

It is telling that Fred Kerlinger (1986), who was one of education's leading research methodologists, made no distinction between causal-comparative and correlational research (or between "ex post facto" research and correlational research). Kerlinger used the term nonexperimental research, which is the term that I believe educational methodologists should readily adopt.4 Here is how Kerlinger defined the inclusive term nonexperimental research:

Although Kerlinger originally (1973) called this type of research ex post facto research (which some believe is a synonym for the term causal-comparative research), Kerlinger later (1986) dropped the term ex post facto (probably because it apparently excludes prospective studies). An examination of Kerlinger's examples also clearly shows that Kerlinger's nonexperimental research classification is not limited to studies including at least one categorical variable. Kerlinger was an expert on the general linear model, and he would never have contended that causal-comparative studies were inherently superior to correlational studies for establishing evidence of cause and effect.

Students and beginning researchers need to learn how to think about causality and understand that the scaling of a variable (categorical or quantitative) has nothing to do with evidence of causality. For example, when an independent variable is categorical, the comparisons are made between the groups. When an independent variable is quantitative, comparisons can be made for the different levels of the independent variable; the presumed effect can also be described through a functional form such as a linear or quadratic model. It is generally a poor idea to categorize a quantitative variable because of the loss of information about amount and process (Kerlinger, 1986, p. 558; Pedhazur & Schmelkin, 1991, p. 308).5

When interest is in causality, researchers should always address the three necessary conditions for cause and effect (Cook & Campbell, 1979; Johnson & Christensen, 2000). The first necessary condition is that the two variables must be related (i.e., the relationship or association condition). The second necessary condition is that proper time order must be established (i.e., the temporal antecedence condition). If changes in Variable A cause changes in Variable B, then A must occur before B. The third necessary condition is that an observed relationship must not be due to a confounding extraneous variable (i.e., the lack of alternative explanation condition or the nonspuriousness condition). There must not remain any plausible alternative explanation for the observed relationship if one is to draw a causal conclusion. A theoretical explanation or rationale for the observed relationship is also essential to make sense of the causal relationship and to lead to hypotheses to be tested with new research data. Generally speaking, nonexperimental research is good for identifying relationships (condition one), but it is weak on necessary conditions two (time order) and three (ruling out alternative explanations). Nonexperimental research is especially weak on condition three because of the problem of spuriousness.

A potential problem to watch for when studying relationships is that commonly used statistical techniques may miss the relationship. For example, a Pearson correlation coefficient (or any other measure of linear relationship) will underestimate or entirely miss a curvilinear relationship. Model misspecification can also result in failure to identify a relationship. For example, if there is a fully disordinal two-way interaction (where the graph for two groups forms an "X") there will be no main effects, and, therefore, if one of the independent variables is excluded from the study and the interaction is never examined, it will appear that there is no relationship between the included variables (even experimental manipulation and randomization are to no avail here). Another important form of model misspecification is when one or more common causes are excluded from the model, resulting in the failure to purge an observed relationship of its spurious components (Bollen, 1989; Davis, 1985). Simpson's Paradox (Moore & McCabe, 1993, pp. 188-191) can result in a conclusion (based on a measure of bivariate association) that is the opposite of the correct conclusion. One must also be careful in interpreting a relationship when suppression is present (see Cohen & Cohen, 1983, pp. 94-95; Lancaster, 1999).
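A minimal numerical sketch of the first problem (invented, noise-free data, purely for illustration): a perfect quadratic dependence that a Pearson correlation reports as essentially zero, while a model using the correct functional form fits it perfectly.

```python
# Sketch: a perfect curvilinear (quadratic) relationship that a Pearson
# correlation entirely misses. Hypothetical, noise-free data.
import numpy as np

x = np.linspace(-3, 3, 101)
y = x**2                               # exact U-shaped dependence on x

r_linear = np.corrcoef(x, y)[0, 1]     # ~0: there is no *linear* relationship

# Fitting the correct functional form recovers the relationship completely.
coeffs = np.polyfit(x, y, deg=2)
y_hat = np.polyval(coeffs, x)
r2_quadratic = 1 - ((y - y_hat)**2).sum() / ((y - y.mean())**2).sum()
```

Here r_linear is approximately zero even though y is a deterministic function of x; the lesson is that "no correlation" means "no linear relationship," not "no relationship."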

Researchers interested in studying causality in nonexperimental research should determine whether a functional relationship (that is presumed to be causal) disappears after controlling for key antecedent or concurrent extraneous variables that represent plausible rival explanations of the observed relationship. The more tests of this sort a theoretically derived model survives, the better the evidence of causality. Researchers must be careful, however, when interpreting the reduction in an observed relationship between two variables after controlling for another variable because controlling for either a confounding variable (i.e., one that affects both the independent and dependent variables) or an intervening variable (i.e., one that occurs after the independent variable and before the dependent variable) will reduce the magnitude of the observed relationship (Pedhazur, 1997, p. 159). It is essential that this process be guided by theory in nonexperimental research.
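The logic of such a control test can be sketched with simulated data (all variable names and parameters are hypothetical). Here X and Y are related only because both are driven by Z; a first-order partial correlation, one standard way of implementing statistical control in the GLM, reduces the observed relationship to roughly zero, revealing its spuriousness.

```python
# Sketch (simulated, hypothetical data): an observed X-Y relationship that
# disappears once the confounder Z is statistically controlled.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
z = rng.normal(0, 1, n)            # confounder (e.g., ability)
x = z + rng.normal(0, 1, n)        # affected by z; has no effect on y
y = z + rng.normal(0, 1, n)        # affected by z only

def r(a, b):
    """Pearson correlation between two arrays."""
    return np.corrcoef(a, b)[0, 1]

r_xy, r_xz, r_yz = r(x, y), r(x, z), r(y, z)

# First-order partial correlation of x and y, controlling for z.
r_xy_given_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
```

Had x exerted a real effect on y in addition to the shared influence of z, the partial correlation would have shrunk but not vanished, which is why such tests, guided by theory, provide some evidence about causality.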

The following strategies are also helpful in establishing evidence for causality: collection of longitudinal data, explication of intervening mechanisms, evidence of a dose/response relationship, explicit use of one or more control techniques (e.g., matching, statistical control, etc.), empirical tests of theoretical predictions with new data, replication of findings to rule out chance and sample specific factors, cross-validation on large samples, evidence of construct validity, evidence of convergence obtained through triangulation (methods, data, and investigator triangulation), pattern matching (see Trochim, 1989), selection modeling (see Rindskopf, 1992), comparisons of competing theoretical models, and extensive open and critical examination of a theoretical argument by the members of the research community with expertise in the research domain. Little can be gained from a single nonexperimental research study, and students and researchers must always temper their conclusions. Greater evidence can be obtained through meta-analytic research studies (assuming that the included studies are of acceptable quality; see Cordray, 1990) and through the development of theories that have survived numerous disconfirmation attempts (e.g., Bollen, 1989; Maruyama, 1998). The strongest nonexperimental quantitative studies usually result from well controlled prospective panel studies and from confirmatory structural equation (theoretical) models (Johnson & Christensen, 2000).

Developing a Better Classification of Nonexperimental Research

One effective way to classify quantitative, nonexperimental research is based on the major or primary research objective. Studies can be usefully classified into the categories of descriptive research, predictive research, and explanatory research. To determine whether the primary objective was description, ask the following questions: (1) Were the researchers primarily describing the phenomenon? (2) Were the researchers documenting the characteristics of the phenomenon? If the answer is "yes" (and there is no manipulation), then apply the term descriptive nonexperimental research. To determine whether the primary objective was predictive, ask the following question: Did the researchers conduct the research so that they could predict or forecast some event or phenomenon in the future? If the answer is "yes" (and there is no manipulation), then apply the label predictive nonexperimental research. To determine whether the primary objective was explanatory, ask the following questions: (1) Were the researchers trying to develop or test a theory about a phenomenon to explain "how" and "why" it operates? (2) Were the researchers trying to explain how the phenomenon operates by identifying the factors that produce change in it? If the answer is "yes" (and there is no manipulation), then apply the term explanatory nonexperimental research.

Nonexperimental, quantitative research should also be classified according to the time dimension. Here the types of research include cross-sectional research, longitudinal research, and retrospective research. In cross-sectional research the data are collected from research participants at a single point in time or during a single, relatively brief time period and comparisons are made across the variables of interest. In longitudinal research the data are collected at more than one time point or data collection period, and the researcher is interested in making comparisons across time. Data can be collected on one or multiple groups in longitudinal research. Two subtypes of longitudinal research are trend studies (where independent samples are taken from a population over time and the same questions are asked) and panel or prospective studies (where the same individuals are studied at successive points over time). The panel or prospective study is an especially important case when interest is in establishing evidence of causality because data on the independent and control variables can be obtained prior to the data on the dependent variable. This helps to establish proper time order (i.e., necessary condition two). In retrospective research, the researcher looks backward in time (typically starting with the dependent variable and moving backward in time to locate information on independent variables that help explain differences on the dependent variable).

The two dimensions just presented (research objective and time) provide important and meaningful information about the different forms nonexperimental research can take (Johnson & Christensen, 2000). Use of these terms will convey important information to readers of journal articles and other forms of professional communication. In short, the use of these terms will more clearly delineate what was done in a given research study. Notice that the two dimensions can be crossed, forming a 3-by-3 table and resulting in nine very specific forms that nonexperimental research can take (see Table 1).

Table 1
Types of Research Obtained by Crossing Research Objective and the Time Dimension

                             Time Dimension
Research Objective     Retrospective   Cross-Sectional   Longitudinal
Descriptive            Type 1          Type 2            Type 3
Predictive             Type 4          Type 5            Type 6
Explanatory            Type 7          Type 8            Type 9

Two specific examples of the study types shown in Table 1 are mentioned here. First, in the article "Psychological Predictors of School-Based Violence: Implications for School Counselors" the researchers (Dykeman, Daehlin, Doyle, & Flamer, 1996) wanted to examine whether three psychological constructs could be used to predict violence among students in grades five through ten. The psychological constructs were impulsivity, empathy, and locus of control. This study is an example of Type 5 because the research objective was predictive and the data were cross-sectional. A second example is the study titled "A Prospective, Longitudinal Study of the Correlates and Consequences of Early Grade Retention" (Jimerson, Carlson, Rotert, Egeland, & Sroufe, 1997). The researchers in this study identified groups of retained and nonretained students that were matched on several variables and then followed these groups over time. Statistical controls were also used. This study is an example of Type 9 because the research objective was explanatory and the data were longitudinal.

The third and last component of the classification of nonexperimental quantitative research involves the scaling of the independent, dependent, and any control variables that may be included. All three of these variable types can be viewed as being either categorical or quantitative (for simplicity). Table 2 shows the variable combinations for the case of one independent variable, one dependent variable, and one control variable. Although a study may include multiple independent, dependent, and/or control variables, Table 2 demonstrates one set of combinations.

Table 2
Types of Research Obtained by Crossing Independent Variable Type and Dependent Variable Type by Control Variable Type*

                                    Control Variable
                         Categorical                 Quantitative
Dependent            Independent Variable        Independent Variable
Variable             Categorical  Quantitative   Categorical  Quantitative
Categorical          C-C-C        C-Q-C          C-C-Q        C-Q-Q
Quantitative         Q-C-C        Q-Q-C          Q-C-Q        Q-Q-Q
* Table entries were formed using the following order: dependent variable, independent variable, and control variable. The letter C stands for categorical, and the letter Q stands for quantitative.

It is important to remember that the scaling of the variables has no necessary relation to the strength of the design for the different research purposes (description, prediction, explanation). The variables may also be measured at one point in time or at more than one point in time regardless of their scaling. The importance of examining the scaling of the variables is for determining the statistical model and type of statistical analysis that is appropriate for each variable combination. For example, if all three variables are categorical (type C-C-C), a three-way contingency table or a loglinear model could be used. ANCOVA would be appropriate for type Q-C-Q. It is left as an exercise to the reader to identify the appropriate statistical analysis techniques for the other entries shown in Table 2 and for the construction of other tables for different variable combinations.
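As one worked instance (simulated, hypothetical data and effect sizes), the Q-C-Q combination can be analyzed as an ANCOVA, which in GLM terms is simply a regression of the quantitative dependent variable on a dummy-coded categorical independent variable plus the quantitative control variable.

```python
# Sketch: ANCOVA for the Q-C-Q combination (quantitative DV, categorical IV,
# quantitative control variable) fit as an ordinary least-squares GLM.
import numpy as np

rng = np.random.default_rng(3)
n = 400
covariate = rng.normal(0, 1, n)                 # quantitative control variable
group = rng.integers(0, 2, n).astype(float)     # categorical IV, dummy coded
dv = 10.0 + 2.0 * group + 1.5 * covariate + rng.normal(0, 1, n)

# Design matrix: intercept, group dummy, covariate.
X = np.column_stack([np.ones(n), group, covariate])
beta, *_ = np.linalg.lstsq(X, dv, rcond=None)

intercept, group_effect, covariate_slope = beta
# group_effect estimates the group difference adjusted for the covariate.
```

The same design-matrix recipe extends to the other Table 2 cells: categorical variables enter as dummy codes, quantitative variables enter as-is, and a categorical dependent variable calls for a logistic or loglinear member of the generalized linear model family instead.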


Conclusion

The terms causal-comparative and correlational are dated and misleading, and they suggest a false dichotomy. Textbook authors should stop misleading educational researchers-in-training with the suggestion that causal-comparative research provides better evidence for causality than correlational research, and instead address the more important issues surrounding causality and how to design relatively strong nonexperimental studies. Time spent making the causal-comparative versus correlational distinction is wasted (think of the opportunity costs). Leading authors in other (non-education) disciplines apparently agree, because they have found no need for the distinction between causal-comparative and correlational research (e.g., Babbie, 1998; Checkoway, Pearce, & Crawford-Brown, 1989; Christensen, 2001; Davis & Cosenza, 1993; Frankfort-Nachmias & Nachmias, 1992; Jones, 1995; Judd, Smith, & Kidder, 1991; LoBiondo-Wood & Haber, 1994; Malhotra, 1993; Neuman, 1997; Pedhazur & Schmelkin, 1991; Singleton, Straits, & Straits, 1993). Educational researchers now participate in an increasingly interdisciplinary environment, and it is important that they use terminology and research classifications that are defensible and make sense to researchers in related fields (e.g., psychology, sociology, political science, anthropology, business). Because of the importance of nonexperimental research in education, it is essential that we provide meaningful distinctions among the different forms of nonexperimental research, design nonexperimental studies to meet the intended research purpose, and conduct defensible nonexperimental research studies when experimental research studies are not possible.


References

Babbie, E. (1998). The practice of social research (8th ed.). Belmont, CA: Wadsworth.

Bollen, K.A. (1989). Structural equations with latent variables. New York: Wiley.

Charles, C.M. (1995). Introduction to educational research (2nd ed.). White Plains, NY: Longman.

Checkoway, H., Pearce, N., & Crawford-Brown, D.L. (1989). Research methods in occupational epidemiology. New York: Oxford.

Christensen, L.B. (2001). Experimental methodology (8th ed.). Boston: Allyn and Bacon.

Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). New York: Wiley.

Cook, T.D., & Campbell, D.T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.

Cordray, D.S. (1990). Strengthening causal interpretations of non-experimental data: The role of meta-analysis. In L. Sechrest, E. Perrin, & J. Bunker (Eds.), Research methodology: Strengthening causal interpretations of nonexperimental data (pp. 151-172). Washington, D.C.: U.S. Department of Health and Human Resources.

Davis, D., & Cosenza, R.M. (1993). Business research for decision making. Belmont, CA: Wadsworth.

Davis, J.A. (1985). The logic of causal order. Beverly Hills, CA: Sage.

Dykeman, C., Daehlin, W., Doyle, S., & Flamer, H.S. (1996). Psychological predictors of school-based violence: Implications for school counselors. The School Counselor, 44, 35-44.

Fraenkel, J.R. (1998, February 2). Professor Johnson--Reply. Available at Http://

Fraenkel, J.R., & Wallen, N.E. (1996). How to design and evaluate research in education (3rd ed.). New York: McGraw-Hill.

Frankfort-Nachmias, C., & Nachmias, D. (1992). Research methods in the social sciences (4th ed.). New York: St. Martin's.

Gay, L.R. (1996). Educational research: Competencies for analysis and application (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.

Good, C.V., Barr, A.S., & Scates, D.E. (1935). The methodology of educational research. New York: Appleton-Century-Crofts.

Huck, S.W., & Sandler, H.M. (1979). Rival hypotheses: Alternative interpretation of data based conclusions. New York: Harper and Row.

Jimerson, S., Carlson, E., Rotert, M., Egeland, B., & Sroufe, L.A. (1997). A prospective, longitudinal study of the correlates and consequences of early grade retention. Journal of School Psychology, 35(1), 3-25.

Johnson, R.B., & Christensen, L.B. (2000). Educational research: Quantitative and qualitative approaches. Boston: Allyn and Bacon.

Jones, J.L. (1995). Understanding psychological science. New York: Harper Collins.

Judd, C.M., Smith, E.R., & Kidder, L.H. (1991). Research methods in social relations (6th ed.). Fort Worth, TX: Harcourt Brace Jovanovich.

Kerlinger, F.N. (1973). Foundations of behavioral research (2nd ed.). New York: Holt, Rinehart & Winston.

Kerlinger, F.N. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart & Winston.

Lancaster, B.P. (1999). Defining and interpreting suppressor effects: Advantages and limitations. In B. Thompson (Ed.), Advances in social science methodology (pp. 139-148). Stamford, CT: JAI Press.

LoBiondo-Wood, G., & Haber, J. (1994). Nursing research (3rd ed.). St Louis, MO: Mosby.

Malhotra, N.K. (1993). Marketing research: An applied orientation. Englewood Cliffs, NJ: Prentice Hall.

Martella, R.C., Nelson, R., & Marchand-Martella, N.E. (1999). Research methods: Learning to become a critical research consumer. Boston: Allyn and Bacon.

Maruyama, G.M. (1998). Basics of structural equation modeling. Thousand Oaks, CA: Sage.

Moore, D.S., & McCabe, G.P. (1993). Introduction to the practice of statistics. New York: W.H. Freeman.

Neuman, W.L. (1997). Social research methods (3rd ed.). Boston: Allyn and Bacon.

Pedhazur, E.J. (1997). Multiple regression in behavioral research: Explanation and prediction (3rd ed.). New York: Holt, Rinehart & Winston.

Pedhazur, E.J., & Schmelkin, L.P. (1991). Measurement, design, and analysis: An integrated approach. Hillsdale, NJ: Lawrence Erlbaum.

Rindskopf, D. (1992). The importance of theory in selection modeling: Incorrect assumptions mean biased results. In H. Chen & P.H. Rossi (Eds.), Using theory to improve program and policy evaluations (pp. 179-191). New York: Greenwood Press.

Rossi, P.H., Freeman, H.E., & Lipsey, M.W. (1999). Evaluation: A systematic approach. Thousand Oaks, CA: Sage.

Singleton, R.A., Straits, B.C., & Straits, M.M. (1993). Approaches to social research. New York: Oxford.

Tatsuoka, M. (1993). Elements of the general linear model. In G. Keren & C. Lewis (Eds.) Data analysis in the behavioral sciences (pp. 3-41). Hillsdale, NJ: Lawrence Erlbaum.

Thompson, B. (1992). Misuse of ANCOVA and related "statistical control" procedures. Reading Psychology: An International Quarterly, 13, iii-xvii.

Thompson, B. (1999). Five methodology errors in educational research: The pantheon of statistical significance and other faux pas. In B. Thompson (Ed.) Advances in Social Science Methodology (pp. 23-86). Stamford, CT: JAI Press.

Trochim, W. (1989). Outcome pattern matching and program theory. Evaluation and Program Planning, 12(4), 355-366.

Yin, R.K. (2000). Rival explanations as an alternative to reforms as experiments. In L. Bickman (Ed.), Validity and social experimentation. Thousand Oaks, CA: Sage.


Notes

1. Fraenkel's message is one of many in an extensive discussion taking place on the AERA Division-D Internet Discussion Group, spanning several weeks, about the relative strengths of causal-comparative and correlational research. You can access the archives of the discussion from the AERA homepage. The discussion began on February 6, 1998 with a message posted by Burke Johnson and ended on March 5, 1998 with a message posted by Michael Scriven. The discussion took place under several headings, including "research methods question," "causal-comparative vs. correlational," "causal-comparative and cause," "Professor Johnson," "the correlation/causal-comparative controversy," "correlational/C-C questionnaire," and "10 reasons causal-comparative is better than correlational."

2. The early writers were also, perhaps, overly optimistic about the power of statistical techniques for control. For example, partial effects are difficult to interpret in the presence of random measurement error for predictor variables (e.g., Cook & Campbell, 1979, pp. 157-164; also see Thompson, 1992).
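The problem this note describes can be illustrated with a small simulation (a hypothetical sketch, not an example from the sources cited): X has no effect on Y, both are driven by a confounder Z, and partialling out a noisy measurement of Z fails to remove the spurious X-Y association.

```python
import math, random

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def partial_corr(a, b, c):
    """Correlation of a and b after partialling out c."""
    rab, rac, rbc = corr(a, b), corr(a, c), corr(b, c)
    return (rab - rac * rbc) / math.sqrt((1 - rac**2) * (1 - rbc**2))

random.seed(1)
n = 20000
z  = [random.gauss(0, 1) for _ in range(n)]      # true confounder
x  = [zi + random.gauss(0, 1) for zi in z]       # X driven by Z; X does not affect Y
y  = [zi + random.gauss(0, 1) for zi in z]       # Y driven by Z only
zo = [zi + random.gauss(0, 1) for zi in z]       # Z measured with random error

print(partial_corr(x, y, z))   # near 0: controlling the true Z removes the association
print(partial_corr(x, y, zo))  # near 1/3: the noisy control leaves a spurious partial effect
```

Under this setup the attenuation can be derived analytically: the observed control has reliability .5, so the partial correlation given the noisy measure is about 1/3 rather than the zero obtained when the true confounder is controlled.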

3. Although the simple causal-comparative design looks much like the static group comparison design (which is a weak or preexperimental research design), the simple causal-comparative design has even less going for it than this design.

4. If, given the context, it is unclear that a study is quantitative, then use the term nonexperimental quantitative research.

5. The practice of categorizing quantitatively scaled variables may have developed prior to the widespread use of computers as a result of the ease of the mathematical computations in ANOVA (Tatsuoka, 1993).

ITFORUM PAPER #43 - IT'S (BEYOND) TIME TO DROP THE TERMS CAUSAL-COMPARATIVE AND CORRELATIONAL RESEARCH IN EDUCATION by Burke Johnson. Posted on ITFORUM July 19, 2000. The author retains all copyrights of this work. Used on ITFORUM by permission of the author.

(c) Burke Johnson, 2000