Hypothesized performance on complex tasks
as a function of scaled instructional strategies

M. David Merrill

Professor, Center for Instructional Technology and Outreach

Brigham Young University Hawaii

Professor Emeritus, Utah State University


The author identifies a set of instructional strategy principles that have been prescribed by a number of different instructional theories and recommended practices. He then proposes a set of hypotheses for the interrelationships among these principles. While many of these instructional design principles have been supported by individual experimental studies, the author has been unable to find an integrated body of research that has studied the interrelationships among these instructional principles. The hypotheses proposed in this paper are designed to provide a framework for integrated research on instructional strategies. Perhaps this paper might serve as a catalyst for dissertations and other research activities in the science of instruction.

Note: the two footnotes are at the end of the article in the HTML version.






“Instruction involves directing students to appropriate learning activities; guiding students to appropriate knowledge; helping students rehearse, encode, and process information; monitoring student performance; and providing feedback as to the appropriateness of the student’s learning activities and practice performance.


“Instructional design is the technology of creating learning experiences and learning environments which promote these instructional activities.”  (Merrill, Drake et al. 1996).


 Does instruction work?    What is the evidence?  Can we find out what is learned from instruction?  Can we determine what makes instruction efficient, effective, and engaging?  Is it possible to conduct evidence-based research to determine if students have learned and how they have learned?

 “Like other sciences, instruction is verified by discovery and instructional design is extended by invention. Instructional science, the foundation for the technology of instructional design, is the discovery of instructional strategies. Instructional science involves identifying the variables to consider (descriptive theory), identifying potential relationships between these variables (prescriptive theory), and then empirically testing these relationships in the laboratory and the field.

“Instructional science is concerned with the discovery of the principles involved in instructional strategies; and instructional design is the use of these scientific principles to invent instructional design procedures and tools.”  (Merrill, Drake et al. 1996)


The author maintains that it is possible to ascertain whether learners have acquired specified knowledge and skill from instruction. The author has identified a set of instructional strategy principles that are prescribed by a number of different instructional theories and recommended practices. While many of these instructional design principles have been supported by individual experimental studies, the author has been unable to find an integrated body of research that supports these interrelated principles as a whole. This paper proposes an interrelated set of hypotheses based on these first principles of instruction. It is hoped that this set of hypotheses might provide a basis for a systematic program of research designed to validate these prescribed instructional design principles and their interrelationships.


Instructional Principles

A first principle of instruction is a prescriptive strategy that promotes effective, efficient and engaging instruction for the acquisition of complex tasks.  A first principle of instruction is a relationship that has been identified by a number of different instructional theorists and supported by research.  The author systematically reviewed a number of different instructional design theories and abstracted from these theories a set of interrelated prescriptive instructional design principles (Merrill 2002a; Merrill 2002b; Merrill In Press a; Merrill in press b; Merrill In Press c).  One of these papers (Merrill In Press a) quoted similar principles that have been identified by other authors (Reigeluth 1983; Andre 1997; Rosenshine 1997; Tennyson, Schott et al. 1997; van Merriënboer 1997; Clark 1999; Reigeluth 1999; Marzano, Pickering et al. 2001; Mayer 2001; Clark and Estes 2002; Allen 2003; Clark 2003; Clark and Mayer 2003; Dembo and Young 2003; Foshay, Silber et al. 2003; Mayer 2003; O'Neil 2003). Many of these authors cited relevant research support for their principles, while other authors based their prescriptions on their experience in designing and evaluating instructional products. The reader is encouraged to review this synthesis paper (Merrill In Press a) and the works by the authors cited above for their prescriptions and the research base for these principles.


From this effort five fundamental principles were identified.

Learning is promoted when the learner:

·  observes a demonstration – demonstration principle;

·  applies the new knowledge – application principle;

·  undertakes real-world tasks – task-centered principle;

·  activates existing knowledge – activation principle;

·  integrates the new knowledge into their world – integration principle.


These five principles are elaborated via the following corollaries:


Demonstration Principle

Learning is promoted when learners:

·  observe a demonstration of the skills to be learned – demonstration principle;

·  observe demonstrations that are consistent with the content – consistency;

·  receive guidance that relates instances to generalities – guidance;

·  observe media that are relevant to the content – media.

Application Principle

Learning is promoted when learners:

·  apply their newly acquired knowledge or skill – application principle;

·  undertake practice that is consistent with stated or implied objectives – consistency;

·  receive intrinsic or corrective feedback – feedback;

·  are coached, but gradually receive less coaching with each subsequent task – coaching.

Task-centered Principle

Learning is promoted when learners:

·  do real-world tasks – task-centered principle;

·  observe the whole task they will be able to do – outcome;

·  acquire component knowledge and skill – components;

·  undertake a progression of whole tasks – progression.

Activation Principle

Learning is promoted when learners:

·  activate relevant cognitive structures – activation principle;

·  engage in relevant experience – experience;

·  recall, describe, or demonstrate relevant prior knowledge – prior knowledge;

·  acquire or recall a structure for organizing the new knowledge – structure.

Integration Principle

Learning is promoted when learners:

·  integrate their new knowledge into their everyday life – integration principle;

·  reflect on, discuss, or defend their new knowledge or skill – reflection;

·  create, invent, or explore personal ways to use their new knowledge or skill – create;

·  publicly demonstrate their new knowledge or skill – go public.


These principles may seem obvious.  The author is often told that these principles are well known and frequently used.  However, a recent study (Barclay, Gur et al. 2004) analyzed over 1400 web sites in five countries that claimed to provide instruction on marriage relationships.  Each site was scored on a 15-point scale indicating the degree to which the first principles of instruction were implemented.  The highest score for any site was 7.0, indicating that even the best site implemented less than half of these principles.  The average scores[1] indicate that most of these sites do not implement any of these principles.  The result is that these so-called instructional web sites fail to implement the principles that instructional design theorists have identified as fundamental for effective, efficient, and engaging instruction.  However, this research merely indicates whether or not these principles are implemented; it does not provide information about the contribution of these principles to student performance or whether sites that score low on the implementation of these principles teach less than sites that score high.


A study (Thompson Inc. 2002) conducted by NETg, comparing their standard off-the-shelf Excel course with a new task-centered course designed by applying first principles, found that students in the new task-centered course completed complex spreadsheet tasks significantly (p < .001) more efficiently (29 minutes versus 49 minutes) and effectively (89% versus 68%) than students in the off-the-shelf course.


Another study in the Netherlands (Collis and Margarkyan 2005) has rated a number of courses at Shell Oil Company on the degree to which they implement first principles.  Data are currently being collected to determine whether the courses that rate high on their implementation of first principles result in better performance than those that rate low.


Are these principles of equal value?  Do they contribute equally to learning effectiveness or efficiency?  Are some of these principles more fundamental than others?  How are these principles related to one another?  What is the relative contribution of these principles to the acquisition of the skill and knowledge necessary to complete complex real-world tasks?  This paper hypothesizes the relationships among these first principles of instruction and their relative contribution to performance on complex real-world tasks.


Performance Scaling

If research is to assess the hypothesized relationships among these first principles, it is necessary to be clear about the type of learned performance that is prompted by these principles.  Too often instructional objectives are so vague that it is difficult to know what evidence is relevant.  When the outcome is to learn an unspecified goal from an unspecified experience, then, as advised by the Cheshire cat[2], it really doesn’t matter which theoretical path you take or what evidence you collect.  But unspecified learning from unspecified experience is not instruction.  Instruction requires a specific goal to acquire a specific skill.  If outcomes are carefully specified, then appropriate evidence can be collected.  Once a complex task has been clearly specified, the level of skill in performing this task can be measured.  But that doesn’t mean it’s easy.  Measuring performance level on complex tasks is in itself a complex task.  Too often the measures used assess only components of skill or individual actions rather than level of performance on the whole task.


Instructional research for the principles identified in this paper requires a careful definition of a real-world complex task.  In the real world we are required to complete integrated tasks to produce an artifact, solve a problem, or bring about a desired consequence.  Such tasks usually require a variety of different kinds of knowledge and skill all brought together in an interrelated way to complete the task.  Too often in instruction we assess only pieces of the task, fragments of knowledge, or sub skills rather than the whole task.  Such partial assessment often misses much of the complexity that characterizes authentic, real-world tasks.


Instructional research for the principles identified in this paper requires that performance measures must be based on whole-task performance.  These first principles of instruction form an integrated set of prescriptions which taken together are designed to promote the acquisition of all of the knowledge and skill necessary for the learner to complete whole integrated complex tasks.  Remember-what-I-told-you, performing individual steps, and making isolated predictions are not sufficient forms of measurement to get at the complexity of real-world tasks or to assess the contribution of instructional strategies based on these first principles of instruction.


When performing a complex real-world task there is no such thing as a simple right/wrong answer.  Complex tasks allow for many levels of performance.  At first the learner may only be able to complete simple versions of the task.  As skill increases the learner can complete more and more complex versions of the task.  In solving problems early solutions may be unsophisticated and may consider only a portion of the factors involved.  As the learner gains skill the solutions become more elegant, more complex and take into consideration more and more factors.  Measurement of task performance must reflect this gradual acquisition of skill.  Bunderson (2003) described the need for a domain theory, a scaled measurement of increased levels of performance in a given subject matter domain.  Adequate measurement of performance in complex real-world tasks requires that we can detect increments in performance demonstrating gradually increased skill in completing a whole complex task or solving a problem.


What is a complex task?  For purposes of this discussion, a task that is performed with the same steps in virtually the same way each time is not a complex task.  Simple data entry probably does not qualify as a complex task, since the only increase in skill is efficiency in performing the same steps.  On the other hand, using a word processor to write reports probably does qualify, since each report may require different operations, these operations can be performed in a variety of ways, and learning advanced operations may increase the efficiency and quality of the resulting report.  In this case the word processing of a series of reports can be scaled such that reports early in the series require only a few operations and each subsequent report may require more operations, and more complex operations, to attain the required efficiency and quality of the resulting product.


Assessment of incremental expertise is relatively straightforward in well-defined tasks such as chess, athletic performance, or musical performance.  Assessing incremental performance in less well-defined domains is more of a challenge, but it must be done if we are to assess the contribution of scaled instructional strategies to the acquisition of skill.


What are possible procedures for designing scaled measurement of performance level in complex tasks?  Three possible approaches are briefly described.   The author is sure that researchers undertaking research based on the hypotheses of this paper will find additional effective methods for scaling performance on complex tasks.


(1)    Identify a series of whole tasks moving from simple to increasingly complex in a given domain.  The appropriate performance measure is not the number of correct answers to isolated questions, as is so typical in much assessment, but rather how many of the complex tasks in the progression learners are able to complete.

·  Identify a progression of tasks.  Arrange this progression such that each subsequent task requires additional operations or that the operations themselves become more complex, that is, have more sub-steps or more alternative paths.  For each task in the progression establish a rubric for acceptable performance.  The learner then completes the tasks in succession until they are unable to complete a task.  Alternatively, the learner is given a task somewhere in the middle of the range.  If the learner is unable to complete the task, they are moved back in the progression until they are able to complete a task; if they complete the task, they are moved forward in the progression until they are unable to complete a task.  Scoring the number of operations completed is not a satisfactory measure: knowing how to do eight of the nine operations in the task would not be an appropriate score.  The appropriate score is the highest level in the progression of tasks at which the student was able to complete the whole task in an acceptable way.
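The placement procedure above can be sketched in code.  This is a minimal illustrative sketch, not a prescribed instrument: the ordered task list, the `passes_rubric` judgment, and the optional mid-range `start` point are hypothetical stand-ins for a real rubric-based assessment.

```python
def progression_score(tasks, passes_rubric, start=None):
    """Score = highest point in an ordered task progression at which the
    learner completes the whole task acceptably (hypothetical helpers).

    tasks: whole tasks ordered from simple to complex.
    passes_rubric(task) -> bool: whether the learner's whole-task
    performance meets the rubric for that task.
    start: optional mid-range index at which to begin placement.
    """
    if start is None:
        # Work upward from the simplest task until the learner fails one.
        score = 0
        for i, task in enumerate(tasks):
            if not passes_rubric(task):
                break
            score = i + 1
        return score
    i = start
    if passes_rubric(tasks[i]):
        # Success: move forward until the learner can no longer complete a task.
        while i + 1 < len(tasks) and passes_rubric(tasks[i + 1]):
            i += 1
        return i + 1
    # Failure: move back until the learner can complete a task.
    while i > 0 and not passes_rubric(tasks[i - 1]):
        i -= 1
    return i  # tasks below index i were completed acceptably
```

Either entry point yields the same score: a count of how far up the progression the learner could complete whole tasks, not a count of operations within a task.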


(2)  For some domains, reducing task complexity may not be an option; tasks in the domain may all be relatively complex, making a progression of successively more complex tasks difficult to identify.  In this situation the learners are given a task with various levels of coaching available.  When the learner is unable to proceed, the first level of coaching is provided.  If the learner still has difficulty, the second level of coaching is provided, and so forth, until the learner is able to complete the task.  The score is an inverse of the amount of successively more elaborate coaching required for the student to solve the problem or complete the task.  In this case it is not a progression of tasks that is scaled but the amount of help required within a task.
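This within-task measure can be sketched similarly.  Again a minimal sketch under stated assumptions: `coaching_levels` is an ordered list of successively more complete hints, and `attempt` is a hypothetical stand-in for observing whether the learner can complete the task given the coaching provided so far.

```python
def coaching_score(coaching_levels, attempt):
    """Score a single complex task by how little coaching the learner
    needed; higher scores mean less help (hypothetical helper names).

    coaching_levels: hints ordered from least to most complete.
    attempt(hints_so_far) -> bool: True when the learner can complete
    the task given the coaching provided so far.
    """
    given = []
    while not attempt(given):
        if len(given) == len(coaching_levels):
            return 0  # task not completed even with all coaching provided
        # Provide the next, more elaborate level of coaching.
        given.append(coaching_levels[len(given)])
    # Invert the help used: full credit with no hints, less per hint needed.
    return len(coaching_levels) - len(given) + 1
```

The inversion at the end is the key design choice: the scaled quantity is the amount of help required within the task, so needing fewer of the successively more elaborate hints yields a higher score.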


(3)  For some domains it may be possible to use a single nested complex task to assess increasing levels of performance.  This is similar to the task progression previously described but in this situation solving the problem or completing the task can proceed incrementally.  Each stage toward the complete solution is an incremental increase in expertise.  A student is scored on the number of stages completed toward the problem solution.  In other words, in this type of complex task completing the first stage is solving a single whole task and each succeeding stage adds complexity and hence requires an increment in expertise.


All of these procedures may give some measure of performance level in completing complex tasks.  Determining such task progressions is not a trivial activity and will require considerable empirical testing to ensure that the progression does indeed reflect level of performance.


Instructional Strategy Scaling

In the following discussion we have used only one or two words to represent the principles involved.  The reader should refer to the list of principles in the introduction to this paper for the full statement of the principle under consideration. 


The author hypothesizes that instructional strategies can also be scaled such that the level of instructional strategy employed correlates with the level of  effective and efficient performance on scaled complex real-world tasks.   The author hypothesizes that instructional strategy scaling is determined by the degree to which an instructional strategy implements these first principles of instruction as described in the remainder of this paper. 


Presenting information is assumed to be the base-line (level 0) instructional strategy.  Most of the web sites analyzed in the Barclay, Gur & Wu (2004) study were level 0, information-only, instructional strategies.  This paper hypothesizes that performance on complex real-world tasks will be incremented when an instructional strategy implements each of the first principles in turn.  Adding consistent demonstration to information promotes the first increment (level 1) in learning effectiveness, efficiency and engagement.  Adding consistent application with corrective feedback to information with demonstration adds a second increment (level 2) in learning effectiveness, efficiency and engagement.   Using a task-centered instructional strategy adds the third increment (level 3) in learning effectiveness, efficiency and engagement. 

Activation added to level 1, 2, or 3 will add an additional learning increment.  Integration added to level 2 or 3 will also add an additional learning increment.  The following paragraphs elaborate each of these instructional strategy levels and briefly explain why each level is hypothesized to correlate with increased levels of performance on scaled real-world tasks.
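The hypothesized scaling can be summarized in a short sketch.  The feature names and the scoring here are illustrative assumptions for framing research, not a validated rating instrument.

```python
def strategy_level(features):
    """Sketch of the hypothesized instructional strategy scale.

    features: the set of first principles a strategy implements,
    e.g. {"demonstration", "application", "task-centered",
    "activation", "integration"} (hypothetical labels).
    Returns (level, increments).
    """
    level = 0  # information-only baseline
    # Levels are cumulative: each builds on the previous one.
    if "demonstration" in features:
        level = 1  # information plus consistent demonstration
        if "application" in features:
            level = 2  # adds consistent application with corrective feedback
            if "task-centered" in features:
                level = 3  # task-centered strategy
    # Activation (levels 1-3) and integration (levels 2-3)
    # each add a further increment within a level.
    increments = 0
    if "activation" in features and level >= 1:
        increments += 1
    if "integration" in features and level >= 2:
        increments += 1
    return level, increments
```

Note that the nesting encodes the hypothesis that the levels are cumulative: application without demonstration, for example, does not by itself produce a level 2 strategy.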


Information & Portrayal

Subject matter content can be represented by both information and portrayal.  Information is general, inclusive, and refers to many specific situations.  Portrayal is specific, limited, and refers to one case or a single situation. Information can be presented (tell) and recalled (ask).  A portrayal can be demonstrated (show) and submitted to application (do).   Subject matter content can be represented by five categories:  information-about, parts-of, kinds-of, how-to, and what-happens.  The following table indicates the information and portrayal that is appropriate for each content category.


Table 1  Information and Portrayal for Categories of Learning

Content category      Information (tell / ask)      Portrayal (show / do)
information-about     Associations (facts)          (none)
parts-of              Name + description            Find location
kinds-of              Defining characteristics      Classify examples
how-to                Steps + sequence              Execute a procedure
what-happens          Conditions + consequence      Predict consequence; Find conditions


The base-line instructional strategy is information-only.  Information-only includes presentation alone or presentation plus recall.  An information presentation tells learners associations among two or more pieces of information; the name and description of one or more parts; the defining characteristics of a class of objects, situations, or processes; the steps and sequence to carry out a procedure; or the conditions and consequence for the events in a process.  If recall is added, it asks learners to recognize or recall the information that was presented.  Information-only instructional strategies are very common in all educational environments, whether in schools, industry, or government.  The author has been quoted as saying, “Information is not instruction.”  For this paper information-only will be considered the most primitive form of instruction.  It is a level 0 instructional strategy whether or not recall is included.


Hypothesis 1: 

A level 1 instructional strategy that adds consistent demonstration to a level 0 information-only strategy promotes a higher performance level on scaled complex tasks.

A demonstration is one or more worked examples of all or part of the task.  To be effective the demonstration must be consistent with the kind of task:  location with respect to the whole for parts; examples of the various categories for concepts; showing the execution of the steps together with the consequence for a procedure; and illustrating a specific process by showing the portrayal of the conditions and consequence (Merrill 1997).


Rationale.  Information-only is stored in associative memory.  Until learners construct a schema for the information, their only recourse is recall.  Without a demonstration learners may fail to construct a schema, or they may construct an incomplete or inadequate schema.  Even though they can remember the information, when asked to apply the information in a new situation they do not have an adequate mental model to complete the task, resulting in a lower level of performance on a scaled complex real-world task.


Application as demonstration.  Adding a consistent application to an information-only strategy will also result in an increment in performance.  If the application is accompanied by corrective feedback, it is essentially a worked example; it is then equivalent to a demonstration and will likely result in a similar increment in performance.  Except for very simple tasks, adding application with only right/wrong feedback, but without corrective feedback, will not promote as large an increment in performance on the final complex task as will demonstration for the same portrayal.  A strategy that consists of information plus consistent application with corrective feedback is also a level 1 instructional strategy.


Problem-based.  Providing information and then having the student attempt to apply that information to the performance of a complex task is still an information-only, level 0 strategy and is unlikely to promote an increment in the performance level on complex tasks.  Having learners solve a complex problem after an information-only presentation is not the same as the task-centered strategy described below.  Problem-based learning often involves giving learners a complex task and some resources (information) and expecting them to figure out a way to complete the task.  There is a difference between this type of problem-based learning strategy and the task-centered or problem-centered instructional strategy described below.  Merely asking learners to complete a complex task after presenting information is not likely to promote any increased performance level on other complex tasks unless the task is used as a demonstration, as described for application in the previous paragraph.


Rationale.  For the same reasons explained above, information alone is unlikely to lead to adequate or complete schema representations.  Attempting to complete the whole task given information only, especially in situations that are unfamiliar to the student, will likely result in incomplete or inadequate schema construction.  If there is corrective feedback for the whole task, then the situation is similar to application with corrective feedback, which is in fact the same as a demonstration and may result in a similar increment in effective performance.  However, problem-based strategies are not likely to result in efficient learning.


In summary, when scaling an instructional strategy, adding consistent demonstration to an information-only strategy (a level 1 instructional strategy) is the most effective first step to promote an initial increment in level of performance on complex tasks.


This paper also hypothesizes that instructional strategy scaling can also occur within levels.  The following hypotheses suggest increments within demonstration. 


Hypothesis 1.1 

Adding learner guidance to demonstration promotes an additional increment in the level of efficient and effective performance on complex tasks.   

Guidance focuses the student’s attention on those elements of a given portrayal that correspond to the general information presented.  For kinds-of tasks guidance focuses the learner’s attention on the portrayal of each of the defining properties used to determine class membership.  For how-to tasks guidance focuses the learner’s attention on the execution of each step in the demonstration.  For what-happens tasks guidance focuses the learner’s attention on the portrayal of each of the conditions leading to a given consequence.   In short, guidance helps the student relate the specific portrayals to the more general information. 


Rationale.  Appropriate attention focusing guidance directs attention to relationships among information and portrayals and reduces cognitive load by reducing the amount of effort required to locate critical relationships thus allowing more time for the learner to build appropriate schema and transfer the relationships to long term memory. 


Hypothesis 1.2: 

Relevant media included in a demonstration promotes learning efficiency, effectiveness and engagement.  Irrelevant media included in a demonstration results in a decrement in learning efficiency, effectiveness or engagement (Mayer 2001; Clark and Mayer 2003).


Graphics.  It is probably true that a picture is worth a thousand words.  Graphic information that enhances and illustrates the information being presented facilitates learning.  Graphic information that is irrelevant, not directly related to the information being presented, increases cognitive load and results in a decrement in learning.  Irrelevant media include illustrations that do not carry any information directly relevant to the information being presented and extraneous sound effects or music that is not a critical element of the information being taught.


Rationale.  Processing extraneous information increases cognitive load thereby increasing processing time and requiring the learner to take time to determine relevance of the information contained in the graphic material. 


Audio.  Another form of irrelevant media that results in a performance decrement is the use of audio to read text when both text and graphic material are presented together. 


Rationale.  Learners can process both auditory and visual information at the same time (Mayer 2001), but they cannot process both written text and graphic material at the same time, since both use the visual channel.  When the audio reads displayed text, it focuses the learner’s attention on the text rather than on the graphic material.  If the graphic material carries some of the instructional information, then this diversion of attention makes it difficult for learners to process the graphic information.  However, when only audio is used with a graphic, both the audio and graphic information can be processed simultaneously.


Hypothesis 2: 

A level 2 instructional strategy that adds consistent application with corrective feedback to a level 1 instructional strategy consisting of information plus demonstration promotes an additional level of performance on complex real-world tasks.

Application requires learners to use their knowledge or skill to accomplish specific tasks.  Consistent application for parts-of tasks is to locate the part with respect to the whole;  for kinds-of tasks is to sort examples into appropriate categories;  for how-to tasks is to execute a series of steps; and for what-happens tasks is to predict a consequence given a set of conditions or find faulted conditions given an unexpected consequence. 


Rationale.  Application allows learners to tune their schema.  Information plus consistent demonstration helps learners form an appropriate schema.  Using this schema to do a new task requires them to check the completeness and adequacy of their schema.  When errors result and these are followed by corrective feedback, learners can adjust their schema.  The initial application usually results in the most dramatic adjustment of the schema.  If the schema is very incomplete or inadequate, then learners may be unable to complete the task.  If the task is too similar to tasks that were demonstrated, then learners merely do the task but engage in very little reconstruction of their schema.  The challenge is to find new tasks for application that challenge the student but are not so challenging that their schema is inadequate to complete the task.


In summary, when scaling an instructional strategy, adding consistent application with corrective feedback to information plus consistent demonstration (a level 2 instructional strategy) is the most effective second step to promote an additional increment in level of performance on complex tasks.


Hypothesis 2.1: 

Adding gradually diminishing coaching to application promotes an additional increment in learning efficiency, effectiveness and engagement.

Coaching means that the instructional system or instructor does some of the cognitive processing for the student.  Such coaching often takes the form of hints.  A simple task may require only a single hint but complex tasks may require a series of more and more complete hints as described for complex task scaling in previous paragraphs. 


Rationale.  If a task is too complex, the student may be unable to complete it, causing discouragement and eroding confidence.  It is not always possible to sequence tasks in an optimal progression of difficulty.  In order for learners to complete the task, the instructional system does some of the “thinking” for learners and allows them to complete the remainder of the task.  If this coaching is always present, learners exercise their right to be lazy and begin to depend on the coaching rather than tuning their schema to be able to complete the task on their own.  When confronted with a task that is not accompanied by coaching, their previous reliance on hints has prevented sufficient schema development for them to complete the task.  If this coaching is gradually withdrawn with each subsequent task, the student is gently led to rely more and more on their own resources to solve the problem or do the task.  When the coaching is finally completely withdrawn, learners have sufficiently developed their own schema to allow them to complete the task without assistance.


Hypothesis 3  Task-Centered Instructional Strategy

A level 3 instructional strategy, a task-centered strategy that includes consistent demonstration and consistent application with corrective feedback, promotes an additional increment in the level of performance on complex tasks.


A task-centered instructional strategy is not the same as problem-based learning or case-based learning as they are typically described in the instructional literature.  Problem-based and case-based learning are often far less structured than a task-centered strategy.  A problem-based instructional architecture often involves presenting a complex task or problem to a learner or group of learners, providing some resources or links to possible resources, and allowing the learners to interact among themselves and with the available resources to solve the problem or complete the task. 


A task-centered instructional strategy is much more structured.  It involves presenting a specific complex whole task to the learners, demonstrating a successful completion of the task, providing information plus demonstration plus application for each of the instructional components required by the task, and then showing learners how these instructional components apply to the task.  It also involves a progression of successively more complex tasks with successively less guidance provided with each subsequent task until learners are completing the tasks on their own.    The 4C/ID model represents a very sophisticated version of a task-centered instructional strategy (van Merriënboer 1997).
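The structure just described can be sketched as a minimal data model.  This is one possible representation, not a prescription from the article or the 4C/ID model; all class and field names are assumptions for illustration.

```python
# A minimal data-structure sketch of a task-centered strategy: a progression
# of whole tasks, each with component instruction (information +
# demonstration + application) and guidance that diminishes across tasks.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    information: str       # tell: present the component's information
    demonstration: str     # show: portray the information in a specific case
    application: str       # do: the learner applies the component

@dataclass
class WholeTask:
    description: str
    components: List[Component]
    guidance_level: int    # decreases across the task progression

@dataclass
class TaskCenteredModule:
    tasks: List[WholeTask] = field(default_factory=list)

    def add_task(self, task: WholeTask) -> None:
        # Enforce the progression: each new task gets no more guidance
        # than the one before it.
        if self.tasks and task.guidance_level > self.tasks[-1].guidance_level:
            raise ValueError("guidance must diminish across the progression")
        self.tasks.append(task)
```

The `add_task` check encodes the key constraint of the strategy: successively more complex tasks receive successively less guidance until learners complete tasks on their own.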



A task-centered instructional strategy requires consistent demonstration and consistent application at both the whole task level and at the level of the individual instructional components of the whole task.  A task-centered instructional strategy is a significant modification of an information-only strategy and is likely to result in a much larger performance increment than merely adding demonstration or demonstration plus application. 


Rationale.  When instructional components are decontextualized, students are often admonished with “You won’t understand this now, but later it will be very important to you.”  As a result, the motivation to learn the material is significantly reduced.  Further, when learners must retain many instructional components without a context for their use, they must often resort to associative memory; they are likely to forget the information, or fail to recognize its relevance when confronted with the whole task, and thus be unable to retrieve it when it is needed.  At best they will construct schemas for the individual skills; they are unlikely to incorporate the components into a schema for the whole complex task.  When instructional components are presented just-in-time for their application to a real-world complex problem, the need for the knowledge or skill is apparent and the motivation to learn it is increased.  When the components are immediately applied to a complex problem, the student can construct a schema for the whole task rather than separate schemas for the individual instructional components. 


In summary, a level 3 instructional strategy consisting of a task-centered approach is the most effective instructional strategy to promote a high level of performance on complex tasks. 


Hypothesis 3.1 Task Progression

Adding task progression to a task-centered instructional strategy promotes an additional increment in learning efficiency, effectiveness and engagement.

A task-centered instructional strategy that consists of a single complex task may be effective, but a single task is far less effective than a progression of increasingly complex tasks. 


Rationale.  A family of complex tasks, while sharing many similarities, is also characterized by subtle differences.  Learning to complete a single task leaves learners with only one view of the task; when confronted with a task from the same family that differs from the original learning task, they may fail to recognize that it belongs to the same family, or they may not have sufficiently tuned their schema to adjust the solution process to accommodate the differences.  If the training task is less complex than a new task, learners may not have developed the nuances necessary to tackle the more complex task.  A progression of increasingly complex tasks during training, with learners performing more and more of the steps to task completion on their own, enables them to tune their schema so that when confronted with a different or more complex task from the same family they are able to move forward toward task completion. 


A level 1 demonstration-enhanced instructional strategy is hypothesized to promote a substantial increment in level of performance on complex tasks when compared to an information-only strategy.  A level 2 application-enhanced instructional strategy is hypothesized to promote yet a second increment in level of performance.  A level 3 task-centered instructional strategy is hypothesized to be the most effective in promoting performance on complex tasks.  Strategy enhancements such as appropriate guidance and use of relevant media for demonstration, or gradually diminishing coaching for application, should increase the amount of the performance increment.  These hypotheses, however, assume a precision of measurement that is rarely achieved in instructional or learning research.  A regression analysis usually shows that the first two or three variables account for the majority of the variance and that subtle enhancements such as those just described, while contributing to effective learning, may be very difficult to measure.


Hypothesis 4  Relevant Experience Activation

Providing or recalling relevant experience promotes an additional increment in learning efficiency, effectiveness and engagement when added to a level 1, level 2 or level 3 instructional strategy. 


A similar measurement problem confronts the researcher when attempting to determine the contribution of the activation principle.  Significant effects may only show up when these enhancements are added to a level 0 or perhaps a level 1 strategy.  As the instruction becomes richer, the relative contribution of these additional principles becomes less apparent.  Nevertheless, it is hypothesized that effective activation of relevant experience does increment learning.  


Adding activation to an information-only strategy may promote an increment in performance if the student has developed a relevant schema from the previous experience.  This schema can then be used as the basis for the construction of a revised schema given the information.  The more similar the new task is to previously learned tasks, the larger the effect from activation of this previous learning.  Unfamiliar new tasks, for which the previous experience is only tangentially related, are less likely to see an increment in performance from activation. 


Rationale.  Adding relevant-experience activation to level 1, level 2 or level 3 instructional strategies should facilitate the formation of an appropriate schema by allowing learners to build on existing schema.  On the other hand, activating an inappropriate schema by activating experience that is not relevant may actually promote a decrement in performance.  Attempting to add to an inappropriate existing schema may result in misconceptions that interfere with efficient and effective task completion.  From a measurement standpoint, it may be easier to detect the detrimental effects of inappropriate activation than the benefits of appropriate activation.


Hypothesis 5  Structure Activation

Providing an activation structure promotes an additional increment in learning efficiency, effectiveness and engagement when added to level 1, level 2 or level 3 instructional strategies.


One way to help students form an appropriate schema or mental model for completing a complex task is to provide a framework that can be used to organize the information required.  This structure can form the basis for an appropriate task schema.  When left to their own resources learners often use less effective structures for organizing the information.  When the instructional system provides an effective structure that learners can use to store and process the information their ability to retrieve and use this information in subsequent situations is improved.  Providing a structure is especially effective for students who may have had little or no relevant prior experience that can be used as a basis for a new schema for the family of tasks.


Hypothesis 6  Reflection Integration 

Adding reflection integration to any of the above instructional strategies promotes an additional increment in learning efficiency, effectiveness and engagement.


It has often been demonstrated that the amount and level of learning are a function of the effort learners expend to acquire the required skill.  Providing an opportunity to reflect on, review, and go back over the information and its portrayals increases the level of effort, provides additional opportunity for solidifying an appropriate schema, and allows learners to explore areas of possible misconception or ambiguity.  This additional effort tunes learners’ schema, increasing the probability that it will be more effective in completing subsequent tasks. 


Hypothesis 7   Create Integration

Adding create-integration to any of the above instructional strategies promotes transfer of the newly acquired knowledge and skill to performance on similar tasks in the real-world beyond the instructional situation. 


Create is the opportunity for learners to think ahead to find ways that the newly acquired knowledge and skill might be applied in their subsequent real-world activities.  Create is probably more related to long-range transfer than to task performance immediately following instruction.  The effects of create are more likely to show up in transfer situations or in later performance on the job.  The logistics of measuring the effect of create-integration are often difficult and impractical.  Nevertheless, the ultimate goal of most instruction goes beyond end-of-course performance to performance on the job or in the real world. 


Hypothesis 8 Go Public Integration

Adding go-public integration to any of the above instructional strategies promotes engagement, which in turn promotes an additional increment in learning efficiency, effectiveness and engagement.


It is hypothesized that engagement is only temporarily gained by graphics, animation, video, audio and other multimedia enhancements.  These superficial qualities of an instructional program often do little or nothing to promote long-term engagement.  It is hypothesized that learning itself is the most significant determinant of long-term engagement.  People love to learn, but only when they can see their learning progress.  When they perceive that they have acquired skill that was not present when they started the instruction, there is a desire to show what they have learned.  Going public means that there is an opportunity to “show off” learning to significant others.  Knowing early in the instruction that they will be going public provides an increased incentive for learners to be engaged in the learning process so that they will be able to perform ably for others when the opportunity is presented.  It is also hypothesized that if learners are informed that they will be required to go public and, because the instructional strategy is ineffective (level 0), they cannot perceive learning progress, then engagement will turn to frustration or anxiety, with a resulting decrement in learning effectiveness and efficiency.   



Figure 1 illustrates the hypothesized relationship among levels of instructional strategy and performance on complex tasks.  Level 0, information-only, is shown in the left column; adding demonstration is shown in the second column; adding application is shown in the third column; and adding a task-centered strategy is shown in the right column.  The dashed line is the basic demonstration strategy plus learner guidance and/or appropriate media, the basic application strategy plus diminishing coaching, and the basic task-centered strategy plus a progression of tasks.  The dotted line is the effect of adding strategy enhancements to the basic-plus strategies, i.e., relevant experience activation, structure activation, reflective integration, create integration, and/or go-public integration.  The relationships illustrated in Figure 1 are relative comparisons rather than parameter predictions.  The initial basic and basic-plus strategies will no doubt account for much of the variance, and the enhancements will contribute significantly less as the instruction becomes richer.
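These relative orderings can be rendered as a toy numeric model.  Every number below is invented solely to express the relative relationships the figure depicts; as stated above, the article makes no parameter predictions, and the function and constant names are assumptions.

```python
# Toy rendering of the relationships Figure 1 describes: base performance
# rises with strategy level (0 information-only, 1 +demonstration,
# 2 +application, 3 task-centered), a "basic plus" variant (guidance/media,
# coaching, task progression) adds a smaller increment, and further
# enhancements (activation and integration principles) add less still.

LEVEL_BASE = {0: 10, 1: 40, 2: 60, 3: 80}   # hypothetical relative scores

def hypothesized_performance(level, basic_plus=False, enhancements=0):
    """Relative (not absolute) performance for a scaled strategy."""
    score = LEVEL_BASE[level]
    if basic_plus:
        score += 8        # dashed line: basic strategy plus its enhancement
    score += 4 * enhancements  # dotted line: activation/integration add-ons
    return score

for level in range(4):
    print(level,
          hypothesized_performance(level),
          hypothesized_performance(level, basic_plus=True),
          hypothesized_performance(level, basic_plus=True, enhancements=2))
```

The model deliberately makes the strategy level dominate the score and the enhancements contribute small fixed increments, mirroring the regression observation above that the first few variables account for most of the variance.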

Figure 1. Hypothesized contribution of scaled strategies to performance on complex tasks



References

Allen, M. W. (2003). Michael Allen's Guide to e-Learning. New York, John Wiley & Sons, Inc.


Andre, T. (1997). Selected Microinstructional Methods to Facilitate Knowledge Construction:  Implications for Instructional Design. Instructional Design:  International Perspective. R. D. Tennyson, F. Schott, N. Seel and S. Dijkstra. Mahwah NJ:, Lawrence Erlbaum Associates. 1: 243-267.


Barclay, M. W., B. Gur, et al. (2004). The impact of media on the family:  assessing the availability and quality of instruction on the World Wide Web for enhancing marriage relationship. World Congress of the Family:  Asia Pacific Dialogue, Kuala Malaysia.


Bunderson, C. V. (2003). How to build a domain theory: On the validity centered design of construct-linked scales of learning and growth. Objective Measurement:  Theory into Practice. M. Wilson. Stamford, CT, Ablex Publishing Co.


Clark, R. C. (1999). Developing Technical Training. Washington D.C., International Society for Performance Improvement.


Clark, R. C. (2003). Building Expertise:  Cognitive Methods for Training and Performance Improvement. Washington D.C., International Society for Performance Improvement.


Clark, R. C. and R. E. Mayer (2003). E-Learning and the Science of Instruction. San Francisco, Jossey-Bass Pfeiffer.


Clark, R. E. (2003). What Works in Distance Learning:  Instructional Strategies. What Works In Distance Learning. H. F. O'Neil. Los Angeles, Center for the Study of Evaluation: 13-31.


Clark, R. E. and F. Estes (2002). Turning Research into Results:  A Guide to Selecting the Right Performance Solutions. Atlanta, CEP Press.


Collis, B. and A. Margaryan (2005). "Merrill Plus:  Blending corporate strategy and instructional design." Educational Technology 45(3): 54-59.


Dembo, M. and L. G. Young (2003). What works in distance education:  learning strategies. What Works in Distance Education. H. F. O'Neil. Los Angeles, Center for the Study of Evaluation.


Foshay, W. R., K. H. Silber, et al. (2003). Writing Training Materials that Work:  How to Train Anyone to Do Anything. San Francisco, Jossey-Bass Pfeiffer.


Marzano, R. J., D. J. Pickering, et al. (2001). Classroom Instruction that Works:  Research-based Strategies for Increasing Student Achievement. Alexandria, VA, Association for Supervision and Curriculum Development.


Mayer, R. E. (2001). Multimedia Learning. Cambridge, Cambridge University Press.


Mayer, R. E. (2003). What Works in Distance Learning:  Multimedia. What Works in Distance Learning. H. F. O'Neil. Los Angeles, Center for the Study of Evaluation.


Merrill, M. D. (1997). "Instructional Strategies that Teach." CBT Solutions (Nov/Dec): 1-11.


Merrill, M. D. (2002a). "First principles of instruction." Educational Technology Research and Development 50(3): 43-59.


Merrill, M. D. (2002b). "A pebble-in-the-pond model for instructional design." Performance Improvement 41(7): 39-44.


Merrill, M. D. (In Press a). First principles of instruction:  a synthesis. Trends and Issues in Instructional Design and Technology. R. A. Reiser and J. V. Dempsey. Columbus:  Ohio, Merrill Prentice Hall. 2.


Merrill, M. D. (in press b). First Principles of Instruction. Instructional Design Theories and Models III. C. M. Reigeluth and A. Carr, Lawrence Erlbaum Associates Publishers.


Merrill, M. D. (In Press c). Converting e₃-learning to e³-learning:  An alternative instructional design model. e-learning:  Lessons Learned, Challenges Ahead (Voices from Academe and Industry). S. Carliner and P. Shank. San Francisco, Pfeiffer/Jossey-Bass.


Merrill, M. D., L. Drake, et al. (1996). "Reclaiming instructional design." Educational Technology 36(5): 5-7.


O'Neil, H. F. (2003). What Works in Distance Education. Los Angeles, Center for the Study of Evaluation, National Center for Research on Evaluation, Graduate School of Education & Information Studies, University of California, Los Angeles: 126.


Reigeluth, C. M., Ed. (1983). Instructional-Design Theories and Models: An Overview of their Current Status. Hillsdale NJ, Lawrence Erlbaum Associates, Publishers.


Reigeluth, C. M., Ed. (1999). Instructional-Design Theories and Models:  A New Paradigm of Instructional Theory. Mahwah, NJ, Lawrence Erlbaum Associates Publishers.


Rosenshine, B. (1997). Advances in research on instruction. Issues in Educating Students with Disabilities. E. J. Lloyd, E. J. Kameanui and D. Chard. Mahwah, NJ, Lawrence Erlbaum: 197-221.


Tennyson, R. D., F. Schott, et al., Eds. (1997). Instructional Design International Perspective: Theory, Research, and Models. Mahwah, NJ, Lawrence Erlbaum Associates Publishers.


Thompson Inc. (2002). Thompson Job Impact Study. Naperville, IL, Thompson NETg.


van Merriënboer, J. J. G. (1997). Training Complex Cognitive Skills:  A Four-Component Instructional Design Model for Technical Training. Englewood Cliffs, NJ, Educational Technology Publications.




[1] The average scores by country indicate almost no implementation of these instructional principles (possible score is 15; N = number of sites analyzed, followed by highest score and average score): Australia N=202, highest 6, average .11; China N=551, highest 2, average .02; France N=257, highest 6, average .12; Turkey N=42, highest 6, average .17; USA N=410, highest 7, average .13. 

[2] From Lewis Carroll, Alice in Wonderland.


 “The cat grinned when it saw Alice. It looked good-natured, she thought: still it had very long claws and a great many teeth, so she felt that it ought to be treated with respect.

“ ‘Cheshire Puss,’ she began, rather timidly, as she did not at all know whether it would like the name: however, it only grinned a little wider. ‘Come, it’s pleased so far,’ thought Alice, and she went on. ‘Would you tell me, please, which way I ought to go from here?’

“ ‘That depends a good deal on where you want to get to,’ said the Cat.

“ ‘I don’t much care where –’ said Alice.

“ ‘Then it doesn’t matter which way you go,’ said the Cat.

“ ‘—so long as I get somewhere,’ Alice added as explanation.

“ ‘Oh, you’re sure to do that,’ said the Cat, ‘if you only walk long enough.’ ”