The Use of Participatory Design Methods in a Learner-Centered Design Process
Paula Vincini
Indiana State University


This paper is a reflective look at how participatory design methods were chosen and the effect they had on achieving a learner-centered design for an electronic performance support system called the Virtual Instructional Designer (VID), funded through a three-year federal grant. It also provides a quick overview of the path of user/learner involvement in instructional design from formative evaluation to user-centered design and, finally, to participatory design. I argue that this path reflects the movement in learning theories that has also affected instructional design practices. Finally, the paper shows how these paths intersected with Reeves' three principles of Learner-Centered Design to provide a design framework for the VID project. The grant period covered is the first 20 months of the project.


In July 1999, a Midwestern statewide partnership of three post-secondary institutions was awarded a three-year, $1.5 million federal grant to design, develop, and disseminate an electronic (Web-based) performance support system (EPSS) called the Virtual Instructional Designer (VID). The VID targets the complex, high-discretion process of designing and developing an online course by higher education faculty. As Associate Project Director and author of the grant, I am responsible for leading the VID design and development team. Because I am also a doctoral student in the Instructional Systems Technology (IST) program at Indiana University Bloomington (IUB), I was interested in using a design methodology that would be appropriate to this grant project. In making a decision, I was influenced first by the movement in instructional design (ID) from formative evaluation practices to user-centered design. Next, I wanted to mirror the shift in ID away from behaviorism and cognitivism toward a constructivist approach.

Finally, I wanted to incorporate Reeves' (1999) three learner-centered design principles of learnability, usability, and understandability in the design of the VID. Because the VID would be a complex, content-intensive tool designed to inform, train, and teach, its design process seemed especially compatible with Reeves' LCD. Also, since the VID is about designing online courses, I felt the process for designing the tool should mirror instructional design practices for creating learner-centered instruction.

Foremost, I wanted to involve the pilot testing instructors and partnership instructional designers in the design process as much as possible. I hoped to go beyond user-centered design, where faculty and instructional designers would mainly provide feedback, and instead involve them in actual design decision-making with the VID design and development team. It seemed to me that to create true learner-centered instruction, representative learners must be involved in the design process for an online course before it is offered, not just afterward during formative and summative evaluations. Consequently, the design of this tool, though not an online course, must at least reflect an approximation of such a design methodology. In researching methods and processes that would meet these goals, I became convinced that applying a participatory design (PD) framework to the instructional design process would result in a tool that fulfilled Reeves' learner-centered design principles.

This paper examines the advantages and constraints imposed on instructional design practices by PD methods such as cooperative prototyping, future workshops, and direct involvement by user-learners in the design process (Bodker, 1996; Carroll, Chin, Rosson, & Neal, 2000). The project time frame examined covers the first 20 months of the three-year grant. The paper also examines the progression of user involvement in the design of the VID from typical user-centered design practices to more participatory design methods.


My decision to use participatory design methods to achieve a learner-centered design was first triggered by answering a qualifications exam question. The question focused on the movement in ID from formative evaluation practices through user-centered design and rapid prototyping methods to participatory design.  A brief look at that path will provide context for my focus on participatory design methods as being optimal for the design of the VID.

As practitioners of instructional design (ID) seek new models of practice to replace the traditional, linear, positivistic models associated with behaviorism and cognitivism, more attention is being placed on the role of the learner or user in the design and development process (Bednar, Cunningham, Duffy, & Perry, 1991; Norman & Spohrer, 1996; Reeves, 1999). As instructional designers adapt to new theories of learning based on constructivist underpinnings and incorporate new interactive technologies into instruction, they will continue to look for practical ways to change their design and development methodology to meet these challenges. Because, as Willis (2000) points out, ID has been slower than other design fields to consider full collaboration with the user, it is important to look at how the role of the user in ID has evolved.

Three trends seem to be evident as the field of instructional design has become influenced by changes in learning theories as well as new technologies. The first trend is for stakeholders, whether they are learners in the classroom, instructors, or trainees, to become a greater part of the process of design and evaluation than traditional instructional design models had originally intended. This movement is reflected in changes in the theoretical underpinnings of learning theory (Carr, 1997; Reigeluth, 1996; Wagner & McCombs, 1995; Willis, 2000) and in developments in increasingly interactive computer technologies (Corry, Frick, & Hansen, 1997; McKnight, Dillon, & Richardson, 1996; Norman, 1988).

Early in the history of instructional design, the influence of behaviorism, with its emphasis on evaluation determined directly by instructional objectives, led to the need to gather information about the merit, worth, and value of a product or process to determine whether it fulfilled those objectives (Kemp, Morrison, & Ross, 1998; Seels & Richey, 1994). This became the ID practice of formative evaluation. Although most ID textbooks advocated that data collection should involve both experts and a sample of individuals representative of potential users, the instructional designer was usually the main critic, reviser, and evaluator of the design (Weston, McAlpine, & Bordonaro, 1995). A conflict of interest is clear here in terms of the ultimate usefulness and usability of the design (Weston et al., 1995).

Formative evaluation methodology changed dramatically in the 1980s, when large companies began putting new personal computers on the desks of employees and it quickly became clear that users were not always able to use these tools and their software applications efficiently or effectively (Greenbaum, 1993). Designers became aware that users needed more of a voice in the design of software programs and computer interfaces to make them more "user-friendly." With this emphasis on making products and instruction more user-centered, the term and concept of usability testing entered ID practice.

The similarities between formative evaluation and usability testing are apparent because both are iterative processes with similar goals - to collect data and information about the effectiveness of products, materials, or instruction. Both use representative target-population users and experts, and data are collected numerous times in both the design and development phases. They also share similar methods, including observation, think-alouds, questionnaires, surveys, data logs, and interviews (Corry et al., 1997; Nielsen, 1993).

However, they differ in the primary focus of evaluation. For formative evaluation, the focus is on the effectiveness of an educational program, as well as instructional and learning strategies (pre- and post-test measures to indicate learning gains, with issues of learner satisfaction subordinated). For usability testing, the focus is on the effectiveness of the entire system a learner faces, not just the usefulness of a product. Some also suggest that another major difference is that usability testing occurs earlier and more often than formative evaluation (Corry et al., 1997).

The second trend in ID also occurred due to the rapid acceleration of information products. Practitioners increasingly found traditional instructional design models too cumbersome and comprehensive to use (Dorsey, Goodrum, & Schwen, 1995; Thiagarajan, 1993; Tripp & Bichelmeyer, 1990). Thiagarajan (1993) was one of many who objected to the conventional ID model, with its lengthy analysis period followed by an even lengthier formative evaluation period of painstaking pilot and field testing. He advocated that the field adopt rapid prototyping, which followed "just-in-time" principles allowing the designer to move quickly and continuously through the design/develop/revise stages. Thiagarajan (1993) felt this would produce a leaner version of the instructional package for more immediate use and implementation.

Others saw more than efficiency as a benefit of rapid prototyping; they saw benefits to the user from involving him or her earlier in the process of evaluation (Dorsey et al., 1995; Tripp & Bichelmeyer, 1990). They pointed out that rapid prototyping is more than just another (perhaps more efficient) version of formative evaluation. Formative pilot tests were considered less a prototype for evaluation than a form of the finished product. The benefits of rapid prototyping included helping the user "see" the product earlier, before much time and energy had been invested in its final form.

Rapid prototyping and user-centered design were also moves away from the popular systems approach advocated by the early behaviorist models in ID to a model favored by cognitivists who were looking at how users process information (Reeves, 1999). As Norman (1988) pointed out, user-centered design is a "philosophy based on the needs and interests of the user, with an emphasis on making products usable and understandable" (p. 188). The user-centered approach of rapid prototyping can be seen as an attempt to bring people "back into the picture" - not just in evaluating a nearly finished product but in helping develop the product as it is worked out in practice (Greenbaum, 1993).

Again, one of the major weaknesses of the more traditional systems-based approach to design and evaluation was that despite extensive and comprehensive front-end analyses, users were often unhappy with products used in pilot testing (Dorsey, et al., 1995; Nielsen, 1993). Although with rapid prototyping the user became a collaborator and not just a passive recipient or participant in evaluation trials (Mitchell, 1993; Sarnoff, 1990; Shneiderman, 1998), Blomberg, Suchman, and Trigg (1996) argue that usability and usefulness, while complementary, are not equivalent.


With the inclusion of the user into more and more of the design process, a philosophy of design that became even more closely tied to constructivist philosophy appeared - the philosophy of participatory design (PD) or cooperative design (Trigg & Anderson, 1996). The concept of PD has been borrowed from traditions in Scandinavian software design, where stakeholders have been involved beyond input and reflection on product design to empowerment with real and substantial decision-making powers (Carroll et al., 2000; Carr-Chellman et al., 1998).

Participatory design advocates argue that user-centered design keeps design control in the hands of the professional designers; while users have more and earlier input into the design process, no significant shift of power has occurred (Carr, 1997). PD has now extended to the fields of engineering and architecture, among others (Carroll et al., 2000; Sarnoff, 1990). It was this philosophy that I hoped to incorporate into the design activities that would result in the VID.

I was especially impressed by PD advocates' emphasis on how user participation in a participatory, collaborative context improved the quality of computer applications (Bodker, 1996). Carroll et al. (2000) emphasized that not using participatory design methods can end up being more costly because the tools developed do not meet users' needs:

The software we developed strongly reflects the design concepts championed by the teachers. Our original plan emphasized support for high-bandwidth, real-time interactions; our vision was of a graphically-enhanced multi-user domain. Early development work focused on creating a shared whiteboard that would allow students to collaborate on simulation experiments. However, the teachers’ emphasis on long-term projects led us to reweigh goals having to do with maintaining work context.

In a reflection of learner-centered theories such as constructivism, PD increases user involvement, which leads to more accurate information about tasks, a greater sense of ownership and participation, and a more successful implementation (Shneiderman, 1998).


Having chosen a participatory design framework, I looked for a set of learner-centered design principles that I felt would be complementary to the collaborative involvement of users in the design process. I found it in Reeves' work with Learner-Centered Design (LCD), "a newly emerging view of human beings and the purpose of design - a view strongly affected by the cognitive (individual and distributed) revolution of the last 30 years" (Reeves, 1999, p. 1).

Reeves establishes three principles for LCD that are especially applicable to the kind of complex product design the VID represents: "Learnability refers to the initial difficulty in learning how to use an application, space, or product; usability reflects the ease of use of a system or product over time; and understandability is an extension of usability, including the thoughtful design of the content" (p. 2).

Reeves thinks that LCD is "especially appropriate for those information systems that seek to inform, train, teach, evaluate, collaborate, cooperate, or educate" (p. 163), which fits well with the purpose of the VID. He also urges designers to move from user-centered designs to "an all-encompassing learner-centered design" (p. 6).

I agreed with Reeves that the key to the success of the VID's design would be its learnability, usability, and understandability for both instructors, who would use it to guide their online course design and development, and instructional design or faculty development staff. As an EPSS, it would serve as a system of online job aids, support tools, and information (Sherry & Wilson, 1996). As such, its design would need to be iterative and flexible to accommodate the natural flow of designing an online course amid the other workday demands on faculty.

For faculty, the VID would need to answer their pedagogical and instructional design questions, help them solve their course design problems, and improve their understanding of appropriate uses of Web-based media and technologies. For instructional designers (IDs) and faculty development staff, it would allow them to extend instructional design expertise beyond the typical time and place-based workshop. The VID would assist staff whose time is being stretched thin to meet the needs of instructors for individual assistance with Web-based and online courses.

For these reasons, IDs or faculty development staff would need a tool that would allow them to find answers to instructors' questions or problems quickly and easily. The VID would provide multiple options that would allow faculty to follow their natural styles of inquiry among clusters of interlinked resources. The tool could potentially provide a cost-efficient and effective way to scale up individualized instructional design assistance and to help support staff focus their time and energy on more instructor- and course-specific assistance.


My plan to use PD as a framework for the design of the VID began with identifying who would be the primary stakeholders to involve as participants in the first year of the grant and then to discover their needs, wants, and ideas about what the VID should contain in terms of features and function. In traditional ID models this would be called learner, task, and needs analysis, but I hoped to go one step further and initiate a process that would involve these stakeholders in each step of the design process.

In the VID project, the faculty are typical of adult learners in that they are busy professionals who must "fit in" learning about the process of designing online courses in their already busy work schedules while actually applying the process to a real course. The pilot testing faculty chosen were from varied subject areas and institutional contexts and had a variety of experiences with the Web and technologies in general. They ranged from new online instructors with little technical expertise to instructors who had already designed and taught online courses. The institutional contexts also varied, from a two-year liberal arts college to a community/technical college system to a four-year state university.

During the first six months, while a job search was conducted for the VID design team, I focused on getting a clearer picture of what instructors and instructional designers would want in terms of content and function for a tool like the VID. This time was devoted to gathering, analyzing, and synthesizing the responses from pilot site focus groups and interviews, a "vision" meeting (an adaptation of a typical PD future workshop), and an online needs and skills survey. This information, combined with insights from distance learning listservs, publications, e-mail newsletters, and other sources outside the partnership, became the initial information structure for the first VID functional rapid prototype.

It was apparent to me that the needs of the instructors were more complex and wider ranging than first anticipated by the grant. As originally conceived in the grant, the VID would have four sections: Pedagogy, Instructional Design, Technology, and Project Management. After each round of information gathering and brainstorming with representative VID users, this structure widened to six sections: Jump Start, Communication, Content and Assignments, Motivation, Evaluation, and Media & Technology.

In other feedback from the vision meeting, instructors and ID staff also wanted best practice examples of real instructors and their solutions to online teaching and learning problems. They emphasized the content in the VID should be focused, short and practical. Others wanted more in-depth information, but the freedom to choose the amount and kind of information necessary to make just-in-time design decisions. Some specified a need for tutorials on the “basics” required for online development and others wanted to take their online courses “to the next level” without being clear as to the meaning of the phrase.

Another sought-after feature would customize the VID for each instructor's needs. This evolved into a faculty needs and skills assessment form that would generate a series of links within the VID to take instructors directly to the areas that interested them most.


During this time, some initial constraints became apparent in using participatory design methods in the way I had anticipated. First, because the actual pilot testing instructors would not be selected until the end of the first year, they could not be involved in the initial design of the tool (although some of the instructors participating in first-year activities were later selected as pilot instructors). Next, unlike PD projects in Europe, where workers are involved from the beginning of the project and in some cases initiate the need for it (Carroll, 1996; Gartner & Wagner, 1996), the participants on the grant team were university staff and administrators, instructional designers, or representatives from other partnership groups. No faculty were directly involved in initiating or participating in the grant writing process. The need for the VID came directly from the experiences and perceptions of the grant partnership stakeholders.

The challenge this presented to involving faculty in a true participatory design framework during the first year of the grant was twofold. During the grant writing process, partnership members who created the budget did not include any monies for faculty to participate in first-year grant activities. Instead, funds were requested to download or offload an instructor one course in the second and third years of the grant for course design activities using the VID. Because of this, the initial design of the VID was predicated on volunteer faculty involvement from partnership institutions. This did not present much difficulty for front-end analysis group activities to determine the needs, wants, and problems faculty were having with designing online courses, or for envisioning what a VID-like tool might feature and how it might function. However, getting participation in the design of rapid prototypes later that year was nearly impossible.

Second, the grant did include monies to offload instructional designers' time to work with the VID design team at three of the five partnership sites. However, because there was no VID team through most of the first year, there was little for the instructional designers to do other than the information gathering activities. Another problem with this important stakeholder group was staffing shifts: two of the partners lost instructional designers assigned to the project, and it was some time before they were replaced. Since the site instructional designers served both as participants in the design and development of the VID and as intermediaries to the site faculty, the deep participatory involvement I would have liked from both stakeholder groups in this first year was limited in scope.

One common problem that arose across the institutions was the importance of semester-based time constraints on instructor involvement in the project. While the grant year ran from September to August, instructors were really only available for about 10 weeks of each 16-week semester. Beginning and ending a semester, midterms, holidays, and the usual heavy work, research, and service load all impeded their involvement as participants providing input to the tool design. Instructional designers at each site discovered that getting feedback from instructors or scheduling them for video conferencing activities became nearly impossible at critical times in the semester.

This constraint was critically apparent when feedback was needed from representative faculty and IDs on a paper functional prototype of the VID in late spring of the first grant year. Instructional designers were asked to conduct a talk-aloud session with instructors at each site. A file with instructions and the paper prototype was e-mailed to site instructional designers, and a printed copy was also sent for usability tests with pilot instructors. Unfortunately, the end of the semester and finals interfered with the plan, and the talk-alouds were never carried out. Next, once the VID Web programmer was hired, a vertical Web rapid prototype of the first VID design was created, and feedback was requested from the partnership sites early in the summer.

Again, only minimal feedback was received, and the quickly approaching start of the grant's second year and initial pilot testing forced the rapid prototype design to be implemented without any real participatory involvement by partnership pilot instructors or designers. Also, because it was late July before the full VID design and development team was hired and in place, only one module of content - Online Course Quality - was ready for the pilot testing instructors at the beginning of the second year of the grant. This module was developed first because it was the one most frequently selected by pilot instructors in an online survey to set priorities for content development. The survey was another attempt to involve the instructors in design and development decisions so they would feel more involved in the project. Unfortunately, it later became apparent that instructors felt they were being asked to give too much input into the design - a role they had not expected to fill.


Activities during the second year of the grant, with the VID design team finally in place, proved to be closer to the kinds of participatory design activities I had first contemplated when beginning the project. However, this was true mainly about the participation of site instructional designers with the VID team. Problems involving the pilot instructors in the design of the VID continued, and it became clear that their involvement would be mediated through the instructional designers assigned to them.

One of the main problems in involving faculty in a participatory design methodology was the manner of their selection for the project. As Carroll and his team discovered in their project (Carroll et al., 2000), some instructors participating as pilot testers were reluctant participants. This was true not only for instructors as VID pilot testers but also as online course developers. As the pilot-testing year unfolded, it became evident that some instructors had been assigned to the project with little or no real understanding of their responsibilities and roles. Because of this, they had little intrinsic motivation to help develop a tool such as the VID.

Another aspect affecting instructor participation and interest was the unique political and social context of each "workplace" or site for the project. One institution, with six courses to be designed by instructors, was just getting started with online teaching and learning, and there was significant interest from instructors in faculty development opportunities. It had also received other grant monies that would be used to pay pilot testing instructors stipends beyond the course download from the grant. Its selection process involved applications by interested faculty, with pilot testers chosen from that pool. Also, the site instructional designer organized bi-weekly lunches to discuss course development and VID design issues.

Another partnership institution also had six courses to be designed by instructors but had three sites for the grant. Instructors were chosen according to the online courses that needed to be developed. Unfortunately, two of the sites involved only one instructor each, and later in the grant year their involvement seemed to fade out. Also, at this institution no extra monies were available for stipends or course downloads, and that may have affected participation, since these instructors were expected to be involved throughout the year.

A third partnership institution was the largest site for pilot testing, with nine courses to be developed by faculty. Despite its two years of experience providing incentives for online course development activities, recruiting interested faculty was more time-consuming and involved. This is not atypical of a more autonomous faculty culture and an institution with more formal procedures for distance learning development.

Overall, beyond the information the grant team provided to faculty, I had little control over or involvement in what instructors were told at the partnership sites about the grant project, their involvement, or expectations for the design of the VID. While these very different contexts were advantageous for getting a range of faculty participants in terms of experience and technical skill for pilot testing of the tool, they made participatory design involvement a challenge.

Our efforts to orient these pilot instructors to the nature of the project included individual and group meetings, a VID "kick-off" meeting teleconferenced to the sites, provision of three-ring binders of information on the grant and project, and informative e-mails. However, some instructors were not able to attend any of the information sessions or were selected so late in the process that they missed the opportunity.


Communication and feedback between the pilot testing instructors and the VID design team was another challenge. The team tried numerous e-mails and phone calls to involve instructors or get feedback from them on the design and development of the VID Public Site, the VID1 functional prototype, the VID1 Online Course Quality module content, and tutorial content. The design team began to realize that instructors saw themselves mostly as users of the tool, not as collaborators in the design. Because of the lack of content caused by the hiring delays, instructors were disappointed in their first viewing of the rapid prototype, despite the full Online Course Quality module. However, because VID1 was a rapid prototype with limited content, it became clear that this pilot testing year would need to be more about feedback from the instructors on the design of the VID2 beta version and newly uploaded content than about the use of an already completely designed tool. This initial disappointment was difficult to overcome until much later in the grant year.

By late October, it was clear to the grant team members and the VID design team that real communication problems existed. At a meeting of the full grant team, it was decided that the design team would meet monthly with the partnership site instructional designers, in person or by teleconference. Faculty participation would be handled through these site designers, and the VID design team would no longer contact the instructors directly. A VID listserv newsletter for the pilot instructors, which had been set up earlier to disseminate information, would be used more frequently by the VID design team to share progress on the VID, instructor roles and responsibilities, and the impact that instructor feedback was having on the design of the beta version of the VID.

As the instructional designers and the design team began meeting regularly, a mediated version of participatory design began to evolve as the only viable method for involving faculty in the design of the VID that grant year. A process emerged in which ideas for changes to the rapid functional prototype of VID1 were discussed by the IDs and the design team at a meeting. Those ideas were then developed into several rapid prototypes, which were presented to the IDs at a subsequent meeting. Choices were narrowed, and usability testing or feedback sessions were scheduled between the local IDs and the site pilot testing instructors. Those results were then sent back to the design team and incorporated into the beta version, VID2. The design changes and decisions the VID team incorporated into the beta version were then communicated to the instructors via the listserv newsletter.

A turning point for the project, in terms of participation by the faculty in these decisions and their relationship with the design team, occurred in January with a usability test of two functional and navigation prototypes of VID2. A list of tasks and instructions was given to the IDs to use with the pilot instructors. One hidden agenda for the design team was to get the instructors to explore the content that had already been put up in the alpha version but had not yet been accessed by the instructors. This was the first activity that involved a large number of pilot instructors and IDs from most of the partnership sites. Instructors were not only impressed with the content but, for the first time, were truly involved in making design decisions that would impact the tool.

In this version of a participatory design method, the site instructional designers became both the main participants and the mediators of the instructors' participation. Though this seemed to conflict with my earlier vision of the faculty as direct, involved participants in the design of the VID, it also became apparent that site instructional support staff might be the more important stakeholders at this stage, while the content of the VID was still being created. One surprising consequence of my insistence on involving both IDs and pilot-testing instructors in all phases of the design process for the new beta version was a kind of dissatisfaction among the design team members, who simply wanted to "get on with it". This kind of involvement by prototypical users in the design process was new to them, and it appeared to slow the process down.

However, by utilizing this participatory or “learner-centered” design approach, I felt we were avoiding the kinds of design mistakes warned about in the PD literature. As this had been the experience of others using similar approaches, I hoped to persuade the team that these efforts would result in a tool that would better meet the learner-centered standards of not only usability but learnability and understandability. One example of this was in the selection of the graphic design for VID2.

During the usability tests for the functional prototypes, a graphic design with a black background had been created. As it was the only new design offered, it received favorable reviews from both the instructional designers and the instructors. However, I felt we had not really involved these participants in any graphic design decisions. Consequently, we presented a variety of potential graphic designs at the monthly ID meeting and, after feedback from the IDs, narrowed the choices to two. The instructors were then asked for their opinions and feedback, and as a result the black design was abandoned in favor of a blue design chosen for its simplicity and its adaptability to change.

Currently the design team is seeking a different kind of feedback and participation from both the faculty and the IDs on the content being placed in the VID. After consultation with the site IDs, a button that opens a short content survey, five questions with Likert-scale response options, has been placed on each of the major content pages. A field on the survey also invites open, written feedback. The questions focus on the content's usefulness, its understandability, and the page design. Results are anonymous and are tabulated according to the page's topic. We hope that as instructors begin using the tool to access information for the design of their courses, they will take this opportunity to become involved in the process of designing the information as well.
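The mechanics of such anonymous, per-topic tabulation are simple enough to sketch. The snippet below is purely illustrative, not the actual VID implementation; the field names (`topic`, `ratings`) and the assumption of five Likert items per survey are mine.

```python
from collections import defaultdict

def tabulate_responses(responses):
    """Average Likert ratings per question, grouped by content-page topic.

    Each response is a dict like {"topic": "...", "ratings": [q1..q5]};
    no respondent identifier is ever stored, so results stay anonymous.
    """
    by_topic = defaultdict(list)
    for r in responses:
        by_topic[r["topic"]].append(r["ratings"])
    summary = {}
    for topic, rating_lists in by_topic.items():
        n = len(rating_lists)
        # Mean of each of the five questions across all responses for a topic
        summary[topic] = [sum(question) / n for question in zip(*rating_lists)]
    return summary

# Hypothetical sample data for two content-page topics
sample = [
    {"topic": "Assessment", "ratings": [4, 5, 3, 4, 5]},
    {"topic": "Assessment", "ratings": [2, 5, 5, 4, 3]},
    {"topic": "Course Design", "ratings": [5, 4, 4, 5, 4]},
]
print(tabulate_responses(sample))
# {'Assessment': [3.0, 5.0, 4.0, 4.0, 4.0], 'Course Design': [5.0, 4.0, 4.0, 5.0, 4.0]}
```

Because only the topic and the ratings are retained, the tabulation can be shared with instructors without exposing who said what.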


What lessons have I learned from this attempt to use a more learner-centered, participatory design approach? First, it helps to involve the participants in the initial decisions of the design project in order to gain some "buy-in" for collaborating in this more time-consuming approach. Whereas user-centered design advises the design team to keep real users in mind and to let them try out prototypes and give feedback, participatory design requires a deeper level of involvement from these important stakeholders. If they have been "appointed" to the task, that level of involvement will be harder to attain.

Next, my prior assumptions about who the major participant group was led to an early sense of frustration and disappointment with the process. After reading about participatory design, I was convinced it was the kind of approach needed to achieve the learner-centered instructional design goals of the project. Yet the reality of implementing it with widely dispersed participants of varying interest in the project seemed almost doomed to failure. The frustration eased only when I realized that the context of the "work" situation, in this case higher education, made it necessary to shift the emphasis of participation in the design to the IDs. It was in the monthly meetings that the design team came closest to the kind of participatory design approach advocated in the literature.

Another major participant group that would influence the design of the tool appeared at the end of the spring in the second grant year. An outside consultant was hired to advise the grant team on VID implementation and maintenance scenarios and costs. It was at this point that I came to see that there were numerous “actors” interacting with the design of this tool, and that the design team and the evolving design were not isolated from their influence. In fact we were in the process of creating what Gartner and Wagner (1996) describe as an actor-network:

An evolving design acts as an intermediary in the sense that the participating human actors inscribe their aims, problem definitions, and design ideas in the system and that these inscriptions in turn mediate the social relations within the network. (pp. 190-191)

While I had imagined that the pilot-testing instructors and site instructional designers would be the major participants in the PD process I hoped for, I had not taken into account the role that institutional, administrative, and political contexts play in design practices. As the grant team worked through questions raised by the consultant on product definition, market orientation, and the purpose and value of the VID, their vision for the tool became clearer.

At the higher administrative level of director or dean, the VID was seen as a way to compensate for limited or non-existent faculty training and support resources. In other words, it was seen as a cost-effective and efficient answer to the support staff crisis in higher education online faculty development. They also saw the possibility of other “learners” using the VID, including adjunct faculty assigned to design online courses, instructors involved with workplace or business and industry training, and instructors interested in using web-based tools in teaching and learning.

Whereas I had seen the major user of the VID as a faculty member in higher education, the emphasis shifted to the instructor in post-secondary education. The grant team also decided that, at a minimum, the tool would be made available to the faculty at all the grant partnership sites. Whether the grant team would find the funds, or another partner, to extend the implementation of the VID beyond these boundaries remained to be seen. All of these new conversations among stakeholders would affect the ongoing design of the tool, and so these actors were now involved as participants in the actor-network. The new set of pilot instructors starting in the fall of the third grant year would add yet another dimension to the evolving design and development of the tool.

Finally, a new level of involvement would come with the hiring of a new evaluator, who had structured a plan for the third year of the grant requiring more frequent and in-depth conversations with the pilot instructors using the tool. The design team would need to coordinate its efforts carefully with the evaluator, as both sought information from instructors but for different purposes.

My goal for next year is to continue to pursue a participatory design framework for the instructional design of the Virtual Instructional Designer. With a more complete prototype ready for use by pilot instructors, the design team should be able to initiate a deeper dialogue about its use and design. I especially want to continue pursuing the three learner-centered design principles of learnability, usability, and understandability, and I still believe that doing so will require going beyond user-centered design methods to a participatory design "conversation". I have found that it is remarkably easy not to share the design team's power with the intended users of the design. I have also found that because the users of the design are affected by their political and work contexts, so too is the use of the tool, and that has to be considered in the design.

Finally, I think it is not simple to support the shift from user-centered to participatory, or "learner-centered", design. In practice there is a wide divide between user feedback and true collaboration, and the pressures of efficiency and deadlines are in constant tension with effectiveness and validity. Of the three learner-centered design principles, usability seems the easiest one in which to involve users. Yet it is the usefulness of the information and the understandability of this complex tool's design that seem most sensitive to collaboration with users and stakeholders. It is here that participatory design methods add value to traditional instructional design and move it closer to learner-centered design.


Bednar, A. K., Cunningham, D., Duffy, T. M., & Perry, J. D. (1991). Theory into practice: How do we link? In G. Anglin (Ed.), Instructional technology (pp. 88-101). Englewood, CO: Libraries Unlimited.

Blomberg, J., Suchman, L., & Trigg, R. H. (1996). Reflections on a work-oriented design project. Human-Computer Interaction, 11(3), 237-265.

Bodker, S. (1996). Creating conditions for participation: Conflicts and resources in systems development. Human-Computer Interaction, 11(3), 215-236.

Carr, A. A. (1997). User-design in the creation of human learning systems. Educational Technology Research and Development, 45(3), 5-22.

Carr-Chellman, A., Cuyar, C., & Breman, J. (1998). User-design: A case application in health care training. Educational Technology Research and Development, 46(4), 97-114.

Carroll, J. M. (1996). Encountering others: Reciprocal openings in participatory design and user-centered design. Human-Computer Interaction, 11(3), 285-290.

Carroll, J. M., Chin, G., Rosson, M. B., & Neale, D. C. (2000). The development of cooperation: Five years of participatory design in the virtual school. In Proceedings on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (pp. 239-251). New York: Association for Computing Machinery. Retrieved May 18, 2001, from the World Wide Web:

Corry, M. D., Frick, T. W., & Hansen, L. (1997). User-centered design and usability testing of a web site: An illustrative case study. Educational Technology Research and Development, 45(4), 65-76.

Dorsey, L. T., Goodrum, D. A., & Schwen, T. M. (1995). Rapid collaborative prototyping as an instructional development paradigm. Unpublished manuscript, Indiana University.

Gartner, J., & Wagner, I. (1996). Mapping actors and agendas: Political frameworks of systems design and participation. Human-Computer Interaction, 11(3), 187-214.

Greenbaum, J. (1993). A design of one's own: Towards participatory design in the United States. In D. Schuler & A. Namioka (Eds.), Participatory design: Principles and practices. Hillsdale, NJ: Lawrence Erlbaum Associates.

Kemp, J. E., Morrison, G. R., & Ross, S. M. (1998). Designing effective instruction (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.

McKnight, C., Dillon, A., & Richardson, J. (1996). User-centered design of hypertext/hypermedia for education. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology. New York: Simon & Schuster Macmillan.

Nielsen, J. (1993). Usability engineering. San Francisco: Morgan Kaufmann.

Norman, D. A. (1988). The design of everyday things. New York: Currency Doubleday.

Norman, D., & Spohrer, J. (1996). Learner-centered education. Communications of the ACM, 39(4), 24-27.

Reeves, W. (1999). Learner-centered design: A cognitive view of managing complexity in product, information, and environmental design. Thousand Oaks, CA: Sage.

Reigeluth, C. (1996). A new paradigm of ISD? Educational Technology, 36(3), 13-20.

Sanoff, H. (Ed.). (1990). Participatory design: Theory and techniques. Raleigh, NC: North Carolina State University Press.

Seels, B. S., & Richey, R. C. (1994). Instructional technology: Definitions and domains of the field. Washington, DC: AECT.

Sherry, L., & Wilson, B. (1996). Supporting human performance across disciplines: A converging of roles and tools. Performance Improvement Quarterly, 9(4), 19-36. Retrieved May 18, 2001, from the World Wide Web:

Shneiderman, B. (1998). Designing the user interface (3rd ed.). Reading, MA: Addison-Wesley.

Thiagarajan, S. (1993). Just-in-time instructional design. In G. Piskurich (Ed.), The ASTD handbook of instructional technology. New York: McGraw-Hill.

Tripp, S. D., & Bichelmeyer, B. (1990). Rapid prototyping: An alternative instructional design strategy. Educational Technology Research and Development, 38(1), 31-44.

Wagner, E. D., & McCombs, B. L. (1995). Learner centered psychological principles in practice: Designs for distance education. Educational Technology, March/April 1995, 32-35.

Weston, C., McAlpine, L., & Bordonaro, T. (1995). Educational Technology Research and Development, 43(3), 29-48.

Willis, J. (2000). The maturing of constructivist instructional design: Some basic principles that can guide practice. Educational Technology, Jan/Feb. 2000, 5-16.

ITFORUM PAPER #54 - The Use of Participatory Design Methods in a Learner-Centered Design Process by Paula Vincini, Indiana State University. Posted on ITFORUM on July 10, 2001. The author retains all copyrights of this work. Used on ITFORUM by permission of the author. Visit the ITFORUM WWW Home Page at