Contributing to scholarship in educational technology through systematic re-use and evaluation of large educational objects in professional development settings

Jason Ravitz
Associate Director of Research
Buck Institute for Education
18 Commercial Blvd.
Novato, CA 94949  USA
May, 2003


The projects in this paper received funding from the National Science Foundation (NSF) through the National School Network (NSN) and the Center for Innovative Learning Technologies (CILT).


These two NSF-funded projects, the National School Network (1994-97) and the Center for Innovative Learning Technologies (CILT, 1998-present), sought to provide knowledge about the work of pioneering educators and developers and to have an impact on the dissemination of new ideas. They sought to disseminate not only research knowledge (Becker & Ravitz, 1998, 1999; Bransford, Brown & Cocking, 1999; Linn & Hsi, 2000; Pea et al., 1999) but also practical knowledge and tools for use by educators that could "scale" (Elmore, 1996). One of my interests has been providing professional development for educators that helps disseminate promising practices via the Internet.

This paper focuses on generating knowledge from the use of what Wiley (2000) calls "larger reusable digital resources". These are "entire web pages that combine text, images and other media or applications to deliver complete experiences, such as a complete instructional event" (p. 7). Whenever educators take advantage of resources (broadly including ideas or tools of various shapes and sizes), I consider this "re-use" and not just "use" if (a) they did not help develop the resource, and (b) they were not among the original intended and supported group of users. One of my concerns as a designer and evaluator has been how much re-use, formal or informal, happens on the Internet that goes uncounted. We are still a long way from knowing how to assess large-scale technology impacts in general (Haertel & Means, 2000; Ravitz, 2001, 2002; Russell, 2000), let alone impacts from re-use of educational technologies across the Internet.

Re-use is considered a high-impact result. It is generally considered a victory for developers when innovative ideas are adopted (bought) by companies or organizations that can disseminate them widely. This was somewhat the case when criteria from the Site Feedback Form (Example 1, below) informed development of International Schools Cyberfair (2003), now in its eighth year; when software called MentorCenter from NSN was incorporated by GTE for sales training; and most recently and significantly, when Causal Mapper (2003) software was bought by Intel for marketing to schools (Intel, 2003).

Many other projects associated with grants from NSF or the U.S. Department of Education, and from states and foundations, have been developed for re-use by others. This includes projects at Labs, Regional Centers, Universities, and elsewhere intended to support educators. A subset of these have focused on having educators create and find resources for the classroom that are recognized by others as being of high value. AskERIC, BlueNWeb, Collaboratory, GEM, HPRTEC, I*Earn, SolutionSite, ThinkQuest, and numerous others have done a great job helping educators create and assemble lessons or projects of considerable quality that, in theory, can be re-used by others. From an evaluator's perspective, online developers now have a growing array of tools they can use to show they are having an impact on teachers and learners. The potential to have a "dashboard" view of what is happening at each site (Baker, 1998), with indicators of various constructs, is remarkable, but must be pursued with care.

With the tools that have been discussed, one can imagine developers providing an Interactive Project Vita (Ravitz, 1997) that shows what they are accomplishing with teachers and learners and invites feedback from other educators. This could point the way toward identification and "re-use" of their best ideas, and even ways to support and study this re-use.

I believe that we have adequate educational theories (Bransford et al., 1999; George Lucas Educational Foundation, 2003) and theories of design (Jonassen, 1999; Kali, Bos, Linn, Underwood & Hewitt, 2000; Riel, 1996) to understand the varieties of learning opportunities on the Internet. We have great project-based models that have run successfully and been valued by teachers for years, generating huge amounts of data. We see online environments for sharing and discussing project-based learning, where teachers are trying to answer questions about teaching and learning collaboratively with researchers; we see tools for collecting data from teachers, and examples of educational web sites developed by learners.

What is the Problem?

In the cases listed above and in countless others, getting ideas re-used (including tools or objects that carry these ideas) is often a substantial achievement in itself. However, I share Wiley's (2000) concern that we need more theorists among educational developers. There is often a lack of sense-making and analysis concerning the re-use that occurs as a result of all the effort going into educational technology. My concern is less that we lack theories for what is supposed to happen with online learning. Nor is it that re-use of these large objects is difficult to achieve. As Wiley notes, larger objects are relatively easy for teachers to pick up and use because they do not have to compile many pieces individually.

My concern is that when re-use does occur it is difficult to document at all, let alone to analyze distant impacts. It is difficult to tell what the impacts of online materials are among educators, particularly with open-ended learning environments or methods that are "indeterminate" in the sense that Berman (1981) discusses. Because of these complexities of distant implementations, the inevitable result of a "share, and see who cares" approach is that systematic knowledge is not developed and shared. People develop expertise without sharing it with others, making it impossible to evaluate.

Researchers and evaluators often have to focus on closely tied local settings ("use"), and have few means for addressing local impacts, let alone distant impacts or re-use. The backlog in knowledge is due to a lack of time and tools (conceptual and technological) to support research and evaluation in online settings. People have difficulty evaluating use of their materials locally, let alone finding out what is happening "out there" concerning re-use by others.

Evidence for effectiveness of re-use will ultimately have to come from the educational community itself, after a variety of resources have been tried by a variety of educators in different settings. My proposal is to help educators share the results of their inquiry, to support their own learning, and to accelerate educational technology research and development.

What is the Solution?

Educators can contribute to scholarship in educational technology as a part of their online professional development experiences. This paper conceptualizes re-use in professional development settings as a means of obtaining feedback for developers and data for researchers, while at the same time supporting teachers as learners and professionals. This is a very different view of distance education from a correspondence-type experience for online learners (Nathan Bos, personal communication, 5/7/02). Exploring these additional functions in online education is what I call distance scholarship (Ravitz, 2001), after Shulman's (1999) discussion of the scholarship of teaching. Hoadley & Pea (in press) and Pea (1993) may envision similar uses for online forums to improve research.

Some of the "functions" of distance scholarship that I see are summarized here:

    1. Accelerating research
    2. Disseminating successful ideas
    3. Providing "authentic" professional development

    4. Helping educators analyze, catalog and find tools that are useful
    5. Improving communication between educators, developers, and researchers
    6. Conducting peer-review of projects and letting this serve as "authority"
    7. Learning how to use educationally-relevant resources in different contexts
    8. Providing incentives for people to contribute meta-data

What are examples?

Both the Site Feedback Form (Example 1) and Netcourse (Example 2) incorporated re-use into the professional development experiences of educators with benefits for multiple stakeholders as discussed below:

  1. Helping Educators: Providing authentic (and structured) learning activities for educators by helping them analyze and collect online resources
  2. Helping Developers: Providing a unique chance for guided and structured feedback from new users
  3. Helping Information Managers: Cataloguing material through generation of educationally-relevant meta-data (Recker, Walker & Wiley, 2000)
  4. Helping Researchers & Evaluators: Obtaining data from educators about how resources are perceived and used, under what conditions and for whom they are most successful

Helping Educators: Authentic (and structured) learning activities

The Site Feedback Form arose out of the perceived need for educators to be able to review web sites, and the desire of the Online Internet Institute (Hunter, 1998; Ravitz & Serim, 1997) to have an activity for introducing teachers to the Internet. Goals for the Netcourse arose out of a similar desire for educators to learn about and try research-based assessment tools. In the Netcourse, there were no formal reviews, but teachers read literature about the tool being used, connected it to concepts from the class, and provided feedback that reflected their own thinking.

Since the early days of the Web, it has been clear that educators have wanted help finding and evaluating online resources; Kathy Schrock provided leadership in this area around the same time the Site Feedback Form was constructed. The Form provides a conceptual and technological tool for analyzing and collecting online resources. Eisenberg & Miller (2002) and Berkowitz & Serim (2002) have focused extensively on information problem solving as a skill for educators and students to learn.

Authoritative lists of "approved" web sites are valuable to have, but libraries and librarians are ultimately not there to give you answers; they are there to help you learn to find your own information, and to be able to evaluate what you find (Berkowitz & Serim, 2002; Eisenberg & Miller, 2002). Through the use of evaluative frameworks, educators come to interpret and see characteristics of quality in online resources in new and lasting ways (West, Farmer & Wolff, 1991). By using a rubric and applying an analytic framework (even as they develop their own), educators can sharpen and shape their ideas about technology. Reviewing educational resources via the Internet is an authentic problem-solving experience that matches what professional educators and leaders have to do in the field every day (Orrill, 2000; Chitwood, May, Bunnow, & Langan, 2000; Ravitz & Lake, 1996).

Bannan-Ritland, Dabbagh, & Murphy (2000) also envision placing an "instructional lens" on information resources to meet the goals of educators:

Ideally, this process would also be supported by an instructor who could monitor, make suggestions, and provide feedback on the students' progress. Providing an instructional lense on the potentially vast resources contained in a learning object system through the use of frameworks may help to address the problem of the absence of instructional theory. (p. 32)

Recker et al. (2000) also discuss the instructional value of a system that uses "social filtering":

From the instructor perspective, such a system makes locating relevant, high quality learning objects significantly easier. . . Not only would instructors be able to inspect and select individual learning objects for utilization, they would also be able to review groupings of learning objects made by other instructors with similar instructional styles. This capability provides an important complement to existing online repositories of syllabi and lesson plans, and opportunities to cross-reference are obvious. (p. 17)

Sharing their analyses of different resources could help improve communication among educators about which resources are available and which are most valuable in their classrooms. Chitwood et al. (p. 22) describe the excitement educators feel when they realize they are getting more and more from their learning experience and that they are contributors to something larger. As Recker et al. envision, "rather than focusing exclusively on Web resources, the system promotes interaction among people, with the resource providing common ground for reflection, discussion, and debate" (p. 5).

EXAMPLE 1: Site Feedback Form
Obtaining analyses (meta-data) from educators during professional development

Web Site Evaluation Form for Educators

A product of the R&D phases of the National School Network, the Web Site Evaluation Form used Web-based forms to set out a series of criteria that teachers, students and community partners would use to evaluate and categorize Web sites and Web-based curriculum. The criteria focused on perceived educational usefulness. Teachers and future teachers provided background information about themselves and their students with their reviews.

Using this form, educators attending professional development sessions or university classes examined specific educational qualities of a site (such as examples of student work) and applied criteria for judging quality. The form offered opportunities to provide open-ended explanations and to name new criteria that should be included. Reviews were posted to an online database. Others could then search for sites that met whichever criteria they selected, and submit their own reviews. Owners of sites could obtain user feedback without having to build their own data collection tools. A discussion feature allowed participants to discuss reviews and the usefulness of sites with instructors and developers.
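The submit-and-search workflow just described can be sketched in modern terms as a small store of structured reviews. This is a hypothetical reconstruction, not the NSN implementation; the field names and criteria are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SiteReview:
    """One educator's review of a web site (hypothetical schema)."""
    url: str
    reviewer_role: str      # e.g. "teacher", "pre-service teacher"
    grade_level: str
    criteria: dict          # criterion name -> rating (1-5)
    comments: str = ""
    new_criteria: list = field(default_factory=list)  # reviewer-suggested criteria

reviews: list[SiteReview] = []

def submit_review(review: SiteReview) -> None:
    """Post a review to the shared collection."""
    reviews.append(review)

def search(criterion: str, min_rating: int) -> list[SiteReview]:
    """Find reviews where the selected criterion meets a threshold."""
    return [r for r in reviews if r.criteria.get(criterion, 0) >= min_rating]

# Example: a teacher reviews a (hypothetical) lesson site, and another
# educator later searches for sites strong on examples of student work.
submit_review(SiteReview(
    url="http://example.org/lesson",
    reviewer_role="teacher",
    grade_level="6-8",
    criteria={"student_work_examples": 5, "navigability": 3},
    comments="Strong examples of student work.",
))
matches = search("student_work_examples", min_rating=4)
```

Because each review carries background information about the reviewer alongside the ratings, the same store supports both the search feature and later analysis by researchers.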

The tool was used nationally by the Online Internet Institute, in courses at Syracuse University (an IDD&E class co-taught with Dan Lake; the IST Summer Institute, led by Chuck McClure) and within a consortium of schools in Central New York (OCM BOCES). Participants would go back and forth between the web site being reviewed and the web site with the form by using multiple views or windows. Use of the form was spontaneously adopted (without our knowledge, until reviews started coming in) by teacher educators at Vermont College and Northern Illinois University. This, coupled with a number of unsolicited reviews from individuals, illustrated the inherent appeal of the project.

A next step was to make it easier for large web providers to request reviews of their web pages by educators, providing a unique way to increase awareness of their sites and to receive highly structured feedback. This idea was field-tested by pre-service teachers at Le Moyne College in Syracuse, New York, who reviewed the technology page for the Madison Metropolitan School District in Wisconsin. Although scheduling problems prevented further experimentation, the plan was to allow someone from Madison to be online interacting with reviewers as the reviews came in.

The form was useful in several ways: for developing a collection of peer-reviewed web resources, as an instructional activity for pre-service or in-service educators, as a means of obtaining feedback on the quality of educational web sites, and as a tool for researchers and developers. Here are some comments made by teachers who used the form:

"The form was very easy to use and really made the user think about the page he/she was reviewing".

"There are enough choices to pick from that allow for the web page owners to find out whether or not their page is effective."

"I am looking forward to the day when I will be able to use this tool regularly

to see what other professionals in my field are using".

"This form provided plenty of opportunity to provide feedback in my own words and also assured that the pertinent questions were addressed."

"A valuable resource for educators"

Helping Developers: Valuable Feedback from Educators

Developers rarely get a chance to interact with their users in a meaningful way, particularly at a distance. It is always difficult to know what their impact is on teachers and learners, and the only way to find out is to interact with them. Partnering with teacher educators provides developers with the opportunity to receive large amounts of data and feedback, particularly when resources are online and the reviews are provided online.

Evidence from the Site Evaluation Form and from the Netcourse indicates that developers are interested in seeing and responding to the perceptions of educators. In the Site Feedback Form, the Madison School District and others were interested to see how other educators would assess their web site. During the Netcourse (Example 2), tool providers generally saw a unique opportunity to interact with educators and were pleased to participate in discussions as "guest experts" in Blackboard during their assigned week. During this week they monitored and responded to the progress of class participants, offering insights about their work and seeking suggestions on the best way to share it with educators. When necessary, I would discuss issues with the developers offline and summarize for the class. Sample discussions are provided in Appendix A (forthcoming).

The two projects described here offered an organized approach to re-use of educational ideas, whereby innovations are not picked up and used haphazardly by others in uncontrolled settings, or selectively offered only to those with the greatest likelihood of success. Instead, re-use can be systematically explored and analyzed through trials in the course of professional development programs.

Short of running their own focus groups, collecting data under the watchful eye of an instructor is probably the best way for tool and resource providers on the Internet to obtain reliable and valid data, as opposed to relying on feedback from random Internet users. This may be what Gibbons et al. mean when they say, "A sufficient number of use cases gathered quickly can provide the analyst with a great deal of analysis detail" (p. 28).

Feedback is especially valuable when the reviewers are people who are passionate and care about what they are doing, i.e., educators. I would argue that even the insights offered by novice educators, when scaffolded appropriately, can be very valuable; they can easily offer data about the object being viewed, about themselves, and about how they view it.

With little effort, developers can obtain "fresh eyes" on a recurring basis. For my Netcourse this was every six weeks or so, but cohorts might be provided every semester, every year, or every few years. The implications for designers (formative evaluation) would be enormous if each new cohort provided a thorough review of their latest work.

While no substitute for developing their own user communities, a little collaboration from teacher educators could help supplement the data that online projects collect through the costly and time-consuming process of developing their own pilot sites. The fact that reviewers were engaged in a pedagogically guided and structured inquiry, and were available to discuss and clarify their responses, may make their contributions even more valuable. Developers could also use an external audience to ask questions they would not want to ask at their pilot sites, for example sensitive questions that might get in the way of their relationship with people "on the ground", or questions about new areas to investigate that would be a distraction for already-committed participants.

Several of the tools in the Netcourse did not yet have complete user-documentation and training systems in place for online users. For this reason, the developers were particularly keen to see how someone else could teach others to use their tool via the Internet, or how someone might manage with scant documentation. In the future, more and more tools may offer an easy "entrance" to their project for educators and easier ways to track their use. This could be accelerated by working more closely with professional developers and teacher educators.

Helping Information Managers: Meta-Data about Resources by and for educators

Having a database of reviewed resources is valuable from an information management perspective. The Site Feedback Form (Example 1) and the Netcourse (Example 2) provide information about the usability and educational usefulness of web-based resources that can be used to help others make decisions. The quotes at the end of Example 1 are consistent with Orrill's (2000) suggestion that educators could "come to rely on the learning objects library as a primary source of information" (p. 15). Without such a system, Orrill describes the difficulty professionals have when they have no way to share what they find with each other. They have to "wade through the Internet quagmire looking for critical information about project after project. This also leads to heavier demands being placed on the faculty as students become frustrated by the difficulty they are having in locating appropriate information" (p. 15).

One solution to the problem of information overload appears to be the cataloging of meta-data, or data about data (Wiley, 2000). The approach described in this paper addresses the problem of generating meta-data raised by Recker et al. (2000), who note that "people are loath to explicitly contribute reviews without some kind of incentive, hence it is difficult to seed and grow a metadata database" (p. 18, my emphasis). Wiley also notes the expense of cataloguing metadata.

The projects presented in this paper (Examples 1 and 2) suggest that contributing context-sensitive (non-authoritative) meta-data in a professional development setting is not an overwhelming task; nor need it distract from the goals of professional development. A peer-review process or "social filtering" (Recker et al., 2000) among educators could turn out to be an efficient information cataloguing strategy. Blue-ribbon panels are never going to be able to review more than a few hundred sites, and they are often going to overlook good work done on a smaller scale. The Site Feedback Form was concerned with the evaluation of less high-profile efforts that might not make it onto a top-ten list but that can still provide substantial value for some students or educators (Ravitz & Lake, 1996).

Admittedly, the data provided by educators, even when supervised by a faculty member, are "non-authoritative" (Recker et al.) and may even be quirky. Bannan-Ritland et al. (2000) explain that use of these reviews requires a "non-linear, fluid and dynamic view of information processing" that allows multiple perspectives of content that are "context dependent and progressively under development through activity" (pp. 19-20). They present a view of work in cognition and information processing, one I share, that extends beyond the individual to "understanding the coordination among individuals and artifacts" (p. 21). The approach suggested in this paper will contribute to knowledge about how to do this better in our field. I am confident that information generated by the intended users (educators) is the most relevant possible information, or meta-data, to have for developers, researchers, and other educators.

Important questions raised by Recker et al. concern authority and quality control. Somewhere there is a middle ground between centrally organized content (authoritative sources) and non-authoritative tags. Probably the best-known model for obtaining decentralized authority is the scholarship model, as outlined by Shulman (1999). Of course, many educators might be equally happy with an Amazon- or eBay-type system for rating educational products and obtaining information about vendors and clients. These commercially developed systems, as I understand them, allow users to rate both the quality of the product and the credibility of the vendor. For research purposes, however, a uni-dimensional 4-star rating like Amazon's has little value. It would be much better to obtain information about a number of constructs from each reviewer, including systematic information about the reviewer (intended use, setting, etc.). This allows cross-referencing (as suggested by Recker et al., p. 17) to occur in a way that can support research and evaluation.
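The contrast between a star rating and a multi-construct review can be made concrete. The sketch below is illustrative only; the constructs, field names, and helper function are assumptions, not a description of any existing system.

```python
# A single 4-star rating collapses everything into one number:
amazon_style = {"object_id": "causal-mapper", "stars": 4}

# A research-oriented review captures several constructs plus
# systematic information about the reviewer, so reviews can later
# be cross-referenced by context (all field names hypothetical).
structured_review = {
    "object_id": "causal-mapper",
    "ratings": {                     # one score per construct
        "usability": 4,
        "educational_usefulness": 5,
        "documentation": 2,
    },
    "reviewer": {
        "role": "in-service teacher",
        "setting": "urban middle school",
        "intended_use": "formative assessment",
    },
}

def mean_rating_by_setting(reviews, construct, setting):
    """Cross-reference: average one construct across reviews
    that came from a given kind of setting."""
    scores = [r["ratings"][construct] for r in reviews
              if r["reviewer"]["setting"] == setting
              and construct in r["ratings"]]
    return sum(scores) / len(scores) if scores else None

avg = mean_rating_by_setting([structured_review],
                             "usability", "urban middle school")
```

The single number tells a researcher almost nothing; the structured record supports questions like "how usable do urban middle-school teachers find this tool?", which is exactly the kind of cross-referencing Recker et al. suggest.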

Finally, it is worth noting that not all data are useful forever. When NSN pitched the Site Feedback Form to large online content providers, the biggest concern was keeping the reviews up to date. Cleaning out the database was a labor-intensive process: each posting went to the database, the discussion forum, and the index files (date, author, subject). Bannan-Ritland et al. (2000) describe a possible solution, "an archival engine to clear the database of unwanted and outdated contributions" (p. 40). Solving this problem would be enormously useful for projects like the Site Feedback Form and the Netcourse.
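One reason cleanup was labor intensive is that each posting lived in three places at once. An "archival engine" of the kind Bannan-Ritland et al. describe could be as simple as a date-based pruning pass that removes a posting from every store together. The following is a minimal sketch under that assumption; the store layout is invented for illustration.

```python
from datetime import datetime

# Each posting was mirrored in three places: the database, the
# discussion forum, and the index files (date, author, subject).
# These dicts are simplified stand-ins, keyed by posting id.
database = {}
forum = {}
indexes = {"date": {}, "author": {}, "subject": {}}

def post(pid, author, subject, body, when):
    """Record a posting in all three stores."""
    database[pid] = {"author": author, "subject": subject,
                     "body": body, "date": when}
    forum[pid] = body
    indexes["date"][pid] = when
    indexes["author"][pid] = author
    indexes["subject"][pid] = subject

def archive_older_than(cutoff):
    """A minimal 'archival engine': remove postings older than the
    cutoff from every store at once, so nothing is left dangling."""
    stale = [pid for pid, rec in database.items() if rec["date"] < cutoff]
    for pid in stale:
        del database[pid], forum[pid]
        for idx in indexes.values():
            del idx[pid]
    return stale

post(1, "jr", "review", "old review", datetime(2001, 1, 1))
post(2, "dl", "review", "new review", datetime(2003, 1, 1))
removed = archive_older_than(datetime(2002, 1, 1))
```

The key design point is that pruning is a single operation over all three stores; deleting from the database alone would have left orphaned forum entries and index records, which is what made manual cleanup so costly.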

EXAMPLE 2: PT3 Netcourse on Technology Supported Assessment

Course Overview
Technology Supported Assessment (TSA) lasted 6 weeks and ran twice, in the spring and summer of 2001, as part of a U.S. Department of Education PT3 Catalyst Grant involving the Concord Consortium in Concord, MA and the University of Virginia. The author developed and taught this course as part of his postdoctoral work with the Center for Innovative Learning Technologies (CILT).

Students in the course included instructional technologists and educators; many had responsibility for advancing technology use at their institutions. Goals for the Netcourse included bringing technology leaders up to speed on the latest technologies in assessment; fostering reuse of the assessments (and parts of the course itself) in local settings; and informing tool developers and researchers about the results of the activity. The conceptual foundation of the course rested on two points:

  1. Perhaps the most essential instructional strategy is providing feedback to learners: "Learning gains from systematic attention to formative assessment were found to be greater than most of those for any other educational intervention. Previous research supports these findings (Crooks, 1988; Fuchs & Fuchs, 1986)" (Black & Wiliam, in Rose and Gomez, 2003). See also Bransford et al. (1999) and Stiggins (1997).

  2. Technology-supported tools offer exciting opportunities for improving formative assessment for teaching and learning. Technology-supported assessments that were reviewed by the class via the Internet included IMMEX (Underdahl, Palacio-Cayetano, & Stevens, 2001), Intelligent Essay Assessor (Foltz, Laham, & Landauer, 1999), CRESST's Knowledge Mapper (Baker, 1998), and the Analysis Toolkit for Knowledge Forum (Lamon, Reeve, & Scardamalia, 2001). In each case, there is reason to believe that technology can support teachers through better formative assessments of their students' learning.
Designing the Course

The course was structured around weekly Readings, Activities and Discussions (RAD) that were initiated by the instructor. Each week the class read about and made use of one of these research-based tools and discussed its applicability to their work. A syllabus with a list of readings and activities was made available online.

The course followed what Collison, Elbaum, Haavind & Tinker (2000) call a "structured asynchronous" format that gives students a week to complete the assignments at their convenience. Assignments were posted one or two weeks in advance, and sometimes people would take a week to catch up; often they would have a week to think about what they and their colleagues were seeing and saying. The activities and discussion prompts used methods described by Collison et al. The understanding is that, in online settings, teacher-focused discussions will be severely overwhelming for the teacher and often underwhelming for the student. As a result, the instructor must find ways to move "out of the middle," as taught in courses at the Concord Consortium.

Working with Developers

To varying extents, special arrangements were needed to support the class, such as creating a new entry web page, a set of usernames, and so on. The course author provided assistance in these areas. Additional collaboration with developers was required in each case, for example to:

  1. secure passwords and login access to the tools when necessary;
  2. secure technical support during the week scheduled for use;
  3. obtain or create instructions for using the tool;
  4. provide perspective on available readings and research; and
  5. explain opportunities and costs for re-use to interested educators.

Generally, there was someone from each research and development group who was available to help in these areas. Sometimes there was a high-level researcher, a technical support person, and a professional development person. They were curious to see how a Netcourse could help more educators become users of their assessments.

Helping Researchers and Evaluators: Valuable Data

Courses in educational technology have the potential to generate a wellspring of knowledge about teaching and learning with technology -- not just among individual learners, but across the field. Re-use projects like the Site Feedback Form and the Netcourse help identify for whom and under what conditions different technology innovations are likely to have an impact. Goals for the Netcourse included sparking discussions to better understand the availability of assessment tools and their applicability in different learning contexts. Although it did not seek to provide systematic data about users' perceptions of different assessments (Shepard, 1995), the Netcourse showed that it could help groups of educators provide "use cases" and discussions for researchers and developers to consider. Given the rapid feedback offered by the assessment tools, educators themselves could see analyses of their performance, and comparison of data with potential research value was included as part of the Netcourse. Collection of data in professional development settings using agreed-upon frameworks could help answer research questions regarding use of various tools and resources, and the reviews can be used to address evaluation questions like -- who used the object; in what context; with what evidence of benefit? These questions are of a similar nature to those asked in Ravitz & Serim (1997) of pioneering teachers in the Online Internet Institute, and are of inherent interest to researchers and evaluators.

Systematic re-use by teacher educators across several cohorts of students would provide substantial data for developers and evaluators. By recruiting professional developers in different areas of the country and from different types of schools or institutions, one may obtain a representative set of users that would not otherwise have been available. The grouping and cross-referencing of learning objects described by Recker et al. could encourage scholarship that is currently lacking, because both educators and developers have often failed to develop and share frameworks collaboratively. Notable exceptions are Hickey et al. (2000) and other CILT-supported projects in assessment (CILT, 2003).

The approach in this paper is consistent with the decade of rigorous scholarship described by Haertel & Means (2000). One can imagine teachers and teacher educators promoting trials of new tools through a series of online courses and activities. The shift that has to occur is this: instead of being accountable only for the learning of individuals in their classes, teacher educators using online resources and tools should be enabled and encouraged to contribute to the larger enterprise of educational technology research. They can provide test cases and data about who educational users are, how tools can be used and re-used, and how they might function in different settings.

Finally, evaluators and principal investigators of large funded projects are very familiar with the peer-review process, for both proposals and academic submissions. Each year they fill in online forms to report to the government about what they have accomplished. These forms could be re-purposed to share what has been learned with others. This would provide a rich resource serving the whole community, particularly to the extent that the information in these reports reflected the views of educators, developers, and researchers who are representative of broader constituencies.

Unfinished Business

A few participants in the Netcourse were able to re-use the tools as part of in-service offerings at their institutions, but follow-up was problematic. The course activities were set up to last a limited amount of time, and were neither self-paced nor tailored to building long-term relationships. The course did not have resources to follow up on the few cases where re-use was occurring. A follow-up survey of Netcourse participants might address this question. Mechanisms for tracking and discussing re-use by these "second- or third-generation" users would be valuable, without requiring them to enroll in the original Netcourse. A little flexibility concerning enrollment, when complete re-use is not possible, may make sense.

Combining the strengths of the Netcourse (a week-long informed discussion) with the strengths of the Feedback Form (systematic data collection about educators and their perceptions) seems like a logical next step. The Netcourse had difficulty aggregating data and reviews, and the Feedback Form had difficulty generating discussion.

Interacting with resource providers from Madison via the Site Feedback Form showed some promise, but never came to fruition. Information seeking was cited as a greater benefit than discussion. In general, the National School Network found it challenging to produce discussions among educators. Most of its successes involved sponsoring high-profile "Events" (Hunter, 1998).

The Netcourse, in contrast, provided a method for fostering discussion among students using my RAD approach, based on Collison et al. (2000), but it did not collect systematic data for comparing educators' experiences with the different tools. This could easily be fixed, provided there is a framework for judging the value of technology-supported assessments.

Finally, for some developers of re-usable materials, having to go into Blackboard is antithetical to what they are trying to do (provide their own online interface). Perhaps it would be useful to build some of the functions described in this paper into their own professional development programs and software. I hope the advantages of doing so, and a general approach, have been provided here.


There are a number of chapters in Wiley's (2000) edited book that I have referenced liberally; I am not entirely sure that we are talking about the same thing. My guess is that developers of relatively smaller objects, and programmers in the object-oriented sense, will find very little value here. Yet I acknowledge that, thanks to the tireless work of developers, the infrastructure barriers to the type of work I am describing are rapidly coming down, or have already come down. None of these projects could have been undertaken without the programmers at BBN who modified Hypermail to accept forms data, the Blackboard developers, and the people who managed the servers at Concord Consortium.

As these technical barriers come down, I hope that social barriers will come down too. A concern of mine is that NSN and CILT were unique moments for developing the approach I have described, and that doors of collaboration were open to me that will not be open to others. The only way I will know this paper had an impact is if people tell me the ideas worked for them. I hope people who are interested in trying the ideas in this paper will find others who see the value in pursuing what I call "distance scholarship" in online learning settings, in order to support communication between developers, teacher educators, and researchers both within and across projects.

Teachers and schools buying curriculum is a "billion dollar business". Making a purchase in education is like buying a used car -- buyer beware! Clearly a set of "quality standards" is needed to guide the review of resources (Chitwood et al., 2000, p. 22). In my view, it is acceptable for different communities to apply different criteria and for inquiry to be guided by different analytic frameworks. One group might focus on user-interface issues; another may focus on evidence of student impacts. As long as a shared framework is being applied to online resources and educators are sharing their analyses with others, this has the potential to improve knowledge in educational technology.
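The idea of different communities applying different criteria within one shared framework can be sketched in code. The criteria below are invented for illustration (there is no standard set of fields implied here); the point is only that each community contributes its own lens while the resulting analysis stays in one shareable structure.

```python
# Two hypothetical community "lenses" applied under one shared framework.
def usability_criteria(resource):
    """One community's lens: user-interface issues."""
    return {"clear_navigation": resource.get("nav_clear", False),
            "load_time_ok": resource.get("load_seconds", 99) < 5}

def impact_criteria(resource):
    """Another community's lens: evidence of student impacts."""
    return {"has_student_data": bool(resource.get("student_outcomes"))}

def review(resource, criteria_sets):
    """Apply every community's criteria to the same resource,
    producing a single analysis that can be shared with others."""
    return {name: fn(resource) for name, fn in criteria_sets.items()}

# An illustrative online resource and its combined review.
site = {"nav_clear": True, "load_seconds": 3, "student_outcomes": []}
analysis = review(site, {"usability": usability_criteria,
                         "impact": impact_criteria})
```

Keeping the framework pluggable in this way lets one group add an interface-focused review and another an impact-focused review without either blocking the other.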

About BIE, and implications for our future work

The Buck Institute for Education (BIE) is a research and development organization working to make schools and classrooms more effective through the use of problem and project based instruction. Founded in 1987, we receive permanent funding from the Leonard and Beryl Buck Trust, and funding for specific projects from foundations, schools and school districts, state educational agencies and the federal government.

BIE creates curriculum materials, trains teachers in their use, and conducts and disseminates research. Current programs target high school social science. BIE has developed seven standards-centered, Problem Based Economics units. These units and a corresponding training program are now available from BIE and several National Training Sites located at National Council on Economic Education Centers. Over the next 5 years, we will provide teachers with additional problem based units for high school government/civics, world history, geography, world cultures, and US history.

BIE also provides training in the more general application of project based instructional approaches, and has published the well-received Project Based Learning Handbook used by educators across the U.S. and in several other countries.

We want to offer our materials for re-use across the Internet and we are trying to figure out the best way to do this; we would appreciate hearing others' ideas about how to support re-use in different places and how to conduct PBL research (GLEF, 2003; Mergendoller, Maxwell & Bellisimo, 2000) at a distance.


References

Baker, E. (1998, November). Understanding Educational Quality: Where Validity Meets Technology. William H. Angoff Memorial Lecture Series. Princeton, NJ: Educational Testing Service. Available:

Bannan-Ritland, B., Dabbagh, N. & Murphy, K. (2000). Learning object systems as constructivist learning environments: Related assumptions, theories, and applications. In D. A. Wiley (Ed.), The Instructional Use of Learning Objects: Online Version. Retrieved May 5, 2003, from the World Wide Web:

Becker, H. & Ravitz, J. (1999). The Influence of Computer and Internet Use on Teachers' Pedagogical Practices and Perceptions. Journal of Research on Computing in Education, 31(4).

Becker, H. & Ravitz, J. (1998). The Equity Threat of Promising Innovations: Pioneering Internet-Connected Schools. Journal of Educational Computing Research, 18(4).

Berkowitz, B. & Serim, F. (2002, May/June). Moving Every Child Ahead: The Big6 Success Strategy. MultiMedia Schools. [Online].

Berman, P. (1981). Educational Change: An implementation paradigm. Improving Schools: using what we know. R. Lehming and M. Kane. London, Sage Publications: 253-286.

Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, October. Available:

Bransford, J. (2001). Toward the development of a stronger community of educators: New opportunities made possible by integrating the learning sciences and technology. PT3 Vision Quest on Assessment in e-Learning cultures.

Bransford, J. D., Brown, A.L. & Cocking, R. (1999). How people learn: Brain, mind, experience and school. Washington, D.C.: National Academy Press.

Causal Mapper (2003). Early web site by Eric Baumgartner, et al. Accessed 5/7/03.

Chitwood, K., May, C., Bunnow, D., & Langan, T. (2000). Battle stories from the field: Wisconsin online resource center learning objects project. In D. A. Wiley (Ed.), The Instructional Use of Learning Objects: Online Version. Retrieved May 5, 2003, from the World Wide Web:

CILT (2003). Assessments for Learning. Available:

Collison, G., Elbaum, B., Haavind, S., & Tinker, B. (2000). Facilitating online learning: Effective strategies for moderators. Madison, WI: Atwood.

Elmore, R. F. (1996). Getting to scale with good educational practices. Harvard Educational Review, 66, 1-25.

Eisenberg, M. & Miller, D. (2002, September). This Man Wants to Change Your Job. School Library Journal, cover story. [Online].

Foltz, P., Laham, D., & Landauer, T. (1999). The Intelligent Essay Assessor: Applications to Educational Technology. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning, 1(2). Winston-Salem, NC: Wake Forest University. Available:

Haertel, G., & Means, B. (2000). Stronger Designs for Research on Educational Uses of Technology: Conclusion and Implications. Menlo Park, CA: SRI International. Available:

Hickey, D., et al. (2000). Defining Dimensions of Collaboration and Participation. CILT Seed Grant Abstract. Available:

Hoadley, C. M. & Pea, R. D. (in press). Finding the ties that bind: Tools in support of a knowledge-building community. To appear in K. A. Renninger and W. Shumar (Eds.), Building virtual communities: Learning and change in cyberspace. New York, NY: Cambridge University Press. Available at

Hunter, B. (1998). Fostering Collaborative Knowledge-Building: Lessons Learned from the National School Network Testbed. Proceedings of the Annual Telecommunications in Education (TelEd) Conference. Austin, TX. November 14-15, 1997.

Intel (2003). Seeing Reason: Mindful Mapping of Cause & Effect. [WWW Document]. Available:

International Schools Cyberfair (2003).

Jonassen, D. (1999). Computers as Mind Tools for Schools (2nd ed.). Prentice-Hall.

Kali, Y., Bos, N., Linn M., Underwood, J., and Hewitt J. (2002). Design Principles for Educational Software. Interactive symposium at the Computer Support for Collaborative Learning (CSCL) conference, Boulder, Colorado.

Lamon, M., Reeve, R., & Scardamalia, M. (2001, April). Mapping Learning and the Growth of Knowledge in a Knowledge Building Community. In M. Scardamalia (Chair), New Directions in Knowledge Building. Annual meetings of the American Educational Research Association. Seattle, WA.

Linn, M. C. & Hsi, S. (2000). Computers, Teachers, Peers: Science Learning Partners. Mahwah, NJ: Lawrence Erlbaum Associates.

Mergendoller, J. R., Maxwell, N. L., & Bellisimo, Y. (2000). Comparing problem-based learning and traditional instruction in high school economics. Journal of Educational Research, 93(6), 374-382.

Orrill, C. H. (2000). Learning objects to support inquiry-based online learning In D. A. Wiley (Ed.), The Instructional Use of Learning Objects: Online Version.  Retrieved May 5, 2003, from the World Wide Web:

Pea, R. D. (1993). Seeing what we build together: Distributed multimedia learning environments for transformative
communications. Journal of the Learning Sciences, 3(3), 285-299.

Pea, R.D., Tinker, R., Linn, M., Means, B., Bransford, J., Roschelle, J., Hsi, S., Brophy, S., & Songer, N. (1999). Toward a learning technologies knowledge network. Educational Technology Research and Development. Vol. 47, No. 2.

Ravitz, J. (2002, June). Demystifying data about technology impacts in schools. Paper presented at the National Educational Computing Conference, San Antonio, TX, June 18, 2002.

Ravitz, J. (2001). Will technology pass the test? PT3 Vision Quest on Assessment in e-Learning cultures.

Ravitz, J. (1997). Evaluating Learning Networks: A special challenge for Web-based Instruction? In Badrul Khan (Ed.), Web-based Instruction. Englewood Cliffs, NJ: Educational Technology Publications, 361-368.

Ravitz, J. and Lake, D. (1996, June). An authentic learning tool for teachers: the OII WWW Site Evaluation Form. FSU/AECT Conference on Distance Learning. Tallahassee, FL. Available:

Ravitz, J. & Serim, F. (1997, April). Summary of First Year Evaluation Report for the Online Internet Institute. Edward F. Kelly Evaluation Conference. SUNY, Albany. Albany, NY. Available:

Recker, M. M., Walker, A., & Wiley, D. A. (2000). Collaboratively filtering learning objects. In D. A. Wiley (Ed.), The Instructional Use of Learning Objects: Online Version. Retrieved May 5, 2003, from the World Wide Web:

Riel, M. (1996). The Internet: A land to settle rather than an ocean to surf.  Telecommunications in Education News, 7 (March-June) 10-17.

Rose, K. & Gomez, L. (2003, April). Using Assessment Conversations to Promote Student Learning: A Comparative Analysis of Effects on the Amount, Control, and Quality of Feedback during Student Presentations. Paper presented at the Annual Meetings of the American Educational Research Association. Chicago, IL.

Ruiz-Primo, M., Schultz, S., Li, M., & Shavelson, R. (1999, June). On the cognitive validity of interpretations of scores from alternative concept mapping techniques. CSE Technical Report 503. Los Angeles, CA: UCLA Center for the Study of Evaluation. [WWW Document]. Available:

Russell, M. (2000). It’s Time to Upgrade: Tests and Administration Procedures for the New Millennium. Secretary’s Conference on Educational Technology. Washington, DC: U.S. Department of Education. Available:

Shepard, L. (1995). Using Assessment to Improve Learning. Educational Leadership, 52(5). Available:

Shulman, L. (1999, May). The scholarship of teaching for meaningful learning. Plenary address at annual meeting of the Center for Innovative Learning Technologies. San Jose, CA.

Stiggins, R. (1997). Student-centered classroom assessment (2nd ed.). Upper Saddle River, NJ: Merrill.

Underdahl, J., Palacio-Cayetano, J., & Stevens, R. (2001). Practice makes perfect: Assessing and enhancing knowledge and problem-solving skills with IMMEX software. Eugene, OR: International Society for Technology in Education. Available:

West, C., Farmer, J., & Wolff, P. (1991). Instructional Design Implications From Cognitive Science. Englewood Cliffs, NJ: Prentice Hall.

Wiley, D. A. (2000). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. In D. A. Wiley (Ed.), The Instructional Use of Learning Objects: Online Version. Retrieved May 5, 2003, from the World Wide Web: