College Students' Attitudes Toward Portfolio
Assessment as an Alternative to Traditional Tests

Mary Ann Robinson
Department of Behavioral Studies and Educational Technology
University of South Alabama

This research investigated college students' (n=187) attitudes toward portfolio assessment following a one-semester, undergraduate course in computer applications. It examined their beliefs regarding the amount of work involved, amount of learning that occurred, individual learning needs, student-teacher involvement, self-reflection and assessment, integration of skills, pacing, and evaluation preferences. Findings revealed that the majority of students endorsed the use of portfolio assessment. This paper discusses the importance of attitudes in the teaching and learning environment, how attitudes influence motivation, and the interrelationship between motivation and learning. It also discusses education reform issues related to assessment. 


This paper discusses the importance of college students' attitudes in the teaching and learning environment, how their attitudes influence motivation, and the interrelationship between motivation and learning. It also discusses education reform issues related to assessment such as developing tests that measure the intended student outcomes as defined by the course objectives, the need for alternative assessments as a result of the increasing demand for Web-based courses, and the current trend in portfolio assessment.

In this research, portfolios are defined as physical, organized collections of students' work, based on a set of established performance criteria, that demonstrate skill acquisition and an understanding of how the individual parts interrelate and contribute to the whole. Traditional tests are defined as those administered at the end of instruction, with little or no performance feedback given prior to the tests; they include multiple-choice, fill-in-the-blank, short answer, true-false, and the like.

Why Assess Attitudes, Beliefs, and Values?

There is evidence that learners attach attitudes, beliefs, and values to their knowledge (Bandura, 1977). These attitudes, beliefs, and values influence motivation and shape behavior. Motivation often plays a significant role in the learning process. Some researchers and practitioners believe that there is a strong interrelationship between motivation and learning (Ames & Ames, 1984; Dweck, 1986; Keller, 1987; McCombs, 1984). Some view motivation as the driving force for learning; others view the process in the reverse. Still others claim that the relationship is cyclical, with motivation and learning influencing each other (West, Farmer, & Wolff, 1991).

Of primary importance to practitioners are performance improvement and the factors contributing to it. Because attitudes, beliefs, and values influence motivation and learning and thus affect student performance, it is imperative to obtain feedback from the major stakeholders in the educational environment, the students, on the techniques that motivate them to perform. Feedback is one part of a balanced assessment of the worth of classroom methodologies and procedures. As practitioners develop and fine-tune their courses, it is essential that they consider customer satisfaction and address these two questions: "Is this technique stimulating to my students?" and "How can I help students succeed and allow them to control their outcomes?" (Keller, 1987).

Education Reform Issues Related to Assessment

Developing Tests that Measure the Intended Student Outcomes

In recent years, the approach to developing tests has changed. Much of the change is due to the increased emphasis on behavioral objectives (Dick & Carey, 1996). Behavioral objectives, also referred to as performance objectives or instructional objectives, are precise statements of what students should be able to do at the end of instruction (Mager, 1975) and should be communicated to the students at the beginning of instruction. These objectives guide the development of instructional techniques and help the students determine whether they are on target with their learning.

There is a strong relationship between objectives and tests. The objectives provide the basis for developing the tests, and the tests should appropriately measure the behaviors, or student outcomes, stated in the objectives. There are five categories of learning outcomes: intellectual skills, cognitive strategies, verbal information, motor skills, and attitudes (Gagne, Briggs, & Wager, 1988). Often, because of convenience, ease in grading, or lack of skill in test construction, educators utilize tests such as multiple-choice, true-false, and fill-in-the-blank as their only methods of assessment. These types of tests most often measure verbal information, unless the educator is extremely skilled in test construction. It is important that practitioners design test items according to the type of learning outcome. For instance, in a golf lesson, if the objective states that the student will be able to hit a golf ball, the test item should have the student demonstrate hitting the ball; it would be an inaccurate measurement to have the student describe how to hit the ball. The learning outcome is a motor skill, whereas describing how to hit the ball is recall of verbal information, the lowest level of objectives. In addition, informing the learners of the objectives at the beginning of the instruction communicates to them what they have to do in order to succeed. It is only right that they are tested accordingly (Dick & Carey, 1996).

Web-based Courses and the Need for Alternative Assessments

With the increasing interest in distance education and the proliferation of Web-based courses, there is an increasing need for alternative assessment practices. Because of the potential lack of security and other issues surrounding online learning, many educators are simply uncomfortable with Web-based tests of the traditional genre, such as multiple-choice, true-false, and short answer, and are searching for alternative assessment methods. Alternative assessment includes nontraditional methods of evaluating mastery of content and skills. Some examples include conducting research, producing a video, creating models and charts, designing a computer program or game, and developing portfolios. When properly constructed, these assessments test higher-level skill development such as analysis, synthesis, and evaluation.

Current Trend in Portfolio Assessment

The use of portfolios as alternative assessments has been a continued topic of discussion among educators and researchers. An examination of the literature revealed a lack of consensus among educators on the definition of a portfolio, on precisely what a portfolio should include, and on the manner in which it should be assessed (Adams, 1995; Kennedy, 1992; Mokhtari & Yellin, 1996; Parsons, 1998; Perkins & Gelfer, 1993; Winograd & Jones, 1992; Worthen & Leopold, 1992). There appeared to be a common belief, however, that portfolios are organized collections of student work and that utilizing portfolios to assess student learning could lead to self-reflection and assessment, motivation, higher-cognitive skill development, integration of skills, and enhanced student performance.

Much research is available on the use of portfolio assessment with elementary and secondary students; a limited body exists at the college level. This study was conducted to explore the viability of portfolio assessment, as perceived by college students, in a computer applications course in which a project portfolio was created.

Method

This investigation was the first level of an exploratory study in which reactions to an assessment method were examined. The approach was retrospective and comparative. The first level of Kirkpatrick's (1998) four-level model was used to measure participant reactions to portfolio assessment. As explained in this model, evaluating reaction equates with measuring customer satisfaction. The data gathered in this investigation were used to obtain feedback regarding the viability of portfolio assessment, as perceived by the students, and to establish standards for future courses.

Participants

The participants were 187 undergraduate students at a major public university located in the southeastern part of the United States. The students were education majors, primarily juniors and seniors, enrolled in a computer applications course. The sample consisted of 150 females and 37 males who had a mean age of 26 years. The vast majority had no prior experience with portfolio assessment or computer applications beyond word processing. Participation was voluntary.


Course Content

Ten sections of a computer applications course were taught by the same instructor over three semesters. The students developed a project portfolio, in notebook format, with delineated sections for the course content, which included use of the Internet and the Microsoft® Office 97 applications: Excel, Access, Word, and PowerPoint. With the exception of an oral presentation, the portfolio constituted their entire course grade.

Course Procedures

At the beginning of the semester, the students selected a fictitious fund-raiser of choice as their portfolio project for the semester. The development of the portfolio began with the conceptualization of the fund-raiser. As the computer applications were taught, the students engaged in the following activities: researching fund-raising ideas and possible beneficiaries on the Internet; producing a budget in Excel; conceptualizing committees to organize the fund-raiser; producing a roster of volunteers to work the fund-raiser in Access; writing a form letter in Word to solicit support and explain the details of the fund-raiser; creating personalized letters by merging the form letter with the database; developing a flyer in Word to promote the event; creating a high-impact, graphical PowerPoint slide presentation to serve as the orientation for the fund-raiser volunteers; creating and publishing a Web site to announce the event to the public; and giving an oral presentation to the class to demonstrate the PowerPoint slides and Web site.

In addition to class sessions, the instructor met with the students, individually and as groups, throughout the semester to provide guidance and feedback on their projects. The students collaborated with their peers for additional ideas and suggestions for improvement. Specifically, the students engaged in revisiting their work, reflecting on ways in which to improve the products, and then revising the documents. All final documents produced were organized into a project portfolio.

In an effort to limit subjectivity and provide uniform standards to ensure fairness in grading, the subjects were given criteria checklists for all sections of the portfolio. The checklists also served as self-assessment tools against which the students could gauge their progress.


Questionnaire

A questionnaire was developed to assess the students' beliefs regarding portfolio assessment. The questions were generated from a review of the literature on portfolio assessment. The instrument was piloted with a sample of 30 students who had completed the same portfolio-assessed course during the semester prior to this investigation. Results of the pilot questionnaire were used to reduce misinterpretation and remove potential bias from leading questions. The revised instrument was a 14-item, closed-form questionnaire with three response options per item. Items 1-12 surveyed the students' beliefs regarding learning when using portfolio assessment as an alternative to traditional assessment methods. Item 13 surveyed the frequency with which the students would elect to be graded using portfolio assessment. Item 14 surveyed the students' belief regarding the degree of advantage the portfolio would give in a job interview. Because the vast majority of the students had no prior experience with portfolio assessment, the questionnaire was administered at the end of the course only. The responses were analyzed and reported as percentages.

Observational Data

Because of the weaknesses inherent in questionnaires administered at the end of a program, such as fixed responses, lack of opportunity to clarify responses, biased responses, and assumed participant honesty, a log of observational data was kept throughout the course during class and laboratory sessions and student conferences. Positive and negative observations regarding enthusiasm, collaboration with peers, self-reflection, revisiting and revising work, and progress were recorded. The observational data were summarized.

Results

Questionnaire Data

The results of questionnaire items 1-12 revealed that the majority of students endorsed the use of portfolio assessment. The majority of students reported that, compared to traditional assessment, portfolio assessment better allowed them to integrate the skills learned in class, address their learning needs, reflect on and assess their own learning, and work at their own pace. Further, the majority reported that portfolio assessment involved about the same amount of work as courses utilizing traditional assessment methods, supported the process of improvement, was a better measure of their ability to perform the class assignments, and enabled them to learn more. A summary of the responses and corresponding percentages is reported in Table 1.

Table 1. Student Comparisons of Portfolio Assessment to Traditional Assessment Methods (N = 187)

The results of questionnaire item 13 indicated support for using portfolios as assessment tools. The results of item 14 revealed that the students felt the portfolio would give them an advantage in a job interview. A summary of the responses and corresponding percentages is reported in Table 2.

Table 2. Student Grading Preference and Usability of Portfolios in a Job Interview (N = 187)

Observational Data

Some frustration was exhibited by several students in the beginning of the semester. This tapered off quickly as the students became familiar with the computer applications and class procedures and began to understand and follow the criteria checklists. By the end of the semester, almost all students exhibited a high level of enthusiasm and confidence.

Enhanced performance was noted when the students worked collaboratively with their peers. Initially, the students were uncomfortable offering and accepting constructive criticism, but by mid-term they had determined that it was a very effective means of professional growth and that the collaboration typically resulted in higher grades. By the end of the semester, the students had become critical evaluators of their own work and the work of their peers. They exhibited increased self-reflection and revisited and revised their work to near perfection.

Discussion

This research supported the use of portfolio assessment as an effective alternative to traditional methods, as perceived by college students in a computer applications course. It supported the belief shared by other researchers and practitioners that using portfolios to assess student learning could lead to self-reflection and assessment, motivation, higher-cognitive skill development, integration of skills, and enhanced student performance.

As with the implementation of most new methods in which students encounter the unfamiliar, the initial level of frustration was anticipated. The students were accustomed to taking tests and moving on to the next concept regardless of whether the content was mastered. The portfolio assessment method required the students to revisit their work often and think critically about their projects.

The amount of work involved in implementing portfolio assessment in the college classroom, as evidenced by this study, increased significantly over that of the more traditional methods discussed in this paper. Examining drafts of the students' work, conducting student conferences, and providing guidance and feedback were extremely time-intensive tasks. Benefits to the students, however, were substantial as evidenced by their enhanced performance and enthusiasm. Benefits to the institution are also potentially substantial. Customer satisfaction often leads to repeat business, and, in this arena, that equates with student retention.

References

Adams, T. L. (1995). A paradigm for portfolio assessment in teacher education. Education, 115(4), 568-570.

Ames, C., & Ames, R. (1984). Systems of student and teacher motivation: Toward a qualitative definition. Journal of Educational Psychology, 76, 535-556.

Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice-Hall.

Dick, W., & Carey, L. (1996). The systematic design of instruction (4th ed.). New York: Longman.

Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41, 1040-1048.

Gagne, R. M., Briggs, L. J., & Wager, W. W. (1988). Principles of instructional design (3rd ed.). Fort Worth, TX: Holt, Rinehart, and Winston.

Keller, J. M. (1987, October). Strategies for stimulating the motivation to learn. Performance and Instruction, 1-7.

Kennedy, R. (1992). What is performance assessment? New Directions for Education Reform, 1(2), 21-27.

Kirkpatrick, D. L. (1998). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler.

Mager, R. F. (1975). Preparing instructional objectives. Palo Alto, CA: Fearon.

McCombs, B. L. (1984). Process and skills underlying continuing intrinsic motivation to learn: Toward a definition of motivational skills training intervention. Educational Psychologist, 19(4), 199-218.

Mokhtari, K., & Yellin, D. (1996). Portfolio assessment in teacher education: Impact on preservice teachers' knowledge and attitudes. Journal of Teacher Education, 47(4), 245-252.

Parsons, J. (1998). Portfolio assessment: Let us proceed with caution. Adult Learning, 9(4), 28-30.

Perkins, P. G., & Gelfer, J. I. (1993). Portfolio assessment of teachers. Clearing House, 66(4), 235-237.

West, C. K., Farmer, J. A., & Wolff, P. M. (1991). Instructional design: Implications from cognitive science. Englewood Cliffs, NJ: Prentice Hall.

Winograd, P., & Jones, D. L. (1992). The use of portfolios in performance assessment. New Directions for Education Reform, 1(2), 37-50.

Worthen, B. R., & Leopold, G. D. (1992). Impediments to implementing alternative assessment: Some emerging issues. New Directions for Education Reform, 1(2), 1-20.

ITFORUM PAPER #47 - College Students' Attitudes Toward Portfolio Assessment as an Alternative to Traditional Tests by Mary Ann Robinson, University of South Alabama. Posted on ITFORUM November 10, 2000. The author retains all copyrights of this work. Used on ITFORUM by permission of the author.