Our Beliefs about Evaluation
By the Noyce program evaluation group.
Evaluations of teacher education programs can take a variety of formats. These formats reflect different philosophies of, approaches to, and uses of the evaluation process and its outcomes, so it is important to articulate the beliefs about evaluation that underpin our project. Overall, we believe evaluation can have three major orientations: improvement, knowledge, and judgment. Improvement-oriented evaluation includes formative evaluation and notions such as total quality management and quality assurance. Knowledge-oriented evaluation is more akin to research and is used to influence thinking, test models, and generate lessons learned. Judgment-oriented evaluations are summative in nature; they determine merit or worth in comparison to other approaches or to standards, and are similar to the notion of quality control.
Our working definition of evaluation is that it is the process of delineating, obtaining, and providing useful information for decision makers (Stufflebeam et al., 1971). The belief that evaluation is a process means that it is dynamic, can be modified as it progresses, and is responsive to changing priorities. Delineation is one of the most important aspects of the evaluation process. It encompasses the communication between the evaluation team and the stakeholders that allows stakeholders to express what they want from the evaluation and in what format; it defines what the evaluation should accomplish. Obtaining information refers to all of the rigor related to methodological issues, including instrument development, data collection techniques, ethical considerations, and efficacy. Providing information is tied to the delineation phase: what sort of information, and what communication formats, would be most effective in accomplishing the agreed-upon purpose of the evaluation. The notion of useful information reflects our belief that evaluation should be designed with the needs of the stakeholders in mind. In other words, the evaluation should be utilization focused (Patton, 1997).
The application of our definition implies close connections with the evaluation stakeholders, methodological approaches aligned with the evaluation questions, and multiple communication streams. We also believe in adherence to the Program Evaluation Standards throughout the entire process. The development of evaluation questions is a contextualized process dictated by the needs of the situation and the various stakeholder groups. We view evaluation as a contract between the participating groups, with all sides having responsibilities vital to ensuring the success of the evaluation. We believe the selection of a model for evaluation should likewise be based on the needs of the situation. Some evaluation models we support are management-oriented/CIPP (Stufflebeam et al., 1971), consumer-oriented/quality control (Scriven, 1967), expertise-oriented (Eisner, 1991), naturalistic/participant-oriented (Fetterman, 1994), responsive (Stake, 1975), educative values-engaged (Greene, in press), and deliberative democratic (House & Howe, 2000).
We believe evaluation methods should be selected to provide the best possible answers to the agreed-upon evaluation questions, rather than restricted to particular methods. This means that evaluations are generally a combination of different methodologies, and care must be taken to determine the best ways to combine the results obtained from the different methods (Greene & Caracelli, 1997; Lawrenz & Huffman, 2002).
We also recognize that the scale of the projects involved in an evaluation, from small local projects to large multi-site projects, affects the types of methods used. Additionally, to help ensure that the evaluation is utilized effectively, the methods used should be aligned with the beliefs of the stakeholders.
In summary, we see evaluation as a process of delineating, obtaining, and providing useful information for judging decision alternatives.
Eisner, E.W. (1991). Taking a second look: Educational connoisseurship revisited. In M.W. McLaughlin & D.C. Phillips (Eds.), Evaluation and education: At quarter century. Ninetieth Yearbook of the National Society for the Study of Education, Part II. Chicago: University of Chicago Press.
Fetterman, D.M. (1994). Empowerment evaluation. Evaluation Practice, 15, 1-15.
Greene, J.C., & Caracelli, V.J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. In J.C. Greene & V.J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms. New Directions for Evaluation, No. 74 (pp. 5-18). San Francisco: Jossey-Bass.
House, E., & Howe, K. (2000). Deliberative democratic evaluation. New Directions for Evaluation, 85, 3-12.
Lawrenz, F., & Huffman, D. (2002). The archipelago approach to mixed method evaluation. American Journal of Evaluation, 23, 331-338.
Patton, M. (1997). Utilization-focused evaluation (3rd ed.). Thousand Oaks, CA: Sage.
Scriven, M. (1967). The methodology of evaluation. In R.E. Stake (Ed.), Curriculum evaluation. (American Educational Research Association Monograph Series on Evaluation, No. 1, pp. 39-81). Chicago: Rand McNally.
Stake, R.E. (1975). Program evaluation, particularly responsive evaluation (Occasional Paper No. 5). Kalamazoo: Western Michigan University Evaluation Center.
Stufflebeam, D.L., Foley, W.J., Gephart, W.J., Guba, E.G., Hammond, R.L., Merriman, H.O., & Provus, M.M. (1971). Educational evaluation and decision making. Itasca, IL: F.E. Peacock.
© 2015 by the Regents of the University of Minnesota. The University of Minnesota is an equal opportunity educator and employer. Last modified: 14 June, 2012. For questions or comments, contact Frances Lawrenz, at email@example.com