



Jean King

Professor; Minnesota Evaluation Studies Institute (MESI) Director


Ph.D., Cornell University, 1979, curriculum and instruction
M.S., Cornell University, 1978, curriculum and instruction
A.B., Cornell University, 1971, English

Organizational Leadership, Policy, and Development
430F Wulling Hall
86 Pleasant Street SE
Tel: 612-626-1614

Download Curriculum Vitae [PDF]

Areas of Interest

Interactive evaluation practice
Participatory approaches to program evaluation
Evaluation capacity building
Evaluator competencies

Profile

Given that my mother was a third-grade teacher and my father a school administrator, I’ve long felt at home in schools. As an adult, I became a practitioner in my own right as a seventh- and ninth-grade English teacher in upstate New York. (I like to joke that I’m a junior high school teacher gone bad.) After earning my graduate degrees in curriculum and instruction at Cornell, I moved to New Orleans, where I spent a decade running the secondary teacher education program at Tulane University and teaching courses related to middle and high school certification: the social foundations of education, methods, and student teaching. Outside of teacher education, my research centered in part on the functioning of the research and evaluation unit in the Orleans Parish Schools.

In 1989 I moved upriver to the University of Minnesota as the founding director of the Center for Applied Research and Educational Improvement (CAREI), a collaborative research organization designed to link university research with school-based practice. At CAREI I worked closely with school superintendents. I left in 1993 to help develop the evaluation studies program in the Department of Organizational Leadership, Policy, and Development (OLPD). The program now includes a master’s degree and a Ph.D. in evaluation studies, a post-master’s evaluation certificate, and a Graduate School minor in program evaluation. For several years I also helped coordinate a professional practice site at Patrick Henry High School in Minneapolis.

From 1999 to 2001, I took a leave from my professorial role to serve as an internal evaluator/coordinator of research and evaluation for Anoka-Hennepin ISD #11, now the state’s largest district. Anoka-Hennepin is Garrison Keillor’s alma mater, and its children and professional staff are truly above average. I was quickly reminded that it is far easier to talk about educational change than to make it happen, and that evaluation use remains a challenge for many. While at Anoka-Hennepin, I had the opportunity to work on a number of participatory evaluations, including a special education project with a 50-member study committee, and to collaborate with central office administrators to build an evaluation infrastructure. The passage of No Child Left Behind demanded a refocusing of district resources to expand standardized testing, making it difficult to sustain program evaluation. As luck would have it, though, I have continued to work with ISD #11, most recently with my colleague Jennifer York-Barr on an evaluation of the district’s Elementary Curriculum Specialization Project (2006-2009), and then on another evaluation of its special education programs combined with evaluation capacity building (2009-2011).

As an evaluator who spends a lot of time teaching, I’m constantly bridging the research and practitioner worlds. For thirty years I have studied educational practice, consistently focusing on evaluation use and the mechanisms of organizational change. My work increasingly concerns the role that practitioners’ systematic use of data plays in effecting and documenting change, both in schools and in other organizations. Since moving to Minnesota, my primary research emphasis has remained program evaluation, with special interest in participatory evaluation, evaluation capacity building, and evaluator competencies.

With my grounding in the world of schools and social service organizations, my research has addressed two broad topics: (1) evaluation practice in these settings, especially during change efforts, and (2) the role and function of program evaluation, including the use of both the evaluation process and its results. The ultimate goal of my work as it has evolved is to determine how to foster and support evaluation processes (by whatever name) in educational and social service organizations over time. The terms I use to describe what I study have evolved as well: from action research and process evaluation, to participatory or collaborative evaluation (where evaluators work with program staff and participants), and finally to evaluation capacity building (purposeful efforts to build evaluation infrastructure and skills into an organization, also known as organizational learning). Since introducing the phrase in a 1998 speech, I have often referred to my focus as “free range evaluation”: a collaborative evaluation process that lives freely in the world and that, when it survives (and it often does not), is more viable because it lives in a natural setting and reproduces itself in its organizational context. Free range evaluation is longitudinal, and it focuses on building the capacity of individuals and organizations to sustain evaluation activities. I have been fortunate to give workshops and presentations on these ideas around the world, including in Sweden, England, Israel, Japan, Australia, New Zealand, Singapore, and South Africa.

The past couple of years have featured the completion of two important projects and the beginning of others. In 2012 my collaborator Laurie Stevahn of Seattle University and I completed a small book that is part of a kit on needs assessment. Ours is the final book in the series; it discusses what people can do to use needs assessment data to implement change in their organizations. Once that book was in press, Laurie and I returned to our magnum opus, a book on what we call interactive evaluation practice, or the “interpersonal factor.” Over ten years in the making, it applies theory-based principles from social psychology and evaluation research to program evaluation processes and records what we have learned in more than a quarter century of evaluation experience. With pride I can now say that Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation was published last year. Also in 2012, I began working as a senior evaluation adviser in an entirely new context: the National Center for Interprofessional Practice and Education in the University’s Academic Health Center. Applying evaluative thinking and capacity building in the transforming health care system has proven to be a challenging but always interesting process.

On a personal note, I am a proud but aging tent camper who, with my husband, purchased a pop-up camper in 2000 with the express goal of camping at all 63 Minnesota state parks and as many national parks as possible. I love children and cats and have two of each (Ben, age 31, and Hannah, age 29; Gus, age 9, and U.B., age 1).

Selected Publications

  1. Podems, D., & King, J. A. (Eds.). (2014). Professionalizing evaluation: A global perspective on evaluator competencies [Special issue]. Canadian Journal of Program Evaluation, 28(3).

  2. King, J. A., & Stevahn, L. (2013). Interactive evaluation practice: Mastering the interpersonal dynamics of program evaluation. Newbury Park, CA: Sage Publications.

  3. King, J. A., & Rohmer-Hirt, J. (2011). Internal evaluation in American public school districts: The importance of externally driven accountability mandates. New Directions for Evaluation, 132, 73-86.

  4. Stevahn, L., & King, J. A. (2010). Needs assessment phase III: Taking action for change (Book 5). Newbury Park, CA: Sage Publications.

  5. Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377-410.

  6. King, J. A., & Ehlert, J. (2008). What we learned from three evaluations that involved stakeholders. Studies in Educational Evaluation, 34(4), 194-200.

  7. Toal, S. A., King, J. A., Johnson, K., & Lawrenz, F. (2008). The unique character of involvement in multi-site evaluation settings. Evaluation and Program Planning, 32(2), 91-98.

  8. King, J. A. (2008). Bringing evaluative learning to life. American Journal of Evaluation, 29(2), 151-155.

  9. King, J. A. (2007). Developing evaluation capacity through process use. New Directions for Evaluation, 116, 45-59.

  10. Volkov, B., & King, J. A. (2007). A checklist for building organizational evaluation capacity. Evaluation Checklists website, Western Michigan University.

  11. Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43-59.

Updated March 2014

