Sustainability: Out-Live Out-Last Out-Reach

Improvement Must Be the End-Game for Evaluation

Mark Jenness

The longer I am involved in the world of educational evaluation, the more convinced I am that the evaluation enterprise must be primarily about IMPROVEMENT. In a paper my colleague Zoe Barley and I wrote in the early 1990s, at the invitation of the National Science Foundation, we said that "evaluation should be designed and implemented to serve a primary function of program improvement." Emphasis should be placed on using evaluation to shape programs and enhance their effectiveness while they are in progress. I continue to believe that this should be the underlying reason for engaging in evaluation.

It doesn't make sense to me to expend the time, energy, and financial resources required to conduct high-quality evaluations if what we learn isn't used to make informed decisions about how to make things better, especially at the project level. It should be noted that this view is not universal among evaluators, nor among all funding agencies and policy makers. Many still hold to more traditional notions of evaluation as a pursuit of accountability: those that don't "measure up" should be terminated. There are certainly some situations in which this approach can and should be applied. However, at the actual program implementation level, too much focus on accountability detracts from using evaluation to improve programming and maximize investment.

EVALUATORS AS EDUCATORS. Evaluators, including those working with grant-funded programs like the National Science Foundation Local Systemic Change projects, must see themselves as educators. In Cronbach's "95 Theses about reforming program evaluation" (1985), thesis #93 says, "The evaluator is an educator; his success is determined by what others learn." We agree. An important role for evaluators is to provide information that informs the decisions and actions of project staff, participating teachers and administrators, funding agency staff, and other stakeholders, and at the same time to help them make the best and most effective use of that information. In turn, project staff and other stakeholders must see evaluators as partners in the educational improvement effort. My experience as an educational evaluator suggests that evaluation is most effective when evaluators and project stakeholders work collaboratively to realize the goals of the project. Only when everyone sees the value of using evaluative data for project improvement (not just trying to prove whether something works; leave that to a more formal research study) do evaluators and stakeholders become a team.

USING DATA TO INFORM DECISIONS. In this more collaborative approach, evaluators play multiple roles. For the purposes of this brief discussion, however, I want to focus on the role of the evaluator in helping project stakeholders use data to inform their decisions. For a project to be effectively "data-driven," information must be pertinent, accurate, timely, and in a usable form. In many cases, the evaluator must also assist staff in actually using the data to make decisions (and, in some cases, trying to prevent misuse of the data).

Pertinent, Relevant, and Reasonable. Data are likely to be relevant and pertinent if evaluators and stakeholders (especially project staff) work together to identify the kinds of information that can be used most effectively. Data collection must be purposeful. Collect only data that are likely to inform decision-making about how to improve a program (most of these data will also be useful in determining project results). Don't spend all the energy and resources on collecting data; there may not be enough left for the equally important analysis and use of the data.

Accurate. No matter what data are collected, accuracy must be a high priority. Data need to be collected, compiled, and analyzed in ways that maximize the accuracy of results. Using inaccurate data only leads to inappropriate or faulty decision-making.

Timely. Quick turnaround of accurate and useful data is one of the great challenges for evaluators. Program decision-making often precedes receipt of evaluative data. Program staff face constant deadlines for implementing project activities, and they often decide what to do without the benefit of pertinent evaluation findings (although we also find that some program staff make decisions without the benefit of data even when it is available in a timely manner). As an evaluator, I work hard to synchronize evaluation and project activities so the data are delivered in time to make informed decisions. Our goal is to avoid generating "historical" documents: reports containing information so old as to be unusable for project decision-making (although they might have some use for determining project impacts). This, too, requires ongoing collaboration and communication between the evaluator and program staff.

Usable Form. Perhaps our most frequent problem in encouraging data use for program improvement is designing effective ways to present data to stakeholders. Most of our clients have projects with multiple and diverse stakeholders. For example, a current project includes the following key stakeholders: core project staff, regional project staff, teacher participants, administrator participants, and funding agency staff (and their specific constituents). Data are collected in a variety of ways: surveys, interviews, lesson observations, school site visits, and student test scores and products. The data are compiled, analyzed, and reported. The challenge is to provide "user-friendly" data presentations to diverse audiences so they can understand and use the data to improve their work. The data must come in digestible bites. Another of Cronbach's 95 Theses (#49) highlights this challenge: "Communication overload is a common fault; many an evaluation is reported with self-defeating thoroughness."

Using the Data (and Preventing Misuse of Data). It is not unusual to have clients who need the evaluator's assistance in translating the data (even when presented in "user-friendly" forms) for decision-making, dissemination, and reporting. It is at this point in the process that data can easily (though not always intentionally) be misreported or misrepresented as they are summarized and synthesized. When the evaluation effort is a collaborative one, we are able to help clients "make sense" of the findings and translate them for decision-making and dissemination.

KINDS OF DATA FOR IMPROVEMENT. The kinds of data collected for decision-making to improve programming are not dissimilar from what might be collected for accountability (though how the data are used is quite different in the two situations). In a current middle school mathematics improvement project for which we are serving as evaluators, data are collected annually in the following ways: an annual survey of participating teacher leaders; a pre/post survey of non-teacher-leader participants; alternate-year lesson observations in the classrooms of teacher leaders; annual evaluation site visits to participating schools to conduct interviews (teachers, administrators, parents, students) and observe on-site professional development; pre/post teacher math content tests during summer institutes; end-of-summer-institute and other post-professional-development session questionnaires; documentation of mathematics curriculum development and instructional materials adoption; debriefing interviews with core and regional project staff; pre/post student content test administration; and monitoring of state-level student achievement test scores. Reports based on specific data collection activities are provided as the data become available. Annual reports synthesize all data collected during that cycle. Summary and audience-specific reports are also prepared as data become available. Although the primary emphasis of the evaluation work in this situation is on program improvement, the same data can be used to report progress toward goals.

A FINAL NOTE: As educators, will we have to continue to respond to funding agency and policy maker requests for data for accountability purposes? That's part of today's reality, even if a bit misguided. However, if we let those demands overpower us, the real value of evaluation will be lost. Carol Weiss, a nationally recognized educational researcher and evaluator, said it succinctly in remarks made in 1966: "The basic rationale for evaluation is that it provides information for action. Its primary justification is that it contributes to the rationalization of decision-making. Although it can serve such other functions as knowledge-building and theory-testing, unless it gains serious hearing when program decisions are made, it fails in its major purpose." In 2002, this is still pertinent and more important than ever.

