Sustainability: Out-Live Out-Last Out-Reach
Using Data in Discussions of Sustainability: Some thoughts from a mid-point LSC

Mike Howard

I appreciate the opportunity to share some thoughts about data and sustainability from an evaluator's perspective. First, a brief backdrop. The LSC with which I work is just past the midpoint of its funding period, providing professional development (p.d.) to K-6 teachers in 48 schools across a rural five-county area to support their implementation of kit-based science modules. The modules were selected to align with the state's grade-level standards for science. A participating teacher is introduced to the kits one at a time, over a three-year period, principally through summer institutes with academic-year follow-up. Participation early in the project was based on individual volition, so the first cohort represents the more "open" teachers; the third cohort has more of the "reluctant" teachers, who received stronger administrative encouragement (or mandate) to attend. This school year, the first cohort of teachers is completing its first time implementing all four units for their respective grade levels.

As I see it, the role of evaluation for an initiative like an LSC is to work in four areas, each of which has formative and summative aspects:

  1. provide evidence of the quality and effectiveness of the activities of the LSC in meeting its stated objectives (outcomes);
     
  2. document the quality and effectiveness of the initiative's basic strategies in creating a trajectory toward its long-term goals (impact);
     
  3. provide evidence of substantive system changes that will continue past the term of the initiative (residue);
     
  4. examine the structure and operation of the initiative itself -- assets, challenges, leadership, responsiveness, etc. -- which affect its success at implementing its strategies (context and efficacy).

In addition, evaluators play two important process roles with the project:

  1. help the initiative "tell its story" to key stakeholders -- NSF and others -- in ways that make sense
     
  2. act as a "critical friend" to the initiative, providing regular feedback and advice from the perspective of someone with the interests of the initiative at heart but not vested in its specific activities; someone who can help "separate the initiative's rhetoric from its reality" (the Jiminy Cricket role)

Thinking about sustainability involves looking closely at impact and residue, as well as using the "story" of the initiative to make a case for the efforts to continue. The right kinds of data, appropriately used, can help the LSC stay on course, make needed changes, document its effects, and make its case to stakeholders. But focusing on inappropriate data can distract the initiative from its work. How do you tell the difference? Here are some thoughts for reaction and conversation.

Probably the area with the greatest potential for "use and abuse" of data is the search for effects of LSC professional development on student performance. Increasing student learning is ultimately what we're all about, and it is foremost on the minds of important stakeholders (especially local administrators). So there is great pressure to "sell" the success of the initiative as quickly as possible, using student impact data that resonate with the local audience. The perils here are many, and they revolve around the question of what data to look at, and when. Issues include the following:

  • Looking for student impact too early in the initiative
     
  • Restricting data used to results from an instrument (such as a state assessment) that "speaks" to the audience but does not reflect the full range of student effects expected by the LSC
     
  • Looking at global measures, such as a "total science" score, when the initiative has focused on a more narrowly defined set of content objectives

The list above is by no means exhaustive, but represents a few of the issues we've discussed in the unfolding evaluation of the LSC I work with.

Let's take a look at the first bullet. As I mentioned, there is great pressure to "sell" the success of the initiative as quickly as possible. I advocate, however, for an "Orson Welles" approach. As he used to say in the commercial, "We will sell no wine before its time." Similarly, we should not try to "sell" LSC impact on students until the activities have borne fruit and that fruit has had a chance to ripen. We must ask at what point in the project's work it is reasonable to expect students to feel a significant effect of their teachers' professional development. In my LSC, we've taken the position that we won't look for changes in student performance until participating teachers have had the opportunity to implement the materials and strategies consistently throughout the school year. Since the LSC design gives teachers three years to begin using all the modules, we are just now at the point where data on teacher implementation might be worth examining in a systematic manner with respect to broad student effects.

Explaining this to teachers and administrators has been something of a challenge, but not as tough as we expected. Once we help them think about it, they agree that a teacher who doesn't implement the materials consistently and effectively probably won't see the same effects as a teacher who does. And a teacher who is only implementing one six-week module, no matter how skillfully, should probably not expect much change in students' "total science" scores on the state assessment. Implementation in a consistent, comprehensive, and effective manner -- this is the "trigger point" for expecting to see the student impact that local personnel want the project to report. It's a matter of monitoring to tell when we've reached that point.

Of course, we haven't been sitting on our hands for four years, waiting for the magic moment. To help the project build its case and keep stakeholders informed, we have gathered and analyzed data on intermediate steps of participants' journey toward full implementation. An oversimplified version of that journey could be outlined as follows (it should be viewed as an interacting set of elements, not a linear sequence). I've included a few examples of the types of data we have examined and how they were useful to the project.

  • Working on heads, hearts, and hands. This means not only enhancing participants' knowledge and skills in content and pedagogy, but also building perceptions and proclivities that lead them to try out what they are learning. One area we looked at was the relationship between length of participation in LSC p.d. and changes in attitudes, perceptions, and practices (a minimal sketch of this kind of analysis follows the list). Overall, we saw a positive relationship. The project used this kind of information to build confidence that the p.d. was doing its job and to enlist administrators' help in getting the more reluctant teachers to the institutes.
     
  • Taking the first few bites. (from the old adage, "How do you eat an elephant? One bite at a time.") This means that participants are trying things out, using the materials and strategies, taking note of positives and learning from disappointments. We looked for evidence of early use of the materials and examined teachers' feedback to identify instructional strategies that were and were not being used as they tried out the materials. This information helped project personnel refine p.d. and support activities to focus on problem areas, and gave them some tangible items to discuss with administrators.
     
  • Getting comfortable enough to start thinking about "how" and "why" as well as "what" to do. This means participants continue using the materials and strategies, moving over time beyond novice/mechanical levels of use and incorporating the materials and strategies as a regular part of their teaching throughout the school year. Because of the phased-in schedule of the kits in the LSC I work with, teachers are expected to be growing more comfortable with the earlier kits as they are trying out the later ones. With proper opportunities and support, the teachers' growing familiarity with the strategies and with the structure and flow of the program should help fuel their engagement in more general reflection and discussion about teaching science and about looking for evidence of student learning. We are examining teacher feedback to test this notion.
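
To make the first bullet's example concrete, here is a minimal sketch of how length of participation might be related to survey change scores. Everything in it is assumed for illustration (the data values, the variable names, and the use of a simple Pearson correlation); it does not reproduce the project's actual instruments or analysis.

    from statistics import mean

    # (p.d. hours, pre-survey score, post-survey score) per teacher; values are made up
    teachers = [
        (120, 2.1, 3.4),
        (80,  2.5, 3.1),
        (40,  2.8, 2.9),
        (100, 2.0, 3.2),
        (20,  3.0, 2.9),
    ]

    hours  = [t[0] for t in teachers]
    change = [t[2] - t[1] for t in teachers]   # post minus pre

    def pearson(x, y):
        """Pearson correlation between two equal-length lists of numbers."""
        mx, my = mean(x), mean(y)
        cov   = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        return cov / (var_x * var_y) ** 0.5

    print(f"p.d. hours vs. attitude change: r = {pearson(hours, change):+.2f}")

In practice the analysis would of course use the real survey scales and account for cohort and school effects; the point of the sketch is only the shape of the question being asked of the data.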

For brevity, I only mention the participant-related aspects of the "implementation journey." There are also system-related aspects (administrative support, resources, collegial interactions, etc.) that interact with the participant aspects and that we collect data to examine as well.

As I mentioned above, we are at the point in the project's schedule where the first cohort of teachers has been oriented to the full complement of modules for their grade levels and has used each module with students. We are now engaged in gathering data to gauge the degree of current implementation. This includes both observation and interview data, with a rubric to assess the status of key characteristics, resulting in a classification of each school's implementation level. I'm still a bit hesitant about trying to tie this to student performance, but it is a big question from the local folks, one that will affect discussions about sustaining the LSC's p.d. and ongoing support functions. So the project needs us to start looking at the issue, and we'll do it as best we can. Politically, the state assessment must be featured in the data and subsequent analysis, even though we know that concerns exist over the assessment's alignment and scope relative to the LSC materials. Again, we'll do the best we can to use the assessment data in ways that make sense and are defensible in terms of the LSC goals. The political aspects of how the project needs to make its sustainability case do affect what we look at and how.
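
Since the questions below turn partly on how "implementation level" gets operationalized, here is a minimal sketch of the rubric-to-classification step described above. The characteristics, the 1-to-4 scale, and the cut points are hypothetical placeholders; the project's actual rubric and levels are not reproduced here.

    from statistics import mean

    def implementation_level(scores):
        """Map rubric scores (1-4 on each key characteristic) to a coarse level."""
        avg = mean(scores.values())
        if avg >= 3.5:
            return "full"
        if avg >= 2.5:
            return "developing"
        if avg >= 1.5:
            return "early"
        return "minimal"

    # One school's (made-up) rubric scores from observation and interview data
    school = {
        "use of all four modules": 3,
        "inquiry-oriented instruction": 2,
        "attention to evidence of student learning": 2,
        "time allotted to science": 4,
    }
    print(implementation_level(school))   # average 2.75 -> "developing"

Whatever form the real rubric takes, the design question is the same: which characteristics count, how they are weighted, and where the cut points sit between levels.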

With that in mind, here are some questions to think about and, I hope, respond to during our panel conversation.

  1. If "implementation" is the central issue for an LSC, how is it defined? What data do you gather?
     
  2. Formative evaluation includes supplying information that shows the project where it may be off course or needs to try another tack. Such information, while useful for the project's internal decisions, may detract from longer-range efforts to build stakeholder support for sustainability. Should evaluators do analysis and reporting that stays inside the project, separate from the analysis and reporting done for public circulation?
     
  3. It's easy to say, "wait until it's reasonable to expect student impact" but much harder to pinpoint when to start looking at the student data. How do you know?
     
  4. Are state assessment results as big a deal for your projects as they appear to be for mine? Do your constituencies respect other kinds of student outcome data that may be better aligned with the LSC vision -- assessment of process skills, thinking/reasoning skills, positive attitudes, student work products evidencing conceptual understanding, etc.?
     
  5. I mentioned that the "implementation journey" has system-related aspects that affect what the teachers are able to do at each stage. What do you feel are the critical data about the district/school context that should be factored into monitoring implementation status?

© TERC 2002, all rights reserved.