Posted by: Mark Jenness
Posted on: May 06, 2002 at 10:32 AM
Message:
Mike's point about getting at teacher content knowledge is a challenging one for evaluators. In one of our LSCs and in another project, we have done some "teacher testing" in the form of a subject-matter test given before and after a two-week summer institute. Both projects ran over several summers. We had very cooperative project managers in both cases, and they made it very clear to participants before the summer institute that teachers would be given "tests." It was also made very clear that the results would be kept confidential: only group data would be reported.

The first year, teachers were anxious about the tests, for two reasons, I think: one, they didn't want to do poorly on the test, and two, they were still unsure how the results would be used. By the second summer there was less anxiety, but still some.

For evaluators and staff, the next issue was deciding just what the results meant in terms of teacher learning. There was clearly a difference between pre and post scores (an improvement). We did report it to the funders, but the best use of the data was not so much to determine impact on teachers as to assess the quality of the summer institutes, since there were several different ones each summer. We were able to look across sites to see why there might be differences in teacher learning.
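For readers curious what that cross-site look can boil down to, here is a minimal sketch (not the project's actual analysis) of tabulating mean pre/post gains per institute site while reporting only group-level results. The file name and column names (site, pre_score, post_score) are hypothetical placeholders.

import csv
from collections import defaultdict

# Collect each teacher's gain (post minus pre) under the site that ran
# their institute. Column and file names are assumptions for illustration.
gains = defaultdict(list)
with open("institute_scores.csv", newline="") as f:
    for row in csv.DictReader(f):
        gain = float(row["post_score"]) - float(row["pre_score"])
        gains[row["site"]].append(gain)

# Report only group-level summaries, consistent with the confidentiality
# promise made to participants.
for site, site_gains in sorted(gains.items()):
    mean_gain = sum(site_gains) / len(site_gains)
    print(f"{site}: n={len(site_gains)}, mean gain={mean_gain:.1f}")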