Nancy Love
A high school in an affluent suburb and its principal are recognized and financially rewarded by the state because their 10th graders scored 11% higher on the state assessment than 10th graders the year before. But the school had done nothing to improve its programs; staff readily acknowledged that a particularly academically strong group of 10th graders came along that year. At the same time, a high-poverty school is labeled as low-performing despite the fact that its staff worked hard to implement a new standards-based mathematics program. The staff are deeply demoralized.
A teacher begins to question whether she can continue to use an inquiry-based approach to science instruction when the state assessment emphasizes science facts, not big concepts or inquiry skills.
A school improvement team decides to provide more test preparation in mathematics and to tutor a small number of students whose scores on the state assessment fall just below the cutoff between the needs-improvement and proficient performance levels. There is no discussion of the school's tracking policy, the rigor, focus, and coherence of its curriculum, or the effectiveness of its instruction.
High-Stakes Testing: Data-Driven Mania and Trivial Pursuits
You don’t have to go far or look hard to find data being abused. These three examples, all from my recent experience, result from the same abuse of data: single and imperfect measures of student achievement, high-stakes standardized tests, are being used to size up the effectiveness of schools, dole out financial rewards and punishments, and determine students’ futures. As a result, when most educators think of data, they think of high-stakes tests and high-stakes tests alone. Given our policy-induced test obsession, this is not a surprise.
But it is a big problem. It’s a big problem because most high-stakes tests themselves are seriously flawed: riddled with race and class bias (in his review of socioeconomic bias in test items, James Popham [2001] found 45% of the science items to be biased) and focused, for the most part, on low-level outcomes. Any scientist or researcher will tell you that basing high-stakes decisions or conclusions on any single measure or test is a bad idea. When it comes to student learning, no one test, even a good one, can possibly give us a full picture of what students understand and can do in relation to national or local standards and curricula. Teachers I work with find the high-stakes test among the least useful of the data sources they can tap for insights into how to improve instruction, not nearly as informative as the assessments that are part of their standards-based curricula or as the careful examination of students’ work and thinking.
Moreover, the activity generated by high-stakes tests is often short-sighted, unfocused, and in direct contradiction to the vision of mathematics and science education reform many of us have devoted our careers to. Activities like months of drill on test-like items, a narrowing of the curriculum away from cognitively demanding concepts and skills, and a deemphasis of entire subjects that aren’t high-stakes, such as science, may or may not produce small, short-term gains on standardized test scores. But are they bringing us closer to what we want for our students and our schools? Are they improving the practice of teaching? The access students have to a rigorous curriculum? The strength of the professional culture? The capacity of the school to continuously improve results over the long haul?
Data-Driven Dialogue: The Antidote to Data-Driven Mania
In this atmosphere of high-stakes testing, what are educators dedicated to mathematics and science education reform to do? We can’t afford to ignore the high-stakes tests, nor can we use them as an excuse to stop reaching for high standards for all kids. What are some effective ways to use data as a lever for improving student learning, taking into account the political realities but staying true to our commitments?
"Our mathematics department meets weekly," explained a teacher at City-on-a-Hill, an inner city public high school in Boston. "There are two meetings a year where we DON’T look at student work." The team uses a process for examining work that begins with defining a clear purpose for looking at the work. They always do the mathematics task themselves and share their strategies before digging into the student work. Then they closely examine pieces of work the teachers bring, first making observations, then drawing inferences and identifying questions for further investigation.
For the last two years, they have focused on this question: Why are our students doing so poorly on the open-response problems on the state mathematics assessment when we do these kinds of problems regularly as part of our Interactive Mathematics Program? As they studied student work and thinking over time, they identified some key reasons. First, students lacked some basic mathematics vocabulary. Second, teachers were hovering too much over students in the classroom, explaining the problem to them, breaking it down, coaching them step by step. So even though the students were familiar with mathematics problem solving, they did not have enough independent practice. Teachers worked on both areas and, over a two-year period, quadrupled students’ average score on open-response questions.
This example illustrates many of the following elements of effective data use:
1. Build a professional culture.
Effective use of data happens in the context of a robust professional learning community, where teachers and administrators are crystal clear about their vision and their commitments, relentlessly focused on results for students, and collaborative and reflective about their practice. In the absence of this kind of community, I believe schools are data-immune. So go slow to go fast: take the time to have the tough conversations about what we really mean by "all kids," study the research, dig into the standards documents, and build shared commitments. The school in the example above was united around high expectations for students and continuous learning for staff. In this context, the staff were able to use the high-stakes assessment as a catalyst for inquiry into their practice and for improvement in their implementation of a standards-based program.
2. Create collaborative structures.
If teachers are going to dig into data, generate strategies to improve student learning, and monitor their results, they need time to meet weekly in department meetings, vertical teams, grade-level teams, or study groups.
3. Engage in data-driven dialogue and collaborative inquiry.
If data are going to provide the momentum for improvement, teachers need to make collective sense of the data, own the problems, and embrace solutions together. Data-driven dialogue, a process where groups come to deeper and shared understandings of data, is an important precursor to decision-making. Data-driven dialogue requires that participants practice norms of collaboration and gain skills in data analysis. They learn to separate data from inference, to bring out multiple perspectives, to test interpretations of data against additional data and relevant research, and to explore not just the most obvious explanations but the root causes of problems. Digging into why some students aren’t learning requires student learning data as well as other kinds of data, such as surveys of classroom practice, curriculum maps, enrollment data, classroom observations, or student interviews. This collaborative inquiry, fueled by dialogue, lays the groundwork for the decision-making and action planning that follow. Avoiding polarization or false consensus, staff truly commit to a specific improvement goal, take collective action, and monitor results.
4. Learn what you can from standardized tests. Give teachers the data they need!
Very often I find that teachers don’t even have access to the data that could help them the most. The summary test results that get posted on the web or published in the newspaper provide little guidance to teachers about what to do to improve instruction. They are the headlines, but not the story. Part of the story lies in the disaggregated data that uncover gaps in performance between whites and minorities, rich and poor, girls and boys, special and regular education students. Another part of the story is in the item-level data, where teachers can uncover which content strands students are having trouble with and which particular items are troublesome. When possible, focus on items that get at knowledge you care about, like students’ ability to solve complex problems and communicate mathematical reasoning. Give some of those items to your students and ask them to explain their answers, as the teachers did at City-on-a-Hill. Examining student work will give you greater insight into their thinking and make clearer what you can do to improve their learning.
5. Use multiple measures, including common grade-level, subject area, or course-specific assessments.
Making Instructional Improvement Go!
Under Superintendent Rick DuFour’s leadership over the last 17 years, Adlai Stevenson High School District 125 in Lincolnshire, Illinois, reduced its failure rate from 23% to 1.4% and was transformed from a low-performing district into one of the highest-performing districts in the state by virtually every measure, including high-stakes tests. One key to its success is the use of common assessments designed by teachers to assess the knowledge and skills teachers agree are central to their curriculum. Teachers administer these assessments four times over the school year and meet weekly to analyze results, target specific goals for improvement, generate ideas to try out in their classrooms, and monitor their results. They have improved continuously not only on their local assessments but on high-stakes tests as well. In schools where standards-based curricula are in place, teachers can choose assessments right from their units to administer collectively, following the same process. Adding common grade-level or course assessments to your assessment system provides a clear focus for collaborative inquiry into improving student learning and is motivating to teachers.
It is the data-driven dialogue that takes place in department, course-level, or grade-level teams, not the rank-ordering of schools in the newspaper, that provides the real momentum for improving student learning. Schools that understand this learn to suck whatever value they can from high-stakes tests and not be derailed by them. The good news is that when they do the right thing for the right reason, as at City-on-a-Hill or Adlai Stevenson, they end up raising test scores too.
Popham, W. James (2001). The Truth About Testing. Alexandria, VA: ASCD.