Private and public agencies have invested large sums of money to reform districts
over the past decade. Exhilarated over receiving handsome grants, district and
school innovators happily throw themselves into efforts at putting their
ideas into practice. By the end of the first or second year, funders will ask,
rarely gently, what the plans are for the continuation of the innovation after
the grant ends. If their enthusiasm for the reform remains undimmed, project
leaders scramble for advice on how to continue the entire project or its key
features. They often ask system insiders and outsiders, "What do we need to
do in our district not only to sustain our hard-fought efforts but also to spread
the reform's positive effects to the rest of the system?" Funders, and the innovators
who depend upon their monies, are deeply interested in the "how-to" of institutionalizing
their work.
I am also. But I believe other, seldom-asked questions must be addressed along
with the "how-to" question. I begin with the popular how-to question and then
move to the equally important, if not more important, question of why a project
should be sustained.
How to sustain your project?
Because little research has been done on sustainability, especially for projects
such as the National Science Foundation's (NSF) Local Systemic Change (LSC)
or similar ventures, there is very little to draw on for guidance. I take what
is available in that research but also lean heavily upon my observations of
school reform over almost five decades and my prior experiences as a superintendent
and teacher.
There are generic strategies for establishing the conditions and infrastructure
that institutionalize a project and distribute its effects across a system, what
policy elites (top public and private officials, policymakers, and foundation
executives) call "scalability." Note that I label these "generic" to distinguish
them from strategies that are unique to particular districts. I will discuss
those context-bound ones below.
Generic strategies.
Staffing. Holding on to key project personnel and recruiting and training new
staff are crucial to maintaining stability in leadership and commitment to project
goals. To be present at the birth of an innovation can be exhilarating (and
ultimately exhausting). To move to the next stage of consolidating the organization
that has emerged from the project and sustaining the commitment of participants,
while trimming and adding to the effort as it moves into the larger system,
carries much less of an adrenaline rush. Although less exhilarating to participants,
solidifying project operations while maintaining quality is essential to
the delicate crossover from a district project to a district program.
Staff turnover, particularly among those who have led the innovation in its early
stages, often undermines sustainability. Because project leadership often changes,
recruitment of successors either from within the existing cadre of participants
or from a pool of similarly committed outsiders becomes imperative.
Of equal importance is paying attention to introducing new staff to the vision,
prevailing culture, and expected outcomes of the project so as to maintain continuity.
Political. Sustaining a project intact, or preserving its main features (a
tactic to keep in mind if continuation runs into difficulties), will not occur
until particular decision makers replace external funding with a line item in
the operational budget. In most cases, that will be the superintendent and ultimately
the school board. For that to occur, project leaders need to make top district
administrators knowledgeable about the project's efforts, its connections to other
district programs, and how spreading features of the project across the system
advances district goals.
Beyond convincing top district administrators of the worth of the project,
building strong ties to, and credibility with, other departments in the system
is another important task. Most LSC projects, for example, coordinate their
activities with varied instructional and curricular departments, individual
schools, and in some cases, unions, universities, and local corporate leaders.
To the degree that the project's work helps those administrators and agencies
to do their work well, they become allies in the project leaders' quest for
institutionalizing the innovation.
Internal allies may be willing to redirect some resources to link their work
to that of the project. External allies may commit publicly to continuing their
participation through contributing staff and monies. Although such tasks impose
additional responsibilities on project leaders in the last few years of external
funding, building direct and indirect support among those who ultimately determine
district budget allocations is essential in translating rhetorical endorsements
into actual mechanisms of institutionalization.
Structural. Getting a line item in the operational budget converts a project
into a regular district program. But there are other organizational mechanisms
beyond the budgetary. Some LSC projects integrate into ongoing district structures
to enhance their effort. In professional development, for example, redirecting
district funds to project-influenced summer institutes, continuing workshops
during the school year, and negotiating with the union for additional days in the
annual calendar set aside for teachers and principals all work project initiatives
into routine district operations, independent of agency grants.
The hardest feature of a project to replicate across a district is its unique
culture (the norms, expectations, roles, and rituals that come to characterize
the community). Yet even with this difficulty there are mechanisms that can
at least make the growth of a similar culture possible. To use professional
development again for example, the program might build time for off-campus retreats
into district-wide or individual school calendars. The provision of early dismissal
days, allowing staff to use time during the regular school day for professional
development, is another possibility.
These generic strategies are common suggestions to innovators seeking sustainability.
The defect of leaning wholly on generic strategies is that each district is
different in its size, governance, demography, experience with innovations,
and political culture.
Does local context matter?
Yes, it does. Minding the contextual differences within and across districts
matters greatly in moving a successful project from soft to hard money and from
temporary acceptance within a district to routine legitimacy.
Few generic teaching strategies can be implemented as designed by others without
some amending by teachers and principals who work with students of different
ages, abilities, and ethnic, racial, and class mixes in schools and classrooms.
What is true of the school and classroom is also true of the district. Seeking
institutionalization for an externally funded
project in Chicago is quite different from doing so in Scarsdale, New York,
or Oakland, California. Knowing that context matters imposes upon project leaders
a search for insider knowledge of the district and much tactical savvy in constructing
staffing, political, and structural strategies adapted to local conditions.
This completes my answer to the question of how to sustain a project. Complicated
as the question is, and remembering that there is limited research on the issue,
I have offered a personal response that combines research and my experience working in schools
over the years. I turn now to the even tougher question facing those wishing
to continue their projects.
Why sustain your project?
In asking the "why" question, I want to explore the reasons that innovators,
past and present, have for continuing their work. Too often these reasons go
unexamined in the headlong rush to institutionalize a project within a district,
resulting in unfortunate compromises, altered goals, and disappointment. (I
omit the personal reasons that may motivate project leaders in their quest for
continuation: altruism, job security, increased visibility as a noted educator,
leaving a legacy, etc. Within public discussions, rarely are personal motivations
mentioned. I acknowledge that some of these are present in different measures
among varied individuals and often fuel the public reasons given to institutionalize
a project.)
For advocates of a curricular, instructional, capacity-building, or technological
innovation, the answer to the question is self-evident: we want to continue the
project because it works. It is a success. But what do "work" and "success"
mean? When these simple, everyday words are defined, hidden complexities emerge that
should be considered prior to beginning any effort to continue a project.
To policy elites, "a successful innovation" or "one that 'works'" often means
that the effort's intended goals are being (or were) achieved. In other words,
they did what they said they were going to do and they have the figures to prove
it. In a society where "bottom lines," Dow Jones averages, sports statistics,
and vote counts matter, quantifiable results often determine success. Thus,
when policy elites claim that a project is a success, they often point to the
achievement of desired outcomes, often expressed in numbers, to show that the
innovation has worked. In doing so, they are using an effectiveness standard.
For the last quarter-century, policy elites have used the effectiveness standard
to judge the success or failure of innovations and the quality of schooling.
From what students have learned in school to what graduates do after they leave
high school, measures of performance point to whether explicit goals have been
achieved. Primary indicators of effectiveness have been standardized achievement
test scores, rates of college attendance, improvements in teaching, and similar
outcome measures.
Note, however, that educational policymakers, public officials, and agency
funders subjectively set the desired goals for reforms and the measures to be
used to determine success. For example, national and state policymakers concluded
by the late 1970s that American public schools had declined in quality because
Scholastic Aptitude Test (SAT) scores had plunged. This widely reported
use of SAT scores as reliable measures of school performance fueled public support
for states raising academic requirements in the 1980s. It did not matter that
the test makers called such use of scores inaccurate; what mattered more to
public officials seeking support for their policies and media seeking high-profile
stories were quantitative measures that could be used in a numbers-conscious
society to establish rankings of schools, thereby creating easily identifiable
winners and losers.
Yet even here, test results proved ambiguous measures of whether a reform worked.
Consider that early evaluations of Title I of the Elementary and Secondary Education
Act (ESEA) in the late 1960s revealed so little improvement in poor children's
academic performance as to endanger congressional renewal of the program. Such
negative evidence gave media critics and national policymakers hostile to federal
intervention a reason to brand the War on Poverty programs as failures. Yet
unpromising test scores were insufficient to overcome the program's political
attractiveness to constituents and legislators. Since the early 1970s, each
successive president and Congress has used this popularity as a basis for allocating
eagerly sought funds to needy students in schools across the nation.
Thus, numerical data, the gold coin of the evaluation realm, dominates the
scene but is not always decisive in determining success. Other evidence drawn from
interviews, impressions, and unquantifiable indicators may well convince policy
elites and advocates that success has been achieved.
Policymakers, then, use the popularity standard to judge success. The spread
of an innovation and its hold on the imagination of voters, educators, and decision
makers, as documented by opinion polls and media reports, becomes an important
criterion and often translates into political support for top policymakers endorsing the reform.
The rapid diffusion of special education, bilingual education, new math and
science curricula, personal computers in schools, and professional development
since the 1970s offers obvious examples of innovations sweeping the nation.
Few funders or educators questioned the accelerating outlays of public funds
for these reforms or asked for measurable evidence to support these outlays.
Advocates viewed the new programs as worthwhile ways of coping with important
unmet educational needs of children and teachers. The very popularity of these
reforms became evidence to support media editors' and policymakers' judgments
that the reforms were, at least initially, resounding successes.
In addition to effectiveness and popularity as common ways of judging the success
of an innovation or reform, policy elites also use fidelity as a standard. The
fidelity standard assesses the fit between the initial design, the formal policy
adopted, the subsequent program it spawns, and the implementation of the reform.
Those using this criterion ask, "How can you judge the effectiveness of a reform
project if the innovation departed from the blueprint?" When the NSF, for example,
funds a curriculum or professional development project and stipulates that grantees
provide at least 100 hours of professional development to at least 100 teachers,
align their innovative curriculum to district and state frameworks, and build
partnerships with external agencies to enhance the project's effectiveness in
the district, NSF officials want implementers to adhere to the stipulations
because they believe that the desired outcomes (better science and math teaching
and learning) will be achieved.
The fidelity standard places great importance on practitioners' following the
designers' blueprint. When teachers and principals add, adapt, or even omit
features of the original design, policymakers heeding this standard say
that the policy and program cannot be judged effective because of the changes.
Another frequently used standard to judge success or failure is to ask whether
the reform has staying power. This longevity standard is plausible because in
public schools, where so many innovations last no longer than warm breath on
a cool window, a program that persists more than a few years is a signal achievement
to which advocates can point with pride. The comprehensive high school, the
Bay Area Writing Project, kindergartens, and (a more prosaic example) overhead
projectors were once innovations and, over time, have become so thoroughly
institutionalized as to be virtual commonplaces of schooling.
There are other innovations, however, which also have longevity but are no
longer recognizable because their original goals and practices have been abandoned.
A non-educational example would be the fight against infantile paralysis in
the 1930s that led to the founding of the March of Dimes organization. As vaccines
for polio became available in the 1950s and the incidence of the disease became
rare, the organization lost its primary reason for being. Yet today that very
same organization survives as a foundation fighting other childhood diseases.
Some educational innovations in the past have also survived but have changed
considerably: the Platoon School in pre-World War I America has evolved into
the modern elementary school; the Dalton Plan of the 1920s has been transformed
into a technique (making contracts with students over the work to be completed) that
both new and experienced teachers use; and effective schools innovations targeting
low-performing urban elementary schools in the early 1980s had become nationalized
by 2000 into the standards-based, test-driven accountability movement for all
American schools. None of these once brand-name innovations exists today, but
their features can be found in current programs.
Longevity and survival, then (note the differences between the two), provide
other standards by which success or failure can be determined. The press from
federal and private agencies for sustainability in projects, I suspect, is another
way of saying that the longevity criterion is being used to determine the worth
of an innovation.
These mainstream criteria (effectiveness, fidelity, popularity, and longevity),
alone or combined, are the ones most often used by policymakers, top private
and public officials, and the media to judge whether a project is successful.
For teachers and principals, who are largely responsible for implementing innovations,
however, these criteria have been imposed and are seldom openly discussed among
themselves; nor, for that matter, are they often discussed among those who make
district-wide decisions. Policymakers, not practitioners, use these standards
to judge a project's success and failure. When federal, state, and local policymakers
(and media reporters following their lead) talk about reforms and use these
criteria to determine success, their judgments carry much more weight than those
of teachers and principals because elected officials are authorized to act as legitimate
decision makers for the community.
But what criteria do these practitioners, the foot soldiers of every reform, use?
Seldom do teachers and principals use effectiveness, fidelity, popularity, or
longevity as standards to judge an innovation's worth. Of course, teachers seek
visible evidence of improved student academic performance, but what teachers
count as effectiveness is seldom achievement test scores; rather, it is students'
acquiring certain attitudes and values and displaying them in actual behavior
on both academic and nonacademic tasks in and out of the classroom. Practitioners
also value putting their personal signature on the reform and tailoring the innovation
to work with students in their classrooms. Teachers and principals view these
adaptations as conditions necessary to achieving their own outcomes as well
as those of the innovation.
To policymakers and those who design innovations, alterations in their design
become evidence of failure; to teachers and principals, however, the very same
modifications are viewed as healthy signs of flexibility, inventiveness, and
active problem solving in reaching effectiveness. How, practitioners ask, can
you determine whether an innovation is successful unless we adapt the change
to the unique conditions of this classroom, this school? This practitioner-derived
standard of adaptiveness (the flip side of the fidelity standard) becomes essential
prior to applying any other criteria.
But why is the adaptiveness standard, one that practitioners would prize in
a school reform, seldom invoked publicly? The question boils down to one of
power and status: whose standards count? When policy elites, drawing on research
findings (often displayed in numbers) and on their access to the media, place their
weight behind reforms, they have an automatic legitimacy that those at the bottom of
the organizational hierarchy, that is, practitioners, lack. Without the cachet
of scientific expertise, access to top officials, or entrée to reporters, individual
teachers and principals are stuck. Collectively, teachers have organized into
unions and, more recently, asserted their political clout through taking explicit
positions on school reforms. Yet in making policy and judging success, unions
still play a limited role.
Thus, when individuals or groups of teachers or principals do choose to adapt
innovations, they do so unobtrusively or, in some cases, engage in guerrilla warfare
with district administrators. Organizational legitimacy, the use of scientifically
derived data, and power often determine whose criteria are used to judge success
and failure. Finding out whose standards are being used to judge the worth of
an innovation, and the exact content of those standards, including what constitutes
acceptable evidence, becomes critical information in deciding whether and how
to continue a project.
To summarize, achieving explicit goals drives three of the standards used to
judge the success or failure of an innovation: effectiveness, fidelity, and adaptiveness.
All display evidence (albeit using different measures) to support claims of
success, but the power to determine which standards to use varies considerably
between policymakers and practitioners. Neither popularity, which values widespread
belief in the innovation's virtue, nor longevity, which values the age of the
innovation, is concerned with effectiveness.
Much of the disappointment that enthusiasts for an innovation experience as
they rush down the road of sustainability comes from the reluctance to ask the
simple question: why sustain the project? Answering the question means probing
which criteria for judging success and failure are being used by project staff
and district decision makers, and where, if at all, there are discrepancies that
can be eased (or at least made explicit and openly discussed) even before the
tough work of actually institutionalizing the innovation proceeds.
In offering a personal answer to these tough questions about sustainability,
I have skipped too hastily over some difficult dilemmas buried deep within the
concept of sustainability. Let me end with these and pose them as further questions
to consider in our online discussion. Here are a few that I believe need careful
deliberation in institutionalizing your projects.
- Whose standards should we use in judging our success?
In your project, what are the explicit and implicit standards used by NSF,
project participants, and top district officials in judging the success of
the innovation? Which ones carry the most weight? Why?
- What are the costs and benefits to the project and to the district in
negotiating support for sustaining the project?
There are inevitable tradeoffs in choosing whose standards to use for judging success,
in working to gain support from administrators and practitioners elsewhere in
the district, and in determining tactics to deal with unique local factors.
- Which is more important in the quest for sustainability: longevity
of the project's vision and goals, or working to have the project survive
in any fashion?
How much change is acceptable in the vision, goals, and work of the project
to secure an assured future?