Stage 6.2  Improving the Process

To be able to repeat the same process is a laudable achievement in itself. However, we should aim to improve the process each time we repeat the cycle. "Improvement" can mean a number of things, including any of the following:

  • to include more people as participants in the change process
  • to work on tougher problems
  • to work on concerns that affect more people more deeply
  • to complete the cycle more quickly
  • to complete the cycle at less cost, measuring "cost" in any number of ways: not just dollar cost, but effort cost, person-hour cost, disruption cost, etc.
  • to achieve outcomes that provide more benefits, with more certainty, to more people.

To improve the process we have to look with a critical eye at what we have done in the first round, and then we have to take deliberate steps to change how things are done during the second round and subsequent rounds. Thus, there are four requirements:

  • that we evaluate what has been done,
  • that we take a retrospective look at the various steps along the way,
  • that we go through a redesign exercise, and finally
  • that we reach out to include more members of the system as active participants.



Evaluation

The term "evaluation" conjures up a lot of ideas that change agents will view with varying degrees of skepticism, and perhaps even loathing. One image is of a college professor with a stack of questionnaires and observational tools, bearing down on the poor innovator, noting the smallest error. Indeed, some evaluations are commissioned with the hope that the resulting data will consign the whole project to the trash can. Thus, evaluators, far from being friends of change, can be its enemies. Certainly premature evaluations, evaluations based on unrealistic expectations, and evaluations commissioned for suspect motives by administrators hostile to the aims, the means, or the actors in a change project all fit in that category. Nevertheless, some form of evaluation is essential to self-improvement and self-renewal. In this section we review what such evaluations might look like.

Consider first of all the types or levels of evaluation that are possible. As an example, at the high end is the designed experiment with change effort sites and "controls" (i.e., "no change effort" sites), and random assignment of "subjects," which could mean students, classrooms, schools, or even, in theory, whole districts. Only this type of evaluation actually yields data that meet the most rigorous criteria of scientific knowledge. But for better or for worse, educational settings can very rarely be so neatly ordered. The logistics are mind-boggling and the costs out of sight. Even so, this type of evaluation is worth mentioning because any evaluation that fails to meet this standard can be faulted and hence dismissed by experts trained in academic science. Thus, any hostile administrator or school board can kill virtually any project, however worthy, on such "scientific" grounds. Change agents should have enough knowledge and awareness of the experimental models and methods to counter such criticisms on the basis of the gross impracticality and inappropriateness of such evaluations in almost all educational change settings.

Next down the ladder is the quasi-experiment, in which outcomes are measured in quantitative terms and the change settings are "matched" with other settings comparable on key dimensions. Even here the costs are high. There must be involvement of university-trained experts, and both the "outcomes" measures and the "matching" process are always controversial (e.g., have we matched on the appropriate dimensions? Was there bias in the selection process? Are any two educational settings really matchable? Are really important outcomes measurable? and so on). The quasi-experiment is also fairly rare among change projects unless there is substantial involvement from the federal government, a university, or both. Many of the innovations available to change agents, such as those identified in the Catalogue of the National Diffusion Network, Educational Programs That Work, 1995, have been through such a process and have passed muster as innovations with "validated" outcomes, provided that you follow exactly the same process as used by the developers. On the other hand, it is unlikely that you and your district, operating in 2015 and beyond, will have comparable resources at your disposal. Thus, while such projects are worthy of trial in other settings, their evaluative strategy is not likely to be one which you can copy.

More modest field projects evolving spontaneously, with or without the aid of trained change agents, should aim for evaluation strategies which are tailored to their specific needs, most especially the need for self-improvement through successive change efforts. The minimum level is mere documentation, i.e., recording in summary form what you have been doing, preferably as you are doing it. You went to see someone? ... write down who you saw, when, why, and what happened as a result. You had a meeting? ... ditto. Your group drew up a plan? ... save a copy. You did such and such to build relationships? ... note what was done. You examined the problem diagnostically? ... note who was involved and what they came up with. You searched for and acquired resources? ... where did you search? ... what did you find? All these very simple details add up to a narrative of what happened. The credibility of the narrative is in the details. Note that the Change Model can help you develop such a narrative by providing the relevant categories of activity through its model of "stages."

You will also want to assess and record outcomes in some fashion, both for your own uses and to show others that the project "works." Standardized test results on student performance are always impressive and persuasive with some audiences, but such measures may be grossly inappropriate for your project, even when they are attainable. Measured outcomes defined in behavioral terms tied to your original objectives can also be impressive evidence for those outsiders who take the time to study what you have done rather closely. For the more casual observer or the busy administrator, however, well-chosen anecdotes can sometimes be just as persuasive, and more readily absorbed and remembered than carefully assembled quantitative data.

Choose an evaluation plan that is appropriate both to your objectives and to the size and scope of your project. Formal evaluations can be costly and potentially disruptive if not done well and on a scale proportionate to your overall effort. A large project deserves a significant evaluative sub-project with its own budget, 5-10% of the total, and its own independent and qualified project director. It should attempt to quantify both process and outcomes, but measures should be agreed to in advance between the evaluator and the project leadership. The evaluator should also be required to issue a report with recommendations for continuance and specific modifications. Specific implications should be spelled out for change agents, administrators, teachers, or others who will be responsible for continuing, expanding, and redesigning the effort.

A small project requires only a conscientious effort at observation and note-taking by the change agent or a colleague close to the project and sympathetic to its objectives. The same categories of process and outcome should be recorded but in simpler form. "Data" can be in the form of estimates as well as anecdotes. As with a large project there should be a written report but it can be in the form of "notes to ourselves." The purpose should be renewal, i.e., guidance on what to do on the next round, how to improve and extend what we have done, and how to generate greater impact.

Whether large or small scale, outcomes assessment should extend beyond the accounting of planned and expected outcomes. Evaluators should always be on the lookout for unanticipated outcomes, both positive and negative. What does the project do for the morale of the group? To what extent is it seen as something disruptive or exciting? Does it change attitudes or choices? Are there effects on non-participants? on parents? on community? on administrators? Sometimes these unanticipated outcomes can be reason enough to continue (or to kill) a project. They may also yield valuable clues to what the next project should be.



The retrospective

Gathering up whatever you have in the way of evaluations, you should now look back at what happened and consider the implications for renewal and for re-creating the process. It is good to take some time out for this step, simply to think through what has happened, step by step. The stages of the Change Model can be very useful as a framework for doing this, and the questions to put for each stage are:

  • How much time and effort was devoted to this stage?
  • Was it enough or too much?
  • Was this process or sub-process executed successfully?
  • If not, what could we have done to make it better?
  • Would more planning or a better plan have led to more success?

Jot down notes on your reflections, and if there are important others who acted in change agent roles or participated in the process, ask them to do the same.

Then set aside some specific time to go over the notes and discuss them with others if possible. Also go over the written evaluation (if there is one) and bring in the evaluator (if someone was so designated) for more discussion of what the evaluation means and what its implications are.

Redesign of the process

Your retrospective analysis now puts you in a position to redesign and recreate the change process for the next round. This is "renewal" in the most elementary sense. Your redesign may involve adding more steps, or making the process more or less complex, more acceptable, more doable, etc.


Imposing more structure

Such redesign may simply be a matter of making the process a little more coherent and orderly. The first time around you may have simply "gone with the flow," doing what seemed possible and practical as you went along, without much forward planning and without organizing your effort into any kind of stages. This is fine; the good change agent is a realist and a pragmatist, always practicing the art of the possible. However, the redesign is an opportunity to become more organized, perhaps applying the Change Model or some other framework to the actual activities in real time. Thus, you may want to add steps that were ignored on the first round or expand steps that were slighted.


Cutting steps, shortening steps (Streamlining)

At the other extreme you may decide that you have been too orderly, following a lockstep scheme that sometimes got in your way. The change agent certainly has to be opportunistic and pragmatic, and a plan that is too tight or too detailed may also be counterproductive. The desire to streamline the process and make everything more efficient may tempt the change agent to cut certain people or groups out of the process. This can be dangerous if it leads to people feeling they have been left out. Thus, as you go about streamlining, make sure you stay connected to all the people who were involved in the first round unless it is clear that they want out of the process. Furthermore, if someone really does "want out," you had better find out why as part of your evaluation and retrospective.

Strengthening skills

Part of the first round of evaluation should be a consideration of whether you or key members of your team have the requisite skills in different areas, e.g., in diagnosis, relating to others, acquiring new knowledge. Are there ways to strengthen weak skill areas or bring on new team members with such skills?

Adding resources

Probably no change project ever has enough resources to do it completely right, but given fiscal realities, etc., are there ways you can add to resources on the second round? Can more people be encouraged to volunteer their time? Are there other spaces that can be used for the project? Are there special funds or grants that might be available if the project is tilted in a certain way?

Reaching out for a more inclusive process

Of all the things that might be done to make the second round more successful than the first, the inclusion of more people has to be at the top of the list. As we have noted over and over again, change is largely a people process: informing people, getting people better connected to each other, getting people concerned about and committed to change, and getting them to work toward a common goal. Much of your first round activity involved getting to know the system and getting acquainted with members, hopefully including the key movers and shakers.

Spotting the key persons and groups

Now, as you prepare for the second round, you should have a better definition of who the key people are: the innovators, the opinion leaders, the resisters, and the other stakeholders. Some of these people may have been involved in the first round, but many others probably were not. In fact, you may only have come to realize who some of them were as you wrapped up the first round, attempting to extend the innovation.

Creating special events to bring people into the process

The most obvious way to start widening the circle is to involve more people in the redesign effort, starting with the retrospective as discussed above. Consider the need to have special time set aside and special events to bring this off. It could be a review conference at which opinions and observations are elicited, perhaps employing some of the brainstorming rules discussed in Stage 4. The size and shape of such a meeting should be tailored to the size and scope of the anticipated second round activity.

Using media to increase sense of inclusiveness across groups

To include more people we must extend the lines of communication; we must use the media that are used by the people we want to reach. Something has already been said on this subject under Stage 5, "Extending," but there are still a few points which should be added. First of all, consider your choice of media to invite greater inclusion in the change process. There are at least four considerations here:

  • the size of the group we want to reach;
  • the geographic dispersion and cultural diversity of this group;
  • the types of media these groups are tuned in to; and
  • the appropriateness of such media for conveying the types of messages we want to convey.

The front line of media for local reach-out, especially to specialized audiences, is a digital channel that serves this particular audience, e.g., a message to an identified database of users or participants, or a social media group. If everybody reads it, or some part of it, that is where you want to place the stories on what you have done, and that is where you want to solicit interest and recruit new members for the change effort.

There is also a place for some types of mass media presentations of the change message. These may be announcements or general interest articles. Even though such items may be aimed at the general population, they will undoubtedly spark special interest, and even action, among innovators who are so disposed. Such general interest messages can also be prepared in digital formats, which not only make them presentable on radio and television broadcasts but also make them available on YouTube, where they can be used in much more targeted efforts and repeated over and over again to the same or different audiences.

In reaching out for a more inclusive set of supporters within the system, remember three important rules of communication:

  • never rely on only one medium to get the message across;
  • never rely on only one message or one type of message; and
  • never rely on a message delivered only at one time and in one place.

In other words, you must respect the individual differences, habits, preferences, and schedules of different members of the system, all of whom may become your supporters and advocates for change.