by: John Caddell
Let's first pose a problem. You've put in place a new performance evaluation system and spent a year conducting reviews using it.
How's it working out for you?
Your program will have target objectives: improved employee satisfaction, perhaps, or increased personnel retention. But the first indicator is difficult to baseline and measure, and the second takes a long time to show a discernible change.
Complex initiatives...vague indicators of success...the hunger to know whether something costly and labor-intensive was worthwhile.
International aid programs face this problem every day, and from that domain has emerged a new method for monitoring these types of programs.
It's called Most Significant Change. (Here's the official guide, written by Rick Davies, the creator of the approach, and Jess Dart.) At its most basic, this deceptively simple approach asks field workers (or first-line managers, in the business context) to elicit anecdotes from the people affected, focusing on what they see as the most significant change that has occurred as a result of the initiative, and why they think that change occurred. These dozens or hundreds of stories are passed up the chain and winnowed down to the most significant, as judged by each management layer, until finally one story is selected.
It's not numbers, or graphs, or ROIs. It's a story, with an explanation, and behind it a collection of other significant stories. A story that describes a real experience, reviewed, defended, and selected by the people charged with the success of the program.
Why might this monitoring method work for programs like our performance evaluation system? I'll discuss that in another post.