Complex business problems need diagnosis, not packaged solutions


by: John Caddell

Dave Snowden, whose posts are always interesting and instructive, says this in a post today:

What I am finding is that the more accurately you can describe the situation, the less you need formal intervention methods. For example if I can show a statistically valid trend, supported by narrative then most people in leadership or management positions can work out what they need to do.

In other words: if you fully understand the problem, you don’t need
complex managerial methodologies to solve it. Over-relying on “five
practices” or “seven habits” or “four steps” amounts to short-cutting
the real difficulty of understanding complex situations. There are lots
of reasons, which Dave describes well in his post, why best practices
and case studies are not good guidance for action in most circumstances.

It is difficult to know what the problem is in a complex environment.
Standard assessments and surveys demonstrate this. Asking a thousand
employees, "How innovative are we on a scale of one to five?" produces,
at best, a pretty-looking chart that signifies nothing. At worst, it
can point you in a completely wrong direction. But everyone wants a
shortcut.

Exhibit A: the current romance with performance dashboards.
In my experience, at best dashboards point you to a situation that you
need to understand more deeply (“Is this drop in region 5 sales an
anomaly, or is there something substantial behind it?”). In no
circumstances is dashboard information enough to act on.

So the need is more for problem-understanding skills, and less for
problem-solving skills, meaning managers will need to get more in tune
with narrative sense-making. Here are two examples where intelligent
executives looked beyond the data, into the narrative mess behind a
problem or dilemma, and used the story that emerged to guide their
decisions.
1) A.G. Lafley of Procter and Gamble, as described in the book "The Opposable Mind":

In trying to decide whether to roll out highly concentrated laundry
soap, Lafley faced a dilemma. Merchants loved the idea, but consumers,
as measured by P&G's quantitative research, were neutral to
negative. In most cases, this would answer the question: don't roll out
a product that consumers didn't love. But Lafley didn't accept the data
at face value:

[He] decided to dive into the voluntary comments that some of the consumers
added to their quantitative research forms…. Lafley took many evening
and weekend hours to pore over more than four hundred handwritten
voluntaries. He came to the conclusion that while consumers weren't
wildly enthusiastic about compact detergents, few were actively [opposed].

And, as a result, Lafley decided
that the product could be a success. And, of course, it was. (Stay
tuned for a review of “The Opposable Mind” later this week.)

2) A Fortune 500 VP of Human Resources, evaluating a new performance appraisal system, as described in this blog:

[Last] year we had a pilot of a new performance management system for our
employees. The trial group was 4000 people. We had spent a lot of time
on the pilot and gathered a lot of data. At the end of the trial, the
VP of Human Resources printed out all the comments that had been
received on the survey forms. He took them home one night and read
every single one. Then he came in the next day and said, “We can’t roll
this system out.” And that was it. The trial was very expensive. We’d
gathered lots of data, lots of numbers, but the final determinant was
what he read in those comments.

One observation about the above two incidents: reading narratives and
evaluating qualitative data, if done at all, are relegated to nights
and weekends. This is perhaps a sign of some of the difficulties we
face in the corporate world: you can spend all day looking at charts
and spreadsheets, but read stories on your own time!
