Comparing Two Customer Research Approaches

by: John Caddell

I had two remarkable experiences today.

First, I interviewed a marketing manager about some software he uses. He spent thirty-five minutes describing why the company chose the software, how he used it, how he learned to use the features over time and thereby developed proficiency in an area of marketing he hadn’t known well before, how the supplier had given him very responsive support, how the user’s group had helped him… and, by the way, three or four features that, if they existed, could really help him.

I recorded everything and will review this and a number of other interviews with the client using narrative sensemaking approaches. In the end, they’ll get a deep, detailed picture of how they’re viewed by their customers. They’ll know features that customers will value. And they’ll know some things that bother their clients.

Later in the day, I got a survey to fill out. It looked like this:

Rate each question on a scale of 1-5, with 1 being Poor and 5 being Excellent.

* Trainer communicated in a clear, concise, and easily understood manner.

* Demonstrated that he is knowledgeable in […].

* Displays pride, enthusiasm, and a positive attitude in his work.

* Demonstrates a professional attitude and supports the [client].

* Practice topics are clear and correct for [skill and experience].

* Trainers were timely and approachable with problems and concerns.


It’s unfair, I know, to compare the two approaches. The first is more expensive and time-consuming. There is more at stake for the software company than for the second group, a nonprofit.

But, really, what can one possibly learn from the second approach? Isn’t the interview method better in about 1,000 ways?
