(NOTE: This essay draws on a chapter in my new book, Bright Lights & Dim Bulbs, which identifies nine radical branding and marketing insights for innovative business leaders to watch as we roll into 2010).
Home runs are fun to watch, but the nearly great plays are more illustrative and instructive.
We marketers tend to focus on successes, holding them up as proof of what’s possible, whether in conference presentations or new business pitches. "Show me an example" is the litmus test of ideas that deserve to get shared and mimicked; go-forward plans are based on what we learn from standouts and exceptions. This has been true since brands ran their first newspaper and radio ads. The most glorious social media campaigns of 2009 will yield a bevy of flattering copies in 2010.
I think we set ourselves up for failure when we try to "learn" from the most successful case histories, primarily because you can't separate a campaign from its context and goals. Every circumstance is unique, and the variables involved are both varied and oftentimes unknowable. Chance drives the zillions of intersections between expectation and outcome, so discerning why something worked is a loaded, imprecise challenge. Even if we could nail every detail of a success, the limitations of physics and human skill would make repeating it unlikely.
So why don't we spend more time studying the also-rans?
Granted, it's a less sexy agency pitch -- welcome to our reel of sorta good campaigns -- and it might be hard for a corporate middle manager to allocate time to reviewing things that failed, but almost great ideas are far easier to explore:
- What were the expected actions that didn't happen?
- What unforeseen factors influenced the campaign?
- How were the goals mismatched to the content of the program?
- Were there unintended benefits? Drawbacks?
- Might the successes have been extended or multiplied?
If you must, you could start with the stunning successes that your clients and management want to talk about, but try to rip them apart as if they were incomplete. What could have been accomplished? Where were the connections that would have saved resources, or furthered the reach? How might the program have worked faster, or been more reliable? Was the "when" of execution a deciding factor and, if so, why?
The facts you discover would allow for something the futurists call "scenario planning," which means you could use your insights as a modeling tool, and thereby really understand the underlying mechanics that might differentiate "almost" from "totally."
Contrast this with relying on the happenstance and sometimes outright magic that drives the biggest successes. A blogger chances upon a topic. A community forms around it for some reason or another. A store sells out, a competitor falters. It's a particularly sunny or rainy month. Lightning strikes.
We see what happened as a guide to what will happen, and blithely go about constructing campaigns based on organic, unique, sometimes first-ever events occurring a second time (or many times after that).
And you wonder why you're always explaining yourself and defending your budget?
We're going to get inundated with "best of" lists over the next few months, just like we've spent 2009 celebrating successes to give hope to our struggling industry. But I'd challenge you to understand the almost great ideas...really smart, strategic programs that were well conceived, delivered, and then stopped short of realizing their full potential:
- Viral videos that got watched a lot, but stopped short of prompting a sale (or getting consumers tangibly and reliably closer to one).
- Promotional campaigns that seemed too good to be true (promise overload that defied belief).
- A customer service problem that was adequately fixed, but not extended into a sales opportunity.
- Contributions from unlikely sources, like distribution or finance.
You need to deconstruct what happened in order to truly envision what's possible. I think you'll get more from almost great ideas.
The Bulb Asks:
- Are you basing your 2010 expectations on unrepeatable exceptions?
- Can you take the successes of the case histories you're presented, and explicitly confirm the tangible benefits (i.e. achieving something other than awareness)?
- If you can identify the causal links that lead to success, does that mean you can build campaigns that are far more reliable (and relegate stunning success to the nice-to-have category instead of risking your job on it)?
(Bright Lights & Dim Bulbs contains 10 tips on this topic and 8 others)
Image source: http://www.flickr.com/photos/kaptainkobold/2450658315/