In his article in Marketing Week last week, 'Pre-testing ads is not divisive, it's a no-brainer', Mark Ritson presented a very powerful argument for the use of quantitative ad testing, with which I agree entirely. He also made the point that pre-testing has changed, but glossed over it, so I'd like to amplify his point with the aid of some remarkable data from the ever-handy IPA effectiveness databank.
When Les Binet and I wrote The Long and the Short of It, our research revealed that pre-tested campaigns outsold non-pre-tested ones over the short term, but that thereafter the non-pre-tested campaigns dramatically outsold the pre-tested ones. Because non-pre-tested campaigns were more likely to be emotional in nature, their full impact took longer to manifest.
But there was science of a sort behind the old pre-testing model, and it would take a long time to convince the marketing world that better science was now available.
The pioneer of the new science, in 2009, was BrainJuicer, now rebranded as System1. Its emotional advertising response approach was validated blind against IPA case studies whose long-term business effectiveness was already known (for transparency: I was involved in this study).