Learning From Outliers
While we’ve all become accustomed to seeing footage of coastal areas being battered by hurricanes, Hurricane Irene recently had a surprising impact on the northeastern United States. Some places got hit very hard by flooding and downed power lines, others less so. I don’t want to minimize the event in any way, but I did learn some things from Irene. On a personal note, my family and I were without power for four days; in the greater scheme of things, a minor inconvenience compared to those who lost their homes or treasured possessions.
At work we had power and even hot showers, which were in high demand among the many employees without power at home. We still had normal client requests and project deadlines, presentations, benchmarks and analyst inquiries. During some of those conversations, we spoke about our reliance, or overreliance, on electronic systems and tools; one inquiry in particular highlighted the issue.

We had a call with some field marketing campaign directors who wanted to review their lead nurturing processes. They sent along some recent nurturing campaign metrics to share and discuss. The call began with questions about best practices for setting up and executing lead nurturing campaigns. The directors wanted to learn about the impact of specific details such as program frequency, delivery mechanisms, how to score contact activities and how to measure the effectiveness of multi-touch nurturing campaigns. We started to talk about those things, but then one of our analysts noticed something simple yet powerful about the reported results. There were rates for email opens, clickthroughs, form fills and content downloads, all good metrics to track. The data followed a traditional bell curve, with most results falling within a common range and a minority above and below, and that’s where our analyst wanted to focus the conversation.
It became clear that the programs with the lower response and clickthrough rates were the ones delivered to the largest target contact lists: big, broad programs with more general messaging and generic offers. Conversely, the programs with the higher rates were the ones sent to the smallest, most targeted contact lists. Time and again we see the same results. Response rates are low when target lists are broad and messaging and offers are general. Response rates are high when target lists are narrow and messaging and offers are more relevant. Yes, cadence and frequency are important, and organizations should avoid list fatigue and optimize their opt-in practices. It is also important to carefully design and execute multi-touch, multi-channel programs. But one of the biggest problems with demand creation marketing programs is that technology, such as email campaign systems and marketing automation platforms, has made it too easy to create and execute a program, especially an email program. How many times have we heard that a marketing automation system is so easy to use that anyone in marketing can quickly set up and execute a program? Perhaps these systems have become too easy.
Many marketing practitioners believe that reaching out to the largest possible target market will yield the best results. Many also realize that this is the wrong behavior, yet they persist. Why? Because they can. There are no internal best practices or processes in place that require clear, targeted market segmentation with appropriate messaging and relevant offers. So don’t put the cart before the horse by focusing attention on more advanced program design decisions when the underlying structure may be flawed. Learning from outliers is a best practice; dismissing them is not.