Using Predictive Analytics, Without Ever Making A Prediction
Last week I had the privilege of serving on the Advisory Board for the Retail Marketing Analytics Program (ReMAP) at the University of Minnesota, Duluth (UMD). Perhaps the best part of these sessions is the opportunity to meet with the students, many of whom will be tomorrow’s marketing scientists.
During a few conversations on this visit, I was asked how to secure an entry-level position that would involve lots of cool predictive analytics. I want to focus on one of the answers I shared — don’t tell anyone you’re doing predictive analytics. What do I mean? Imagine you’re a freshly minted analyst in the following situation:
- Your manager asks you to quickly evaluate who responded to a promotion.
- You have many factors to investigate (because you have lots of data).
- You have very limited time to find a great answer and build a deliverable.
- The required deliverable needs to be simple and free of analytic jargon.
Sound familiar? What is a young ReMAP graduate to do in her first job? Perhaps her plan could involve:
- Choosing a predictive modeling algorithm that is resilient to outliers and missing data (e.g., decision trees).
- Setting the outcome to “response” and rapidly sorting through which factors predict who responded to the promotion.
- Quickly deploying a few algorithms to make sure nothing too obvious gets missed. In the case of decision trees, try one approach that constructs short, bushy trees (e.g., CHAID) and one that builds relatively tall, thin trees (e.g., CART); a sketch of this step follows the list.
- Designing simple charts and graphs based on the variables chosen by the modeling algorithms.
- Building a short deliverable with the most compelling visuals and no reference to predictive analytics.
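
To make those steps concrete, here is a minimal sketch in Python, assuming a hypothetical campaign_responses.csv file with a binary “response” column and a handful of candidate predictors; none of the file or column names come from the post. It uses scikit-learn’s CART-style DecisionTreeClassifier; CHAID is not part of scikit-learn, so the “second algorithm” in the workflow would have to come from another package.

```python
# A minimal sketch of the workflow above. All file and column names are
# hypothetical placeholders, not from the original post.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier

campaign = pd.read_csv("campaign_responses.csv")  # hypothetical data file

# Separate the outcome ("response") from the candidate predictors and
# one-hot encode any categorical factors.
y = campaign["response"]
X = pd.get_dummies(campaign.drop(columns=["response"]), dtype=float)
X = X.fillna(X.median())  # simple imputation; some tree packages handle NaN natively

# A CART-style tree with its depth capped so the result stays explainable.
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50, random_state=0)
tree.fit(X, y)

# Rank the factors the tree actually used and chart the top handful --
# these become the simple, jargon-free visuals for the deliverable.
importances = (
    pd.Series(tree.feature_importances_, index=X.columns)
      .sort_values(ascending=False)
      .head(5)
)
importances.plot(kind="barh", title="Top factors associated with response")
plt.tight_layout()
plt.show()
```

The point of ending on the bar chart is that the deliverable leads with the visual, not with the model that produced it.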
So our young analyst would quickly identify the correlates of campaign response and recommend targeting or other campaign improvements. She would undoubtedly add value through predictive analytics, but not through a scored list. She would not need to mention predictions, techniques used, model validation, or deployment.
So what do you think? Would a new analyst following this process be desirable on your team? Future ReMAP graduates are interested in finding out. I welcome your comments below or you can email me directly. I plan to summarize comments in a future post.