- Marketers have numerous options for scoring and many technologies to choose from to implement these scoring approaches
- At Summit 2017, SiriusDecisions introduced a new scoring framework that addresses the elements that must be in place to reap the benefits of any scoring approach
- Clients can use the new scoring decision support assessment to help determine the best approach to scoring
For more than 10 years, approaches to scoring have evolved significantly. From traditional rules-based models (largely lead-focused) to models developed via machine learning and predictive technologies (largely account-focused), to scores presented as letters, numbers, chili peppers and stars, there seems to be no end to the model and technology implementation options. When we were considering topics for Summit 2017, it was clear that the evolution of scoring, and a decision support mechanism to help sort through the mess, warranted attention.
This trend provided Laura Cross and me with the opportunity to address an issue facing many of our clients: identifying the best approach to scoring. While scoring is certainly not a new concept and we see high rates of adoption (89 percent, based on our findings in the 2016 State of Marketing Automation study), adoption doesn’t equate to success. Today there are numerous options for scoring. Despite the wealth of research we’ve published on best-practice scoring, pitfalls to avoid and considerations for model development, we’ve lacked a common scoring framework. Our Summit track session, “The Evolution of Scoring: Considerations for Today,” addressed these challenges – and was even voted as a repeat session!
The session started with an introduction to the new SiriusDecisions Scoring Framework, which addresses the elements that must be in place for organizations to reap the benefits of scoring – regardless of the approach. The highlights of the framework included foundational elements such as a well-defined lead management process, clearly articulated qualification criteria and service level agreements. Other considerations include people, governance, data, implementation, and testing and measurement. Organizations can move into the model and technology discussions only after addressing these other items.
Given the variety of options for both the model (approach) and the technology to support scoring, we felt it was important to identify and define the options. From our observations, today’s perspective and guidance focuses too heavily on the technology (e.g. marketing automation vs. predictive), without broader consideration of what approach makes the most sense for an organization. The types of scoring models include:
- Manual. Cherry-picking; no standardized methodology or process – not a recommended approach.
- Assumption-driven, rules-based. Attributes and weighting developed according to assumptions vs. informed by data – not a recommended approach.
- Data-driven, rules-based. Uses statistics to identify and prioritize attributes and behaviors that are configured into the model. When the scoring threshold is reached, the prospect is handed off for follow-up – a recommended approach.
- Predictive prospect prioritization. Prioritization models work like lookalike models – they mathematically compare known prospects and accounts to an ideal and prioritize accordingly. Follow-up is determined by statistical processes.
- Hybrid. Combines data-driven, rules-based and predictive prospect prioritization to determine hand-off and prioritize follow-up. Hybrid approaches score the account via predictive and deliver the contact via marketing automation platform (MAP) behavior.
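To make the distinctions above concrete, here is a minimal sketch of how a hybrid approach might combine the two recommended model types. All point values, thresholds and function names are illustrative assumptions, not part of the SiriusDecisions framework itself:

```python
# Hypothetical hybrid scoring sketch: a rules-based behavioral score at
# the contact level (as configured in a MAP) combined with a predictive
# fit score at the account level. All weights/thresholds are invented.

# Rules-based: points per tracked activity, set by assumption or data.
BEHAVIOR_POINTS = {
    "webinar_attended": 15,
    "pricing_page_visit": 10,
    "whitepaper_download": 5,
    "email_open": 1,
}

HANDOFF_THRESHOLD = 30   # contact score that triggers hand-off
FIT_THRESHOLD = 0.7      # minimum predictive account-fit score (0.0-1.0)

def behavior_score(activities):
    """Sum the configured point values for a contact's activities."""
    return sum(BEHAVIOR_POINTS.get(a, 0) for a in activities)

def hybrid_prioritize(contact_activities, account_fit):
    """Decide hand-off using both the contact's rules-based score and
    the account's predictive fit score."""
    score = behavior_score(contact_activities)
    if score >= HANDOFF_THRESHOLD and account_fit >= FIT_THRESHOLD:
        return "hand off to sales"
    if score >= HANDOFF_THRESHOLD:
        return "nurture: engaged contact, weak account fit"
    return "continue nurture"
```

A pure data-driven, rules-based model would stop at `behavior_score` and hand off on the threshold alone; the hybrid adds the account-level predictive gate.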
The different technologies to support these approaches include:
- Marketing automation platforms (MAPs)
- Sales force automation (SFA) systems
- Predictive analytics
- Business intelligence
- Custom technologies
- Hybrid technologies
The primary focus of the session was the new scoring decision support assessment that our clients can use to help determine the best approach to scoring. The assessment includes a set of questions aligned to five key categories – business case, model, resource, internal change and alignment. Because scoring isn’t simple and no two organizations are the same, there are several things to consider when determining the best approach to scoring. When we polled the attendees, we found that 76 percent were leveraging MAP-based scoring, 7 percent were using predictive and 13 percent were using a combination of SFA, MAP and/or predictive to score.
The questions in the assessment are set up to yield a yes, no or maybe answer, with the final output being a recommended approach to scoring – either data-driven/rules-based, predictive prospect prioritization or hybrid. The assessment process includes an analyst reviewing the results, with specific attention to any questions answered no or maybe. Questions in the business case category are gating questions: any no or maybe response precludes a recommendation for the predictive or hybrid options. Questions in the other categories with a no or maybe response are reviewed and discussed as part of a pause-and-assess process. Remediation plans may be discussed during this part of the process, and every organization’s unique characteristics, challenges and current state must be reviewed and considered before the final recommendation is made.
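The gating behavior described above can be sketched as a short routine. This is an illustrative assumption about how the logic might be encoded, not the actual assessment tool; only the five category names and the gating rule come from the session:

```python
# Illustrative sketch of the assessment's gating logic. Business case
# questions gate the predictive and hybrid options; non-yes answers in
# other categories are flagged for the pause-and-assess analyst review.

CATEGORIES = ["business case", "model", "resource",
              "internal change", "alignment"]

def recommend(responses):
    """responses: dict mapping category -> list of 'yes'/'no'/'maybe'.
    Returns a preliminary recommendation plus the categories flagged
    for review (the final call rests with the analyst inquiry)."""
    gated = any(a != "yes" for a in responses.get("business case", []))
    flagged = [c for c in CATEGORIES[1:]
               if any(a != "yes" for a in responses.get(c, []))]
    if gated:
        return "data-driven, rules-based", flagged
    return "predictive prospect prioritization or hybrid", flagged
```

The choice between predictive and hybrid, and any remediation plan for flagged categories, is deliberately left to the analyst discussion rather than automated here.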
We’ve received positive feedback on the session and lots of interest in the assessment. As we anticipated, the scoring framework was long overdue and provides a holistic perspective that considers all the requirements for successful scoring. If you’re a SiriusDecisions client currently looking to adopt or migrate your scoring approach, I invite you to take our Scoring Decision Support Assessment. To get access, contact your client success manager. Once you complete the online assessment, we’ll schedule an analyst inquiry to discuss the details and ultimately identify the recommended approach that makes the most sense for your organization.