Email A/B Testing: How to Get the Best Results
Even a marginal improvement in marketing conversion rates can represent a substantial number of additional sales. This is why email A/B testing is so important.
Email A/B testing is, essentially, split testing: different recipients see two versions of the same email – Version A and Version B. The performance of each version is measured by the number of conversions it drives, and the version with the higher conversion rate is selected and used moving forward. For large email deployments, A/B testing is often done on a small segment of the target audience; the winning email is then sent to the remainder of the list, which in turn drives a higher overall response rate.
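The workflow above can be sketched in a few lines of code. The snippet below is a minimal, illustrative example, not a real email platform integration: the recipient list, conversion counts, and helper names are all hypothetical placeholders.

```python
import random

def split_test_segment(recipients, test_fraction=0.2, seed=42):
    """Randomly carve out a test segment and split it evenly into A and B groups."""
    rng = random.Random(seed)
    shuffled = recipients[:]
    rng.shuffle(shuffled)
    test_size = int(len(shuffled) * test_fraction)
    test, remainder = shuffled[:test_size], shuffled[test_size:]
    midpoint = len(test) // 2
    return test[:midpoint], test[midpoint:], remainder

def pick_winner(conversions_a, sent_a, conversions_b, sent_b):
    """Compare conversion rates (conversions / recipients) and return the winning version."""
    rate_a = conversions_a / sent_a
    rate_b = conversions_b / sent_b
    return "A" if rate_a >= rate_b else "B"

# Hypothetical example: 10,000 recipients, 20% used for the test,
# and the winner sent to the remaining 80% of the list.
recipients = [f"user{i}@example.com" for i in range(10_000)]
group_a, group_b, remainder = split_test_segment(recipients)
winner = pick_winner(conversions_a=38, sent_a=len(group_a),
                     conversions_b=52, sent_b=len(group_b))
print(f"Send version {winner} to the remaining {len(remainder)} recipients")
```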
When performing email A/B testing, four primary measurements can be used to compare email effectiveness. These range from measurements that are only loosely correlated with an organization’s demand creation efforts to those that are directly tied to the results it cares about. Below are the four measurements organizations can use in A/B testing, listed in order of least to most impactful; a short calculation sketch follows the list.
- Open rate (least impactful). The number of people who opened the email as a percentage of all recipients. Though prone to measurement error (e.g. false positives/negatives due to mobile consumption, email preview panes and blocked images), comparing the open rates of emails sent to the same audience can yield insight into which email is most compelling at first glance. Optimizing on the open rate helps increase the number of individuals who consider taking your offer.
- Clickthrough rate. The number of individuals who respond to an email’s call to action by clicking on a link. Optimizing for clickthrough rate brings the results of A/B testing closer to an actual inquiry (a person who clicks through is more qualified than one who merely opens an email).
- Conversion rate. The number of inquiries generated from the email’s call to action (often measured by the number of unique form submissions) as a percentage of all recipients. This measurement captures how many unique inquiries the email generated, which is more valuable than measuring the clickthrough rate because it reflects the end result of the tactic rather than a leading indicator.
- Quality rate (most impactful). Where an email conversion rate focuses on the sheer number of inquiries generated, the quality rate measures the number of inquiries that fall within the marketing organization’s target audience. This is an important concept as it is not the total number of inquiries an email generates but the number of inquiries with the right demographics (e.g. title, level, department) that drives value for a marketing organization.
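The sketch below shows how these four rates might be computed for two test versions. It is a minimal, assumed implementation: the field names and figures are invented, the clickthrough and quality rates are computed per recipient (the article does not specify those denominators), and qualification logic is reduced to a single precomputed count.

```python
from dataclasses import dataclass

@dataclass
class EmailVersionResults:
    recipients: int           # total recipients of this version
    opens: int                # unique opens (prone to false positives/negatives)
    clicks: int               # unique clicks on the call to action
    inquiries: int            # unique form submissions generated by the call to action
    qualified_inquiries: int  # inquiries matching the target demographics (title, level, department)

    def open_rate(self) -> float:
        return self.opens / self.recipients

    def clickthrough_rate(self) -> float:
        # Denominator assumed to be all recipients; per-open CTR is another common choice.
        return self.clicks / self.recipients

    def conversion_rate(self) -> float:
        return self.inquiries / self.recipients

    def quality_rate(self) -> float:
        # Assumed here to be qualified inquiries as a share of all recipients,
        # mirroring the other rates; the article leaves the denominator open.
        return self.qualified_inquiries / self.recipients

# Hypothetical example: compare two versions on the most impactful measurement.
version_a = EmailVersionResults(recipients=1000, opens=240, clicks=60, inquiries=25, qualified_inquiries=10)
version_b = EmailVersionResults(recipients=1000, opens=220, clicks=70, inquiries=30, qualified_inquiries=18)
winner = "A" if version_a.quality_rate() >= version_b.quality_rate() else "B"
print(f"Version {winner} wins on quality rate")
```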
So, how do you rate? The more tightly correlated an organization’s A/B email testing measurements are with its desired results, the more directly marketing results are positively impacted. Though A/B testing against activity measurements such as opens and clicks creates value, the marketers who measure against the actual yield of each test version create the most impact for their organizations.