By Meg Goodman
You’ve developed your email creative, agonized over the copy, and had your team run quality control tests to make sure the email is rendering correctly. All that’s left is scheduling and hitting send, right?
Wrong.
It’s time to set up an A/B test.
Modern marketing is synonymous with data-driven marketing, and the best method for getting quick, actionable data to drive more impactful email is A/B testing.
In a survey conducted by Econsultancy and RedEye, 74% of respondents who used a structured approach to conversion testing improved their sales.
A/B testing email allows you to identify which combination of creative, tech, and timing lifts response. With email marketing doing much of the heavy lifting for win-back and cross-selling initiatives, it’s crucial to understand which mixture of elements makes up the secret sauce, so you can make more informed decisions.
How to set up a statistically sound testing plan in 5 steps.
A/B testing requires less data to reach statistical significance than other testing methodologies, and it’s often supported by email marketing automation and sender platforms. However, an improperly set up A/B test can cost you by leading you to make decisions based on faulty data.
One way to reduce risk is by starting your testing plan with an A/A test to ensure your A/B testing tool is calling the right winner with confidence.
Step 1: Check your tools with A/A testing.
A/A tests use A/B testing tools to pit two identical versions of an email against one another. Why would you want to run a test where email A and email B are identical?
In most cases, A/A tests are recommended to double-check the effectiveness and accuracy of your A/B testing software. By sending two identical emails using an A/B testing tool, you’re able to judge whether your testing tool is accounting for natural variances in behavior.
For example, if you ran an email test and email A had a 3% higher conversion rate than email B, your A/B testing tool might call email A the winner. However, is a 3% difference statistically sound? That depends on whether the A/B testing tool is requiring a confidence score of 95% or greater before calling a winner.
Definition: A confidence score, or statistical significance, is a way of quantifying how reliable a statistic is. A 95% confidence score is widely considered a best practice: it means there is only a one-in-20 chance that the results you see are due to random chance and misattributed to the change you tested.
When running an A/A test, your results should come back as statistically inconclusive—and your email sender should not call a winner—if the program is set to require a 95% confidence score.
If your A/A test calls a winner, you most likely have one of two situations:
- Your testing software is relying on statistical inference without requiring your sample size to be large enough to give you statistical significance with confidence. If this is the case, you should either select new software, manually change how the software calls a winner by requiring a statistical significance score of 95%, or be sure to run the test long enough to reach your required sample size before calling a winner.
- Your A/A test fell into the one-in-20 chance of a false positive. Even a 95% confidence score leaves a small window for a false winner to be called. Run another A/A test to confirm that the first situation above is not what’s occurring.
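To make the confidence threshold concrete, here is a minimal Python sketch (independent of any particular email platform, with made-up counts) of a pooled two-proportion z-test, one common way to compare two conversion rates. A winner is only called when the p-value drops below 0.05, i.e. 95% confidence; an A/A test run through the same check should come back inconclusive.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test; returns the z statistic and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: 515 vs. 500 conversions out of 10,000 sends each.
z, p = two_proportion_z_test(515, 10_000, 500, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
# Call a winner only if p < 0.05 (95% confidence); otherwise the result is inconclusive.
print("statistically significant" if p < 0.05 else "inconclusive")
```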
Step 2: Create a hypothesis for A/B tests.
Strengthen the performance of your email campaign by identifying which elements help or hinder conversions and by testing a hypothesis.
When you’re brainstorming hypotheses, remember to only test variables that you believe will move the dial and increase performance. A/B testing for the sake of testing—and without considering your business goals—can lead to inefficient testing and lost time.
Will switching the color of the button from red to green really increase response? Or, is it more likely that a new call to action or messaging strategy will have a greater impact?
You won’t know the answer until you test.
TIP: Only test one variable at a time so that you’re able to better identify what caused a lift in response.
Possible variables to test include:
- Subject line
- Subject line character length
- Special characters or emojis in the subject line
- Sender name
- Day of week sent
- Time of day sent
- Links vs. buttons
- Image-based CTAs vs. HTML CTAs
- Social sharing icons
- Preheader text
- Personalization
- Header image height
- Using lists, bullets and/or numbers
- P.S. note
- CTA placement, design or color
- Short copy vs. long copy
- Promotional copy vs. straightforward copy
- Headlines or subheads
- Design
- Images vs. plain text
- Offer
- Audience segment
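For instance, here is a minimal sketch of the one-variable rule, assuming your platform lets you describe variants as simple key-value settings (the field names and values below are hypothetical, not a real API): the two versions match on everything except the tested element, so any lift can be attributed to that change.

```python
# Hypothetical variant definitions: identical except for the subject line.
control = {
    "subject": "Your statement is ready",
    "sender_name": "Acme Bank",
    "send_time": "Tue 10:00",
    "cta_text": "View statement",
}

variant = {**control, "subject": "Don't miss this month's statement"}

# Sanity check before launch: exactly one variable should differ.
changed = {key for key in control if control[key] != variant[key]}
assert changed == {"subject"}, f"More than one variable differs: {changed}"
```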
Step 3: Choose the distribution size.
Email senders that support A/B testing will allow you to select a distribution size for your test groups. A 50/50 distribution will give you the greatest data integrity for an A/B test.
Depending on your email sender, you may have to select a random draw so that individuals with similar characteristics are not lumped into the same group when the A/B test runs.
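If your platform doesn’t handle the random draw for you, a rough sketch of a 50/50 random split looks like this (placeholder addresses, plain Python, no particular sending tool assumed):

```python
import random

def split_fifty_fifty(recipients, seed=None):
    """Randomly assign recipients to groups A and B in a 50/50 split."""
    shuffled = list(recipients)
    random.Random(seed).shuffle(shuffled)  # randomize so similar profiles aren't clustered
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Example with placeholder email addresses.
group_a, group_b = split_fifty_fifty([f"user{i}@example.com" for i in range(10_000)], seed=42)
print(len(group_a), len(group_b))  # 5000 5000
```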
Step 4: Gather data.
While A/B testing requires less data than other testing methodologies to draw statistically sound conclusions, the population size still needs to reach a certain threshold.
Most email senders do not have a fixed horizon (a set point in time) at which they call statistical significance, which means the results can and often will waver between significant and insignificant at any point during an email campaign if the emails are automated.
As a result, you will need to predetermine the sample size of your test so that once you have enough data for statistical significance, you can close the test and analyze the entire group of data instead of a single moment in time.
To determine the sample size you need for statistical significance, you can do the math—or you can use this handy calculator (pictured below) to tell you how large your sample test size needs to be to draw a sound conclusion.
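If you’d rather do the math yourself, the sketch below uses the standard two-proportion sample-size approximation, with an assumed 95% confidence level and 80% power (both adjustable). The 5% baseline rate and one-point lift are illustrative inputs, not recommendations.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, minimum_lift, alpha=0.05, power=0.80):
    """Approximate recipients needed per variant for a two-proportion test.

    baseline_rate: expected conversion rate of the control (e.g. 0.05 for 5%)
    minimum_lift:  smallest absolute difference you care to detect (e.g. 0.01)
    """
    p1, p2 = baseline_rate, baseline_rate + minimum_lift
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline click rate, and you want to detect a 1-point lift.
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,160 recipients per variant
```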
Step 5: Analyze.
Once your email has been sent and you’ve collected enough data to determine statistically significant outcomes, it’s time to start analyzing the results.
Collect the following data from your A/B test:
- Open rate
- Click-through rate
- Unique clicks
- Unique open rate
- Click-to-open rate
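As a reference, here is a minimal sketch of how those rates are typically derived from the raw counts in a sender’s report; the counts below are made up, and exact definitions vary from platform to platform.

```python
def email_metrics(sends, total_opens, unique_opens, total_clicks, unique_clicks):
    """Derive the common engagement rates from raw counts."""
    return {
        "open_rate": total_opens / sends,
        "unique_open_rate": unique_opens / sends,
        "click_through_rate": total_clicks / sends,
        "click_to_open_rate": unique_clicks / unique_opens,
    }

# Illustrative counts for two test cells of 5,000 sends each.
results = {
    "A": dict(sends=5_000, total_opens=1_400, unique_opens=1_100, total_clicks=260, unique_clicks=220),
    "B": dict(sends=5_000, total_opens=1_350, unique_opens=1_050, total_clicks=310, unique_clicks=275),
}
for name, counts in results.items():
    rates = email_metrics(**counts)
    print(name, "unique clicks:", counts["unique_clicks"],
          {k: f"{v:.1%}" for k, v in rates.items()})
```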
Note: While other email metrics like bounce rate and spam score are important, they do not indicate conversion performance and shouldn’t be used to evaluate the winner in an A/B test.
When declaring a winner, your email sender will evaluate the performance of the conversion action, which is usually defined as clicking a button or hyperlink in the email. But make sure your email sender is evaluating the data based on the confidence score you defined when picking your sample size. Some email senders will not require a confidence score of 95% and will hand you a false winner based on too little data.
To elevate your analysis and reporting, track more than the initial conversion reported. For example, did email A have a higher conversion rate (defined here as the number of people who clicked the call-to-action button) but fewer overall sales when you track people’s actions through your site?
That may indicate a bottleneck in the user experience that needs to be adjusted.
Only by creating a structured approach to conversion testing can you improve product adoption and customer retention by driving better email performance. By using a five-step A/B testing plan like this one, you’ll see better results over time. You’ll also have the ability to better articulate what is working—as well as opportunities for constant improvement that will move the needle forward.
Meg Goodman is the Managing Director of relationship marketing agency Jacobs & Clevenger. She has brought measurable, data-driven results to a variety of major financial institutions. When she’s not riding her motorcycle, you can connect with Meg on LinkedIn or Email: [email protected].