
By Scott Gregor & Tim Yeadon 

You can A/B test nearly every element of an email. From subject lines to images to the color palette to the tone of the copy, if it's in the email, you can test it against a potentially better alternative. And that's great, because testing helps us improve and stay agile in a competitive world.

As we noted in "Email A/B Tests You Ought to Be Doing," no matter how much experience you have with email, you can't assume the approach that worked before will continue to succeed. The mood of our culture is continually evolving. A tone that worked pre-COVID often failed during the pandemic, and what worked five minutes ago may not perform in the months and seasons to come.

A/B testing your emails offers an opportunity to improve nearly every aspect of your email program - from audience segmentation and content relevance to design and copy. This type of incremental improvement is vital to the success of your program, helping you to increase long-term engagement with your audience. 

So, to get started, here’s a step-by-step framework we use when planning email tests.

 
"1, 2, 3" showing priority

First, prioritize your test variables

Any given email has a number of elements you could use for an A/B test, but they're not all created equal. Only some will support your program's goals and objectives. Your first tests should focus on the high-yield elements.

It's important to test only a single variable in each experiment; otherwise you risk attributing any change in performance to the wrong element. It's better to be precise and use restraint so you can learn as you go.

If you'd like a bit of inspiration on elements you might test, check out "Anatomy of a World-Class Email Template." In that article, we take a top-to-bottom tour of every key element you might see in an email.

 

Second, form your hypothesis

As you start the A/B testing journey, you should have an expectation of how your two versions will differ in performance, and why. A good way to frame it is: "If we take this action, then we'll see this change in performance, because of this reason."

To form a strong hypothesis, your "if-then" statement should be based on past experience and any performance trends you've seen in previous email campaigns. Have previous changes resulted in increased engagement? Were there campaigns that didn't meet the goals you set, and if so, how did they differ from the ones that met expectations?

Your previous email campaigns may not give you an exact answer (and your future campaigns won't either), but paying attention to the information you already have and finding those nuggets of insight will point you in the right direction to make meaningful improvements over time.

 

Third, define what “success” looks like

Once you know what change you're testing in your campaign, define a clear measure of what it means for that test to be successful, based on the results you expect. If the goal of your test is to improve the number of opens, your success shouldn't be judged by how many clicks or conversions the campaign received. Set reasonable, achievable goals for yourself and plan on continued incremental improvement with each campaign.

Similarly, be mindful of which metrics are most likely to be affected by the element being tested. In the scenario above, changes to your subject line or preheader are more likely to impact your open rate than changes to a CTA or an image would. As you think about how you'll determine whether the variable was a successful improvement, keep in mind at what point the subscriber will see the element you're testing, and what that element is intended to get them to do.

 

Fourth, consider any/all risks

Ask yourself: what could go wrong with this test? In most cases, the worst you might find is that the metrics didn't meet your expectations, but depending on how far the variable element departs from previous versions, you may find it has a more profound negative effect. For instance, using language outside of your normal brand voice may not be received as positively as you hoped. In such cases, consider testing with a smaller group of subscribers than you would for tests that don't venture as far from the "norm."

So how much risk should you take in a test? That's a business decision you'll have to make. If you're going to try anything generally outside the bounds of your brand, we'd stay small. (Side story: we know a marketing manager who has always kept about 1% of their list set aside for ongoing skunkworks. Sometimes the winners are worth testing at a larger scale, but the real failures are limited to a very small group.) Most of your tests, however, are likely aiming for smaller incremental improvements, and it's alright to experiment on larger audiences. For instance, in subject line testing it's normal to conduct what's called a 10/10/80 test: subject line A goes to 10% of your list, B goes to another 10%, and once there's a winner, you send it to the remaining 80%. With this method, only a small percentage of your list gets an ineffective subject line.
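If it helps to picture the mechanics, here's a minimal sketch of a 10/10/80 split. Your email platform will usually handle this for you; the list, seed, and addresses below are hypothetical and just for illustration.

```python
import random

def split_10_10_80(subscribers, seed=42):
    """Randomly split a subscriber list into two 10% test groups
    and an 80% remainder that later receives the winning version."""
    shuffled = subscribers[:]              # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible

    tenth = len(shuffled) // 10
    group_a = shuffled[:tenth]             # receives subject line A
    group_b = shuffled[tenth:2 * tenth]    # receives subject line B
    remainder = shuffled[2 * tenth:]       # receives the winner once it's known
    return group_a, group_b, remainder

# Hypothetical usage: a 50,000-subscriber list split for a subject line test.
subscribers = [f"user{i}@example.com" for i in range(50_000)]
a, b, rest = split_10_10_80(subscribers)
print(len(a), len(b), len(rest))  # 5000 5000 40000
```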

This isn't to say that risks shouldn't be taken. After all, the purpose of A/B testing is to determine how you can most effectively get your audience to engage with your content, and that can't be done if you never try something new. Just keep in mind how your audience might respond, and know that even if your hypothesis was completely wrong, you've still learned a valuable bit of information that can inform your future content and A/B tests.

 

Fifth, run the test itself

Running the actual A/B test once you've carefully planned it is the easy part, but there are some things to keep in mind to ensure it's an effective test. The most important is making sure both variations are sent simultaneously. Timing plays such an important role in campaign results that sending each variation independently at different times would likely invalidate your test.

The other is giving your test enough time to provide useful data. With email A/B tests, that usually won't mean waiting more than a few days for the majority of results to come in (if even that long), but calling your test complete too early may leave you with results that aren't representative of what you'd see if you let the data continue to accrue.

 

Sixth, document the results and what you’ve learned

Using the data you collect from your tests to guide future changes is far more valuable than relying on gut instinct. To do so, you'll need to document how each version of your email performed and compare the two against each other. Some metrics to keep track of in particular (see the quick calculation sketch after this list):

  • Open Rate - what percentage of the emails that were successfully delivered were opened

  • Clickthrough Rate - what percentage of people that opened your email clicked on a link

  • Conversion Rate - what percentage of people that clicked a link went on to take the desired action (made a purchase, created an account, took a survey, etc.)

  • Unsubscribes - if you suddenly see a spike in unsubscribes, there’s a good chance you need to work on your audience segmentation, as content relevance matters. 

  • Revenue per email - how much revenue a campaign generated, divided by the number of emails delivered.
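As a rough illustration of how these metrics are derived, here's a small sketch. The campaign numbers are made up, and the formulas simply follow the definitions above (for example, clickthrough rate here is clicks divided by opens, matching the list).

```python
# Hypothetical campaign numbers, for illustration only.
delivered = 10_000
opens = 2_300
clicks = 410
conversions = 37
unsubscribes = 12
revenue = 1_850.00

open_rate = opens / delivered            # share of delivered emails that were opened
clickthrough_rate = clicks / opens       # share of openers who clicked a link (as defined above)
conversion_rate = conversions / clicks   # share of clickers who took the desired action
unsubscribe_rate = unsubscribes / delivered
revenue_per_email = revenue / delivered

print(f"Open rate:        {open_rate:.1%}")
print(f"Clickthrough:     {clickthrough_rate:.1%}")
print(f"Conversion:       {conversion_rate:.1%}")
print(f"Unsubscribe rate: {unsubscribe_rate:.2%}")
print(f"Revenue/email:    ${revenue_per_email:.3f}")
```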

It’s important to keep records of the results of your A/B tests, allowing you to compare the results of different campaigns against each other as well. Just because one test yielded a particular result doesn’t mean that’ll always be the case, and you’ll want to be sure to have metrics to look back on as you continue testing.

If you have the capability, you should also perform data analysis to determine the statistical significance of your results and the confidence you can have in them, given the sample size used for your A/B test.
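If you want to sanity-check significance yourself, one common approach for comparing two rates (say, the open rates of versions A and B) is a two-proportion z-test. The sketch below uses only Python's standard library; the sample counts are hypothetical, and many email platforms (or a statistics library such as SciPy) will run this kind of test for you.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test comparing two rates, e.g. open rates of versions A and B.
    Returns the z statistic and an approximate p-value."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)        # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: version A opened by 1,150 of 5,000; version B by 1,275 of 5,000.
z, p = two_proportion_z_test(1_150, 5_000, 1_275, 5_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 suggests the gap is unlikely to be chance
```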

 

Seventh, share your test results and insights

It's unlikely that you're the only one who cares about the performance of an email campaign. Well, maybe you are, but there's a good chance that other marketers and product managers in your organization will be interested in how a specific audience responds to content (in email, in the app, or elsewhere). Sharing the test results and the insights you've pulled from them lets you and your team review them together and likely spot trends that may not have been noticed otherwise. Together, you may find a niche to explore further in future tests, or discover a new segment of your audience that might be better suited to individual targeting.

 

Eighth, let’s do it again!

One beautiful thing about A/B testing is that the more you do it, the more you'll learn, not only from each subsequent test but from previous tests as well. The goal for each test you perform should be incremental improvement to your campaigns, homing in over time on what most effectively engages your audience. As you accumulate more data, you can look at the trends over time and start asking questions about any differences you see.

Why did a particular style of headline work well in the past but now shows weaker results?

Why is our clickthrough rate increasing when we aren't testing the CTA button?

What is causing our unsubscribe rate to increase over the last few campaigns?

By continually performing A/B tests, you'll not only be able to make more informed decisions about what your audience will engage with in your upcoming campaigns, but you'll also be able to look at the bigger picture and learn how your audience is evolving over time.


Putting it all together

A/B testing your emails is instrumental to consistently creating engaging content that resonates with your subscribers. It won't happen overnight, but with consistent, incremental improvement you'll be able to make better decisions as you craft each message. To get the most benefit from your tests, remember to be deliberate about what you're attempting to accomplish, with reasoning to back it up.


About Clyde Golden

Clyde Golden is an email marketing and digital direct agency in Seattle. We’re here to help you create thoughtful and relevant content that leads your prospects through the buyer journey and on to a long-lasting relationship with your brand.