Your list is ready. Your emails are loaded. Your landing page is poised for action. You can’t wait to hit the launch button on your new digital campaign.
But hold on. Before you hit “go,” consider testing it first to make sure it’s ready to deliver optimum conversions. In other words, to achieve the exact goals you have in mind — whether it’s more downloads, more subscribers, or more sales.
Conversions are the whole point of your digital campaigns. The higher your conversion rates, the higher your sales and profits. The way to achieve higher conversion results is by optimizing your campaigns in ways that compel consumers to open, click and buy more often.
The secret to achieving optimized campaign results is A/B testing.
A/B tests (also called split tests and randomized experiments) are live, controlled experiments on your website that compare two versions of a page element for a length of time to see which converts better. You send half of your prospects to one version of the campaign and half to the other version. Then you measure conversions from each set of users until you have a clear winner. You apply the winner to the entire campaign. You can continue testing other variables until you’re satisfied with your campaign’s overall conversion rate.
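If you’re curious what that split looks like in practice, here is a minimal Python sketch of a 50/50 assignment and conversion tally. The user IDs, the assign_variant helper, and the in-memory counts are illustrative assumptions; in practice, your testing tool handles this bookkeeping for you.

```python
import hashlib

# Running tally of visitors and conversions per variant (illustrative in-memory store).
counts = {"A": {"visitors": 0, "conversions": 0},
          "B": {"visitors": 0, "conversions": 0}}

def assign_variant(user_id: str) -> str:
    """Deterministically send roughly half of users to version A and half to B."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def record_visit(user_id: str, converted: bool) -> None:
    """Record one visit and, if the visitor converted, one conversion."""
    variant = assign_variant(user_id)
    counts[variant]["visitors"] += 1
    if converted:
        counts[variant]["conversions"] += 1

def conversion_rate(variant: str) -> float:
    """Conversions divided by visitors for a given variant."""
    c = counts[variant]
    return c["conversions"] / c["visitors"] if c["visitors"] else 0.0
```

Hashing the user ID (rather than flipping a coin on every page view) simply ensures that a returning visitor always sees the same version of the campaign.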
A/B testing helps improve campaign results, because the decisions are not based on guesswork. They are based on how consumers actually respond to specific variables.
By performing an A/B test, you can examine to what extent your assumptions were correct and whether they had the expected impact, and ultimately gain insight into the behavior of your target audience.
When you consider that the average open rate for email campaigns in the United States is only about 20 percent, running split tests to find winning variables is well worth the effort to boost your results.
A/B testing has proved that even the smallest details can make a huge difference in conversion rates. Subtle changes to copy, colors, prices and more can dramatically change shoppers’ behaviors. Elements that don’t seem to matter, such as the text on a “buy” button, often turn out to have a huge effect on consumers’ propensity to click.
While the cost of acquiring traffic can be huge, the cost of increasing your conversions is minimal. As a result, the ROI of A/B testing can be massive.
The elements you can test are endless: headlines, subheads, body copy, button text, colors, prices and more (although you should only test one at a time!).
For A/B testing inspiration — and to test your gut reaction — view the results from a wide variety of A/B tests on Which Test Won.
Follow these seven basic steps to launch an effective A/B test on your next digital campaign:
A/B testing is not about randomly testing ideas. Doing that would waste precious time and traffic. The more targeted and strategic an A/B test is, the more likely it will be to have a positive impact on your conversions.
What are you trying to achieve with your new campaign? Do you want more downloads of a new eBook? Or more subscribers to your blog? Or to boost new product purchases? Or more members to upgrade their service contracts to the next level?
First, clearly identify exactly what you want from your campaign. Then move on to Step 2.
Examine any data your company has gathered about your leads, prospects and customers. It could be from surveys, customer interviews, or eye-tracking data, for example.
You can also employ inexpensive consumer feedback tools that give you a “fresh pair of eyeballs” on your site.
After gaining insight from your data, write a problem statement related to the conversion goal you are trying to achieve in your campaign. For example, if you want to boost downloads of a lead-generation eBook, your problem might be: “My target audience has limited time to read. This keeps them from downloading eBooks.”
Once you know what your goal is and the possible reason why you are not realizing that goal, you can then move on to formulating your test hypothesis on how to solve your problem.
A hypothesis is a statement made on the basis of limited evidence, which can be proved or disproved. It is used as a starting point for further investigation. An A/B test hypothesis is the basic assumption on which you base your test. It encapsulates both what element you want to test within your campaign and what impact you expect to see from making that change.
If you test A versus B without a clear hypothesis, and B wins by 15 percent, what have you learned from this? Nothing. What’s important is what you learn about your audience. This insight helps improve your current campaign as well as all future campaigns.
For example, suppose you want more people to download your eBook, but you presume they won’t because they don’t have time to read. Then from your research you learn that people almost always read the first subhead on your landing page. In this scenario, your hypothesis might be: “By tweaking the copy in the first subhead to directly address the time issue, I can motivate more visitors to download the eBook.”
As a result of your research and hypothesis, the first subhead on your campaign’s landing page is your test variable.
The next step is testing your hypothesis to prove or disprove its validity. To set up the A/B test, create two versions of the variable you are going to test — in this case, the copy in the first subhead. To address the possible problem you’ve identified, one version of the subhead could say: “Download this eBook — it takes only 25 minutes to read.”
After launch, half of your visitors will be presented with version A and half will be presented with version B.
The ideal run time for an A/B test is determined by when it reaches statistical significance. Statistical significance is calculated from the total number of users who saw each version of the test and the number of conversions in each version. The standard confidence level is 95 percent.
Online calculators, like the free barebones version of Dr. Pete’s split-test calculator, can determine whether you’ve reached a 95 percent confidence level and, if not, roughly how many more users you’ll need to get there.
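If you’d like to see what such a calculator does under the hood, here is a rough Python sketch of a two-proportion z-test, a standard way to estimate the confidence level from visitor and conversion counts. The function name and the example numbers are made up for illustration, not taken from any particular tool.

```python
from statistics import NormalDist

def confidence_level(conv_a: int, visitors_a: int,
                     conv_b: int, visitors_b: int) -> float:
    """Approximate confidence (in percent) that versions A and B truly
    convert differently, using a two-sided, two-proportion z-test."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    std_err = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = abs(rate_a - rate_b) / std_err
    p_value = 2 * (1 - NormalDist().cdf(z))  # two-tailed p-value
    return (1 - p_value) * 100

# Example: 120 conversions out of 2,400 visitors vs. 150 out of 2,400.
print(f"{confidence_level(120, 2400, 150, 2400):.1f}% confidence")  # ~94%, not yet significant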
A good rule of thumb is that each variation should have at least 100 conversions and the test should run for at least two weeks.
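As a rough cross-check on that rule of thumb, the standard sample-size formula for comparing two proportions can estimate how many visitors each variation needs before you even launch. The 5 percent baseline rate and 20 percent target lift below are assumed values for illustration, not benchmarks.

```python
from statistics import NormalDist

def visitors_per_variation(baseline_rate: float, relative_lift: float,
                           confidence: float = 0.95, power: float = 0.80) -> int:
    """Approximate visitors needed in each variation to detect a given
    relative lift over the baseline conversion rate (two-sided test)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)                      # ~0.84 at 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(variance * ((z_alpha + z_beta) / (p2 - p1)) ** 2) + 1

# Example: a 5 percent baseline rate and a hoped-for 20 percent relative lift
# call for roughly 8,200 visitors per variation.
print(visitors_per_variation(0.05, 0.20))
```

The smaller the lift you hope to detect, the more visitors you need, which is why low-traffic campaigns often have to run longer than two weeks.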
After you’ve given your test enough time and compared the conversion rates of each version, you can declare a winner. Then apply the winning element to your entire campaign.
Also, report the results to all relevant departments, so everyone can learn from the test. For example, let key personnel in marketing, information technology, and UI/UX share in the knowledge so they can apply the insight.
Don’t stop with one A/B test. Continue testing relevant elements of your campaigns until you feel you’ve boosted your conversion rates to a satisfactory degree.
Using A/B testing to incrementally improve your campaign conversion results is a smart marketing move. While A/B testing takes time to plan, execute, and analyze, it can help ensure you are achieving your campaign goals — and that your campaigns are boosting your overall sales and revenue in the long run.
Karen Taylor is a professional freelance content marketing writer with experience writing for over 100 companies and publications. Her experience includes the full range of content marketing projects — from blogs, to white papers, to ebooks. She has a particular knack for creating content that clarifies and strengthens a company’s marketing message, and delivers optimum impact and maximum results. Learn more at KarenTaylorWrits.com.