It goes without saying that if you want your digital marketing efforts to be results-driven, then you need to measure results – and an important part of making sure you're going to get results is testing.
Split testing, commonly called A/B testing, allows you to evaluate the impact of a potential change on your website or in a marketing campaign.
It's a pretty simple concept. "A" is the status quo; "B" is a change you're considering. You use one of the many online testing tools available to see whether your proposed change increases conversions.
The goal of A/B testing is, of course, conversions. Whether you're trying to make a direct sale or gain prospects by collecting email addresses, the little gem that triggers conversion is your call to action (CTA).
So let's look at how you can test ideas for improving the effectiveness of your CTA.
1. Figure out what you'd like to test
You can evaluate almost any element of a webpage with a split test, but you should pick a single element to test at a time to keep your results unambiguous. For example, let's say you were to test the following two pages against one another:
- Version A: Uses "Headline A" and a red CTA button
- Version B: Uses "Headline B" and a grey CTA button
And let's say Version B tested better – but which element made the difference? Was it the headline or the CTA colour? This is the primary reason why you only want to test one element at a time.
With your CTA, you might consider changing the colour, font, copy or position on your page or email to garner a different result.
So develop your hypothesis ("I think changing the colour from grey to red will increase conversions") and test one change at a time.
In other tests, you can compare the impact of a copy change, and then your hypothesis would become: "I think stating 'Request Your Free Newsletter' will perform better than 'Sign Up for Our Newsletter'".
The important thing is to be clear on what you're testing and why.
2. Define your specific goal and how to measure it
For simplicity's sake, let's say you've decided to test the colour of your CTA button. You need to be clear on your goal and how you will measure it.
For example, you may be tempted to measure your results in actual sales or conversions, but you'll want to get a bit more granular, and here's why:
Imagine your CTA takes your prospects to a landing page where they are presented with an offer – but other elements on that page may also influence the user's decision to convert. Because of these potential confounding variables (e.g. the sales copy and images on your destination page), conversions or leads generated are not the best measurement for your test.
Instead, you will simply measure the number of clicks on the CTA itself to see which version performs better. Remember, the goal of your analysis is to determine which of your two CTA variations generates the most clicks.
To put it simply, does the red button or the grey button get more clicks?
3. Set up your control and variation
Your "control" in testing language is your "A", or status quo. Your "treatment" is the "B", the variation that includes the change you're going to test.
In the example, "A" has our CTA in dark grey; version "B" is red.
Most importantly, all other elements on the page should be exactly the same. The only visible difference should be the element you are actively testing.
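If you're curious what your testing platform is doing under the hood (or want to split traffic yourself), the core mechanic is simply a sticky 50/50 random assignment: each visitor is bucketed into control or treatment, and always sees the same version on repeat visits. Here's a minimal sketch in Python – the `visitor_id` format and bucket names are illustrative, not tied to any particular tool:

```python
import hashlib

def assign_bucket(visitor_id: str) -> str:
    """Deterministically assign a visitor to 'control' (A) or 'treatment' (B).

    Hashing the visitor ID keeps the assignment sticky: the same
    visitor always lands in the same bucket on every visit.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "treatment"

# Across many visitors, the split comes out roughly 50/50.
buckets = [assign_bucket(f"visitor-{i}") for i in range(10_000)]
share_b = buckets.count("treatment") / len(buckets)
print(f"treatment share: {share_b:.2%}")
```

Because the assignment is derived from the visitor ID rather than stored, it needs no database and survives page reloads, which is why hash-based bucketing is a common design choice.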
4. Start your testing
I'm assuming here that you've already selected a testing platform. So now that you've decided what to test, how you'll measure results, and what your control and treatment are, you're ready to get your test underway.
You have to create the content and graphics that you need for your control and treatment. In the example here, that's the grey CTA and the red CTA shown below.
You'll see that the only difference is the colour. Later we might test other variables such as shape, text (content) or position on the page. For this first test, we're only interested in whether the colour impacts the number of clicks on the CTA element.
Variation A:
Variation B:
5. Drive traffic to your test
To get enough results for your test to be statistically significant, you need a lot of action on your page during the test.
This requires you to know what typically drives traffic to your site – but not just any traffic. Existing customers, for instance, are not likely prospects for clicking on your CTA. Nor are those who have already subscribed to your newsletter or whatever else you're offering for lead generation. For this test, you need a mass of new visitors to your site.
A side benefit of A/B testing is that you can use it to simultaneously test methods for driving website traffic.
For instance, if you tailor a promotion to a Facebook demographic, you can test the ability of your social media campaign to drive traffic and (assuming the campaign is successful) also get the numbers you need to check your red CTA against your existing grey one.
6. Gather data
Marketers attempting A/B testing for the first time often question how long the test needs to run.
Unfortunately, there's no easy answer. It's a waiting game. Put your promotion into hyper-drive until you have statistically significant results.
It can take as much as a month for your site traffic to yield significant results. Or you may find that what you're testing doesn't make enough difference to measure. In other words, maybe you'll find that the red button and the grey button perform equally well.
Thatās another possible outcome of split testing.
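"Statistically significant" has a concrete calculation behind it. One common choice for comparing two click-through rates is a two-proportion z-test; the click and visitor counts below are made up purely for illustration:

```python
import math

def two_proportion_p_value(clicks_a: int, visitors_a: int,
                           clicks_b: int, visitors_b: int) -> float:
    """Two-sided p-value for the difference between two click-through rates."""
    p_a, p_b = clicks_a / visitors_a, clicks_b / visitors_b
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Illustrative counts: grey button 120 clicks / 2,400 visitors,
# red button 168 clicks / 2,400 visitors.
p = two_proportion_p_value(120, 2400, 168, 2400)
print(f"p-value: {p:.4f}")  # below 0.05 -> the difference is unlikely to be chance
```

A p-value under 0.05 is the conventional threshold for calling a winner; if it stays above that, you're in the "performs equally well" outcome described above, and the honest move is to keep the test running or accept a null result.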
Testing for an appropriate amount of time will generate data that gives you an opportunity to look closely at your marketing funnel. That's the journey your web visitor takes from becoming aware of a product or service to taking action.
7. Analyze your marketing funnel
Since long before the internet, marketers have relied on the AIDA model: Awareness, Interest, Desire, Action. There are variations on this idea, but these are the basic steps a buyer takes before making a purchase – and you need to build your website design around this or a similar concept.
Even if you didn't get enough traffic for your results to be statistically significant (and better still if you did), you can analyze your results further with the AIDA model in mind.
Although the point of this test was only to measure clicks on the CTA, you can look further. For instance, you might see if there was any impact on the number of people who made that click and eventually converted by completing the offer on your landing page.
Everything you can learn about user behaviour on your site will help you improve the user experience and, thereby, conversions. So there is no insignificant data, just more opportunity!
Wrapping Up
There are so many variables that impact conversions: the time of year, the time of the month and even the time of day. As you wrap up your first A/B test, consider what you might want to examine next. Keep track of your data and results from A/B tests. By doing this, you can develop a robust testing program to improve user experience and conversions.