A/B Testing - 6 Best Practices
A/B testing is a simple experiment to determine which variant of a product, copy, or webpage users prefer. For webpages, this is done by creating two versions of a page that differ in one small detail, and then testing user response to each version. You can use this to determine whether a small change has a positive or negative impact on users.
A/B tests are used by businesses in a wide variety of situations on the Internet. They can be used to test whether changing the colour of a button increases restaurant bookings, whether text modifications increase click-through rate, and which ticket price can be set to maximize revenue.
There are countless examples of successful tests. French video game giant Ubisoft managed to increase lead generation by 12% after its tests showed that it should reduce up-and-down scrolling on the page.
A/B testing is now a core part of user experience (UX) improvements and customer research. There are loads of tools that can be used for these tests.
However, your A/B tests will be useless if you do not set them up properly. To do this, you need a good understanding of the best practices for A/B testing. In this article, we will look at some of these best practices and how you can use them to improve your tests.
Test One Feature At A Time
The main purpose of A/B testing is to test whether a change to your website has a positive impact.
One of the important things to remember is that you are testing a single change. For example, if you have a “Buy” button for customers to buy tickets for an event, you may want to test moving the “Buy” button around on the page. If you do so, make sure you are only testing that single change.
If you make multiple changes at once, you will not know which of them had the biggest impact, or whether some of them actually had a negative impact.
So, if you want to move your “Buy” button around, don’t also change the event description (or anything else) in the same test. Your customer might like that the button has become more prominent, but hate the new description. Your A/B tests cannot tell you which change the customers liked unless you test them separately.
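One way to keep yourself honest here is to define each variant as data and check, before launch, that exactly one field differs between them. The sketch below is purely illustrative; the field names and values are made up and not tied to any particular testing tool.

```python
# A minimal sketch, assuming each variant is described by a plain dict.
# The field names ("button_position", etc.) are illustrative only.
control = {
    "button_position": "bottom",
    "button_colour": "blue",
    "event_description": "Join us for a night of live music.",
}

variant = dict(control, button_position="top")  # the ONE change under test

# Sanity check before launching: the test should vary a single feature.
changed = [key for key in control if control[key] != variant[key]]
assert len(changed) == 1, f"Test varies more than one feature: {changed}"
print("Testing a change to:", changed[0])
```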
Test Both Variants Simultaneously
There are some things that are out of your control when it comes to user behaviour. One of these is the timing of visits. Whether this is the time of day, day of the week, or month of the year, users will behave differently at different times.
If your website allows customers to book a table at a restaurant, you will find that it is more popular around the New Year period than a random Tuesday in May.
If you perform the testing for each of your variants at different times, this could have a big impact on the results that you gather. To remove this huge reliability problem, make sure that you test both variants over the same time period.
There is only one exception to this rule: when you are testing the impact of timing itself on your metrics. This could be the optimal time to send out a customer newsletter or the optimal time to offer a promotion. However, if your test does not depend on timing, remove it as a variable from your experiment.
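In practice, “simultaneously” usually means splitting live traffic between the two variants rather than showing variant A this week and variant B next week. Here is a minimal sketch of that idea, assuming you have some stable user identifier to hash on; dedicated testing tools handle this assignment for you.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically split live traffic 50/50 between the two variants.

    Hashing a stable user ID (rather than using the clock or a launch date)
    means both variants receive visitors over exactly the same period, and
    a returning user always sees the same variant.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Every visitor arriving right now is split between A and B at the same time.
for user in ["alice", "bob", "carol", "dave"]:
    print(user, "->", assign_variant(user))
```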
Don’t Make Changes Mid-Test
If you are following your statistics during your test, it can be exciting to see that your change is having great results. You might then want to rush and make some more changes.
Don’t make changes mid-test!
If you start making changes before your A/B test has been completed, you cannot be sure whether your results are reliable. While the tests are still ongoing, statistics will continue to change.
Stopping your test as soon as you see a positive result does not guarantee that the change was good. Imagine your test between two changes as a marathon race. The change leading after the first two miles is not necessarily the winner; you must stick it out until the end to see who wins.
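The statistical reason for waiting is that repeatedly peeking at the results and stopping the moment one variant pulls ahead inflates the chance of a false positive. A common approach, though not the only one, is to fix the sample size up front and run a significance test only once that sample has been collected. Below is a minimal sketch using a standard two-proportion z-test; the visitor and conversion counts are invented for illustration.

```python
from statistics import NormalDist

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing the conversion rates of two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts, evaluated only after the planned sample was reached.
z, p = z_test_two_proportions(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # declare a winner only if p < 0.05
```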
Gather A Large Enough Sample Size
When you see that your test has increased conversions by 100%, it can be incredibly exciting. But is your statistic reliable?
One thing that could affect the reliability of your results is your sample size. If your test was only conducted on 10 users, how can you trust that the tens of thousands of users who view your website each month will agree with the results?
Determining the ideal minimum sample size is not as difficult as you might think, and it all depends on the statistical tests that are used to prove your results are significant.
Optimizely provides a sample size calculator to work out the minimum sample size for each variation. To use this calculator, you need to know your current conversion rate and the minimum change that you would like to be able to detect.
For example, with an existing conversion rate of 5%, you will need a sample size of roughly 31,000 for each variation if you want to detect a 10% relative change.
In general, a higher existing conversion rate will lower the sample size needed. A higher minimum detectable change will also lower the sample size needed, but remember that the changes you test are usually small, so it is safer to keep the detectable change low in the calculator.
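If you are curious where a figure like 31,000 comes from, the standard two-proportion power calculation gives a similar answer. Note that Optimizely's calculator uses its own statistical methodology, so treat the sketch below as an approximation of the idea, not a reimplementation of that tool.

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_mde,
                              alpha=0.05, power=0.80):
    """Approximate visitors needed per variation for a two-proportion test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)         # rate we hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# 5% baseline conversion rate, 10% relative lift: roughly 31,000 per variation.
print(sample_size_per_variation(0.05, 0.10))
```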
When planning an A/B test, always use a sample size calculator. Otherwise, you run the risk of having an unreliable and useless test.
Track The Correct Metrics
You have decided on the webpage that you want to test. You have decided what feature you are going to change, and how you are going to change it. But what are you trying to test with your A/B test?
A major pitfall for new A/B testers is their failure to track the correct metrics.
In your initial stages of planning the test, you must ask yourself what you want to measure. For example, if you are changing the text above a form, you may want to know whether users are now more likely to complete that form. Once you know what you want to measure, you can ask yourself what existing metrics you have that can show this measurement.
Most websites use Google Analytics for the measurement of web statistics. Google Analytics provides hundreds of different measurements that you can use, and even the ability to set up A/B tests. You can look through the Analytics dashboards to find the correct metric.
So, for the example of a completed form, you could measure visits to the “form completed” page compared to visits to the form itself, giving you a completion ratio. Filtering out visits from your own employees can also make your results more reliable. Make sure that you account for interfering factors like these.
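As a concrete illustration of that completion ratio, here is a small sketch that computes it from raw page-view records and filters out internal visits first. The record format, page paths, and IP addresses are all made up; in practice you would pull these counts from your analytics tool rather than compute them by hand.

```python
# Illustrative only: a hand-rolled completion ratio over raw page-view records.
INTERNAL_IPS = {"10.0.0.5", "10.0.0.6"}  # visits from your own employees

page_views = [
    {"page": "/signup-form", "ip": "203.0.113.7"},
    {"page": "/form-completed", "ip": "203.0.113.7"},
    {"page": "/signup-form", "ip": "10.0.0.5"},        # internal visit, ignored
    {"page": "/signup-form", "ip": "198.51.100.23"},
]

external = [v for v in page_views if v["ip"] not in INTERNAL_IPS]
form_visits = sum(1 for v in external if v["page"] == "/signup-form")
completions = sum(1 for v in external if v["page"] == "/form-completed")

print(f"Completion ratio: {completions / form_visits:.0%}")  # 1 of 2 -> 50%
```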
Always Be Testing
“Always be testing”. One of the most beloved phrases by A/B testers everywhere.
Every day that you are not testing, you are wasting the traffic that is coming to your website. This traffic is the only organic source of data for your A/B tests.
Testing is also one of the main ways that you can get feedback on your website, and find ways to get more clicks, hits, and purchases. By constantly testing, you are constantly improving your website. By improving your website, you are improving your business. There is no reason not to be A/B testing at all times.
If the time to set up tests is an issue, then use a professional testing tool to assist you; Optimizely, whose sample size calculator was mentioned above, is one well-known example.
Best Practices For A/B Testing
A/B testing is a form of user testing where different users are given one of two variants of a webpage, and then their actions are tracked to see which variant gets more interactions.
This form of testing is crucial for businesses looking to optimize their website. It has proven benefits for an endless list of websites trying to improve their click-through rates, revenue, and user satisfaction.
The general idea of A/B testing is well understood. While there are a few possible ways of going about it, there are several best practices for A/B testing:
- Test one feature at a time
- Test both variants simultaneously
- Don’t make changes mid-test
- Gather a large enough sample size
- Track the correct metrics
- Always be testing
If you want to create reliable and useful A/B tests for your website, make sure that you follow these best practices.
Photo credits:
Shutterstock
Joseph Mucira from Pixabay
Tumisu from Pixabay