A/B Testing

In simple words, A/B testing compares two different versions (A and B) of your website to learn which version is more effective. It is primarily a user research methodology that helps us understand and measure how a variation performs against the current user experience.

It is an application of statistics, specifically two-sample hypothesis testing, wherein the purpose is to determine whether the difference between two populations is statistically significant.
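
As an illustration of the statistics involved (a standard textbook formulation, not tied to any particular tool), when the metric is a conversion rate the comparison is often done with a two-proportion z-test:

```latex
% Two-proportion z-test, where x_A, x_B are conversions and
% n_A, n_B are visitors in groups A and B
\hat{p} = \frac{x_A + x_B}{n_A + n_B}
\qquad
z = \frac{\hat{p}_A - \hat{p}_B}{\sqrt{\hat{p}\,(1 - \hat{p})\left(\frac{1}{n_A} + \frac{1}{n_B}\right)}}
```

If |z| exceeds roughly 1.96, the difference between the two versions is statistically significant at the conventional 5% level (two-sided).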

What can we A/B test?

Also known as split testing or bucket testing, A/B testing lets us measure progress over time by comparing the performance of elements such as:

  • Design of a single button element
  • Product messaging
  • Structure of your website
  • Landing pages
  • Logo
  • Colours
  • Visuals
  • CTA text, colour, and placement
  • A feature update
  • An entire user interface

Why should Product Managers do A/B tests?

As a Product Manager, you need to know what the data says before making any changes. Through A/B tests, we can learn in depth about our customers (what they like and dislike, and how they react to changes) and focus our efforts on building what customers actually want. We can work out whether to implement a new feature, and how to increase conversions, reduce bounce rate, increase traffic to the site, lower the cart abandonment rate, improve content engagement, and so on.

How should we do A/B tests?

1. Research
Researching the current website gives us data on how it is performing. We can assess which pages have the highest bounce rates, where most visitors spend their time on the site, the number of users, scrolling behaviour, etc.

2. Define your objective
Once you have collected the data, analyse it and define what you want to achieve by running the A/B test, e.g. do you want more visitors to click the CTA?

3. Create variations
Assuming you plan to split your traffic 50/50, show the original version of your website to half of your traffic (the control) and a modified version to the other half (the variation). You can also split your traffic 60/40 or 70/30, depending on how you want to run the test. Just ensure that each visitor sees the same variant every time they return, for as long as the test runs, as in the sketch below.
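
One common way to keep assignments sticky is to hash a stable user identifier into a bucket, so a returning visitor always lands in the same variant without storing any state. A minimal sketch in Python (the experiment name, user ID, and weights are hypothetical):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, weights=None):
    """Deterministically assign a user to a variant.

    Hashing (experiment + user_id) maps each user to a stable point
    in [0, 1], so the same visitor always sees the same variant.
    """
    weights = weights or {"control": 0.5, "variation": 0.5}
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if bucket <= cumulative:
            return variant
    return "control"  # guard against floating-point rounding

# Hypothetical 60/40 split for a CTA experiment
print(assign_variant("user-123", "cta-test", {"control": 0.6, "variation": 0.4}))
```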

4. Run the test
Choose the tool you want to run the test with, and decide how long the test should run: a week, two weeks, or more. The duration should be long enough to collect the sample size you need for a meaningful result.
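
How long is long enough? A sample-size calculation gives an estimate. Below is a rough sketch using the standard formula for comparing two proportions; the baseline rate and minimum detectable effect are invented inputs:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion test.

    baseline: current conversion rate (e.g. 0.05 for 5%)
    mde: minimum detectable effect, absolute (e.g. 0.01 for +1 point)
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return ceil(n)

# Hypothetical example: 5% baseline, want to detect a lift to 6%
print(sample_size_per_variant(0.05, 0.01))  # about 8,200 visitors per variant
```

Dividing the required sample size by your daily traffic per variant gives a realistic test duration.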

5. Analyse the results
This is one of the most important steps. There are three possible outcomes: the control wins, the variation wins, or there is no significant difference. While analysing the results, consider how the metrics were impacted by the variation. If the results are inconclusive, take those insights and apply them in your next test.
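
To make the comparison concrete, here is a minimal Python sketch of the two-proportion z-test described earlier, applied to invented results (all counts are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return z, p_value

# Hypothetical results: control 400/8000 (5.0%), variation 480/8000 (6.0%)
z, p = two_proportion_z_test(400, 8000, 480, 8000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05, so the variation wins here
```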

Conclusion

Choosing the correct software to run the A/B test is very important. There are plenty of options available, such as Optimizely, KISSmetrics, Visual Website Optimizer (VWO), and Crazy Egg. Keep in mind that external factors also impact the results of your test. And avoid watching a large number of metrics at the same time; the more metrics you compare, the more likely it is that random fluctuations will look like real effects.