You’re serious about the quality of the products on your site or app and your customer service is flawless. Still, you face increasing competition and your customer churn is high, even among your most loyal audience.

The pressure on your team to prove ROI is huge, and yet marketing budgets have never been as stretched as they are today.

The good news is that brands today have access to a large volume of data, and have all the tools they need to know exactly what engages visitors and what puts them off. A/B Testing, or Split Testing, provides a scientific answer to a problem once solved by intuition alone.

It may be a widespread solution, but that doesn’t mean it’s foolproof.

To get the most out of A/B Testing, it’s crucial to plan ahead and be strategic from the start. If you skimp on preparation, you could stand to lose both time and money.

Let’s look at the reasons why.

What is A/B Testing or Split Testing?

A/B Testing, also known as Split Testing or Split URL Testing, is a process that helps marketers compare one or more versions of a web or app page against a control page. It helps teams understand which elements perform better for their audience.

Split URL Testing is slightly different in that each variant is hosted at its own URL, separate from the control page (visitors are generally unaware of this).

The aim of an A/B Test is to build different versions of a page, modifying one or more specific elements in each variant: copy, layout, color…

The audience is then split evenly into groups. Each group is exposed to one of the variants at random and for a set period. Analyzing visitors’ digital behavior and, more importantly, the conversion rate of each version reveals which variant performs better and should be shown to a wider audience.
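
To make the split concrete, here is a minimal sketch, in Python, of how an experimentation tool might assign visitors to variants. The visitor ID, experiment name and variant labels are illustrative assumptions, not any specific vendor’s API.

```python
import hashlib

# Hypothetical variant labels for a two-version test.
VARIANTS = ("control", "variant_b")

def assign_variant(visitor_id: str, experiment: str = "homepage-cta-test") -> str:
    """Deterministically bucket a visitor into one variant.

    Hashing the visitor ID together with the experiment name spreads
    visitors evenly across the variants, yet always returns the same
    variant for a given visitor for the duration of the test.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("visitor-42"))  # e.g. "control"
```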

Today, marketers are not the only ones making decisions about customer experience, and consumers directly influence optimizations. 

Why implement an A/B Testing strategy?

Let’s cut to the chase, shall we? The main reason to implement an A/B Testing strategy is for conversion rate optimization.

Acquiring traffic can be costly (Adwords, referral, PR…) and improving the experience is not only easier, it is also more cost-effective. And there are many advantages to A/B testing. A test allows you to:

  • Understand visitors: what elements positively or negatively influence your visitors’ subscription to a service or their add-to-cart rate? 
  • Keep only what works: engaging copy, most attractive color for a CTA, easy-to-fill-in form… 
  • Draw clear conclusions: your hypotheses are validated by data-driven analysis and not by mere intuition. 

Carrying out an A/B test the “traditional” way

Like heatmaps, the concept of A/B Testing is hardly new. Wikipedia describes an A/B Test as “a randomized experiment with two variants.” It’s impossible to speak about A/B Testing without going over the processes that have traditionally informed this type of marketing experiment. It’s worth noting, however, that these traditional processes are ill-equipped to handle the complex challenges of experience building in 2019.

But we’ll get to that in a bit. Generally speaking, a typical A/B Test follows these steps:

  • Formulate a hypothesis to improve the site or app,
  • Prioritize which tests to run,
  • Run the test,
  • Analyze results,
  • Implement changes,
  • Formulate new hypotheses, etc.

What A/B Testing allows you to test

The possibilities of Split Testing are almost infinite.

It is therefore imperative to identify objectives so you can keep elements to be tested to a minimum. 

If your objective, for example, is to increase the rate of purchase confirmations following add-to-carts, you might want to test:

  • The visibility of payment options, 
  • The effectiveness of online security certificates, 
  • The seamlessness of the checkout process,
  • And more…

By isolating each element in a separate variant, you will be able to learn what causes visitors to abandon their carts. 

In no particular order, here are other areas for optimization that A/B testing can help with:

  • Attractiveness of a title,
  • Impact of an image (banner, picture, photo, graph) or a video,
  • Design of a CTA button,
  • Size, color and font of the text,
  • Effectiveness of customer reviews,
  • Legibility of hyperlinks,
  • Simplicity of form filling,
  • Performance of landing pages,
  • Various elements of a newsletter (time sent, content, CTA, subject line, images),
  • Etc.

Why classic A/B Testing is no longer enough

A/B Testing as we know it no longer works. This might seem like a bit of a bold statement, and yet… 

While everyone agrees on the need to leverage data to improve the visitor journey and, ultimately, the conversion rate, the data-first mindset is not top of mind for all teams. In fact, a large number of A/B tests today are carried out with little to no analysis before implementation.

What does this mean? It means that dozens (sometimes hundreds) of tests are carried out on sites or apps without focus or any real certainty that an element is worth testing. And all this testing comes at a cost!

Teams are already overstretched and testing blindly is a waste of money and time, resulting in conclusions that are shaky to say the least. While there is no question that Split Testing has the potential to drive winning optimizations, teams must urgently rethink their strategy to prioritize the most critical tests and get the most out of their data. 

How can you optimize your A/B Testing strategy in 2019?

Our years of experience in UX and conversion rate optimization have helped us define a much more pragmatic approach to A/B testing. 

Effective A/B tests start with a pre-test analysis.  

Knowing you need to test is good. Knowing exactly which element(s) should be tested is critical.

At Contentsquare, we believe every A/B test should be based on a prior analysis. And this analysis should not be carried out lightly. Indeed, this crucial step enables teams to: 

  1. Localize the issue prior to testing
  2. Prioritize hypotheses or insights to be analyzed
  3. Verify these hypotheses with a managed process 
  4. Draw data-backed conclusions

This approach has helped us define our very own process for analyzing the performance of websites and apps and carrying out pertinent A/B Testing campaigns. Our method follows 4 steps:

Phase 1: Analysis

The analysis takes into account:

  • Interface
  • Time period
  • Target audience
  • Objectives 
  • Optimization areas
  • Actions to implement

This analysis allows teams to identify winning insight/recommendation pairs. 

Concretely, it’s about identifying a behavioral issue on the site or app (insight) and formulating a solution (recommendation) with the help of UX and UI teams.

Phase 2: Criteria

Because it’s impossible to test everything at once, it’s important to determine which insights will have the most impact and should be prioritized.

Criteria are based on:

  • Volume: a large volume of data will give rapid results and allow deeper segmentation,
  • Complexity: focusing on simple pairs of variants will allow teams to reduce bugs and deploy tests faster,
  • Impact: prioritizing pairs with quick and conclusive results,
  • Seasonality: prioritizing changes that are not heavily impacted by seasonal events (for example, Christmas),
  • Risk management: testing elements that do not risk negatively impacting elements outside the scope of the test.
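
One simple way to turn these criteria into a ranking is to score each candidate test against them. The sketch below is illustrative only: the 1-to-5 scale, the equal weighting and the example tests are assumptions, not a prescribed formula.

```python
# Each candidate test is scored 1-5 per criterion (5 = most favorable to run now).
CRITERIA = ("volume", "complexity", "impact", "seasonality", "risk")

candidates = {
    "simplify the checkout form": {"volume": 5, "complexity": 4, "impact": 5, "seasonality": 4, "risk": 4},
    "new homepage hero banner":   {"volume": 3, "complexity": 2, "impact": 3, "seasonality": 2, "risk": 3},
}

def priority_score(scores: dict) -> float:
    """Average the per-criterion scores (equal weights, purely illustrative)."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# Rank candidates from highest to lowest priority.
for name, scores in sorted(candidates.items(), key=lambda kv: priority_score(kv[1]), reverse=True):
    print(f"{name}: {priority_score(scores):.1f}")
```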

Phase 3: Strategy

If (and only if!) you have followed the steps needed to correctly determine the insight/recommendation pairs, then you are ready to start testing.

For best results, stick to:  

  • One test per period
  • One well-defined KPI 
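
Sticking to a single, well-defined KPI also makes it possible to estimate up front how many visitors (and therefore how long a test period) you need. The sketch below uses the standard two-proportion sample-size approximation; the 3% baseline conversion rate and the 0.6-point minimum lift are illustrative assumptions.

```python
from math import ceil
from statistics import NormalDist

def visitors_per_variant(baseline_rate: float, minimum_lift: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant to detect an absolute lift
    in conversion rate with a two-sided test at the given alpha and power."""
    p1, p2 = baseline_rate, baseline_rate + minimum_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / minimum_lift ** 2)

# e.g. detecting a move from a 3.0% to a 3.6% conversion rate
print(visitors_per_variant(0.030, 0.006))  # roughly 14,000 visitors per variant
```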

Phase 4: A/B Testing results

We won’t spend too long on this part because, as we mentioned earlier, the most important part of testing is the analysis you conduct before launching an A/B test campaign. 
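
That said, the comparison itself is straightforward. Here is a minimal sketch of how the conversion rates of the control and the variant can be compared with a standard two-proportion z-test once the test period ends; the visitor and conversion counts are illustrative, and real platforms may use different statistical models.

```python
from math import sqrt
from statistics import NormalDist

def compare_conversion_rates(conversions_a: int, visitors_a: int,
                             conversions_b: int, visitors_b: int):
    """Two-proportion z-test; returns the absolute lift of B over A and the p-value."""
    rate_a, rate_b = conversions_a / visitors_a, conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_b - rate_a, p_value

# Illustrative counts: 3.10% vs. 3.65% conversion over 10,000 visitors each.
lift, p = compare_conversion_rates(310, 10_000, 365, 10_000)
print(f"lift: {lift:+.2%}, p-value: {p:.3f}")  # a p-value below 0.05 is typically treated as significant
```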

To learn more about our made-to-measure CX and conversion rate optimization solutions, check out our platform’s capabilities.

With sophisticated data visualizations and easy-to-read, granular metrics, today everyone on the digital team can leverage customer behavior insights to improve the experience on their site or app.