Driving Personalization through Marketing and A/B Testing

This article was written by our partner REO, as part of our series highlighting direct insights from our large ecosystem of partners.

In 2019, for the first time ever, digital ad spend represented more than 50% of total global ad spend. Whilst the UK was considerably ahead of this trend (63.8% of the UK’s total ad spend was attributed to digital in 2018, 66.4% in 2019), the US has now joined the group, with online ad spend going from 48.6% in 2018 to 54.2% in 2019. With eMarketer forecasting 17.6% year-on-year growth (to $333.25 billion) in worldwide digital ad spend, the need to ensure each of your marketing channels is delivering the best possible ROI has never been greater.

Within the conversion rate optimization (CRO) space, most brands conduct A/B testing without fully considering which marketing channel or source their customers have come from. Customers are typically bucketed into various user segments based on their purchase history, onsite behavior, geographic and demographic data. However, users within the same audience segment can often demonstrate varying behavioral attributes when navigating through the purchase funnel, across countless online and offline touchpoints.

Let’s Take Paid Search as an Example

If a user arrives on your website via paid search, you already know what they searched for and which ad they clicked on; however, users who click on the same ad, but searched for different terms/items, will often experience the same customer journey. For instance, if a customer has searched for “luxury men’s white shirt” – not only do you know the item they are looking for, you also know they are looking at the higher end of the market.

A/B Testing the landing page a user is taken to is quite common, but you can go a step further and explore how to change the experience for the customer based on their search criteria.

A potential test could involve taking these users straight to a Product Listing Page (PLP) displaying all the available men’s white shirts, pre-sorted by highest price first. This can develop into personalization if the user has visited the site previously, within the cookie period; e.g. by storing size data within the cookie, you could pre-select the shirt size the user filtered by on their previous visit.
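As a rough sketch of what that could look like, here is a minimal TypeScript example of cookie-based pre-filtering; the preferred_size cookie name and the sortBy/applyFilter helpers are hypothetical stand-ins for your storefront’s own API.

```typescript
// Minimal sketch: pre-applying a returning visitor's saved size filter.
// The cookie name and the sortBy/applyFilter helpers are hypothetical.

function readCookie(name: string): string | null {
  const match = document.cookie
    .split("; ")
    .find((entry) => entry.startsWith(`${name}=`));
  return match ? decodeURIComponent(match.split("=")[1]) : null;
}

function personalizePlp(): void {
  // Sort the listing by highest price first for "luxury" search intent.
  sortBy("price", "desc");

  // If the visitor filtered by size on a previous visit (within the
  // cookie's lifetime), pre-select that size for them.
  const savedSize = readCookie("preferred_size");
  if (savedSize) {
    applyFilter("size", savedSize);
  }
}

// Hypothetical storefront helpers, declared so the sketch compiles.
declare function sortBy(field: string, direction: "asc" | "desc"): void;
declare function applyFilter(facet: string, value: string): void;
```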

Reducing the number of clicks and filters it takes a user to find their item can only have a positive impact on conversion rate, especially on mobile. So, by showing a customer the items they’re looking for, sorted by their desired price point and filtered by their size, you will make the purchase journey more tailored to that specific customer.

Understanding a visitor’s context (location, date and time of day, device, internet connection, etc.) as well as their intent (are they here to complete a quick purchase, to research and compare products, to seek inspiration, to test a coupon, etc.) adds an invaluable layer of behavioral understanding to your analysis, and will allow you to execute a more impactful form of personalization.
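As a loose illustration, here is a minimal TypeScript sketch of capturing some of that context on the client; the field names are hypothetical, and navigator.connection is a non-standard API that is not available in all browsers.

```typescript
// Minimal sketch: capturing basic visit context for segmentation.
// All field names are illustrative.

interface VisitContext {
  device: "mobile" | "desktop";
  hourOfDay: number;
  connection: string;
  landingReferrer: string;
}

function captureContext(): VisitContext {
  // navigator.connection is non-standard; fall back when unavailable.
  const nav = navigator as Navigator & {
    connection?: { effectiveType?: string };
  };
  return {
    device: /Mobi|Android/i.test(navigator.userAgent) ? "mobile" : "desktop",
    hourOfDay: new Date().getHours(),
    connection: nav.connection?.effectiveType ?? "unknown",
    landingReferrer: document.referrer,
  };
}
```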

Making the Affiliation between A/B Testing and Voucher/Cashback Partners

By applying this testing method to the affiliate channel, you can optimize the largest click and revenue drivers; namely voucher and cashback websites. After all, you can already assume that users coming from these two affiliate types are both online-savvy and price-sensitive.

Voucher and discount websites should have a conversion rate of at least 20-25% on mature affiliate programs – so any of these affiliates with a lower conversion rate represents an opportunity for incremental revenue. For cashback sites, expect this figure to be upwards of 40%.

A test idea for these two affiliate types could be to reinforce the discount or cashback offer listed on the affiliate’s website. For instance, if the deal was “Save £15 when you spend over £100” – you could use a “loading bar” at the top of the page which gradually fills up as the user adds items to their basket, until they hit the spend threshold that activates the discount.
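A minimal TypeScript sketch of such a loading bar follows; the element IDs and copy are hypothetical, and you would wire updateDiscountBar to whatever basket-update event your site exposes.

```typescript
// Minimal sketch: a progress bar that fills as the basket approaches the
// "Save £15 when you spend over £100" threshold.

const SPEND_THRESHOLD = 100; // £100 unlocks the £15 discount

function updateDiscountBar(basketTotal: number): void {
  const bar = document.getElementById("discount-progress");
  const label = document.getElementById("discount-label");
  if (!bar || !label) return;

  const progress = Math.min(basketTotal / SPEND_THRESHOLD, 1);
  bar.style.width = `${Math.round(progress * 100)}%`;

  label.textContent =
    progress >= 1
      ? "You've unlocked £15 off!"
      : `Spend £${(SPEND_THRESHOLD - basketTotal).toFixed(2)} more to save £15`;
}
```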

For cashback sites, you could test a cashback calculator onsite, which automatically calculates the amount of cashback the user will earn if they purchase everything currently in their basket. This type of gamification can be incredibly effective in increasing the number of units per sale and, in turn, the average order value.
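Here is a minimal TypeScript sketch of such a calculator, assuming a flat 5% cashback rate and an illustrative basket-item shape.

```typescript
// Minimal sketch: an onsite cashback calculator. The rate and the
// BasketItem shape are illustrative assumptions.

interface BasketItem {
  name: string;
  price: number;
  quantity: number;
}

const CASHBACK_RATE = 0.05; // e.g. 5% cashback offered by the affiliate

function cashbackEarned(basket: BasketItem[]): number {
  const total = basket.reduce(
    (sum, item) => sum + item.price * item.quantity,
    0
  );
  // Round to two decimal places for display.
  return Math.round(total * CASHBACK_RATE * 100) / 100;
}

// Example: a £120 basket at 5% => £6.00 cashback
console.log(cashbackEarned([{ name: "Shirt", price: 60, quantity: 2 }]));
```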

Serve Less Content, but More Dynamically

“Content is King” – we’ve all heard it before, but how can you be smarter in how you serve it? Content, and specifically dynamic content, is another channel where source-based A/B testing can improve engagement, click-through rates and leads/sales. If you know the article or blog post a user has come from, you can use this insight to serve them relevant and dynamic content, making their customer journey more seamless and less detached across the two sites.

User journey analysis shows that visits to content sites usually happen in the “Discovery Phase” of the sales funnel – including on product review sites, influencer social posts, news/magazine sites and blogs. Such content is informative and persuasive; perfect to push the user towards the bottom of the funnel.

Some of the more content-heavy merchants, such as insurance brands or high-end technology retailers, will have an eclectic and extensive array of content across their website, making navigation more muddled. A solution? Reduce the amount of content on-site and instead store the less frequently visited content pages elsewhere, to be served dynamically when relevant.

For example, if a user looking to buy insurance is reading up on excess and the impacts it has on a claim and future premiums, the existing content about excess could be tweaked accordingly – which could be as simple as changing the title of an article, calling out the keywords or changing the order of the content on that page.
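As a loose illustration, a minimal TypeScript sketch of referrer-based headline tweaking might look like this; the referrer patterns and titles are hypothetical.

```typescript
// Minimal sketch: adapting a page's headline based on the referring
// article. Patterns and headline copy are illustrative.

const HEADLINES: Array<{ pattern: RegExp; title: string }> = [
  { pattern: /excess/i, title: "How Excess Affects Your Claim and Premium" },
  { pattern: /no-claims/i, title: "Protecting Your No-Claims Bonus" },
];

function personalizeHeadline(): void {
  const referrer = document.referrer;
  const match = HEADLINES.find(({ pattern }) => pattern.test(referrer));
  if (match) {
    const heading = document.querySelector("h1");
    if (heading) heading.textContent = match.title;
  }
}
```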

Again, a granular analysis of how customers are interacting with individual elements of content will help paint the complete picture of engagement. Measuring clicks alone will only tell one part of the customer behavior story: tracking metrics such as exposure, attractiveness and conversion rate per click (to name a few) will give a more complete view of how content is contributing to (or stalling) the user journey.

As the capabilities of A/B testing and personalization platforms continue to evolve, the way you test and analyze a customer journey should follow suit. One of the major challenges of channel/source-specific testing can be a lack of traffic volume. If you have insufficient traffic, it will take a long time for a test to reach statistical significance. For example, the 5th highest paid search term, or the 4th largest voucher site, probably won’t have the volume to justify running an A/B test.
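As a rough guide to whether a source has enough volume, here is a minimal TypeScript sketch of the standard normal-approximation sample-size formula for comparing two conversion rates; the 95% confidence and 80% power settings are conventional defaults, and the example inputs are illustrative.

```typescript
// Minimal sketch: rough per-variant sample size for a two-sided A/B test
// at 95% confidence and 80% power, via the normal approximation.

function sampleSizePerVariant(
  baselineRate: number, // e.g. 0.05 for a 5% conversion rate
  minDetectableLift: number // relative lift, e.g. 0.10 for +10%
): number {
  const zAlpha = 1.96; // 95% confidence (two-sided)
  const zBeta = 0.84; // 80% power
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + minDetectableLift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator / (p2 - p1)) ** 2);
}

// Example: detecting a +10% relative lift on a 5% baseline needs roughly
// 31,000 visitors per variant.
console.log(sampleSizePerVariant(0.05, 0.1));
```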

Want to Know More?

Contact us! REO is a digital experience agency. We are an eclectic mix of bright and creative thinkers, embracing the best of research, strategy, design and experimentation to solve our clients’ toughest challenges. We work across a variety of sectors, with companies such as Amazon, M&S, Tesco and Samsung. 

Also invaluable to our company is our network of partners, including Contentsquare, which allows our customers to capture the nuances of their end users’ behavior for even more sophisticated segmentation and, ultimately, deeper personalization.

Whatever the challenge may be, REO applies design thinking to identify and deliver big growth opportunities.

 

Hero image: Adobe Stock, via blankstock

Driving Innovation: How Brooks Bell is Helping Brands Achieve Experimentation Excellence

At Contentsquare, we have a rich ecosystem of technology and strategic partners, built around the needs and business objectives of customer-centric companies and experience-driven brands.

We spoke with Gregory Ng, the CEO of Brooks Bell, and asked him for his thoughts on experimentation and personalization in the age of experience.


Can you tell us a bit more about Brooks Bell?

Founded in 2003, Brooks Bell is a consulting firm focused on building world-class experimentation programs for enterprise brands.

Working out of our headquarters in Raleigh, NC, we’ve spent the last 16 years helping companies better leverage their data, technology, and workforce to learn about their customers and deliver a smarter and more profitable online experience.

Our team is 43-strong and made up of creative thinkers, data scientists, developers and strategists. Everyone—from our operations team to our senior leadership—has a genuine appreciation for the art and science of optimization and a deep understanding of the challenges of experimentation at top-tier companies.

Our client roster consists of many large enterprises and recognizable brands that have trusted our team to assess their experimentation maturity and consult on multi-year “test and learn” roadmaps to achieve true customer-centricity.

What are some of the different ways you work with businesses?

Most of our engagements begin with a maturity assessment to benchmark and measure the growth of an experimentation program. This comprehensive, data-driven review scores your program against our proprietary framework consisting of six main categories: culture, team, technology, process, strategy and performance. The results of this assessment are used to create an actionable roadmap to get your program to the next level. What that roadmap looks like and the scope of our services depends on where your program lies on the maturity spectrum.

For clients that are very early in their experimentation journey, we offer a “we do, they watch” type of partnership. In this, our team comes in and fully manages a client’s experimentation program: learning their business and customers, organizing data, building a strategy, launching tests and analyzing and reporting the results. This partnership model is most effective for programs that need to prove the value of testing before going all in.

For clients that are a little further along, we take a more collaborative approach, focused on teaching what is needed to build a high-functioning program. In this type of partnership, our team works alongside theirs. As we run end-to-end tests, we teach the team our methodologies, practices and frameworks. Through this model, we’re able to build the foundational knowledge and practices to set the experimentation program up for scale.

Finally, as the experimentation practice becomes more mature, we transition our services to be less tactical and more strategic. We’ve helped many clients bring their experimentation efforts fully in-house through building training and on-boarding programs, aligning the experimentation process across teams, establishing an Experimentation Center of Excellence, and offering strategic advice in response to new trends, technologies and business challenges.

How critical is experimentation for driving innovation today?

Critical is putting it lightly. 

In order to compete in today’s market, companies need to have a scientifically sound method in place to learn about customers, to change and to innovate—all while limiting risk, streamlining operations and reducing costs. Experimentation offers the best way to accomplish all of that.

That means, for us, our value is not simply in running tests and helping our clients make more money—though that is definitely a major outcome of our efforts (and one that we’re very proud of). Rather, our work is about empowering our clients with the data, skills, processes and technology to use testing to glean powerful customer insights AND operationalize those insights across their entire organization.

How do you help brands elevate their experimentation/personalization strategy?

Our Maturity Assessment is really only the tip of the iceberg here. Over the last 16 years, we’ve built and honed many frameworks, training programs, practices and even proprietary technology to help our clients elevate their testing and personalization strategies.

For instance, after witnessing some very messy brainstorming sessions, we developed our ideation methodology, which provides a guided approach to developing and prioritizing test ideas in a large, cross-functional group.

Our Insights framework offers a method for connecting your experiment results to bigger picture customer theories and insights.

And finally, we built Illuminate™, our testing and insight management software, to help program managers store, share and learn from their A/B test results. Fun fact: Illuminate was originally built as an internal tool to help us keep track of our client’s tests. In 2018, after many years of tweaking, testing, gathering feedback (and some rave reviews from our clients), we decided to make it available to the public.

These are just a few examples of how we provide value to clients. I should also add that we host Click Summit, an annual conference where digital leaders gather to swap ideas and share tips on testing, personalization, analytics, and digital transformation.

Click Summit does away with all the typical things you’d find at a tech conference: sales pitches, PowerPoint presentations and fireside “chats” held in giant auditoriums. Instead, the agenda is built around a series of small-group (15 people) conversations, each focused on a specific topic.

With attendance limited to just 100 digital leaders, it’s a unique opportunity to tackle your biggest challenges by talking them through with people who have been there before.

What constitutes a good partnership for you?

We love partnering with companies and tech providers (like Contentsquare!) who share our vision of helping our clients find the people within their data and seek to make every day better through optimization.

There are tons of ways in which we can translate Contentsquare’s excellent user experience analytics into optimization opportunities.

Here are a few off the top of my head:

What are your plans for the future?

When Brooks Bell was founded back in 2003, testing was in its infancy. Now, it’s rare that we come across a client that hasn’t run at least a few tests. This is exciting! It means we get to focus on working even closer with our clients and making a bigger impact.

I’m talking more than just conversion increases and revenue lift. The task before us no longer ends at proving the value of experimentation. We’re now in the business of generating insights. By helping companies learn about their customers and fostering experimentation at a cultural level, our clients will be equipped to deliver the best digital experience for their customers.

Investing in experimentation requires taking both a short and long-term view. We look forward to celebrating the day-to-day wins with our community, while also staying focused on the vision of building customer-centric, digitally-forward and insights-driven organizations.

 

A Data-Driven Approach to A/B Testing in 2019

You’re serious about the quality of the products on your site or app and your customer service is flawless. Still, you face increasing competition and your customer churn is high, even among your most loyal audience.

The pressure on your team to prove ROI is huge, and yet marketing budgets have never been as stretched as they are today.

The good news is that brands today have access to a large volume of data, and have all the tools they need to know exactly what engages visitors and what puts them off. A/B Testing, or Split Testing, provides a scientific answer to a problem once solved by intuition alone.

It may be a widespread solution, but that doesn’t mean it’s foolproof.

To get the most out of A/B Testing, it’s crucial to plan ahead and be strategic from the start. If you skimp on preparation, you could stand to lose both time and money.

Let’s look at the reasons why.

What is A/B Testing or Split Testing?

A/B Testing, also known as Split Testing or Split URL Testing, is a process that helps marketers compare one or more versions of a web or app page against a control page. It helps teams understand which elements perform better for their audience.

Split URL Testing is slightly different because the control version also has a different URL (visitors are generally unaware of this).

The aim of an A/B Test is to build different versions of a page or screen, modifying one or more specific elements in each variant: copy, layout, color…

The audience is then split evenly into groups. Each group is exposed to one of the variants at random and for a set period. Analyzing visitors’ digital behavior and more importantly, the conversion rate of each version, reveals which variant performs better and should be shown to a wider audience.
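As a loose illustration of how that split can be implemented, here is a minimal TypeScript sketch of deterministic, hash-based bucketing — one common approach, not necessarily what any particular platform uses; the experiment name and user ID are illustrative.

```typescript
// Minimal sketch: deterministic variant assignment via hashing, so a
// returning visitor always sees the same variant.

function assignVariant(
  userId: string,
  experiment: string,
  variants: string[]
): string {
  // Simple FNV-1a string hash; stable across sessions for the same user.
  let hash = 2166136261;
  for (const char of `${experiment}:${userId}`) {
    hash ^= char.charCodeAt(0);
    hash = Math.imul(hash, 16777619);
  }
  return variants[Math.abs(hash) % variants.length];
}

// Example: user "u-123" is consistently bucketed into one of two variants.
console.log(assignVariant("u-123", "plp-sort-order", ["control", "price-desc"]));
```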

Today, marketers are not the only ones making decisions about customer experience, and consumers directly influence optimizations. 

Why implement an A/B Testing strategy?

Let’s cut to the chase, shall we? The main reason to implement an A/B Testing strategy is for conversion rate optimization.

Acquiring traffic can be costly (Adwords, referral, PR…) and improving the experience is not only easier, it is also more cost-effective. And there are many advantages to A/B testing. A test allows you to:

Carrying out an A/B test the “traditional” way

Like heatmaps, the concept of A/B Testing is hardly new. Wikipedia describes an A/B Test as “a randomized experiment with two variants.”

It’s impossible to speak about A/B Testing without going over the processes that have traditionally informed this type of marketing experiment. It’s worth noting, however, that these traditional processes are ill-equipped to handle the complex challenges of experience building in 2019.

But we’ll get to that in a bit. Generally speaking, a typical A/B Test follows these steps:

What A/B Testing allows you to test

The possibilities of Split Testing are almost infinite.

It is therefore imperative to identify objectives so you can keep elements to be tested to a minimum. 

If your objective, for example, is to increase the rate of purchase confirmations following add-to-carts, you might want to test:

By isolating each element in a separate variant, you will be able to learn what causes visitors to abandon their carts. 

In no particular order, here are other areas for optimization that A/B testing can help with:

Why classic A/B Testing is no longer enough

A/B Testing as we know it no longer works. This might seem like a bit of a bold statement, and yet… 

While everyone agrees on the need to leverage data to improve the visitor journey and, ultimately, the conversion rate, the data-first mindset is not top of mind for all teams. In fact, a large number of A/B tests today are carried out with little to no analysis before implementation.

What does this mean? That dozens (sometimes hundreds) of tests are carried out on sites or apps without focus, and without knowing whether an element is indeed worth testing. And all this testing comes at a cost!

Teams are already overstretched and testing blindly is a waste of money and time, resulting in conclusions that are shaky to say the least. While there is no question that Split Testing has the potential to drive winning optimizations, teams must urgently rethink their strategy to prioritize the most critical tests and get the most out of their data. 

How to optimize your A/B Testing strategy in 2019

Our years of experience in UX and conversion rate optimization have helped us define a much more pragmatic approach to A/B testing. 

Effective A/B tests start with a pre-test analysis.  

Knowing you need to test is good. Knowing exactly which element(s) should be tested is critical.

At Contentsquare, we believe every A/B test should be based on a prior analysis. And this analysis should not be carried out lightly. Indeed, this crucial step enables teams to: 

  1. Locate the issue prior to testing
  2. Prioritize hypotheses or insights to be analyzed
  3. Verify these hypotheses with a managed process 
  4. Draw data-backed conclusions

This approach has helped us define our very own process for analyzing the performance of websites and apps and carrying out pertinent A/B Testing campaigns. Our method follows 4 steps:

Phase 1: Analysis

The analysis takes into account:

This analysis allows teams to identify winning insight/recommendation pairs. 

Concretely, it’s about identifying a behavioral issue on the site or app (insight) and formulating a solution (recommendation) with the help of UX and UI teams.

Phase 2: Criteria

Because it’s impossible to test everything at once, it’s important to determine which insights will have the most impact and should be prioritized.

Criteria are based on:

Phase 3: Strategy

If (and only if!) you followed the steps needed to correctly determine the insights/recommendations, then you are ready to start testing:

For best results, stick to:  

A/B Testing results

We won’t spend too long on this part because, as we mentioned earlier, the most important part of testing is the analysis you conduct before launching an A/B test campaign. 

To learn more about our made-to-measure CX and conversion rate optimization solutions, check out our platform’s capabilities.

With sophisticated data visualizations and easy-to-read, granular metrics, today everyone on the digital team can leverage customer behavior insights to improve the experience on their site or app.