Driving Innovation: How Brooks Bell is Helping Brands Achieve Experimentation Excellence

At Contentsquare, we have a rich ecosystem of technology and strategic partners, built around the needs and business objectives of customer-centric companies and experience-driven brands.

We spoke with Gregory Ng, the CEO of Brooks Bell, and asked him for his thoughts on experimentation and personalization in the age of experience.

Can you tell us a bit more about Brooks Bell?

Founded in 2003, Brooks Bell is a consulting firm focused on building world-class experimentation programs for enterprise brands.

Working out of our headquarters in Raleigh, NC, we’ve spent the last 16 years helping companies better leverage their data, technology, and workforce to learn about their customers and deliver a smarter and more profitable online experience.

Our team is 43-strong and made up of creative thinkers, data scientists, developers and strategists. Everyone—from our operations team to our senior leadership—has a genuine appreciation for the art and science of optimization and a deep understanding of the challenges of experimentation at top-tier companies.

Our client roster consists of many large enterprises and recognizable brands that have trusted our team to assess their experimentation maturity and consult on multi-year “test and learn” roadmaps to achieve true customer-centricity.

What are some of the different ways you work with businesses?

Most of our engagements begin with a maturity assessment to benchmark and measure the growth of an experimentation program. This comprehensive, data-driven review scores your program against our proprietary framework consisting of six main categories: culture, team, technology, process, strategy and performance. The results of this assessment are used to create an actionable roadmap to get your program to the next level. What that roadmap looks like and the scope of our services depends on where your program lies on the maturity spectrum.

For clients that are very early in their experimentation journey, we offer a “we do, they watch” type of partnership. In this, our team comes in and fully manages a client’s experimentation program: learning their business and customers, organizing data, building a strategy, launching tests and analyzing and reporting the results. This partnership model is most effective for programs that need to prove the value of testing before going all in.

For clients that are a little further along, we take a more collaborative approach focused on teaching them what it takes to build a high-functioning program. In this type of partnership, our team works alongside theirs. As we run end-to-end tests, we teach the team our methodologies, practices and frameworks. Through this model, we’re able to build the foundational knowledge and practices to set the experimentation program up for scale.

Finally, as the experimentation practice becomes more mature, we transition our services to be less tactical and more strategic. We’ve helped many clients bring their experimentation efforts fully in-house through building training and on-boarding programs, aligning the experimentation process across teams, establishing an Experimentation Center of Excellence, and offering strategic advice in response to new trends, technologies and business challenges.

How critical is experimentation for driving innovation today?

Critical is putting it lightly. 

In order to compete in today’s market, companies need to have a scientifically sound method in place to learn about customers, to change and to innovate—all while limiting risk, streamlining operations and reducing costs. Experimentation offers the best way to accomplish all of that.

That means, for us, our value is not simply in running tests and helping our clients make more money—though that is definitely a major outcome of our efforts (and one that we’re very proud of). Rather, our work is about empowering our clients with the data, skills, processes and technology to use testing to glean powerful customer insights AND operationalize those insights across their entire organization.

How do you help brands elevate their experimentation/personalization strategy?

Our Maturity Assessment is really only the tip of the iceberg here. Over the last 16 years, we’ve built and honed many frameworks, training programs, practices and even proprietary technology to help our clients elevate their testing and personalization strategies.

For instance, after witnessing some very messy brainstorming sessions, we developed our ideation methodology, which provides a guided approach to developing and prioritizing test ideas in a large, cross-functional group.

Our Insights framework offers a method for connecting your experiment results to bigger picture customer theories and insights.

And finally, we built Illuminate™, our testing and insight management software, to help program managers store, share and learn from their A/B test results. Fun fact: Illuminate was originally built as an internal tool to help us keep track of our client’s tests. In 2018, after many years of tweaking, testing, gathering feedback (and some rave reviews from our clients), we decided to make it available to the public.

These are just a few examples of how we provide value to clients. I should also add that we host Click Summit, an annual conference where digital leaders gather to swap ideas and share tips on testing, personalization, analytics, and digital transformation.

Click Summit does away with all the typical things you’d find at a tech conference: sales pitches, PowerPoint presentations and fireside “chats” held in giant auditoriums. Instead, the agenda is built around a series of small-group (15 people) conversations, each focused on a specific topic.

With attendance limited to just 100 digital leaders, it’s a unique opportunity to tackle your biggest challenges by talking them out with people who have been there before.

What constitutes a good partnership for you?

We love partnering with companies and tech providers (like Contentsquare!) who share our vision of helping our clients find the people within their data and seek to make every day better through optimization.

There are tons of ways in which we can translate Contentsquare’s excellent user experience analytics into optimization opportunities.

Here are a few off the top of my head:

What are your plans for the future?

When Brooks Bell was founded back in 2003, testing was in its infancy. Now, it’s rare that we come across a client that hasn’t run at least a few tests. This is exciting! It means we get to focus on working even closer with our clients and making a bigger impact.

I’m talking more than just conversion increases and revenue lift. The task before us no longer ends at proving the value of experimentation. We’re now in the business of generating insights. By helping companies learn about their customers and fostering experimentation at a cultural level, our clients will be equipped to deliver the best digital experience for their customers.

Investing in experimentation requires taking both a short and long-term view. We look forward to celebrating the day-to-day wins with our community, while also staying focused on the vision of building customer-centric, digitally-forward and insights-driven organizations.


A Data-Driven Approach to A/B Testing in 2019

You’re serious about the quality of the products on your site or app and your customer service is flawless. Still, you face increasing competition and your customer churn is high, even among your most loyal audience.

The pressure on your team to prove ROI is huge, and yet marketing budgets have never been as stretched as they are today.

The good news is that brands today have access to a large volume of data, and have all the tools they need to know exactly what engages visitors and what puts them off. A/B Testing, or Split Testing, provides a scientific answer to a problem once solved by intuition alone.

It may be a widespread solution, but that doesn’t mean it’s failproof.

To get the most out of A/B Testing, it’s crucial to plan ahead and be strategic from the start. If you skimp on preparation, you could stand to lose both time and money.

Let’s look at the reasons why.

What is A/B Testing or Split Testing?

A/B Testing, also known as Split Testing or Split URL Testing, is a process that helps marketers compare one or more versions of a web or app page against a control page. It helps teams understand which elements perform better for their audience.

Split URL Testing is slightly different in that each variant is hosted on its own URL (visitors are generally unaware of this).

The aim of an A/B Test is to build different versions of a page, modifying one or more specific elements in each variant: copy, layout, color…

The audience is then split evenly into groups. Each group is exposed to one of the variants at random and for a set period. Analyzing visitors’ digital behavior and more importantly, the conversion rate of each version, reveals which variant performs better and should be shown to a wider audience.
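To make that mechanic concrete, here is a minimal sketch of the split-and-measure loop in Python. Everything in it (the variant names, the 50/50 split, the simulated conversion rates) is hypothetical and purely illustrative:

```python
import hashlib
import random

VARIANTS = ["A", "B"]  # "A" is the control page, "B" the challenger

def assign_variant(visitor_id: str) -> str:
    """Assign a visitor to a variant with a stable 50/50 split.

    Hashing the visitor ID (rather than flipping a coin on every
    pageview) keeps the assignment consistent, so a returning
    visitor always sees the same version.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Illustrative tallies over a simulated test period.
exposures = {"A": 0, "B": 0}
conversions = {"A": 0, "B": 0}

for i in range(10_000):
    variant = assign_variant(f"visitor-{i}")
    exposures[variant] += 1
    # Placeholder for real conversion tracking: we simulate a purchase,
    # pretending variant B converts slightly better than the control.
    if random.random() < (0.030 if variant == "A" else 0.036):
        conversions[variant] += 1

for v in VARIANTS:
    print(f"Variant {v}: {exposures[v]:>5} visitors, "
          f"conversion rate {conversions[v] / exposures[v]:.2%}")
```

In a real program, the conversion signal would come from your analytics or tag-management layer rather than a random draw; the point here is simply the stable assignment and per-variant comparison.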

Today, marketers are not the only ones making decisions about customer experience, and consumers directly influence optimizations. 

Why implement an A/B Testing strategy?

Let’s cut to the chase, shall we? The main reason to implement an A/B Testing strategy is for conversion rate optimization.

Acquiring traffic can be costly (AdWords, referral, PR…) and improving the experience is not only easier, it is also more cost-effective. And there are many advantages to A/B testing. A test allows you to:

Carrying out an A/B test the “traditional” way

Like heatmaps, the concept of A/B Testing is hardly new. Wikipedia describes an A/B Test as “a randomized experiment with two variants.” It’s impossible to speak about A/B Testing without going over the processes that have traditionally informed this type of marketing experiment.

It’s worth noting, however, that these traditional processes are ill-equipped to handle the complex challenges of experience building in 2019.

But we’ll get to that in a bit. Generally speaking, a typical A/B Test follows these steps:

What A/B Testing allows you to test

The possibilities of Split Testing are almost infinite.

It is therefore imperative to identify objectives so you can keep elements to be tested to a minimum. 

If your objective, for example, is to increase the rate of purchase confirmations following add-to-carts, you might want to test:

By isolating each element in a separate variant, you will be able to learn what causes visitors to abandon their carts. 

In no particular order, here are other areas for optimization that A/B testing can help with:

Why classic A/B Testing is no longer enough

A/B Testing as we know it no longer works. This might seem like a bit of a bold statement, and yet… 

While everyone agrees on the need to leverage data to improve the visitor journey and, ultimately, the conversion rate, a data-first mindset is not yet the norm for every team. In fact, a large number of A/B tests today are carried out with little to no analysis before implementation.

What does this mean? That dozens (sometimes hundreds) of tests are carried out on sites or apps without focus, and without knowing whether an element is even worth testing. And all this testing comes at a cost!

Teams are already overstretched and testing blindly is a waste of money and time, resulting in conclusions that are shaky to say the least. While there is no question that Split Testing has the potential to drive winning optimizations, teams must urgently rethink their strategy to prioritize the most critical tests and get the most out of their data. 

How to optimize your A/B Testing strategy in 2019

Our years of experience in UX and conversion rate optimization have helped us define a much more pragmatic approach to A/B testing. 

Effective A/B tests start with a pre-test analysis.  

Knowing you need to test is good. Knowing exactly which element(s) should be tested is critical.

At Contentsquare, we believe every A/B test should be based on a prior analysis. And this analysis should not be carried out lightly. Indeed, this crucial step enables teams to: 

  1. Localize the issue prior to testing
  2. Prioritize hypotheses or insights to be analyzed
  3. Verify these hypotheses with a managed process 
  4. Draw data-backed conclusions

This approach has helped us define our very own process for analyzing the performance of websites and apps and carrying out pertinent A/B Testing campaigns. Our method follows 4 steps:

Phase 1: Analysis

The analysis takes into account:

This analysis allows teams to identify winning insight/recommendation pairs. 

Concretely, it’s about identifying a behavioral issue on the site or app (insight) and formulating a solution (recommendation) with the help of UX and UI teams.

Phase 2: Criteria

Because it’s impossible to test everything at once, it’s important to determine which insights will have the most impact and should be prioritized.
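One lightweight way to put that prioritization into practice is to score each insight against a few weighted criteria. The sketch below is not Contentsquare’s actual framework; the criteria, weights and sample insights are invented for illustration:

```python
# Hypothetical scoring model for ranking test ideas. The criteria,
# weights and sample insights below are illustrative only.
WEIGHTS = {"expected_impact": 0.5, "traffic_reached": 0.3, "effort": -0.2}

insights = [
    {"name": "Shrink hero banner",      "expected_impact": 8, "traffic_reached": 9, "effort": 3},
    {"name": "Reorder checkout fields", "expected_impact": 6, "traffic_reached": 4, "effort": 7},
    {"name": "New CTA copy",            "expected_impact": 4, "traffic_reached": 8, "effort": 1},
]

def score(insight: dict) -> float:
    """Weighted sum: high impact and reach raise the score, effort lowers it."""
    return sum(weight * insight[criterion] for criterion, weight in WEIGHTS.items())

# Rank the backlog from most to least promising.
for insight in sorted(insights, key=score, reverse=True):
    print(f"{score(insight):5.1f}  {insight['name']}")
```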

Criteria are based on:

Phase 3: Strategy

If (and only if!) you followed the steps needed to correctly determine the insights/recommendations, then you are ready to start testing:

For best results, stick to:  

A/B Testing results

We won’t spend too long on this part because, as we mentioned earlier, the most important part of testing is the analysis you conduct before launching an A/B test campaign. 

To learn more about our made-to-measure CX and conversion rate optimization solutions, check out our platform’s capabilities.

With sophisticated data visualizations and easy-to-read, granular metrics, today everyone on the digital team can leverage customer behavior insights to improve the experience on their site or app.

Beauty Giant Avon Sees Sharp Increase in Engagement and Revenue

With 130 years of social selling under its belt, legacy beauty brand Avon has been ramping up its digital transformation, setting its sights on conquering new spaces for customer experience optimization.

As part of its digital brand lift, Avon has been investing heavily in customer-friendly content, equipping teams with the right tools to make insight-led customer experience decisions.

We recently caught up with Rachel Bronstein, Avon’s Website Optimization Analyst, who told us how one A/B test on the product carousel led to a 35% increase in revenue.

Applying Valuable User Experience Metrics

The team at Avon analyzed two category pages — makeup and skincare — to understand desktop customer journeys on these pages: how visitors were reaching the site, where they landed, and what they were doing once they were on the page.

They discovered that the product carousel on this page (which is where the company highlights best-selling products) had a low 60% exposure rate on desktop, meaning 40% of visitors were not even seeing it.

Many customers were seeing the banner image above the fold (what the team at Avon refers to as the A Spot banner) but were not scrolling down to the carousel.

A deeper analysis showed that, despite the low exposure of the product carousel, it had a high attractiveness rate and a healthy conversion per click rate, meaning that those who did scroll down to see it were likely to click and — even better news — to convert.

The team had already considered the potential benefit of shrinking the height of the A Spot banner to bring the carousel above the fold, but there had been insufficient data to back up this decision.

Armed with new customer behavior knowledge, the team decided to test the impact of a banner redesign to see if a permanent design change was the way forward.

The A/B Test Leading to User Experience Optimization

A/B testing is a key part of Avon’s continuous digital optimization strategy, and the team comes together every other week to discuss ideas of what tests to run next as part of its customer experience optimization.

The test subject on this occasion was primarily the banner, which the Avon team decided to shrink to determine if doing so would lift the exposure rate. Also part of the test was moving the carousel higher up on the page so that it would be in better view of the visitors.

The area of concern for Avon with cutting down the A Spot banner was that it might devalue this area of the site — the team was concerned that with the product carousel more prominent, consumers might disregard the banner. Running the test was therefore a crucial step to take before executing any changes — even the newly data-backed ones.

During the A/B test, the team at Avon paid special attention to a bounty of specialized metrics: exposure, clicks, scroll, time spent on an element, and bounce and exit rates. Last but certainly not least, the brand studied conversion metrics and compared them against one another in each version of the makeup category product page.

Initial exposure rate analysis of the Makeup category page.


A/B Test Results and UX Implementations

The page with the shorter banner saw positive outcomes across the board. It led to a 44% increase in exposure, which climbed from 57% in the control page to 82% in the variant page. The click rate increased by 24% and the bounce and exit rates dwindled. The team also reported a 5-second increase of time spent on the page.
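As a quick sanity check on how those figures relate, the 44% is the relative lift of the variant’s exposure rate over the control’s:

```python
# Reported exposure rates from the test.
control_exposure = 0.57
variant_exposure = 0.82

# Relative lift = (variant - control) / control
lift = (variant_exposure - control_exposure) / control_exposure
print(f"Relative exposure lift: {lift:.0%}")  # prints "44%"
```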

These uplifts resulted in a 35% increase in overall revenue in the carousel zone. The revenue in the variant page was 6% higher than that of the control page.

The test also laid to rest Avon’s original misgivings about reducing the banner: far from hurting its performance, the change actually helped. Revenue generated by the banner increased by 2% during the test, strengthening the argument for changing the standard size of the category page banner.

Original layout (left), and the variant layout with a smaller banner image (right).


Combining the Power of UX Analytics and A/B Tests

After this experience, the team decided to use Contentsquare with every test. Today, 20 people across several departments regularly use the solution to underpin UX optimization actions. The tool’s clear visualization means everyone on the team can understand behavior-measuring metrics like exposure rate, and see the impact of changes.

The other main benefit of anchoring the A/B testing strategy to Contentsquare metrics was how quickly the team was able to see uplift. Following the initial batch of insights, the test was up and running within a week, ran for a couple of weeks and results were analyzed in just a few days. Having indisputable results to share with the entire team also secured team-wide buy-in for implementing the winning variant.

“Contentsquare answers the question of ‘how’ and ‘why’ and this is extremely valuable to our team with regards to our A/B tests and site analyses.”

“Perhaps one of the biggest benefits is the visual representation of behavior that Contentsquare provides,” said Rachel Bronstein.

“We can sit in a room, take a look at the results and come up with a plan of action. We ran our test for two weeks, analyzed the results immediately thereafter and put a plan in motion on the day the results were presented. Contentsquare has empowered our team to make data-driven decisions quickly,” said Bronstein.

The customer experience optimization strategy has proven to be a winning one for Avon’s engagement and revenue. It could not have happened without the consolidation of critical tools: unique behavioral UX metrics, which picked apart the elements and their performance on the category page, helping the team understand what was going on UX-wise at a granular level; and the ensuing A/B test, which allowed the brand to confidently determine how best to improve the UX.

How to Enhance User Flow with UX Analysis

UX analysis differs methodologically from brand to brand, as each has its own set of KPIs and priorities.

User flow is central to any UX analysis: it is a fundamental part of UX, playing a pivotal role in sustaining the sales funnel and, by extension, conversions.

Also known as visitor flow or customer journey, the user flow denotes the path that a typical user on a website takes to complete a task, including all the steps along the way.

Mapping out visitor journeys and examining all the finer points of the user flow, such as what your visitors are doing on each page they visit, will show you how to improve your UX. To do so, you’ll need to begin by enhancing your user flow.

UX Analysis for Improving User Flow

Here are the steps you’ll need to take to tweak user flow with a UX analysis.

1. A Visual, High-Level View of User Flow

The first step in examining user flow is to access a high-level visualization of it. Much like a bird’s-eye view, such a perspective displays all the steps of the user flow in one clear illustration of the pages viewed within the customer journey.

Where visitors land, which task(s) they complete and at what stage they complete them reveals what they have been attempting or seeking on your website. Conversely, the areas of a website where users couldn’t complete an action reveal their struggles.

Visualizing where visitors enter your site, where they head to next and ultimately how they exit helps add a layer of behavioral understanding to customer segments.

Clarity and accessibility are key in this step, so make sure you use an analytics tool that can clearly lay out an analysis of your visitors’ journeys.

2. Observe & Simplify the Number of Steps

Secondly, you’ll need to scope out the number of steps in your user flows. This is important, as it shows the complexity visitors face in completing each action, and it allows you to judge whether you should add or remove steps in these journeys.

Identify friction points in the customer decision journey, including looping behavior and premature exits. Consider whether your visitors really need to fill out a certain form field or pass through a particular landing page to reach a conversion or other action. If not, cut these steps out! Less is often more. One way to find these friction points is sketched below.
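To spot where those premature exits occur, it helps to compute the drop-off between consecutive steps of a flow. Here is a minimal sketch, assuming you have per-step visitor counts exported from your analytics tool; the step names and counts are invented:

```python
# Hypothetical visitor counts at each step of a checkout flow.
funnel = [
    ("Product page", 10_000),
    ("Add to cart",   3_200),
    ("Shipping form", 1_900),
    ("Payment",       1_500),
    ("Confirmation",  1_350),
]

# Drop-off between consecutive steps highlights the friction points.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```

A step with an outsized drop-off (here, product page to add-to-cart) is a natural candidate for the page-by-page analysis described next.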

3. A Deeper, Page-by-Page Read in Your UX Analysis

Next, you’ll need to look at what’s actually happening, i.e. individual visitor behaviors, on each page of the user flow. This will help paint a clear picture of how users traverse your site.

Analyzing visitor paths through your site can immediately flag pages with issues — be that an error message or a UX obstacle. For example, what is causing visitors to exit after adding to cart?  

Once you’ve found these problems or points of friction, you can begin to conceive some optimization endeavors.

Besides conversions, you should decide which metric(s) you most want to move in your user flows. Perhaps you want to see a higher click recurrence or a shorter hesitation time. When you zero in on a few KPIs or metrics, you’ll be able to tackle user flow optimization in a more precise and conscientious way. This will allow your team to take a more granular approach to improving each step of the digital visitor journey.

4. Implement A/B Tests

Then, consider how you can improve the user flow by implementing A/B tests. A/B testing is a strategy in which two versions of a website or app page are tested against each other on their performance. This will help answer questions about why the setup or features on one page are more effective than another, allowing you to make informed optimization decisions.
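To judge whether one version really is “more effective,” a two-proportion z-test is a common way to check that an observed difference in conversion rate isn’t just noise. Here is a sketch with made-up counts; this is a generic statistical test, not any particular vendor’s methodology:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical results: (conversions, visitors) for each version.
conv_a, n_a = 300, 10_000   # control
conv_b, n_b = 360, 10_000   # variant

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pooled = (conv_a + conv_b) / (n_a + n_b)

# Standard two-proportion z-test on the difference in rates.
se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))  # two-sided

print(f"Control: {p_a:.2%}, Variant: {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
```

A small p-value (conventionally below 0.05) suggests the variant’s lift is unlikely to be chance alone, giving you firmer ground for rolling out the change.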

Finally, after you’ve delved into your UX analysis, you can make changes to your UX accordingly, which will directly influence user flows. Perhaps the changes will improve the flows, making it easier for users to complete their tasks without issues. There is also the possibility that they will have little effect on these flows.

What’s certain is that a granular, behavioral analysis provides a much more lucid picture of how your visitors interact with your content. Although traditional analytics are certainly part of the makeup of this picture, they do not present a comprehensive user flow.

Accessing the user flow requires moving from an overview to granular data, from viewing whole journeys to in-page steps, and zooming in on the obstacles.

Gathering this data will allow you to make fact-based decisions, instead of those based on intuition.

More Tips on User Flow & UX Analysis

The number of steps a user must take to complete a task often correlates with their satisfaction with the quality of the digital experience. A good experience is unlikely to have friction points where visitors feel burdened by the steps required to complete an action.

However, there are instances in which shorter journeys are not the UX target; there will be instances where you want to drive longer sessions and deeper engagement.

An in-depth level of data can help you answer whether a short site duration is a sign of good or bad UX: is it because visitors completed their goals, or because they struggled with the experience?

An exhaustive UX analysis will shine light on these questions. And since seamlessness is a cornerstone of a good UX, an exhaustive analysis of customer journeys goes hand in hand with digital customer satisfaction.