
Product experimentation framework: a step-by-step guide for successful product experiments


The mission of a product team sounds simple: make your users’ lives and jobs easier. But that simplicity is deceptive: your work is never done—you’re always iterating and improving your product to meet users' needs.

As your product evolves, you might be tempted to skip experimentation and make assumptions about what kind of features, improvements, and changes your customers need. 

But this approach doesn’t take into account what your customers think about your product, how they feel while they use it, and what they might be missing or struggling with. It means working in the dark, unsure of the reasons for the success (or failure) of any product change you make. 

The better option? Using a step-by-step product experimentation framework. This approach helps you iterate your product with confidence and empathy. This article gives you 6 steps to follow, so you can run successful product experiments and delight customers with the improvements they need.

Run the right product experiments

Use Contentsquare’s experience intelligence to discover what your customers need—and which experiments are likely to be a hit.

A 6-step framework for effective product experiments

A product experimentation framework is a set of structured steps for testing the impact of product changes. It’s based on building a hypothesis about your product, and then testing it methodically, so you can make data-informed changes to improve your product experience (PX).

This framework encourages curiosity, minimizes risk, and helps drive continuous improvement. With such a systematic approach, you’ll be able to choose high-impact ideas to test, generate accurate results, and draw learnings even from experiments that disprove your hypotheses.  

Useful experiments are structured and repeatable—as are the steps to running them. Here are 6 steps to follow every time you’re running a product experiment.

1. Define a product goal

Great experiments start with the end goal in mind. Without a goal to work toward, you’ll struggle to analyze your experiment’s results (and even to build it).

When defining your goal and its impact on customers, consider these 5 experiment components:

  • The problem: what do your users already struggle with and need from your product?

  • The (possible) solution: what are potential solutions to that problem?

  • The benefit: what is the benefit of that solution, both for your users and for your business?

  • The users: which audience segment is this solution most relevant to?

  • The data: what will success look like? Which metrics are you looking to change?

Great product goals come from existing customer experience and feedback. Quantitative and qualitative data from your product will reveal goals worth pursuing.

  • Quantitative data reveals potential gaps in your product experience (like a low task completion rate or high churn rate) or a declining metric (like a longer task completion time or a lower NPS than usual).

  • Qualitative data shows you how your customers feel while using your product, and where they struggle. For example, session replays reveal which areas of your product confuse users, and open-ended survey questions let your users explain why they are (or aren’t) taking specific actions.

Combine ideas from these data types to clarify your product goal. For example:

  • "The goal is to increase the user onboarding completion rate to improve the overall product experience and increase usage. This goal is based on drop-off data and user behavior, as well as feedback during the final 3 steps of the onboarding process."

  • "The goal is to shorten task completion time to help users see success faster. This goal is based on the metric that reveals users spend longer than expected on a specific page and the direct, voice-of-customer (VoC) feedback we gathered on that page."

If you skip this step of the product experimentation framework, the impact of your product experiment will be fuzzy. It may be harder to get buy-in from stakeholders to run the experiment at all. 

2. Build a hypothesis relevant to your goal

A product experiment hypothesis helps remove emotional involvement from your product goals and ideas.

The structure of a hypothesis takes into account the goal you’ve set and the insights you’ve used to set it, which helps your team understand why a product change did or didn’t work.

Here’s the structure you can follow to build your product experiment hypothesis:

"We believe that [a solution] will [the change that will happen] for [audience segment] because [reason for change]."

Let’s take the onboarding completion rate goal from the previous step. Here’s what a hypothesis might look like for that:

"We believe that reducing the number of suggested actions in the final 3 onboarding steps will increase the onboarding completion rate for new customers because it will reduce confusion and overwhelm."

A product experiment hypothesis is made up of the outcome you want to see, the action you believe will get you there, and your theory as to why.
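The template above can also be captured in a few lines of code, which helps keep hypotheses consistent across a team's experiment backlog. This is a minimal sketch; the field names and the example values are hypothetical, based on the onboarding example above.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One product experiment hypothesis, following the
    'We believe that X will Y for Z because W' template."""
    solution: str         # the change you plan to test
    expected_change: str  # the outcome you expect to see
    segment: str          # the audience the change targets
    reason: str           # your theory for why it will work

    def statement(self) -> str:
        return (f"We believe that {self.solution} will {self.expected_change} "
                f"for {self.segment} because {self.reason}.")

# Hypothetical example, mirroring the onboarding goal above
h = Hypothesis(
    solution="reducing the number of suggested actions in the final 3 onboarding steps",
    expected_change="increase the onboarding completion rate",
    segment="new customers",
    reason="it will reduce confusion and overwhelm",
)
print(h.statement())
```

Writing hypotheses against a fixed structure like this makes it obvious when one of the four components (solution, change, segment, reason) is missing before the experiment starts.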


How DGP Media uses Contentsquare to identify hypotheses  

One of Europe's largest media companies, the DGP Media Group, uses A/B testing to minimize risks when making product changes. During that process, they use Contentsquare to form experiment hypotheses and remove assumptions. 

DGP’s business model depends on converting casual readers into paid subscribers. With Contentsquare’s product analytics data, the team spotted that users who landed on their subscription page via mobile had a higher-than-average exit rate. 

So, they viewed a heatmap of the mobile version of the subscription page. It revealed that most mobile users didn’t scroll down far enough to see the information about digital subscriptions. Traditional, paper newspaper subscriptions occupied the hero position on this page—presumably, a less appealing subscription type for users who read the news on their mobile. 

Based on this insight, the team at DGP A/B tested a redesigned version of the page with digital subscriptions in the hero spot. It was a resounding success. 

[Visual: A/B test heatmaps] Contentsquare makes it easy to understand why your A/B test results look the way they do


3. Choose KPIs to measure your experiment

Key performance indicators (KPIs) are values that allow you to measure the impact of your product change. KPIs translate the goal you set in Step 1 into a numerical metric, so you can decisively prove or disprove your hypothesis. Your product experiment could have just one KPI, but shouldn’t have more than 3. 

In our user onboarding completion example, the obvious KPI is the onboarding completion rate. To prove the hypothesis, this number needs to increase. You could also track your Customer Effort Score (CES) as a secondary KPI to learn whether your product change made the user experience easier and more efficient.

Here are some other examples of KPIs to inspire your thinking:

  • Conversion rate from free trial to paid subscription

  • Customer Satisfaction Score (CSAT)

  • Task completion time

  • Task completion rate

  • Form completion rate

  • Churn rate

  • Net Promoter Score® (NPS)

4. Set up experiment parameters

Next up, choose the sample size and length of your product experiment.

A sample size is the total number of data points collected in your experiment. To define your sample size, it’s important to consider statistical significance and whether the data you collect accurately reflects the population as a whole.

Put simply, a statistically significant result means there’s only a small chance the difference you observe is down to pure randomness, so you can act on it with more confidence.

For example, if 5,000 people go through your user onboarding flow every month, it’s risky to make sweeping changes to your onboarding based on data from only 40 sessions. 

The length of your experiment is tied to your sample size and to the number of customers who use your product (or the relevant section of it) in a day, week, or month. Even if you’re working with large user numbers, it’s worth running your experiment for at least a full week, because user behavior often varies between different days of the week.

Consider extending the length of your experiment if it runs through any non-typical periods for your product, too, such as holidays or months with noticeably higher or lower product usage compared to other months.
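To make the sample-size and duration decisions concrete, here is a sketch of the standard two-proportion sample-size formula, using only Python’s standard library. The baseline rate, target rate, and traffic figures are hypothetical, chosen to match the 5,000-users-per-month example above.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Users needed in each variant to detect a change from rate p1 to p2
    with a two-sided test (normal approximation for two proportions)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return math.ceil(n)

# Hypothetical rates: 60% baseline onboarding completion, hoping for 65%
n = sample_size_per_variant(p1=0.60, p2=0.65)

# 5,000 users enter onboarding per month -> roughly 167 per day
daily_users = 5000 / 30
days = max(7, math.ceil(2 * n / daily_users))  # never run for less than a week
print(n, days)
```

With these assumptions, the formula calls for roughly 1,500 users per variant, which at this traffic level means running the experiment for a little over two weeks—comfortably past the one-week minimum.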

5. Run your product experiment

So, now you've got your experiment goal and hypothesis, and you know the KPIs you’re tracking and how long you’ll be tracking them for. Next up: launching your experiment.

For this, you’ll need to set up some product experimentation tools. Here are a few suggestions:

  • Optimizely, a digital experience platform that enables tests like A/B, multivariate, and personalization campaigns

  • Omniconvert, an experimentation tool that lets you run experiments for detailed user cohorts

  • AB Tasty, a platform for running A/B tests, multivariate tests, and split URL tests across your website 

On top of these, add a layer of qualitative insights to your experiment analysis, via an experience intelligence tool like Contentsquare—which integrates with all 3 of these A/B testing platforms. This way, you’ll learn not only which variation is winning, but also why.

"With the Optimizely integration with Contentsquare, we’re able to be more effective as a CRO program because we can take a losing test and see why it lost—maybe there are points of friction, or a minor tweak could change the outcome of the test. With Contentsquare, we’re able to build and iterate on the losing test instead of scrapping it and starting out at square one, turning a losing test into a winning experience."

Sheena Green
Director, Ecommerce / Optimizations, Ultra Mobile 
[Visual: A/B test results] Contentsquare helps you understand your A/B test results on a deeper level

6. Review results to prioritize product updates and inform future experiments

There are 2 possible outcomes of your experiment:

  1. You’ve confirmed your hypothesis, which means the product change you’ve tested should be rolled out to all users in that user segment

  2. You’ve disproved your hypothesis, meaning the outcome you’ve outlined in your hypothesis didn’t come to pass, and you won’t roll out these changes 

Both scenarios are important and valuable opportunities to understand your users better, empathize with them, and inform future goals, hypotheses, and experiments.
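Deciding which of the two outcomes you’re in calls for a statistical test, not just a glance at the raw numbers. For rate-based KPIs like onboarding completion, a common choice is a two-proportion z-test. The sketch below uses only Python’s standard library, and all the counts are hypothetical.

```python
import math
from statistics import NormalDist

def two_proportion_z_test(successes_a: int, n_a: int,
                          successes_b: int, n_b: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) for the difference between two rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: control completed onboarding 900/1500 times (60%),
# the variant with fewer suggested actions 990/1500 times (66%)
z, p = two_proportion_z_test(900, 1500, 990, 1500)
confirmed = p < 0.05 and z > 0
print(round(z, 2), round(p, 4), confirmed)
```

Under these made-up numbers the p-value falls well below the usual 0.05 threshold, so the hypothesis would be confirmed; with a p-value above the threshold, you would treat the result as a disproof (or as inconclusive) rather than roll the change out.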

First, be sure to share your experiment results with your team and map out next steps, like making a product change you validated or brainstorming product goals for your next experiment.

Then, dig into the why of your experiment results (yes, even if you’ve proven your hypothesis!). This involves unpacking your product experiment’s KPIs. For example, if your KPI was the product onboarding completion rate and it did improve, it’s worth spending some time to add a qualitative layer to the new number.

Contentsquare (👋) helps you do that with tools like:

  • Session Replay: watch how users move from one onboarding step to the other, where they linger, what helps them move faster, and where they focus their attention. By comparing pre-experiment replays with those from your experiment, you’ll learn the direct impact the change had on their product experience.

  • Heatmaps: review scroll maps, move maps, and click maps for each step of onboarding, to identify patterns in user behavior 

  • Surveys: let users voice their direct feedback on a page of your onboarding flow, or even for a specific page element 

Finally, remember this: product experimentation should be a cyclical process. When you complete one experiment, you’re building a foundation and collecting data for the next one. And when you learn how your users react to product changes and why, you’re laying a permanent foundation for a successful, customer-obsessed product.


FAQs about product experimentation frameworks

  • What is a product experimentation framework?

    A product experimentation framework is a set of structured steps for testing the impact of product changes. With it, you build and test a hypothesis about your product, which helps you make data-backed product improvements.

    Following a product experimentation framework means you can consistently improve your product with minimal risk. Instead of changing your product based on assumptions about the user experience, you use those assumptions to build hypotheses, run experiments, and make changes based on the experiment outcomes.

Contentsquare

We’re an international team of content experts and writers with a passion for all things customer experience (CX). From best practices to the hottest trends in digital, we’ve got it covered. Explore our guides to learn everything you need to know to create experiences that your customers will love. Happy reading!