Understanding user journeys: Banish your blind spots in preparation for Peak
What gets measured, gets managed.
If you’re not actively monitoring the effectiveness of your customer journey, it’s likely you have blind spots and ongoing opportunity losses. The result is a lower ROI on your web assets and acquisition spend, and an invitation for competitors to swoop in and service your prospects.
With peak right around the corner, now is your window of opportunity to identify the friction and anxieties your visitors face throughout their entire customer journey. And once found and better understood, those pain points can be flipped into revenue-generating opportunities ready for maximum impact during peak.
Ecommerce teams are grounded in analytics, but understanding the human psychologies that generate those analytics can be more of a challenge for teams without UX or user research expertise. Analytics teams understand navigational behaviours at scale, but if that is their only data point, they won't be able to understand the attitudinal aspects of visitors or the psychological reasons that drive those behaviours.
And if you don’t understand behaviours, you won’t be able to change them.
Having spent hundreds of hours conducting face-to-face usability testing across many different domains, I'm still amazed by the number of "aha, of course" and surprising "wow, really?" moments I encounter. Many of these go some way towards explaining the quantitative data we see in analytics, and the friction that prevents visitors from converting.
For example, in a recent usability test to inform a listing page redesign, I observed several people scroll down the full length of the first page at speed when browsing, not stopping or interacting with any of the product tiles. When they reached the pagination controls, to my surprise they didn’t navigate to page 2, but instead navigated to page 3 or 4.
Why did they do this?
Well, because this was moderated research, I was not only able to observe the behaviour as it happened, but I was also able to ask the participants about it. When I asked why they took the action they did, participants explained that the items on the page appeared to be roughly in price order: page 2 probably wouldn't have results in the price range they were seeking, but pages 3 or 4 probably would.
However, what I also observed is that participants didn’t notice, use, or think to use the sort control to order the items by price. So what I actually observed was people using a workaround because of their incorrect assumption of default product ordering and the fact they didn’t use the sort feature. It’s a small example but it’s the type of insight we uncover when conducting qualitative user research.
Now, when we turn to heatmaps in Contentsquare's zoning analysis, for example, we can start to see why the click distribution across the pagination control leans more towards later pages than earlier pages. Armed with our usability testing insight, we can hypothesise that the problem lies not with the pagination control itself, but with the sort control.
We can construct an A/B test around the visibility and usability of the sort control and see if it impacts the way pagination is used, and ultimately the impact it has on getting visitors to relevant products quickly. If we hadn’t combined or “triangulated” research methods, we may have looked in the wrong place or ignored the heatmap data as an anomaly.
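To make that concrete, here's a minimal sketch of how such an A/B test might be evaluated once it has run. Everything here is a hypothetical illustration: the metric (visitors reaching a relevant product page), the visitor counts and conversion numbers are invented assumptions, not real data from this study.

```python
# Hypothetical sketch: evaluating an A/B test on sort-control visibility.
# All numbers below are illustrative assumptions, not real results.
from math import sqrt, erfc

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p from the normal tail
    return z, p_value

# Control (A): original sort control; Variant (B): more prominent sort control.
# Metric: visitors who reached a relevant product page from the listing.
z, p = two_proportion_z(conv_a=410, n_a=5000, conv_b=482, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The point of the sketch is simply that the hypothesis from qualitative research ("the sort control is the problem") becomes a measurable, falsifiable claim once framed as a split test.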
The point is, we need a joined-up approach that combines various data points across attitudes and behaviours, using both analytics and user research methods, to get the full picture. And we need to take this view not just across the website journey, but across the broader customer journey, because you know what? There are likely a lot of offline behaviours away from the screen that could be leveraged on site to improve your bottom line.
For example, if you’re a clothing retailer, have you ever thought about the importance of communicating the weight of garments on your PDPs to prevent returns? More on that later.
How to conduct a Customer Journey Analysis
In Econsultancy's 2017 Conversion Rate Optimisation Report, Customer Journey Analysis was ranked by survey respondents as the 2nd most valuable method, second only to A/B testing itself. Despite this, many teams still struggle to go beyond assumptions and opinions, and many fail to produce validated journey insights their teams can rally around. How to create and run customer journey mapping workshops is one of the questions I'm regularly asked by teams and mentees.
So how does the UX team at McCann Manchester go about measuring the effectiveness of customer journeys and how do we combine analytics and user research data points to produce validated journey insights from which teams can take action?
Well, it all starts with journey mapping.
Part 1 of 3 – Journey Mapping
They say if you want to go fast, go alone; if you want to go far, go together.
This is the key to understanding your customer journey. It’s going to be a team sport and you’ll need to lean on subject matter experts to provide the insights they have access to across the customer journey. You’ll also need to welcome stakeholders into the process. You’ll need to engage them and use your customer journey mapping workshops as a collective storytelling exercise and a catalyst for user centred thinking.
In a pre-COVID world, you'd conduct your customer journey workshops in a room with a large wall where you'd pin up a column for each phase of the customer journey, from, say, awareness through to usage. So you may have columns that represent awareness, research, consideration, purchase, delivery, usage, and even returns or off-boarding.
Cutting across each phase in rows, you'd have swimlanes for 'doing' (what are people doing and where?), 'thinking' (what are people's mental models and what questions do they have?), 'feeling' (usually represented as a line chart on which satisfaction is tracked), 'opportunities' (what does the journey map tell us we need to do?) and 'owners' (who is responsible for taking action?). The entire map would be framed through the lens of a particular persona and scenario as they travel along their customer journey.
The idea is that you have a large matrix on which your workshop attendees and stakeholders can post information with sticky notes. So, for example, if you have the research available, you may be able to plot the customer's satisfaction at the point of purchase, how they feel and the questions they have. But if not, you've just identified a blind spot you may want to investigate.
In today's climate, you will likely need to pivot to virtual journey mapping tools and workshops. For this, you can use a dedicated journey mapping tool such as Custellence or Smaply, or a digital whiteboarding tool such as Miro.
Here’s what a journey mapping workshop looks like:
- Start your workshop by empathy mapping your persona so the group can gain an understanding of what your audience sees, hears, thinks and feels with regard to the scenario at hand.
- Next, invite your subject matter experts to deliver lightning talks, each providing a short playback of the insights they have available from their respective areas of the business.
- For the main part of the workshop, work in small sub-groups to populate the board with the customer journey, then play back to the wider group at regular intervals. Sometimes you will have data you can apply to the board, sometimes you will have assumptions, and other times you will have no idea. It is these latter two areas that are your blind spots: the areas where you should conduct some research. Make sure you capture these research needs in both the opportunities and owner swimlanes.
At this stage, you will likely have an 'assumptive' customer journey map: some data points for some parts of the journey, some ideas and opportunities to enhance the customer journey, but also many areas where you have research needs.
The task now is two-fold. Firstly, act on the opportunities you’ve identified. Secondly, prioritise and conduct user research and analytics investigation into your blind spots.
A novel and highly insightful way I do this is by combining moderated usability testing with a digital diary study.
Part 2 of 3 – Usability Testing
You've likely heard of usability testing, or have perhaps even used tools such as UserTesting or UserZoom in the past.
However, while some testing is better than no testing, SaaS usability testing tools often lack the critical ability to moderate: that is, to talk with test participants and ask why they took particular actions, a capability we need for the thinking and feeling swimlanes in our journey map. It is for this reason (among others) that I favour 1-2-1 moderated usability testing for my Conversion Research.
The other critical reason for 1-2-1 moderated testing over remote unmoderated SaaS tools is that when conducting research, I want people to behave as naturally as possible, and I want them to make a real purchase they care about. This is because I'll use that purchase to track the second half of the customer journey via the diary study. More on that shortly.
For anyone familiar with SaaS usability testing tools, you'll be fully aware that your test participants won't enter personal details on your website, only dummy data (so that's a whole bunch of critical checkout information you'll miss), and that participants also won't make a purchase (that's beyond the scope of SaaS tools, which incentivise with only a handful of dollars).
So the approach I use and recommend for this process is 1-2-1 moderated testing. And as with journey mapping in the current climate, you’ll likely need to pivot out of face-to-face usability labs and into a remote moderated testing tool such as Lookback.
Here’s the process you’ll follow:
- Create a participant specification that details the type of participant you want in your study. This should be derived from your personas, but you can also leverage web analytics demographics to inform it.
- Next, use your local fieldwork recruitment agency, existing customer lists or website intercepts to recruit and screen participants for your usability testing.
- Create a test plan that includes scenarios along the areas of the customer journey you wish to investigate.
- With your test plan, participants, stakeholders and suitable incentives (covering both the session and the follow-up diary study) in place, observe your participants researching, using and purchasing on your website. Follow up with questions targeted at their needs, wants and observed pain points. In particular, you're looking for friction, anxieties and motivations that you can leverage and bring into your journey map as opportunities.
At this point you'll have a large number of qualitative insights relating to your website. This could range from people not knowing whether to drop the leading zero in your checkout telephone field (validated by dropouts and errors in field analytics), to rage clicking on the payment method icons because they think they're interactive (validated by click recurrence in zoning analysis), to anxieties around purchasing a dress because they can't see a photo of it in context to determine the length (a qualitative insight you can't uncover from analytics).
The final part of the moderated research is to engage with participants via your diary study, just as they exit the usability test. You’ll have directly observed the website behaviour phases of the customer journey through usability testing, but now is the time to research fulfilment, delivery, packaging, unboxing, usage and returns – a rich area of customer journey opportunities many fail to investigate.
Part 3 of 3 – Digital Diary Study
When I polled my User Experience and User Research contacts on LinkedIn, I discovered three quarters had never used a diary study and 18% didn't know what one was. But having run the usability testing and diary study combination on multiple projects myself, I can highly recommend it as a valuable research tool for journey mapping.
Traditionally, a diary study, as the name suggests, is a physical diary that research participants complete in response to questions or guides over a number of weeks or months. At the end of the data collection period, the researcher would review diaries and then follow up with interviews. However, today there are of course digital alternatives such as dScout, though I tend to use our proprietary McCann Truth App.
The digital spin on diary studies unlocks a huge amount of media-rich and real-time potential. For example, through a digital diary study tool you can engage with participants 1-2-1, hold group conversations, publish polls and surveys, and, most importantly, have participants post contextual images and videos.
So how do I use digital diary studies with usability tests for journey mapping?
Well, picking up at the point where your usability testing session ends, typically once the participant has made a purchase, arrived at the confirmation screen and received an email, I then have participants move over to the diary study tool.
From here on out, I continue my research at arm's length through the tool over a number of days and weeks. I have participants record and talk to me about their thoughts, feelings and activities for the remainder of their journey, from ordering through to delivery, unboxing and usage. With this in-context, moment-of-truth window into people's homes, I get to uncover things such as the experience lull between ordering and delivery, how there's a missed opportunity for brands to engage in this space, how customers tend to review orders and share them with friends in anticipation of delivery, and how many brands fail to delight with the peak-end rule (is a confirmation email really the best exit experience we can provide?).
Following on from here, I have participants talk to me about packaging expectations before taking photos of their delivery experience. And once we've moved on from the images of torn-up parcels stuffed through letterboxes or thrown into bushes, I have participants video themselves for the unboxing; it's here where you often find website opportunities.
For example, when using this approach for one of the largest ecommerce brands in the north of the UK, I was able to watch a very excited participant in her own home open the parcel she’d been waiting days for, only to be bitterly disappointed as she held the dress up against her body; “it’s a lot heavier than I thought it would be” she said. And after a few seconds she expanded; “the material, it’s really heavy. It looked like a floaty dress on the website and it didn’t say anything about the weight. It’s not what I thought it was. I’m probably going to send it back.”
And with that, a participant who had been given a free item as part of a research study returned it because the PDP on the website didn't communicate the weight of the item. Sure, the purchase will have helped increase the conversion rate of the website, but the return will have had a detrimental effect on revenue.
I can recall a similar story from another participant. In this instance, the participant purchased a pair of white trainers in the usability lab for her husband. Again, after a wait of several days, captured through the diary study, it was clear the husband was excited by the surprise pair of free trainers.
On delivery day, he sat the box on the kitchen table and in front of a very pleased and excited wife (who had gifted her free research purchase), the husband opened the box. But as his face dropped and disappointment set in, he said “hmmm, they’re ok, but I thought they’d be leather?” Immediately his wife jumped in. “What, they’re not leather? Oh no. It didn’t say they were canvas on the website, I’m positive. I just thought they’d be white leather trainers. There wasn’t anything on the website to say they were canvas and they just looked like normal white trainers in the photos. Well they’re going back!”
And again, we got to observe a plummet in satisfaction and an item being returned because the PDP on the website didn’t clearly communicate the product.
I wonder how many items are returned because of miscommunication, or a lack of communication, around weight, size, material, finish and so on. I wonder how many customers are disappointed and never return to the brand after such experiences. And I therefore wonder how many opportunities there must be to rectify these types of problems further upstream, on the website itself.
Pulling it all together
We have the assumptive journey map we started with, we have the qualitative usability testing insights for the first half of the customer journey, and we have the digital diary study insights for the second part of the customer journey.
The next step is to apply all this insight, plus anything else you have, to the map to create a validated journey map: a journey map that isn't based on assumptions and opinions, but one based on qualitative and quantitative data.
In addition to the methods mentioned here, you should also look to your customer surveys, polls, panels, customer feedback, web analytics, KPIs etc., to add any insight you can. Triangulate and apply both qualitative and quantitative data across the behavioural-attitudinal spectrum so you not only understand website behaviours, but the human psychologies that drive those behaviours.
Remember, what gets measured, gets managed.
If you measure the effectiveness of your end-to-end customer experience, you will banish your blind spots, and learn how to annihilate anxieties and fight friction. You will identify where your opportunities are, and how to flip those opportunities into revenue generators just in time for peak.
Chris Callaghan is a Master Certified UX specialist, Certified Usability Analyst, and Director of UX and Optimisation at McCann Manchester, a Contentsquare Select Experience Partner, and ranked by Prolific North as the number 1 integrated agency in the north of the UK. Working across some of the UK’s largest brands, Chris specialises in the application of advanced UX for Conversion Rate Optimisation and supports Ecommerce teams with Conversion Research, capability growth, Usability testing, UX Design and advanced Prototyping. Connect with Chris on LinkedIn or [email protected] to find out more.
Give me more
For more golden nuggets of Peak advice, head over to our hub.