The vast majority of apps (and there are millions) fail. The apps that succeed are all piloted by teams who know exactly how to track user behavior in a mobile app—and how to use that behavioral data to optimize their app experience.
Only by tracking user behavior can you understand what is and isn’t working in your app experience. It’s also (not incidentally) the best evidence you can cite to prove to your higher-ups that your investments and initiatives are making a positive impact.
In this article, we lay out some best practices to track app usage from day one to day one million. (What can we say? We’re optimists.)
We cover how to:
Set your app’s strategic goals based on product lifecycle stages
Tailor your app’s tracking strategy to support your current and future goals
Combine behavioral data with contextual metadata to get a deeper understanding of your users and product
Keep your data quality high, through every iteration of your app
Step 1: choose metrics and KPIs based on your app’s goals
Before implementing any tracking solution, it's important to quickly (but thoroughly) map your customer journey. This’ll help you identify how usage drives the most relevant mobile app metrics and your key performance indicators (KPIs).
What you choose to focus on in your tracking, as well as the scope of your tracking plan, may change across stages of a product or feature’s lifecycle.
For example, if the product or feature you want to analyze is new, you might want to focus on user engagement and pinpointing friction. But as time goes on, you’ll likely collect more data about downstream events from activation. At this point, you may want to look into trends in adoption, retention, and churn.
Thinking in terms of product lifecycle stages helps mobile app developers analyze their users’ behavior with the same degree of rigor as their web counterparts.
Let’s explore a few common mobile app goals and the actions you can take to achieve them.
Growing a user base and comparing iOS vs. Android
Utilize advanced analytics to identify the most effective user acquisition channels for each platform (iOS and Android)
Analyze user demographics, behaviors, and preferences specific to iOS and Android users to tailor marketing campaigns accordingly
Conduct A/B testing on both platforms to optimize app store listings and improve conversion rates from app store visitors to downloads
Reducing churn and analyzing app version differences
Implement comprehensive analytics to identify common churn points and understand differences between app versions (Ex: iOS vs. Android versions)
Analyze user behavior patterns, such as feature usage, session length, and engagement levels, specific to different app versions to address areas of dissatisfaction and improve retention
Use targeted notifications or personalized offers based on app version differences to re-engage and retain users
Personalizing experiences and tracking cross-platform journeys
Leverage advanced analytics techniques to segment users based on their preferences, behavior, and platform (web or mobile)
Analyze user journeys across web and mobile platforms to understand how customers transition between the 2 and identify opportunities for seamless integration
Deliver personalized content, recommendations, or notifications that are tailored to users' preferences and platform usage
Tracking how customers transition between web and mobile platforms uncovers valuable insights and reveals opportunities to integrate the two more seamlessly.
Improving self-serve user experience and increasing conversion rates
Analyze user flows and interactions within your mobile app, focusing on areas where users encounter friction or difficulties in the self-serve process
Optimize your mobile app's user interface, navigation, and overall usability to ensure a seamless and intuitive self-serve experience for both iOS and Android users
Leverage analytics to identify drop-off points specific to each platform and implement targeted improvements to enhance conversion rates
Step 2: define your tracking strategy
Now that you’ve set your main KPIs, you’re ready to define your tracking strategy, which should support both your current and future goals.
Here’s how to define that ideal strategy:
Identify the customer journey surfaces you need to track
Begin by identifying critical surfaces to monitor and analyze. Next, map the technology or framework used for each surface. Finally, choose the best tracking option for each, such as integrations, SDKs, or track APIs.
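This surface inventory can live in a simple, reviewable artifact. Here’s a minimal TypeScript sketch of one; the surface names, technologies, and tracking methods are illustrative assumptions, not prescriptions:

```typescript
// Hypothetical inventory mapping each customer journey surface to the
// technology behind it and the best-fit tracking option.
type TrackingMethod = "sdk" | "integration" | "track-api";

interface Surface {
  name: string;
  technology: string;     // framework or stack powering the surface
  method: TrackingMethod; // chosen tracking option for that surface
}

const surfaces: Surface[] = [
  { name: "iOS app", technology: "Swift/UIKit", method: "sdk" },
  { name: "Android app", technology: "Kotlin/Compose", method: "sdk" },
  { name: "Marketing site", technology: "Next.js", method: "integration" },
  { name: "Backend billing", technology: "Node.js", method: "track-api" },
];

// Quick audit: which surfaces rely on server-side track APIs?
const serverTracked = surfaces
  .filter((s) => s.method === "track-api")
  .map((s) => s.name);
```

Keeping this map under version control makes it easy to spot untracked surfaces as new ones ship.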
Plan for manual tracking and autocapture
A hybrid tracking approach is your best solution for saving time and surfacing hidden insights. That means you’ll use a combination of automatic and manual tracking.
Use manual tracking to capture:

- Important user journey milestones that rarely change, like account setup
- Critical actions that drive business outcomes (Ex: invite user, start trial, purchase)
- Usage of a new feature (Ex: entry point, actions that indicate feature use)

Use autocapture to uncover:

- Unexpected friction in user journeys (for example, users clicking on unclickable elements)
- How unexpected user behavior or alternate user journeys are converting
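In code, the hybrid approach means explicit track calls only for critical milestones, while routine interactions are left to the autocapture SDK. A minimal sketch, where `analytics` and the event names are stand-ins for whatever SDK you use:

```typescript
type Props = Record<string, string | number | boolean>;

// In-memory stand-in for a real analytics client (Heap, Segment, etc.).
const sentEvents: { name: string; props: Props }[] = [];
const analytics = {
  track(name: string, props: Props = {}) {
    sentEvents.push({ name, props });
  },
};

// Manually tracked: a critical, rarely changing business milestone.
function onTrialStarted(planId: string) {
  analytics.track("Trial Started", { planId, platform: "ios" });
}

// Autocaptured interactions (taps, pageviews) need no explicit calls; the
// SDK records them automatically, so this handler stays tracking-free.
function onAnyButtonTap() {
  /* business logic only */
}

onTrialStarted("pro-monthly");
onAnyButtonTap();
```

The payoff: engineering only instruments the handful of events that must never break, and everything else stays analyzable without code changes.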
Identify users throughout their journey
It can be a challenge to connect user interactions across web and mobile platforms—you need to adopt a solution that resolves identities.
Doing this helps you create cohesive views of your users across devices, browsers, and domains.
Post-login identifiers: Set up a unique user identifier to understand how users move between different products or web and mobile surfaces. For experiences that occur after log-in, use a username, email, or other unique information. Most analytics tools offer ‘identity resolution’ options.

Pre-login identifiers: When dealing with events that occur before a user logs in, there are alternative strategies you can use. Some platforms allow you to retroactively identify a user by matching an identified user with an unidentified one based on a cookie or device ID. This way, you can stitch together the 2 identities and run insightful analyses, such as identifying which marketing pages are most effective at driving user trials.
Remember, it’s essential for all teams to use the same user identifier. This ensures all events are accurately attributed to a specific user.
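The stitching pattern above can be sketched in a few lines. This is a simplified illustration, not any particular vendor’s API; the identifiers and event shapes are assumptions:

```typescript
// Pre-login events carry only an anonymous device ID. At login, we alias
// that device ID to the unique user identifier, so earlier events can be
// attributed retroactively.
interface RawEvent {
  deviceId: string;
  name: string;
}

const aliasMap = new Map<string, string>(); // deviceId -> userId

function identify(deviceId: string, userId: string) {
  aliasMap.set(deviceId, userId);
}

// Resolve who performed an event, falling back to the anonymous device ID.
function resolveUser(event: RawEvent): string {
  return aliasMap.get(event.deviceId) ?? `anon:${event.deviceId}`;
}

const preLogin: RawEvent = { deviceId: "device-42", name: "Viewed Pricing" };
identify("device-42", "user-123"); // the user logs in later

resolveUser(preLogin); // now attributed to "user-123"
```

Real analytics platforms handle this server-side at much larger scale, but the contract is the same: one stable user identifier, shared by every team.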
Step 3: bring together critical context and metadata
User behavior is at the core of understanding a user’s journey within your product. That said, critical metadata levels up your analysis, giving you a clearer picture of your users and your product.
For example, you can enrich user profiles with data about their role, vertical, or company size. With this data, you could tell if a feature is better adopted by users in large companies or if those kinds of users are having trouble.
There are 2 main approaches to bringing this data together: inside your analytics tool, or downstream in your data warehouse.
There are advantages to each:
Combining behavioral data and metadata in your analytics tool: Small teams or teams new to analytics may lack the resources and expertise to manage separate data warehouses and business intelligence (BI) tools, and may instead supplement their data within the analytics platform. This can also facilitate more targeted and personalized messaging by combining enriched data with customer engagement tools.

Bringing behavioral data into your data warehouse: Advanced teams may already use a data warehouse to store, manage, and model financial, business, and user behavior data. Sending behavioral data from their analytics tool to the warehouse lets them synthesize various data types and conduct more sophisticated analyses, while also facilitating cross-team reporting.
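Either way, the underlying operation is a join between events and user metadata. A hedged TypeScript sketch of the company-size example from earlier; the field names and segments are invented for illustration:

```typescript
// Enrich behavioral events with user metadata, then split feature
// adoption by company size.
interface Profile {
  userId: string;
  companySize: "smb" | "enterprise";
}

interface UsageEvent {
  userId: string;
  feature: string;
}

function adoptionBySegment(
  events: UsageEvent[],
  profiles: Profile[],
  feature: string,
) {
  // Index profiles by user for the join.
  const sizeOf = new Map<string, Profile["companySize"]>();
  for (const p of profiles) sizeOf.set(p.userId, p.companySize);

  const counts = { smb: 0, enterprise: 0 };
  for (const e of events) {
    const size = sizeOf.get(e.userId);
    if (e.feature === feature && size) counts[size] += 1;
  }
  return counts;
}
```

In a warehouse this would be a SQL join; in an analytics tool, a user property applied as a breakdown. The logic is identical.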
Step 4: post-implementation quality assurance
Use your analytics tool to generate initial reports and set up dashboards that map the customer journey and focus on the priority areas identified in Step 1.
As you create these charts, note any implementation issues and keep a running list of untracked items.
Dealing with missing data
How you approach this depends on whether your data is or isn’t autocaptured:
Autocaptured data: Label or ‘tag’ missing events as you go. For most events, you won’t need to work with your engineering team to instrument further; instead, use your autocaptured data and label the events you currently need. The best auto-tracking tools even offer retroactive labeling, so you can analyze new events from the time you started autocapturing, even if you only recently decided to analyze this data.

Manually tracked data: First, complete an initial review of your analytics plan, focusing on the questions you aim to answer. As in the autocaptured scenario, map the user journey and identify key KPIs, then compile a comprehensive list of missing or malfunctioning tracking elements. It’s best to create this list directly in the engineering team’s ticketing system to make prioritization easier and minimize communication gaps when moving from spreadsheets to working tickets. By consolidating tracking changes for each iteration, your team can implement new tracking elements more efficiently (reducing back-and-forth communication) and easily spot if an entire category of tracking elements is absent. Before prioritizing these tickets, analyze the list for any patterns or discrepancies in tracking you can learn from and avoid repeating. Add any tickets necessary to ensure comprehensive tracking of the entire workflow.
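Retroactive labeling is worth seeing concretely: labels defined today apply to raw events captured long before. A simplified sketch, with invented selectors and label names, not any specific tool’s API:

```typescript
// Autocaptured raw interactions, stored with no friendly names.
interface RawClick {
  selector: string;
  timestamp: number;
}

const labels = new Map<string, string>(); // CSS selector -> event label

function defineLabel(selector: string, label: string) {
  labels.set(selector, label);
}

// Counts matches across ALL history, including events captured before
// the label existed — that's what makes the labeling retroactive.
function countLabeled(history: RawClick[], label: string): number {
  return history.filter((e) => labels.get(e.selector) === label).length;
}

const history: RawClick[] = [
  { selector: "#start-trial", timestamp: 1 },
  { selector: "#start-trial", timestamp: 2 },
  { selector: "#nav-home", timestamp: 3 },
];

defineLabel("#start-trial", "Start Trial Clicked"); // defined after capture
countLabeled(history, "Start Trial Clicked"); // 2
```

The key property: the label is metadata layered over raw data, so no re-instrumentation or data backfill is needed.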
Fixing incorrect or inconsistent data
There are 2 approaches you can take here:
Update the event definitions (preferred): The best analytics platforms let you update event definitions separately from the track calls. This gives you options for repairing or iterating on an event definition without changing the actual tracking code. By changing the event definition rather than the raw tracks, you save developer time and automatically update any charts and dashboards that depend on that event. This saves time for everyone in your organization, including analysts who would otherwise have to update the charts and stakeholders who would have to hunt down the latest versions of dashboards. It also reduces human error and eliminates inconsistencies between iOS and Android track calls, which are typically implemented by different developers and teams.

Update the track calls: It’s best to batch these changes so your engineering team can quickly make the small edits to the tracking code. Create engineering tickets for each change so anyone analyzing the data can check the status of changes. Use labels for any tickets that include tracking changes, then create a unified view of all tickets that change tracks. This lets anyone who relies on the data understand upcoming changes and see their status.
Step 5: continuously iterate and govern
Products are constantly improving and evolving, which means user behavior and raw events from the product will also change.
As your team adds features, deprecates workflows, and iterates on the product, you need to ensure the dataset stays trusted and up to date.
Dynamic governance for agile product teams
Use these tools to follow best governance practices:
Event verification. This can mean a process of ‘verifying’ events by the analytics admin on a regular basis. By following this process, your team can run their own analysis more successfully, confident they’re using the most up-to-date events.
Event categories. You can also categorize events by team, product, or workflow to narrow down the dataset the team uses. This makes events, charts, and dashboards easier to discover and manage.
Dataset ownership. Create a single ‘owner’ who’s responsible for ensuring that events stay up to date and accurate for a given dataset. They can help resolve issues and verify charts and events.
Event lifecycle maintenance. To avoid recreating shared charts and dashboards when features and events change, use virtual events or tables. Doing so will ensure accuracy over time and prevent confusion from outdated charts.
There are 2 common ways to create virtual events:
Option 1: At the data level, use virtual tables, add extra properties, tags, or names to existing events, and keep lists of raw events for analysis. This approach usually involves complex SQL queries, which can be difficult to maintain and require significant analysis resources.

Option 2: Choose a product that offers low-code or no-code support for virtual events, enabling the dataset to be defined separately from raw events. This simplifies the process and achieves the same result.
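Conceptually, a virtual event is just a named definition layered over raw events, so charts reference the definition while tracking code stays untouched. A hedged sketch, with invented event names:

```typescript
// Raw events as they arrive from tracking.
interface RawTrack {
  name: string;
}

// A virtual event is a predicate over raw events.
type VirtualEvent = (e: RawTrack) => boolean;

// "Completed Checkout" unifies two raw events shipped under different names
// by different platform teams; the definition hides that drift from analysts.
const completedCheckout: VirtualEvent = (e) =>
  e.name === "checkout_complete" || e.name === "purchase_finished";

// Charts and dashboards query the virtual event, never the raw names.
function count(events: RawTrack[], v: VirtualEvent): number {
  return events.filter(v).length;
}
```

When a team later renames a raw event, only the one predicate changes; every chart built on the virtual event keeps working.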
How Contentsquare helps you track user behavior and improve UX
The most successful app teams rely on data for most of their decision-making. But this requires accurate, trustworthy, and accessible real-time data that everyone on your team—and, ideally, organization—can use.
To maintain a comprehensive and up-to-date dataset, develop an analytics and analysis plan for instrumenting your applications effectively. Select a flexible tech stack that evolves with your product and includes tracking tools that cover all aspects of the user journey, making analysis and reporting accessible to the entire team.
Contentsquare’s Product Analytics, powered by Heap, is a comprehensive app analytics solution. It empowers your team to track and analyze your app user journeys from onboarding onwards, serving up actionable app data by the bucketload to help you build an experience that users love to access again and again.