“Let’s build a low-quality product!”…said no one ever. And yet somehow, the world is full of low-quality products.
You’re probably painfully aware of the ways in which your product isn’t high-quality. You may have even made the decisions that got it there.
Those trade-offs are necessary, usually painful, and often difficult to navigate.
In this article, we help you navigate questions like:
How do you know when to compromise on product quality vs. insisting on it?
How can you ensure you’re compromising deliberately, not by accident?
How can you tell whether the improvements you’re working on are worthwhile?
And how do you stand tall and defend those decisions in the face of entirely reasonable criticism from users and leaders alike?
We’ll do that by:
Defining product quality
Telling you when it matters
Explaining how to measure product quality
Giving you tips on how to improve product quality
Let’s start with the most fundamental question of all.
What is product quality?
Product quality is the degree to which your product meets user needs. It encompasses the following criteria:
Completeness: for a given feature, did you solve a real problem in a complete and effective way? Or does the target user regularly have to work around its deficiencies or lean on your support team? (This presumes you have a clear definition of your target user.) Does adding that feature leave your product in a state that feels whole and consistent?
Opinionated: did you embed your understanding of what the target user will want to do and how they’ll best accomplish it? Are you clear on your recommended/default next action? Are you willing to inconvenience less-important personas to support more-important ones?
Usability: can your target user figure out how to use the product without undue cognitive load (the mental effort required)? Are your terminology, iconography, and flow consistent with user expectations? Do clickable elements have clear, consistent affordances? Does your product conform to known usability principles? (Note that usable isn’t always the same as discoverable. One doesn’t pick up Photoshop without training, and that’s fine—it’s a pro tool. The same expectation doesn’t hold for Apple Photos.)
Polished: are colors, fonts, and dimensions consistent with your design system, and is everything aligned? Are icons and images rendering properly? Did you use animations to preserve context? Do actions provide effective feedback? Did you write grammatically correct and sensible copy? Is the product stable?
Efficient: does your solution feel fast and responsive? Can users complete their tasks with minimal cognitive load and low effort? This is partly about actual performance (latency, response times, number of steps) but also derives from user experience (UX) details and other factors in this list.
When does product quality matter?
Intuitively, everyone likes the idea of a high-quality product, but is the required investment worth making?
In truth, it matters more for some businesses than others. It’s most important when:
You exist in a highly competitive market that’s in the middle of its maturity curve. Your business is well-defined and somewhat commoditized, and you have plenty of competition. Given the existence of robust competitors, low product quality leads to churn over time, while high product quality serves as a meaningful differentiator.
End users are heavily involved in purchase decisions. This is true for B2C software and some software-as-a-service (SaaS) businesses, but less so for traditional enterprise tools where the purchase is made top-down via a central IT department.
End users arrive with expectations built in other markets where product quality is high. This is increasingly true in the tech startup world: popular tools like Slack, Dropbox, and GitHub have a reasonably high-quality bar (in part because they pursued a bottom-up sales model), generating environments in which lower product quality can stick out like a sore thumb.
Truly bad product quality will catch up with you eventually—in support costs, in poor customer satisfaction, in vulnerability to competitors—and the longer you let it go on, the harder it is to rectify.
And, business case aside, do we really want to bring more bad products into the world?
How to measure your product quality pre- and post-launch
Before launching your product, new features, or updates, evaluate and test your product quality with:
Heuristic or expert evaluation. When you hire designers, you’re hiring experts in recognizing product quality. A heuristic evaluation is a powerful, subjective, low-cost way to use that expertise: a designer, alone or along with their cross-functional team, audits the experience with the help of established UX heuristics. This may seem simplistic, but in many dimensions, it’s the most effective way to measure product quality.
Usability testing can tell you about usability, the cognitive components of efficiency, and, to some extent, completeness and opinionatedness. It’s useful at multiple stages of the design process.
Post-launch, you need insights into how well the product is functioning for your actual users and how happy they are with their experience.
There are 3 metrics that can give you some insight here.
Performance metrics (e.g. load times, latency) and quality assurance (QA) can tell you about stability and performance, and therefore provide insights into efficiency and polish (see the sketch after this list).
Customer satisfaction metrics let you know how your users are feeling about certain aspects of their product experience and can catch any issues that frustrate users—but they are imprecise. Surveys complement these on a feature-by-feature basis. With the right platform (like Contentsquare), you can quickly create AI-powered surveys, place them at key moments in the user journey (like after the first use of an important feature), and generate summary reports to see what users love or hate at a glance.
Usage metrics can tell you about completeness and, to some extent, usability over time. Product analytics give you insights into how your users are using your product and how (and how often) they reuse product features.
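For the performance side of that picture, the browser’s standard Performance APIs are often enough to get started. Below is a minimal sketch of capturing two common load-time signals client-side and forwarding them for analysis; the `/internal/perf-metrics` endpoint and the metric names are illustrative assumptions, not part of any particular analytics product.

```typescript
// Minimal sketch: capture Largest Contentful Paint and total page-load time
// in the browser and send them to a hypothetical collection endpoint.

function report(metric: string, valueMs: number): void {
  // sendBeacon survives page unloads better than fetch for analytics pings
  navigator.sendBeacon(
    '/internal/perf-metrics', // hypothetical endpoint, swap in your own pipeline
    JSON.stringify({ metric, valueMs: Math.round(valueMs), page: location.pathname })
  );
}

// Largest Contentful Paint: a rough proxy for "the page feels loaded"
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  if (last) report('largest-contentful-paint', last.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Navigation timing: time from navigation start to the load event
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceNavigationTiming[]) {
    report('page-load', entry.loadEventEnd - entry.startTime);
  }
}).observe({ type: 'navigation', buffered: true });
```

However you collect them, the point is to track these numbers over time: a regression in load time is an early, objective warning that efficiency and polish are slipping.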
While these metrics help you understand your product quality, none give you the full picture.
Usage gives you the what but not the why. For example, ongoing retention for a particular feature is a valuable signal about its usability, but it doesn’t tell you whether your actual user matches your target user, or—most importantly—tell you how customers feel about that feature.
Customer satisfaction metrics let you know when something is going right (or wrong), but don’t necessarily tell you what is going wrong.
That’s why you need experience analytics to truly understand why your users behave the way they do.
Contentsquare is an all-in-one experience intelligence platform that not only captures every click, tap, hover, and scroll your product users make, but also makes it easy to investigate the root cause of that behavior.
The platform’s Experience Analytics product lets you identify macro trends in user behavior and then dig into the data with capabilities like Journey Analysis and Session Replay to understand where and how individual users are getting value from your product—or just getting stuck.
The right experience intelligence platform will let you apply this analysis to the sessions that lie behind customer feedback—which is important, given how vague feedback often is.
When you can’t understand why an unhappy user has given your new feature a bad rating, watching a session replay gives you the extra context you need, turning a ‘huh?’ to an ‘aha!’ in seconds.
Putting a number on it
While product quality is often subjective, it can be valuable to discuss it numerically so we can make comparative statements.
So, let’s give it a 10-point scale. Ten is perfect, and perfect is effectively impossible, so treat it as an asymptote. Theranos might be a 1, Comcast a 3, Microsoft a 4-5, and Apple, in 2012, an 8.5.
![[Chart] what is product quality: effort vs product quality](http://images.ctfassets.net/gwbpo1m641r7/21pnhwpgxB6KIVz8VXdBFe/8d3a6f8f9027cc80758d4ee77ebad1ff/blog-what-is-product-quality-effort-vs-product-quality.avif?w=2048&q=100&fit=fill&fm=avif)
Where should you be? That’s up to you.
The average baseline is around a 4. Beyond that, it comes down to trade-offs—typically between product quality, scope, and timeline.
You can boost any of those 3 at the expense of the others. Your organization probably has axioms (explicit or implicit) around scope, timeline, and even code quality…what’s your axiom for product quality?
![[Chart] time scope product quality](http://images.ctfassets.net/gwbpo1m641r7/2Yhh2JbpKKgAPaVZJzwGst/cdf717b99265ab6a781f8f78a97393f0/area-chart-time-scope-product-quality.avif?w=640&q=100&fit=fill&fm=avif)
It’s also worth noting that your target product quality should be release-dependent.
Modern software development methodologies value iteration, experimentation, and agility. If product quality is overemphasized too early, it can work against these.
Specifically:
Your minimum viable product (MVP) exists to test a hypothesis, not to ship a great product. Indeed, some forms of MVP—painted-door tests, concierge MVPs, prototypes—don’t live in your codebase at all. For those that do, set the product quality bar as low as possible without endangering the test; you can often compensate for low quality with hand-holding. That assumes, of course, that you’ve hidden your MVP from all but a select, opted-in group of test users via feature-flagging platforms like Optimizely or LaunchDarkly (see the sketch after this list).
A public beta still has an experimental component to it: will this feature deliver on its early promise? Here, you’ll need high enough product quality that you don’t erode the overall experience (or sabotage the feature in question). You might compromise a bit on completeness, polish, or efficiency—provided you follow through and don’t let your beta become a de facto v1.
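To make the “hidden MVP” idea from the first bullet concrete, here’s a minimal sketch of gating a feature behind both a flag and an explicit opt-in check. The `FlagClient` interface and `shouldShowMvp` helper are hypothetical stand-ins for whichever flagging SDK (LaunchDarkly, Optimizely, or homegrown) you actually use.

```typescript
// Hypothetical shape of a user and a feature-flag client; adapt to your SDK.
interface User {
  id: string;
  optedIntoBetas: boolean;
}

interface FlagClient {
  // Resolve a boolean flag for a given user, falling back to `defaultValue`
  isEnabled(flagKey: string, user: User, defaultValue: boolean): Promise<boolean>;
}

async function shouldShowMvp(flags: FlagClient, user: User): Promise<boolean> {
  // Never expose the MVP to users who haven't opted in, regardless of flag state
  if (!user.optedIntoBetas) return false;
  // Default to "off" so a misconfigured or unreachable flag service fails safe
  return flags.isEnabled('concierge-export-mvp', user, false); // illustrative flag key
}

// Usage (assuming `flags` wraps your real SDK and `currentUser` is known):
//   if (await shouldShowMvp(flags, currentUser)) { renderMvpEntryPoint(); }
```

Defaulting the flag to off means a test that misfires simply hides the MVP rather than exposing an unfinished experience to your whole user base.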
How to improve product quality
So you’ve assessed both your current and target product quality. How do you go about hitting that target?
Be axiomatic. Given the difficulty of tying product quality to business metrics, it’s necessary to treat it as axiomatic in and of itself. If we always ask, “OK, but do we really think that extra 2 weeks will make a difference to revenue?” about product quality, we’ll never prioritize it.
Invest in your winners. It’s easy to chase bugs and product/design debt, while letting your successful-but-unfinished features languish. After all, they’re working, right?
1.1 or sunset. For new functionality, make an explicit plan to do a 1.1 release that finalizes product quality on your 1.0. If the launch is successful (or promising), finish it; if not, retire it so the product doesn’t get bloated.
Keep an eye on the big picture. A high-quality product feels consistent and doesn’t expose organizational boundaries. As you build new functionality, take the time to ensure it matches what’s already there. Sometimes that means constraining yourself to existing patterns; sometimes you’ll have to evolve them. A design system can be immensely helpful here and, once implemented, can increase your front-end velocity too.
Get opinionated. Be deliberate with choice architecture to bake a ‘happy path’ into the product rather than optimizing for flexibility. This will make things easier for some users at the expense of others, but it’s impossible to build a high-quality experience that’s all things to all people. It does not preclude a more flexible (but less prominent) ‘escape hatch’.
Validate early and often. To understand what completeness and usability look like for your particular project, validate along the way. Here’s how:
Conduct periodic foundational research such as open-ended user interviews or field studies to uncover fundamental user needs, expectations, and behaviors. Done comprehensively, it can be time-consuming, but it’s also scalable: even 5 one-hour, properly structured phone interviews with customers can teach you a lot.
Run usability tests at key stages to get data on completeness and usability as your solution evolves. Be scrappy: don’t build a prototype when a mock-up would suffice, or an alpha when you could make a prototype work. But bear in mind: the less participants have to imagine your product, the better the data you’ll get.
Lean on closed alphas and betas for late-stage validation, which can save you from spending time polishing the wrong thing.