Using metrics to report on quality


Why do you need metrics?

The role of data in software is to provide objective information on which to base decisions. It can be used proactively to inform our strategies and reactively to understand whether the investment we’re making is having the impact we expect. The crucial thing is that it breaks through our biases, telling us what’s happening rather than what we think is happening. This is really important when you’re handling something as complex and subtle as software quality.

What is a quality metric?

A quality metric is a reported measurement that gives you some information about some aspect of the quality of your product. Measuring an aspect of quality in its entirety is impossible – you can’t ask “how good is our reliability?” and get a neat, accurate score out of ten – so metrics are all indicators of quality, not direct measurements of quality itself.


How do you define your quality metrics?

There is no one-size-fits-all approach to defining your quality metrics, as the best ones for you depend on the context of your product and business. However, a useful shortcut is to define what quality means to you and then, for each aspect of your quality definition, to choose metrics that indicate how you’re doing in that area. It is important to collate enough information to form an accurate picture of quality whilst avoiding the noise and uncertainty that a large number of metrics can create – my team found that three metrics per quality aspect struck this balance well.

Before I mention some things that you should consider when defining your metrics, here are some definitions of types of metrics:

Opinion-based metrics
Definition: These don’t aim to measure the quality of the product at all; instead, they collect the opinions of an important group of people.
Examples: In-app feedback surveys that ask users how much they enjoy using the app, as an indicator for usability; pop-ups asking users to rate the quality of a call they have just had, as an indicator for reliability.

Pseudo-metrics
Definition: These identify something measurable that relates or contributes to the aspect of quality you’re interested in measuring. A set of pseudo-metrics can then be generalised as an indicator for your quality in that area.
Examples: User retention rate as an indicator for usability; crash frequency as an indicator for reliability.

Leading metrics
Definition: These measure something that you believe will result in your success, but not success itself.
Examples: Code complexity score as an indicator for developability; time for an operation to complete in an artificial environment during automated testing as an indicator of performance.

Lagging metrics
Definition: These measure directly whether you are achieving a desired result.
Examples: Rate of tickets escalated to specialist support teams as an indicator for supportability; any analytics from live environments in use by customers.

Table defining the types of metrics
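To make the pseudo-metric idea concrete, here is a minimal sketch of how crash frequency might be turned into a reported number. The function name, the per-thousand-sessions normalisation, and the figures used are illustrative assumptions, not anything prescribed by this post:

```python
# Hypothetical sketch: crash frequency as a pseudo-metric for reliability.
# The normalisation (per 1,000 sessions) and the sample numbers are
# illustrative choices, not real product data.

def crash_rate_per_thousand_sessions(crash_count: int, session_count: int) -> float:
    """Crashes per 1,000 user sessions - an indicator of reliability,
    not a direct measurement of it."""
    if session_count == 0:
        return 0.0
    return 1000 * crash_count / session_count

# e.g. 12 crashes observed across 48,000 sessions
rate = crash_rate_per_thousand_sessions(12, 48_000)
print(f"{rate:.2f} crashes per 1,000 sessions")  # 0.25 crashes per 1,000 sessions
```

Note that the number on its own says little; it becomes useful when tracked over time or compared against a target your team has agreed on.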

What to consider when defining your metrics:

Opinion-based metrics vs pseudo-metrics

It’s important to balance these two sources of data as they are both fallible. Pseudo-metrics are very narrow and can easily miss an important facet of quality, whereas opinion-based metrics are subject to respondents misunderstanding what’s being asked or including feelings towards another aspect of the product in their response.

Leading metrics vs lagging metrics

While leading indicators give you early feedback on whether your investment in quality might pay off, lagging indicators tell you whether your investment actually had the desired impact.

Efficiency of data collection vs accuracy of metric

At first glance, it seems that the more faithfully a metric measures a quality aspect, the more valuable it is. However, if a metric takes hours to collect manually, it can be much less valuable than a pseudo-metric that’s automatically delivered alongside each pipeline you run. Consider the trade-off between accuracy and efficiency carefully.
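As one sketch of what “automatically delivered alongside each pipeline” could look like, a pipeline step might compute a cheap leading indicator such as average function length as a crude complexity proxy. The choice of metric and the sample code are assumptions for illustration only:

```python
# Illustrative sketch: a cheap, automatable leading metric computed in CI.
# Average function length (in lines) is used here as a rough stand-in for
# a code complexity score; the choice of proxy is an assumption.
import ast

def average_function_length(source: str) -> float:
    """Mean length in lines of the functions defined in a Python source string."""
    tree = ast.parse(source)
    lengths = [
        node.end_lineno - node.lineno + 1
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]
    return sum(lengths) / len(lengths) if lengths else 0.0

sample = "def f():\n    a = 1\n    return a\n"
print(average_function_length(sample))  # 3.0
```

Because a script like this runs in seconds on every pipeline, the metric arrives for free with each build, which is exactly the efficiency end of the trade-off described above.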

Existing data vs new data

Be open-minded about the data you already have – it might not yield the exact metrics you would design from scratch, but starting with it can be a useful proof of concept before deciding to invest in implementing new metrics.


Hopefully, this gives you some great ideas for how to implement your own set of quality metrics. Please comment or contact me with any questions you have, or let me know if you enjoyed this post!


Error: ‘Quality’ is not defined

In April 2021, I gave a Masterclass at the Ministry of Testing, called “Error: ‘Quality’ is not defined”. In this Masterclass, I discussed how to:

  • Start a productive conversation about quality with your team.
  • Create a definition of quality tailored to your product.
  • Choose and implement metrics to measure the quality of your deliveries.
  • Keep the conversation about quality at the top of the agenda.

The Masterclass can be watched back by Ministry of Testing Pro users here.

After the Masterclass, I was contacted by a few people asking for further details of how I went about my own journey to define quality; this is the second of two blog posts answering some of these questions. The other post is available here.
