How do we Measure Software Quality in Agile Projects?

Question

In agile, what metrics should be used to measure the quality of software? I know quality is largely subjective and can mean different things to different people, but what KPIs should be chosen if we want to measure quality?

Of course, the ultimate indicator is the number of high-priority defects leaked to production as a result of releasing new features and perhaps (if we are in an agile setup) the ratio of committed stories to actually delivered stories. But what other factors can be used to measure quality?


Answer

It’s important to measure software quality, but it’s difficult to identify meaningful metrics. When no bugs have been identified, does that mean the software is of the highest quality? On the other hand, when a large number of bugs have been found, does that mean the QA team is doing a great job and the software is crap?

The value of those numbers depends on the severity of the bugs and the quality of the QA process, but how is that measured? I guess the closest we can come to a meaningful software quality metric is the number of bugs that make their way to the customer and the impact those bugs have on the users of the software.
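
To make that concrete, here is a rough sketch of a severity-weighted defect escape rate. The severity weights and sample data are assumptions for illustration, not a standard formula:

    # Rough sketch: weight escaped defects by severity so that one critical
    # production bug counts for more than several cosmetic ones.
    # The weights and sample data below are made-up assumptions.
    SEVERITY_WEIGHT = {"critical": 10, "major": 5, "minor": 1}

    def escape_rate(internal_defects, escaped_defects):
        """Share of weighted defect impact that reached the customer."""
        internal = sum(SEVERITY_WEIGHT[d] for d in internal_defects)
        escaped = sum(SEVERITY_WEIGHT[d] for d in escaped_defects)
        total = internal + escaped
        return escaped / total if total else 0.0

    found_in_house = ["major", "minor"] * 6   # 12 defects caught before release
    leaked = ["critical", "minor"]            # 2 defects reached production
    print(f"Weighted defect escape rate: {escape_rate(found_in_house, leaked):.0%}")

Tracking this rate per release, rather than the raw bug count, keeps the focus on what actually reached the users.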

Another way of looking at it is that we develop software for a purpose: satisfying the need of a user. We deliver value, not quality, so the best thing to do is to make sure that whatever you’re doing delivers something valuable to your user. What matters is how you deliver this value to your users, and how quickly and how frequently. This is all related to the process and pipeline of software delivery.

Rather than trying to measure software quality via some metrics, why not focus on trying to create a perfect delivery model?

Within an agile context, you might want to take the following into account:

  • Ensure user stories have clear, concise and understandable acceptance criteria
  • Before development begins, ensure everyone in the team (developers, designers, testers) has the same understanding of the needs behind the user stories
  • Encourage Three Amigos meetings to flesh out the requirements and design decent scenarios
  • Test the stories as they are being developed – code reviews, unit tests and pairing provide early feedback
  • Ensure you deliver what you commit to at the beginning of the sprint
  • Ensure you don’t release high-priority, customer-impacting defects to production (easier said than done!)
  • No rollbacks – the number of rollbacks is easy to measure, and a high count can indicate a very broken process

To create a “quality product”, we need a quality process in place. Practising the above activities helps to create a smooth software delivery pipeline that provides value for the users.

Other metrics to consider are:

  • Measuring velocity over time, including how many story points were committed vs. how many were actually completed in a sprint (this shows whether we are right-sizing our stories and sprints, and may reveal a scope issue).
  • Measuring the number of defects alongside velocity, to see whether there is any correlation between velocity and the number of defects per sprint (a small sketch of both measurements follows this list).
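
A minimal sketch of both measurements, assuming a handful of invented sprint records (committed points, completed points, defects found):

    # Hypothetical sprint history: (story points committed, completed, defects found).
    # The numbers are invented for illustration.
    from statistics import correlation  # Pearson correlation, Python 3.10+

    sprints = [(30, 25, 4), (32, 30, 3), (28, 28, 2), (35, 26, 7), (30, 29, 3)]

    for i, (committed, completed, found) in enumerate(sprints, start=1):
        print(f"Sprint {i}: say/do ratio {completed / committed:.0%}, defects {found}")

    velocity = [s[1] for s in sprints]
    defect_counts = [s[2] for s in sprints]
    print(f"Velocity vs. defects correlation: {correlation(velocity, defect_counts):+.2f}")

A consistently low say/do ratio points at over-commitment or oversized stories, and a strong positive correlation between velocity and defects can suggest the team is cutting corners to hit its commitments.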

The ISO/IEC 25010 standard defines eight leading quality characteristics, each with attributes that can be tested with different kinds of tests.

Generally, high-quality software has high levels of:

  • Maintainability (the code is easy to maintain and amend)
  • Portability (easy to install, replace and adapt to new environments)
  • Functionality (it does what it is intended to do)
  • Performance (it works quickly without using too many resources, even when many people access the software at the same time, across the globe)
  • Compatibility (the software works well alongside other components and systems)
  • Usability (easy to use without needing instructions, even for people with disabilities)
  • Reliability (we can trust the software to work and to recover from issues)
  • Security (sensitive information cannot be extracted by attackers)

But, for each piece of software, some of these will be more important than others, depending on what the system will be used for and by whom.
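
If a team wants to make that weighting explicit, one possible approach is a simple scorecard over the eight characteristics. The weights and 0-10 scores below are invented for illustration and would need to be agreed per product:

    # Illustrative weighted scorecard over the eight ISO 25010 characteristics.
    # Weights say how much each characteristic matters for this product;
    # scores (0-10) would come from reviews, test results and user feedback.
    weights = {"maintainability": 3, "portability": 1, "functionality": 5,
               "performance": 4, "compatibility": 2, "usability": 4,
               "reliability": 5, "security": 5}
    scores = {"maintainability": 6, "portability": 8, "functionality": 9,
              "performance": 7, "compatibility": 8, "usability": 5,
              "reliability": 8, "security": 7}

    overall = sum(weights[c] * scores[c] for c in weights) / sum(weights.values())
    print(f"Weighted quality score: {overall:.1f} / 10")

    # Characteristics with the largest weighted gap first: where improvement pays off most.
    for c in sorted(weights, key=lambda c: weights[c] * (10 - scores[c]), reverse=True):
        print(f"  {c:15} weight {weights[c]}, score {scores[c]}")

The absolute number matters less than the trend and the weighted gaps; a falling score on a high-priority characteristic is the signal to act on.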

5 Replies to “How do we Measure Software Quality in Agile Projects?”

  1. One of the main objectives of testing and QA is to provide visibility of project status by giving all stakeholders information about the attributes of the software under test/development, so that project and division managers can see the current project status, evaluate old risks, and identify new risks based on objective data. This enables project and division managers to base their decisions on objective data and to evaluate potential risk in relation to planned goals.

    Quality can’t be achieved unless it’s specified. Specifying means developing quality requirements and then implementing them by designing the quality requirements together with the product requirements during the development process.

    This model defines goals, identifies the risks associated with the planned goals, and then defines quality attributes for each goal. Once the attributes are defined, implement a measurement programme to collect data for them. The set of goals for software quality includes both process-oriented and product-oriented indicators of quality.

  2. Measuring software quality in typical business applications (as opposed to, say, real-time software for aeroplanes) has no meaning if it is not driven by user requirements about quality.

    When teams become agile, defects are fixed on the fly thanks to automated tooling that tells them what to fix to build the product right, so only a limited number of defects end up being tracked.

    Some companies, for instance, have completely abandoned written test cases for their UAT and just rely on online captures of the exploratory tests done by users.

  3. Quality should be measured in development, not in testing. For instance, a good quality metric in development is “how many times a task (bug fix, feature or code change) was rejected in testing before it was accepted”.

    This gives you a good overview of how developers are writing quality code and how they make sure what they code actually works at all levels.
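
    A minimal sketch of that rejection-count metric, assuming each task record simply carries the number of times it bounced back from testing (the data shape is an assumption for illustration):

        # Hypothetical task records: (task id, times rejected in testing before acceptance).
        tasks = [("BUG-101", 0), ("FEAT-12", 2), ("BUG-102", 1), ("FEAT-13", 0), ("CHG-7", 3)]

        rejection_counts = [count for _, count in tasks]
        first_time_right = sum(1 for c in rejection_counts if c == 0) / len(tasks)
        print(f"Average rejections per task: {sum(rejection_counts) / len(tasks):.2f}")
        print(f"Accepted first time: {first_time_right:.0%}")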

  4. If you ask your stakeholders (customers, end-users, developers, management, sales and marketing, etc.) how they define quality for a given product or release, you should be able, with their assistance, to derive a method to measure it.

    Then it’s up to you to work with the team to deliver it.

    One word of caution: some measures of quality may conflict with others, e.g. developing the code inexpensively (a management measure of quality) may challenge your ability to implement or use new technology (a development measure of quality).

  5. Product quality is generally measured by its ability to meet business objectives. Part of this I remain uncomfortable with, as forced monopolies can, under this measure, show high quality without any client satisfaction being required; but, monopolies aside, business objectives and client satisfaction levels tend to go hand in hand.

    Your objective here, though, is not so much to measure as to optimise and improve, and this involves quite a bit of experimentation.

    Often QA managers are asked to define and measure quality, and their performance is then measured directly by improvements on the measures they defined. This is a slightly flawed, conflict-of-interest approach: it often leads to simplified, surrogate measures that can be controlled and fairly easily manipulated to provide (flawed) evidence of improvement, while actually pushing teams to focus on the wrong things and step further away from the long-term business objectives.

    Be very wary of evidence of improvement becoming your red-herring objective in place of actual improvement.

    Sometimes it makes sense for the starting view of quality to be formed outside the QA manager’s remit and in terms of the business objectives: what does long-term business success look like?

    The QA manager can then leverage this view to break it down and make it more relevant to what the teams are doing, while not losing sight of the primary objective of business success and improvement.

    Improvement is all about goals and experimentation. If you do experiment with a measure, consider the fail-quickly factor and cast aside any invalid metrics as soon as possible, before they do any serious damage.
