Labelling a defect a regression

In a recent training session on defect reporting and investigation guidelines, it was highlighted that a useful addition to a bug report is the label ‘regression’. Assuming, of course, that it is indeed one.

In the context of the training session, a regression is a change, usually a negative one, in functionality from one version of the product to the next.

The main reason we give for such a label is to provide enough information to the person, in our case a Product Owner, to help them prioritise bug fixes alongside other work.

This made me wonder: what would happen if we didn’t add this keyword to a bug report?

So, do we always fix these regressions? Mainly yes. But why?

Is it because we believe this functionality matters to the customer?

I wonder if the label itself might be too highly weighted; fixing a regression might, in fact, matter less than, say, providing a new piece of functionality. Would removing this keyword force us to look at the defect or bug in a different light, and would that be better or worse?

So on one hand, very loosely, we have functionality that used to perform one way but no longer does (a regression) versus functionality that is expected to perform one way but doesn’t (a bug).

Put this way, it can be hard to see the major difference between the two. A bug could well be a regression against a customer’s expectation of what they were getting. It isn’t a regression by our original definition, since the functionality never worked in the first place, but the customer doesn’t care about this distinction.

Based on previous customer engagements we have a belief that consistent functionality matters to them. But there are times when this isn’t the case. We once had an example where a piece of functionality stopped working as it was originally designed. It still worked; it was just a change in behaviour. We noticed this ‘regression’ in behaviour and ‘fixed’ the issue ready for the next release. When the customer received this release, they raised a bug report saying that this functionality was broken. It transpired that they had come to expect and value the change in functionality from its original intent. We were fixing a ‘regression’ that they didn’t want fixing!

This highlights that we often have to challenge our assumptions about what matters to our customers. Ask them, if at all possible. Just because something worked well once doesn’t mean that it still fits their needs. How do we go about assessing these concerns, since they won’t naturally fall into the regression patterns mentioned at the start? Quality can deteriorate without any loss of functionality, purely down to the external factors that might affect a product.

Finally, when our automated checks highlight a regression failure, we will often ask the business if they still care about this functionality. Sometimes they do, the failure is resolved, and the checks continue. Sometimes they don’t, the failure isn’t resolved, and the automated checks are simply deleted.
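
One way to make that conversation quicker is for each automated check to record which piece of functionality it protects, so that a failing check points straight at the business question rather than just at a red build. Below is a minimal sketch of that idea using pytest; the ‘protects’ marker, the feature names, and the export_report stand-in are all hypothetical, not taken from our actual setup.

```python
import pytest

# Stand-in for the production code under test; purely illustrative.
def export_report(fmt: str) -> str:
    if fmt == "csv":
        return "id,name,total\n1,Widget,9.99"
    if fmt == "xml":
        return '<?xml version="1.0"?><report/>'
    raise ValueError(f"unsupported format: {fmt}")

# Hypothetical marker naming the functionality each check guards.
# pytest allows custom marks; registering it (e.g. in pytest.ini) avoids warnings.
protects = pytest.mark.protects


@protects(feature="CSV export")
def test_csv_export_has_header_row():
    # If this regresses, the question for the Product Owner is:
    # "do customers still rely on the CSV header row?"
    assert export_report("csv").splitlines()[0] == "id,name,total"


@protects(feature="legacy XML export")
def test_xml_export_still_available():
    # If the business no longer cares about XML export, this check is
    # the thing that gets deleted, not just skipped or left failing.
    assert export_report("xml").startswith("<?xml")
```

Whether the check is then fixed or deleted, the marker at least means the conversation starts from the functionality in question rather than from a test name.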

For now, the experiment can wait. I still have a lot to think about…

If you are interested in exploring regression testing further, Michael Bolton has a good presentation on Things Could Get Worse: Ideas About Regression Testing.