Sunday, 28 April 2013

My takeaway from Regression Obsession

Recently I attended the WeTest session entitled Regression Obsession, led by Michael Bolton. It was an interesting discussion looking at the obsession around regression testing and the possible over-subscription to the idea.

There is an opinion among team members, especially the non-testers, that regression is the safety net for everything that has ever gone wrong, or could go wrong, in production. This is linked with the adage that "if something goes wrong it is the tester's fault, yet if it goes well the developers get the praise". But isn't it one project team trying to deliver a quality deliverable? Adding everything to the regression suite of course means testing takes longer, is more expensive, and adds more pressure to the timeline. What often isn't asked is whether regression is the right place to mitigate a particular risk. Sure, in some instances regression is the right place, but in many situations it is not the best place. Regression should not be a series of band-aids that ignore the root cause of an issue. If there is a common failing somewhere else in the project, why not address the issue there instead of adding to the regression suite? Fixing the root cause fixes it properly: it is a generic fix rather than the specific check a regression test provides, and you won't have to keep checking. The earlier in the process you can identify an issue, the cheaper it is to fix, thanks to the shorter feedback loop. Regression should not be a generic dumping ground for all issues, but a home only for tests specially selected based on cost, discussion and risk.
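To make "selected based on cost, discussion and risk" concrete, here is a minimal sketch of one way such a selection could work. The test names, risk scores, and time budget are all hypothetical; in practice the risk and cost numbers would come out of the team discussion, not a formula.

```python
# Hypothetical sketch: choosing regression tests by risk and cost instead of
# running everything. All names and numbers here are illustrative.
from dataclasses import dataclass

@dataclass
class CandidateTest:
    name: str
    risk: int   # agreed in discussion, e.g. impact x likelihood, 1 (low) to 9 (high)
    cost: int   # minutes to run and analyse

def select_regression(candidates, budget_minutes):
    """Pick the highest risk-per-minute tests that fit the time budget."""
    ranked = sorted(candidates, key=lambda t: t.risk / t.cost, reverse=True)
    chosen, spent = [], 0
    for test in ranked:
        if spent + test.cost <= budget_minutes:
            chosen.append(test.name)
            spent += test.cost
    return chosen

suite = [
    CandidateTest("payments end-to-end", risk=9, cost=30),
    CandidateTest("login smoke", risk=6, cost=5),
    CandidateTest("report layout", risk=2, cost=20),
]
print(select_regression(suite, budget_minutes=40))
# The low-risk, slow "report layout" test drops out of the 40-minute budget.
```

The point is not the particular formula but that the selection is explicit and defensible, rather than "run everything because we always have".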

In a lot of cases regression is expected to be run in full, and this just isn't a smart use of resources. It comes from the expectation from management that it will be run in full, and from the feeling that you don't get in trouble for doing what you have done before; diverging from the path well travelled is a personal risk. A blame environment, first of all, is not a good one for morale. The project was delivered as a team, so if it is to fail it should fail as a team. Each time regression is being considered, the subset of tests that are going to be run should be defined. Sure, it is "safe" to run all the tests every time, but it is not a smart way to test. As with all things, we should work on "testing smarter, not harder". One way to quantify the scope for regression is to get a code differences report and work with the developers to understand the scope of their changes and the knock-on impacts. This should highlight what has changed and its scope, and give you confidence that your targeted regression is well founded. If the code diff doesn't give you the confidence required, is that not more likely an issue with your build system or configuration management, which should be fixed there rather than with new regression tests? The counter-argument is that you don't get fired for doing what you have done before, but is that the best you can do?
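As a sketch of how a code differences report could feed targeted regression, the snippet below maps a list of changed files (such as the output of `git diff --name-only`) to the regression suites that cover those areas. The area-to-suite mapping is entirely hypothetical and would be built together with the developers.

```python
# Hypothetical sketch: turning a code-differences report into a targeted
# regression scope. The mapping and suite names are illustrative only.
AREA_MAP = {
    "billing/": ["invoice regression", "payments regression"],
    "auth/": ["login regression"],
    "ui/": ["layout smoke"],
}

def regression_scope(changed_files):
    """Map changed file paths to the regression suites that cover them."""
    suites = set()
    for path in changed_files:
        for prefix, covered in AREA_MAP.items():
            if path.startswith(prefix):
                suites.update(covered)
    return sorted(suites)

# e.g. the file list from a diff between the last release and the candidate
diff_report = ["billing/tax.py", "auth/session.py"]
print(regression_scope(diff_report))
```

A gap here is telling in itself: a changed file that maps to no suite is a prompt for a conversation with the developers, not an excuse to run everything.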

Regression usually gets left to the end of the process, when time and other pressures are at their greatest, so it often ends up being a very stressful time. There is an expectation from management that testing will have all of its steps broken down. Some of this comes from management not understanding testing fully, and from testers not being able to explain what we are doing as well as we could. The developers don't have to break their time down into how much they spend writing functionality, creating unit checks, and running new and existing unit checks. The developers can have it all lumped together as "development", so why can't we as testers? Reading the Little Black Book On Test Design, you should be continually re-evaluating the tests you are yet to run and adding new ones. Testing should be a dynamic process, and having a fixed schedule and set of test cases goes against that. Being robotic is just checking, and we are testers, so we should be testing.