Saturday, 13 June 2009

Explaining your testing and what you do

James Bach has a blog post, Putting Subtitles to Testing, which has an accompanying video. The video shows James and Jon Bach testing an Easy Button, with subtitles added along the way explaining the type of testing they are doing at each point.

The point they make at the end of the video is about reporting what you are doing as a tester, and that you need to practise putting what you do into words. This becomes particularly important if you are a tester like me who deals mostly with testing things that don't have a GUI. If you have to explain to someone what testing you have done with a GUI, they will often find it easier to understand because they can visualise or see the GUI. If you start talking about DBs, SOAP, web services, CSVs, etc., they have more difficulty understanding what you have done with your time. This also becomes more apparent when you are not working 100% from test scripts, as you can't just say "I executed these scripts"; when you are doing more exploratory testing, explaining what you have done matters even more. The session notes we write during our testing make sense to us and cover everything, so if there are defects we can go back and remember the full chain of events leading up to noticing the issue. But as the video shows, even subtitles that stay at a high level (Requirements Analysis, Stress Testing, etc.) can still be very informative.

Card Walls - Mingle

Well, I have mentioned the idea of digital card walls in two previous posts (Can defect tracking systems be more helpful? and Visual Representation of Defect Queues), and apparently I am not the first to think about it. ThoughtWorks has a product called Mingle which provides an Agile card wall along with a defect tracking one. Unfortunately Mingle isn't free or open source, so I doubt I will spend any more time looking at it unless I get a budget to spend on tools, which is something I don't see happening any time soon.

Spawner 1.6

I have talked about Spawner before here, here and here. Version 1.6 of this useful database population tool is now out. What it adds is a masked string type, which lets you build strings in whatever shape you like: for instance bank account numbers, IR numbers or phone numbers (beyond just US ones), which is a handy feature to have.
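I haven't checked Spawner's exact mask syntax, but the general idea of a masked string generator is roughly this. A minimal Python sketch, with made-up mask characters ('#' for a digit, '?' for a letter), purely for illustration:

```python
import random
import string

def masked_string(mask: str) -> str:
    """Generate a random string from a mask.

    '#' -> random digit, '?' -> random uppercase letter,
    anything else is copied literally.
    (Illustrative only -- not Spawner's actual mask syntax.)
    """
    out = []
    for ch in mask:
        if ch == "#":
            out.append(random.choice(string.digits))
        elif ch == "?":
            out.append(random.choice(string.ascii_uppercase))
        else:
            out.append(ch)
    return "".join(out)

# e.g. a bank-account-shaped number: bank-branch-account-suffix
print(masked_string("##-####-#######-###"))
# e.g. a non-US phone number with a country code
print(masked_string("+64 # ### ####"))
```

The point is that one simple mask format covers all of those locale-specific formats without the tool needing a dedicated generator for each one.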

Monday, 1 June 2009

Risk Based Testing

Last week I went to Matt Mansell's “What are the Risks of Risk Based Testing?” presentation at the Wellington TPN, and it left me doing some thinking. I first came across risk based testing at STANZ 2008 and since then I have liked the idea of it, but I am personally yet to put it into practice or join a project which is using it.

What is good about risk based testing is that we all know testing everything is impossible given the budgets and time frames of most projects, and risk based testing gives you a framework you can use to prioritise your testing. This needs to be balanced with teaching the customer that descoping via risk based testing means some things may NOT be tested; the customer has prioritised them lower, so if they break in production it is less of an issue. This also helps put testers back in their correct position of providing information to inform the go live decision rather than making the go live decision themselves. That means you can be more pragmatic and no longer need to be the king/queen of quality, as your butt isn't on the line: you didn't make the decision to go live or not. It also helps with the common situation where dev gets congratulated if things go well and test gets blamed if they don't.

From what I have heard, risk based testing really needs to be started early (see my previous blog post), as at that stage the client reps writing the specifications are still around and most likely still have enough time to help come up with the risks and rank them. There is no magic number for how many risks is optimal, though you may at least need to place them into some type of tree structure so you can present different levels of detail to different audiences. Something like FreeMind may come in handy here for recording the risks brainstormed during the meeting of all the stakeholders. The risks for risk based testing are a subset of the whole project's risks, focusing on the quality related ones which are to be mitigated through testing of the deliverable. Every requirement should have one or more risks associated with it.

A requirement's first risk is simply: does it work or not? All the test cases then focus on testing risks rather than requirements. This does cause a bit of an issue, as most test management systems link requirements directly to test cases, rather than modelling requirements having risks and risks having test cases.
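To make that traceability model concrete, here is a minimal sketch in Python. The names are hypothetical, not any real test management tool's API; it just shows the requirement → risk → test case chain described above:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str

@dataclass
class Risk:
    description: str
    level: str  # e.g. "high", "medium", "low"
    test_cases: list[TestCase] = field(default_factory=list)

@dataclass
class Requirement:
    title: str
    risks: list[Risk] = field(default_factory=list)

# The requirement's first risk: does it work at all?
req = Requirement("Customer can pay an invoice")
basic = Risk("Payment does not work at all", "high")
basic.test_cases.append(TestCase("Pay a valid invoice end to end"))
req.risks.append(basic)
```

With this shape, coverage reporting naturally answers "which risks have been tested?" instead of only "which requirements have test cases?".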

Before this presentation, my idea of risk based testing was to start at the highest risk and just work your way down the list. But that isn't the only way: Matt presented the idea of making sure you touch all the risks in every iteration, but not necessarily run all the test cases. For example, let's say you have three test iterations and there are 100 test cases for each of your high, medium and low risks, so 300 test cases in total.

                            Iteration 1   Iteration 2   Iteration 3
High (# of Test Cases)           50            35            15
Medium (# of Test Cases)         35            50            15
Low (# of Test Cases)            15            15            70

This assumes that 15 test cases are enough to touch all of the risks at one particular risk level. What this gives you is the assurance that in every iteration you have touched all of your risks, so you should have found any show stoppers; and if you need to stop before all three iterations are complete, you can show in a simple table like the one above what you have and have not done, which helps the customer make up their mind on whether they want to go live now or later. It does mean you are leaving some of the test cases for the high risks to the very end, so they may get dropped. There is no magic bulletproof vest; there are always drawbacks to one method or another of testing against the risks, but there are different ways of explaining it so the customer better understands your approach.
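As a quick sketch of that coverage check, here is the allocation idea in Python, using the numbers from the table above (illustrative only, not a real scheduling tool):

```python
# Spread each risk level's 100 test cases across iterations so that
# every level is touched in every iteration. Numbers mirror the table.
allocation = {
    "high":   [50, 35, 15],
    "medium": [35, 50, 15],
    "low":    [15, 15, 70],
}

for iteration in range(3):
    planned = {level: counts[iteration] for level, counts in allocation.items()}
    # The coverage guarantee: no risk level is skipped in any iteration.
    assert all(n > 0 for n in planned.values()), "a risk level was skipped"
    print(f"Iteration {iteration + 1}: {planned}")

# Across all iterations, each level's full set of 100 cases gets run.
for level, counts in allocation.items():
    assert sum(counts) == 100, f"{level} does not cover all 100 cases"
```

If testing stops early, the same table read column by column is exactly the "what we have and have not done" report you show the customer.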

Risk based testing may also be used to divide up the testing work. For instance, it may be decided that the test team covers the high risks while the client testers work on the low risks. If this happens, the test team needs to own the process and have some control over the client testers, because if, for instance, the client testers get called away to do their day jobs and things are not tested properly, it is generally still the test team that gets blamed for not doing its job. If the test team owns the whole testing process, it can raise a flag when that happens and either pick up the work itself or get some sort of time extension, so that the product does not ship at a lower standard than expected.