Tuesday, 28 May 2013

Quick & Easy Wins for Security Testing

Well, last week I led a discussion at WeTest called Security isn't Scary. One thing that came up was quick wins testers can use to find the low-hanging fruit and knock out some security issues earlier rather than later. These things shouldn't be too hard for most testers, don't have much of a learning curve, and don't take up too much time when you are meant to be functional testing. They're not a replacement for a pen test, just some easy methods that can help find bugs early and add value.

Some little things that you can just make habits:
  • When you are testing fields, instead of typing in names or cities, use `~!@#$%^&*()-_=+[{]}\|;:'",<.>/? as a string. This will quickly highlight any validation issues, and if you get an unhandled error it could indicate a risk of XSS or SQL injection. Handled validation errors are what you expect; what you don't want is a failure because of a database error, or to find on the view page that characters are missing or the HTML is broken.
  • Also type <script type="text/javascript">alert('XSS');</script> into the fields, then go to the view page; if you see a JavaScript alert, there is an XSS issue on the site under test.
  • If there is a login-authenticated section, try going to all its URLs while you aren't logged in; this is especially worth checking if you are using a caching proxy. Yes, it does happen, and it is number 7 on the 2013 OWASP Top 10.
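The view-page check from the XSS bullet above can even be scripted as a rough smoke test. Here is a minimal Python sketch; the helper name and the sample pages are made up for illustration, and a real check would fetch the actual view page over HTTP:

```python
import html

# The special-character string from the first bullet, used as test input.
NASTY_STRING = "`~!@#$%^&*()-_=+[{]}\\|;:'\",<.>/?"

# The classic reflected-XSS probe from the second bullet.
XSS_PROBE = "<script type=\"text/javascript\">alert('XSS');</script>"

def is_reflected_unescaped(probe: str, page_html: str) -> bool:
    """Return True if the probe appears verbatim (unescaped) in the page,
    which suggests the input is not being HTML-encoded on output."""
    return probe in page_html

# A page that escapes its output survives the probe...
safe_page = "<p>" + html.escape(XSS_PROBE) + "</p>"
# ...while a page that echoes input raw does not.
unsafe_page = "<p>" + XSS_PROBE + "</p>"

print(is_reflected_unescaped(XSS_PROBE, safe_page))    # False
print(is_reflected_unescaped(XSS_PROBE, unsafe_page))  # True
```

The same idea works for the special-character string: paste it in, save, then look at the view page for missing characters or broken HTML.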

Then there is raising your awareness:
  • You shouldn't see any stack traces being displayed to the client. An attacker can use this information to glean additional details about the system.
  • Your error messages should be safe: they shouldn't leak information such as code references, IPs, server names, app server names, cities, etc.
  • Login error messages shouldn't allow username enumeration; they should return the same single error message regardless of the reason the login failed. OWASP has a cheat sheet on login and some testing info.
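The username-enumeration point above can be expressed as a tiny check. This is a hedged Python sketch, with an illustrative function name and made-up messages; a thorough check would also compare response timings, since timing differences can leak account existence too:

```python
def allows_username_enumeration(error_unknown_user: str,
                                error_wrong_password: str) -> bool:
    """A login form leaks account existence if its two failure paths
    return different messages. Both should be one generic message."""
    return error_unknown_user != error_wrong_password

# Leaky: an attacker can confirm which usernames exist.
print(allows_username_enumeration("No such user", "Incorrect password"))  # True

# Safe: one generic message for every failure reason.
generic = "Invalid username or password"
print(allows_username_enumeration(generic, generic))  # False
```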

Another valuable thing that testers can do is to ask questions:
  • Has the architecture/design had a review from a security perspective?
  • Is a penetration test planned, budgeted for, scheduled, and a gating hurdle before go-live?
  • Are the developers aware of the OWASP Top Ten and actively putting in processes to avoid the issues that it outlines?
  • Is HTTPS being used for everything that carries a cookie or other sensitive information? Yes, a cookie is highly sensitive and must be protected; have a read about Firesheep if you have any doubt.
  • Is there a list of all third party libraries being used? Is there a process in place to keep checking for updated versions?
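The cookie question above can be spot-checked by inspecting Set-Cookie headers in your proxy or browser tools. A rough Python sketch of that check follows; the function name and sample headers are mine, not from any particular tool:

```python
def missing_cookie_flags(set_cookie_header: str) -> list:
    """Return which protective attributes are absent from a Set-Cookie
    header. 'Secure' keeps the cookie off plain HTTP (the Firesheep
    scenario); 'HttpOnly' keeps it away from injected JavaScript."""
    attrs = [part.strip().lower() for part in set_cookie_header.split(";")]
    missing = []
    if "secure" not in attrs:
        missing.append("Secure")
    if "httponly" not in attrs:
        missing.append("HttpOnly")
    return missing

print(missing_cookie_flags("session=abc123; Path=/"))
# ['Secure', 'HttpOnly']
print(missing_cookie_flags("session=abc123; Path=/; Secure; HttpOnly"))
# []
```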

If there is more time you can start doing some reading:
  • If it is an ongoing project and a previous penetration test has been done, have a look at that report and look for the same types of issues in the new functionality. This is particularly likely if no changes have been made to the team's processes to help support the development of a secure application. Have a look at the Microsoft Secure Development Lifecycle (SDL), which is a good framework for improving team practices around security.
  • Have a look at the OWASP site and do some upskilling. There is a testing guide that can aid your testing, and a series of cheat sheets that can help with both design and testing.

Thursday, 23 May 2013

Picking handcuffs and how it is relevant to IT

So how can picking handcuffs have any relevance to IT you may ask? Well let me tell you.

Handcuffs are a tool that law enforcement use for restraining people. In IT we use a lot of tools, such as libraries. In both instances the tools need to be used properly.

From the outside, and from watching movies, handcuffs seem dead simple to use; how could anyone mess up their usage? Well, did you know that proper handcuffs are double locking? The second lock stops them from being tightened any further, to the point where you could risk cutting off circulation to the hands. A restrainee may also tighten them deliberately and use that as an opportunity to get the restrainer to open them, then break free or attack. The vast majority of law-enforcement-grade handcuffs can be opened with one key, so they really aren't that secure, seeing as almost anyone can open them; I have a key sitting on my key ring, for instance. Even with a bog-standard bobby pin I can open a set of double-locked cuffs in under thirty seconds.
Picking handcuffs. (Yes, it really is that quick)

So law enforcement use handcuffs only for restraint, not security, and they keep a close eye on restrainees. Yet if you ask a general member of the public, they will say handcuffs are for security or locking people up.

So now the bit where I tie this back to IT. When using libraries in your project you need to know how to use them properly, and know their limitations. If, like with handcuffs, you don't fully understand how to use your libraries correctly, or don't use them as intended, they may actually open you up to the very issues you were hoping they would fix, or to completely unexpected ones.

Moral of the story: understand the best usage of the tools you use, and their weaknesses. Don't let a poor understanding or incorrect usage bite you later on. Sure, in IT your program might not attack you, but the front page of the newspaper could really hurt. So are you and your team using your tools properly? Hopefully your applications can't be opened as easily as a pair of handcuffs.

Wednesday, 22 May 2013

A Sexy Software Testing Conference for NZ

Recently I was at the work unconference, where a lot of the devs had Codemania T-shirts on and were saying great things about it. This got me thinking about software testing conferences in NZ: where is our sexy conference that we talk fondly of and wear T-shirts from? STANZ isn't a sexy community-based conference; it is a big, expensive, old-style conference (does it even have a T-shirt, and would you proudly wear it?). KWST does have some of the sex appeal of the new style of conference, but it is invite only.

Looking at KWST, WeTest, NZ Tester, etc., we don't need flashy international superstar testers to speak; there are a lot of people in NZ who have a lot to offer the community, and a great number of people who could learn from them as well.

Also look at what can be done: Kiwicon, the hackers' conference, was only $60 per person plus some corporate sponsorship. I pay for Kiwicon out of my own pocket because I enjoy it and have a load of fun there. It is a conference I go to because I want to, rather than one of those conferences you attend with a slight feeling of keeping up appearances while getting only a little out of it, especially once the price is factored in.

A common theme I see between Codemania and Kiwicon is that the core people making them happen are respected in the community, and they are making the conference they themselves would be first in line to queue for tickets to, like people do for their favourite band when gig tickets go on sale.

So does NZ need something like this for software testers? Or are KWST, WeTest, etc. enough for us? They are great and I wouldn't want them to disappear. Surely there must be more testers who are interested in professional development, or are a lot of testers comfortable doing what they are doing and not wanting to rock the boat? Or is it that the WeTest sessions all fill up instantly, so we are making an elite clique that people can't break into? (Which I assume is no one's intention.)

Edit: There has been some discussion on the Software Testers NZ discussion group.

Sunday, 28 April 2013

My take away from Regression Obsession

Recently I attended the WeTest session entitled Regression Obsession, led by Michael Bolton. It was an interesting discussion looking at the obsession around regression testing and the possible over-subscription to the idea.

Team members, especially the non-testers, often hold the opinion that regression is the safety net for everything that has ever gone wrong, or could go wrong, in production. This links with the adage that "if something goes wrong it is the testers' fault, yet if it goes well the developers get the praise"; isn't it one project team trying to deliver a quality deliverable? Adding everything to the regression suite of course means testing takes longer, costs more, and puts more pressure on the timeline. What often isn't asked is whether regression is the right place to mitigate the particular risk. In some instances it is, but in quite a few it is not. Regression should not be a series of band-aids that ignore the root cause of an issue. If there is a common failing somewhere else in the project, why not address it there instead of adding to the regression suite? Fixing the root cause fixes it properly: it is a generic fix, not a specific one like a regression test, and you won't have to keep checking. The earlier in the process you can identify an issue, the cheaper it is to fix, thanks to the shorter feedback loop. Regression testing should not be a generic dumping ground for all issues, but only for those specially selected based on cost, discussion and risk.

In a lot of cases regression is expected to be run in full, which just isn't a smart use of resources. It comes from management's expectation that it will be run in full, and from the fact that you don't get in trouble for doing what you have done before; diverting from the path well travelled is a personal risk. A blame environment, firstly, is not good for morale. The project was delivered as a team, so if it is to fail it should fail as a team. On each occasion that regression is being considered, the subset of tests to be run should be defined. Sure, it is "safe" to run all the tests every time, but it is not a smart way to test; like with all things, we should be "testing smarter, not harder". One way to quantify the scope for regression is to get a code-differences report and work with the developers to understand the scope of their changes and the knock-on impacts. This should highlight what has changed, and give you confidence that your targeted regression is well founded. If the code diff doesn't give you the confidence required, isn't that more likely an issue with your build system or configuration management, which should be fixed there rather than with new regression tests? The counter-argument is that you don't get fired for doing what you have done before, but is that the best you can do?
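The code-diff idea can be sketched in Python. Everything here is illustrative, not a real tool: the mapping from source areas to regression suites would come from your own project and from talking with the developers, and the changed-file list might come from something like `git diff --name-only` between two releases:

```python
def select_regression_tests(changed_files, test_map):
    """Given files from a code-diff report and a mapping of source
    areas to regression suites, return only the suites worth
    re-running for this change."""
    selected = set()
    for path in changed_files:
        for area, suites in test_map.items():
            if path.startswith(area):
                selected.update(suites)
    return sorted(selected)

# A made-up mapping for illustration; in practice it would be built
# with the developers, based on what each area actually touches.
TEST_MAP = {
    "src/billing/": ["regression/billing", "regression/reports"],
    "src/auth/": ["regression/login"],
}

changed = ["src/billing/invoice.py"]  # e.g. from a code-diff report
print(select_regression_tests(changed, TEST_MAP))
# ['regression/billing', 'regression/reports']
```

The point is not the code itself but the conversation: building and maintaining the mapping forces testers and developers to agree on the real scope of each change.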

Regression usually gets left to the end of the process, when time and other pressures are at their greatest, so it often ends up being a very stressful period. There is an expectation from management that testing will have all of its steps broken down. Some of this comes from management not fully understanding testing, and from testers not explaining what we are doing as well as we could. The developers don't have to break their time down into how much they spend writing functionality, creating unit checks, and running new and existing unit checks; they can have it all lumped as development, so why can't we as testers? Reading The Little Black Book on Test Design, you should be continually re-evaluating the tests you are yet to run and adding new ones. Testing should be a dynamic process, and having a fixed schedule and a fixed set of test cases goes against that. Being robotic is just checking, and we are testers, so we should be testing.

Monday, 22 April 2013

The Little Black Book On Test Design

At the TPN last week Michael Bolton mentioned a book he had found very good: The Little Black Book on Test Design, written by Rickard Edgren and available for free. It is only 32 pages including cover, contents, bibliography, etc., so it is a short, quick read.

I found it very interesting and have plenty of thoughts to take away from it. One of my main takeaways is how much we focus on verifying/checking the requirements rather than engaging our minds and actually testing the application, along with the rigidity with which we write test cases and then follow them almost blindly, rather than thinking about the test ideas and re-evaluating our position after each test. Some questions to think about are:
  • Has the information from your last test given you new test ideas?
  • Does the information you have just learnt mean that some of your other test ideas will no longer give you additional information about the system under test, and is there still value in running them?
  • Is the test going to give someone value?
  • Test ideas should be ever evolving, not static.
To help you think of test ideas that go beyond the explicit requirements in the requirements document, the book includes a large list of questions and prompts to get the ball rolling.

Wednesday, 27 March 2013

Third NZ Tester is out

The third NZ Tester is out, and it has an article from me in it: a Testing Tool Box piece on qTrace.

Friday, 8 February 2013

The Purpose Section in a Test Plan, is it used right?

Coming out of the WeTest session last night on test planning, I had a thought (while shaving this morning, rather than last night while there :( ). It came out of the discussion about Test Plans being geared towards people other than the test team, and about Test Plans being a dumping ground for everything that isn't documented elsewhere in the documents the project produces.

Is the purpose section in the Test Plan being written correctly? Is that the problem?

From the Purpose sections I have seen (if the document has one at all), they don't clearly state that the Test Plan is there to help the test team do their testing, and that this should be its only purpose. If the purpose is written well, then when reviewers ask "why didn't you include X?", we can say "it won't help the test team, so it isn't in here; it would be better suited to document Y". If we push back, can we get Test Plans to be useful?

Are there any good Test Plan purposes out there, or can we as a community come up with a boilerplate one that makes it clear that the Test Plan is for the test team and should be helpful to them? It should also be clear that the Test Plan is a living document: as we learn more about the application under test, and more about tools and methodologies, the document should change, so we are doing the best work we can.