Sunday, 22 November 2009

Testing at University

I did Computer Science while I was at University. Thinking back to my time there, the only testing covered was about five minutes introducing the idea of white box and black box testing. Being a tester, I think there should be more testing taught at University. Sure, the majority of people doing Computer Science are going to go into development, but even so, developers need to test as well.

The first I ever heard about JUnit and unit testing was when I was in the industry. So basically all the developers coming out of University have to learn about unit testing when they start working, so their unit tests mightn't have the best coverage, or there may be some resistance to writing them, as they have never seen testing as part of "coding" before. That changes once they start seeing continuous integration catching bugs, and their own unit tests catching the kind of thing that last time was caught by the test team and went through the defect tracking system for everyone to see.

There will be the argument that Computer Science is there to teach Computer Science, not professional software development. What I say to that is: how many people doing Computer Science go on to become computer scientists, and how many go on to work in software development?

Also, could it help the students and lecturers? Maybe in first year, when assignments are very directed (do this, do that, do a while loop, do a for loop, etc.), give the students the JUnit tests and have them write their code to match. This would also make marking very easy, as it becomes a matter of running the unit tests and seeing if they pass or not.

Also, later on, when students are implementing algorithms to aid their understanding of what they are covering in lectures, lecturers could get them to write unit tests to show that they understand the code they are writing and where they are trying to get to with it; something like the sketch below.
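As a rough sketch of how the handed-out tests could look (using Python's unittest here for brevity; the JUnit version a lecturer would hand out has the same shape, and fib is just a made-up assignment):

import unittest

# The assignment: the student implements fib() so the tests below pass.
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# The lecturer ships these with the assignment; marking is then just a
# matter of running them and seeing whether they pass.
class FibAssignmentTests(unittest.TestCase):
    def test_base_cases(self):
        self.assertEqual(fib(0), 0)
        self.assertEqual(fib(1), 1)

    def test_recurrence(self):
        self.assertEqual(fib(10), 55)

if __name__ == '__main__':
    unittest.main()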

Sunday, 30 August 2009

Getting soapUI Test Cases to check Databases

I have done some investigating into how to connect to a database from soapUI and query it, to ensure that the data from the SOAP call has made it all the way to the DB.

Firstly, ensure you have downloaded the relevant JDBC driver jar file and placed it in $SOAPUI_HOME/bin/ext, then restart soapUI, as it only seems to pick the jar up on restart.

Below is a simple Groovy script which I have put in as a test step. It goes to the database and checks if the data you want is there; it will need some additions for real use, like a WHERE clause I would guess. If the count (or the data, for that matter) is right, the assert will pass or fail as applicable and show up in the results.

import groovy.sql.Sql
def ERROR_MSG = "The count is not 1"
// Connection URL, user, password and JDBC driver class for the target DB
sql = Sql.newInstance("jdbc:mysql://localhost:3306/demo", "demo", "password", "com.mysql.jdbc.Driver")
// firstRow returns a single row; count(*) is aliased to the column "count"
row = sql.firstRow("SELECT count(*) count FROM foo")
log.info("Count: ${row.count}")
// Fail the test step, with ERROR_MSG in the results, if the count is not 1
assert (1 == row.count) : ERROR_MSG
sql.close()

Also remember, before running these tests, to ensure that the database is truncated, or that your SQL only looks at the results you want it to look at, not ones from earlier runs. If you are testing automatically with Maven you could get Maven to do the cleaning up and prep work.

Using soapUI to do testing as part of a Maven2 build

soapUI has a Maven plugin, soapUI has a Maven getting started guide, and Maven has its own getting started guide. Well, I thought I would have a go at joining them together, so I can have a Maven build automatically run my soapUI tests and output a JUnit-compatible results file.

Pre Reqs:
  • Maven 2 installed
  • soapUI with a project that already has some tests setup

Steps:
  1. Set up a new project by running "mvn archetype:create -DgroupId=nz.geek.karit.app -DartifactId=app". This will create a new project in a new directory called app.
  2. To pom.xml add:
    <!--Add the repository where Maven can find the soapUI plugin-->
    <pluginRepositories>
      <pluginRepository>
        <id>eviwarePluginRepository</id>
        <url>http://www.eviware.com/repository/maven2/</url>
      </pluginRepository>
    </pluginRepositories>
    <build>
      <plugins>
        <plugin>
          <groupId>eviware</groupId>
          <artifactId>maven-soapui-plugin</artifactId>
          <!--This is the version of soapUI to grab from the plugin repo-->
          <!--At the time of writing the 3.0.1 plugin had not been created-->
          <version>2.5.1</version>
          <configuration>
            <!--The location of your soapUI project file-->
            <projectFile>/home/test/test.xml</projectFile>
            <!--Where to place the output of the run-->
            <outputFolder>/home/test/output/</outputFolder>
            <!--Make the JUnit results file-->
            <junitReport>true</junitReport>
          </configuration>
          <executions>
            <execution>
              <id>soapUI</id>
              <!--Run as part of the test phase in the Maven lifecycle-->
              <phase>test</phase>
              <goals>
                <!--Run the test goal of eviware:maven-soapui-plugin-->
                <goal>test</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

  3. You can now run the soapUI tests by calling "mvn eviware:maven-soapui-plugin:test"
  4. It will also run as part of Maven with "mvn test" for example

For all the different properties you can set see: http://www.soapui.org/plugin/maven2/properties.html

Along with running the test you can also run the loadtests and start the mock web service all from Maven. You can do this by calling a goal other than test. For the details see: http://www.soapui.org/plugin/maven2/goals.html
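For example, if I am reading the goals page right, "mvn eviware:maven-soapui-plugin:loadtest" should run the load tests; do check that page for the exact goal names, as I haven't tried them all myself.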

My full pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>nz.geek.karit.app</groupId>
  <artifactId>app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>app</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <!--Add the repository where Maven can find the soapUI plugin-->
  <pluginRepositories>
    <pluginRepository>
      <id>eviwarePluginRepository</id>
      <url>http://www.eviware.com/repository/maven2/</url>
    </pluginRepository>
  </pluginRepositories>
  <build>
    <plugins>
      <plugin>
        <groupId>eviware</groupId>
        <artifactId>maven-soapui-plugin</artifactId>
        <!--This is the version of soapUI to grab from the plugin repo-->
        <!--At the time of writing the 3.0.1 plugin had not been created-->
        <version>2.5.1</version>
        <configuration>
          <!--The location of your soapUI project file-->
          <projectFile>/home/test/test.xml</projectFile>
          <!--Where to place the output of the run-->
          <outputFolder>/home/test/output/</outputFolder>
          <!--Make the JUnit results file-->
          <junitReport>true</junitReport>
        </configuration>
        <executions>
          <execution>
            <id>soapUI</id>
            <!--Run as part of the test phase in the Maven lifecycle-->
            <phase>test</phase>
            <goals>
              <!--Run the test goal of eviware:maven-soapui-plugin-->
              <goal>test</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Sunday, 23 August 2009

The Missing Grinder 3 Getting Started Guide

I have been looking at The Grinder to see what it is like at performance testing. I found the getting started guide covered the tool, but it didn't really cover the "I want to click here, here and here and have it just request a page from my web server" part, so hopefully this will get you past that. I have always found the first five minutes with a tool the complicated ones; you need just enough to get going and over that first hurdle.



Pre Reqs:
  • A web server that you have permission to test with. For me it is localhost, running on port 80.
  • I am doing this on Ubuntu; doing something similar on Windows should do the trick.

The steps:
  1. Download The Grinder 3
  2. Unzip it. I unzipped it to /home/test/Desktop/grinder-3.2
  3. Create a new directory /home/test/Desktop/grinder-3.2/demo. All files created, edited etc should be in here.
  4. Create a new file in demo called grinder.properties. I copied the below from the examples directory and changed grinder.script to simple.py.
    • grinder.properties:
      #
      # Sample grinder.properties
      #
      #
      # The properties can be specified in three ways.
      #  - In the console. A properties file in the distribution directory
      #    can be selected in the console.
      #  - As a Java system property in the command line used to start the
      #    agent. (e.g. java -Dgrinder.threads=20 net.grinder.Grinder).
      #  - In an agent properties file. A local properties file named
      #    "grinder.properties" can be placed in the working directory of
      #    each agent, or a different file name passed as an agent command
      #    line argument.
      #
      # Properties present in a console selected file take precedence over
      # agent command line properties, which in turn override those in
      # an agent properties file.
      #
      # Any line which starts with a ; (semi-colon) or a # (hash) is a
      # comment and is ignored. In this example we will use a # for
      # commentary and a ; for parts of the config file that you may wish to
      # enable
      #
      # Please refer to
      # http://net.grinder.sourceforge.net/g3/properties.html for further
      # documentation.
      
      
      #
      # Commonly used properties
      #
      
      # The file name of the script to run.
      #
      # Relative paths are evaluated from the directory containing the
      # properties file. The default is "grinder.py".
      grinder.script = simple.py
      
      # The number of worker processes each agent should start. The default
      # is 1.
      grinder.processes = 1
      
      # The number of worker threads each worker process should start. The
      # default is 1.
      grinder.threads = 1
      
      # The number of runs each worker process will perform. When using the
      # console this is usually set to 0, meaning "run until the console
      # sends a stop or reset signal". The default is 1.
      grinder.runs = 0
      
      # The IP address or host name that the agent and worker processes use
      # to contact the console. The default is all the network interfaces
      # of the local machine.
      ; grinder.consoleHost = consolehost
      
      # The IP port that the agent and worker processes use to contact the
      # console. Defaults to 6372.
      ; grinder.consolePort
      
      
      
      #
      # Less frequently used properties
      #
      
      
      ### Logging ###
      
      # The directory in which worker process logs should be created. If not
      # specified, the agent's working directory is used.
      grinder.logDirectory = log
      
      # The number of archived logs from previous runs that should be kept.
      # The default is 1.
      grinder.numberOfOldLogs = 2
      
      # Overrides the "host" string used in log filenames and logs. The
      # default is the host name of the machine running the agent.
      ; grinder.hostID = myagent
      
      # Set to false to disable the logging of output and error streams for
      # worker processes. You might want to use this to reduce the overhead
      # of running a client thread. The default is true.
      ; grinder.logProcessStreams = false
      
      
      ### Script sleep time ####
      
      # The maximum time in milliseconds that each thread waits before
      # starting. Unlike the sleep times specified in scripts, this is
      # varied according to a flat random distribution. The actual sleep
      # time will be a random value between 0 and the specified value.
      # Affected by grinder.sleepTimeFactor, but not
      # grinder.sleepTimeVariation. The default is 0ms.
      ; grinder.initialSleepTime=500
      
      # Apply a factor to all the sleep times you've specified, either
      # through a property or in a script. Setting this to 0.1 would run the
      # script ten times as fast. The default is 1.
      ; grinder.sleepTimeFactor=0.01
      
      # The Grinder varies the sleep times specified in scripts according to
      # a Normal distribution. This property specifies a fractional range
      # within which nearly all (99.75%) of the times will lie. E.g., if the
      # sleep time is specified as 1000 and the sleepTimeVariation is set to
      # 0.1, then 99.75% of the actual sleep times will be between 900 and
      # 1100 milliseconds. The default is 0.2.
      ; grinder.sleepTimeVariation=0.005
      
      
      ### Worker process control ###
      
      # If set, the agent will ramp up the number of worker processes,
      # starting the number specified every
      # grinder.processesIncrementInterval milliseconds. The upper limit is
      # set by grinder.processes. The default is to start all worker
      # processes together.
      ; grinder.processIncrement = 1
      
      # Used in conjunction with grinder.processIncrement, this property
      # sets the interval in milliseconds at which the agent starts new
      # worker processes. The value is in milliseconds. The default is 60000
      # ms.
      ; grinder.processIncrementInterval = 10000
      
      # Used in conjunction with grinder.processIncrement, this property
      # sets the initial number of worker processes to start. The default is
      # the value of grinder.processIncrement.
      ; process.initialProcesses = 1
      
      # The maximum length of time in milliseconds that each worker process
      # should run for. grinder.duration can be specified in conjunction
      # with grinder.runs, in which case the worker processes will terminate
      # if either the duration time or the number of runs is exceeded. The
      # default is to run forever.
      ; grinder.duration = 60000
      
      # If set to true, the agent process spawns engines in threads rather
      # than processes, using special class loaders to isolate the engines.
      # This allows the engine to be easily run in a debugger. This is
      # primarily a tool for debugging The Grinder engine, but it might also
      # be useful to advanced users. The default is false.
      ; grinder.debug.singleprocess = true
      
      
      ### Java ###
      
      # Use an alternate JVM for worker processes. The default is "java" so
      # you do not need to specify this if java is in your PATH.
      ; grinder.jvm = /opt/jrockit/jrockit-R27.5.0-jdk1.5.0_14/bin/java
      
      # Use to adjust the classpath used for the worker process JVMs.
      # Anything specified here will be prepended to the classpath used to
      # start the Grinder processes.
      ; grinder.jvm.classpath = /tmp/myjar.jar
      
      # Additional arguments to worker process JVMs.
      ; grinder.jvm.arguments = -Dpython.cachedir=/tmp
      
      
      ### Console communications ###
      
      # (See above for console address properties).
      
      # If you are not using the console, and don't want the agent to try to
      # contact it, set grinder.useConsole = false. The default is true.
      ; grinder.useConsole = false
      
      # The period at which each process sends updates to the console. This
      # also controls the frequency at which the data files are flushed.
      # The default is 500 ms.
      ; grinder.reportToConsole.interval = 100
      
      
      ### Statistics ###
      
      # Set to false to disable reporting of timing information to the
      # console; other statistics are still reported. See
      # http://grinder.sourceforge.net/faq.html#timing for why you might
      # want to do this. The default is true.
      ; grinder.reportTimesToConsole = false
      
      # If set to true, System.nanoTime() is used for measuring time instead
      # of System.currentTimeMillis(). The Grinder will still report times in
      # milliseconds. The precision of these methods depends on the JVM
      # implementation and the operating system. Setting to true requires
      # J2SE 5 or later. The default is false.
      ; grinder.useNanoTime = true
      
  5. Following the bit at the very end of the getting started guide, set up the three .sh scripts (setGrinderEnv.sh, startAgent.sh, startConsole.sh) as outlined and place them in the demo directory. I changed them to use bash instead of ksh, as I have bash installed, not ksh.
    • setGrinderEnv.sh:
      #!/bin/bash
      GRINDERPATH=/home/test/Desktop/grinder-3.2/
      GRINDERPROPERTIES=/home/test/Desktop/grinder-3.2/demo/grinder.properties
      CLASSPATH=$GRINDERPATH/lib/grinder.jar:$CLASSPATH
      #How I found it
      #ls -l `which java`
      #The above gave me: /usr/bin/java -> /etc/alternatives/java
      #ls -l /etc/alternatives/java
      #Which gave me: /etc/alternatives/java -> /usr/lib/jvm/java-6-sun/jre/bin/java
      JAVA_HOME=/usr/lib/jvm/java-6-sun/jre/
      PATH=$JAVA_HOME/bin:$PATH
      export CLASSPATH PATH GRINDERPROPERTIES
      
    • startConsole.sh:
      #!/bin/bash
      . ./setGrinderEnv.sh
      java -cp $CLASSPATH net.grinder.Console
      
    • startAgent.sh:
      #!/bin/bash
      . ./setGrinderEnv.sh
      java -cp $CLASSPATH net.grinder.Grinder $GRINDERPROPERTIES
      
  6. Make the three .sh scripts executable: chmod +x *.sh
  7. Run ./startConsole.sh
  8. Run ./startAgent.sh
  9. In the Grinder Console under the Processes tab it shows that the agent has connected to the console

  10. Create a new file called simple.py and place the following in it:
    • simple.py:
      from net.grinder.script.Grinder import grinder
      from net.grinder.script import Test
      from net.grinder.plugin.http import HTTPRequest
      
      test1 = Test(1, "Request resource")
      request1 = test1.wrap(HTTPRequest())
      
      class TestRunner:
          def __call__(self):
              result = request1.GET("http://localhost:80/")
      
  11. In the Script tab select simple.py and click the "send changed files to worker processes" button
  12. I found that the console could not upload the files to the agent, because one of the threads died with a "java.lang.OutOfMemoryError: Java heap space" error. To get around this, in startConsole.sh I added -Xmx1024m to the java command, like below
    • startConsole.sh:
      #!/bin/bash
      . ./setGrinderEnv.sh
      java -Xmx1024m -cp $CLASSPATH net.grinder.Console
      
  13. So turn the agent and console off, then start the console and the agent again
  14. Click the start processes button
     
  15. Yay, you are now testing against your web server
      
  16. When you are done click the stop button and you can have a look at the results
     
  17. So there you go: the Missing Grinder 3 Getting Started Guide.

Where to from here? Well, you can have multiple agents, talk more than just HTTP (look in the examples directory for JDBC, JMS, SMTP, etc.) and record TCP streams via a proxy.

Something I found out later: the reason I was getting the heap space error was that the log directory was one of the directories being synced up to the agents, and it was too large, so it crashed the console. I will have to figure out how to make it save the logs somewhere else.

Sunday, 16 August 2009

Testing on the Bog - Pushing the Boundaries

PDF version for placing in your office.
pdfreaders.org

What is Testing on the Bog?

Code has “if” statements in it. When testing, you need to check the different conditions of the if statement. Let's take the following code snippet:

if input_date <= current_date:
    print 'Date not in the future'
else:
    print 'Date in the future'


The obvious values to test are a date less than or equal to today and one in the future. Testing with 2009-08-08 and 2009-08-29 would give you 100% statement and branch coverage when you get your coverage statistics from your unit tests. Well, that was easy, wasn't it?

“High Code Coverage” ≠ “Well tested Code”

Firstly, it is “<=”, so both the “<” and the “=” need to be tested separately. Also, as mentioned in the previous Testing on the Bog, dates can be complicated beasts as well.

So, looking past the clear boundaries to test, there are also the not so obvious ones. An example of this could be:
  • Time is often stored as an integer and only formatted in the interface layer, so can you put in a date so big that it wraps the integer around and becomes less than the current date? And on the flip side, do negative values get confused as well?
The explicit boundaries, which are normally documented as functional requirements, are generally OK to find; it is the other ones, which may be non-functional requirements or just not documented at all, that aren't obvious and will catch you out some of the time. It is impossible to think of every boundary along the way, but just remember you have to look beyond the obvious ones. Some of these may seem a bit crazy, but what users will input into a system can sometimes be very bizarre. The explicit boundaries of the snippet above are sketched as tests below.
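As a sketch of what that looks like in code (Python's unittest, with in_future standing in for the snippet above), note that the “=” edge of “<=” gets a test all of its own:

import datetime
import unittest

# A stand-in for the date check in the snippet above.
def in_future(input_date, current_date):
    return input_date > current_date

class DateBoundaryTests(unittest.TestCase):
    today = datetime.date(2009, 8, 16)

    def test_less_than(self):
        yesterday = self.today - datetime.timedelta(days=1)
        self.assertFalse(in_future(yesterday, self.today))

    def test_equal(self):
        # the "=" half of "<=" tested separately
        self.assertFalse(in_future(self.today, self.today))

    def test_greater_than(self):
        tomorrow = self.today + datetime.timedelta(days=1)
        self.assertTrue(in_future(tomorrow, self.today))

if __name__ == '__main__':
    unittest.main()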

Rigour in Testing

This week I went to a presentation by James Bach on the Myths of Rigour at the Wellington Testing Professional Network.

The biggest thing that I took away/had reaffirmed was that rigour/process should not be blindly followed. The person executing the process needs to understand why they are doing what they are doing. If something is just being blindly followed, aspects of it may be missed: instructions only cover what the instruction writer knows to write down, and there are a lot of steps that are implied, or just done because not doing them would be stupid or would mean things don't work right, so the writer doesn't think to write them down. If the background knowledge is there and the understanding of the process is there, those implied steps should be picked up naturally and issues will be detected better. Also, part of a tester's personal attributes should be being inquisitive and personally wanting an in-depth understanding of how things work and why they work that way. Otherwise you might as well just believe the developer when they say yes, it is tested and works fine.

If you just follow a process, how do you know if the outputs you are producing are correct or helpful? This goes for testing itself but also for the documentation that surrounds testing. There are templates which say a test plan should look like X and a test case should look like Y. But if you don't know the why behind all the headings, is it helpful? Also, an organisation's test plan template may be 25 pages; sure, fine on a project with multiple testers and six months plus of testing work, but what about when one person needs to do some testing that is only going to take a couple of days to complete? Is this 25 page template useful? If you don't understand it, you will just have to fill it all out, and that is going to take longer than the testing itself. If you do understand it, one would hope you can fill out only the sections that make sense and quickly explain why you haven't filled in the rest, seeing they are not applicable to what you need to do.

One example I have seen of someone who understands something forgetting to teach all the basics along the way: I was giving some business users help with some basic SQL queries. They were saying SQL was hard and they just didn't understand what they were doing. They said that another person had spent a day with them teaching them SQL, and that they knew about selects, updates, inserts, deletes, wheres and joins. So I was a little perplexed that they were having issues with a SELECT * FROM foo WHERE bar = foobar. But after some talking with them I found out they didn't actually know what a database really was, or what tables were. After taking a step back and explaining that a table is like a spreadsheet with columns and rows, that a where is like a filter, that a database is a collection of tables, and the basics of joins using a person spreadsheet and an address spreadsheet (where an address could be of type street or postal), they were away running.

Coming back to the rigour aspect, I go back to my time at university. Seeing I did honours there, I spent a lot of time involved in academic rigour. One of the big things with that type of rigour is that someone with a similar background should be able to follow your method and reach the same results. They mightn't draw the same conclusions from the results, but the results themselves should be the same. This aspect of rigour is applicable to what we do in the software testing world. Our testing of systems should be repeatable by people other than ourselves, and repeatability becomes really important with regression testing and test automation, especially when it is being run every night or on every check-in. Even when it is automated, the person receiving the result needs to be able to understand the process, even do it manually, so that they can fully understand the results and draw accurate conclusions from it all.

Sunday, 26 July 2009

Experiences with Mantis 1.2 - Part 2 – Setting up User Roles

I wanted to set up some basic roles so different users could have different permissions. The roles I wanted were:
  • Viewer
  • Client
  • Developer
  • Tester
  • Administrator

In config_inc.php I added:
    $g_access_levels_enum_string = '10:viewer, 20:client, 30:developer, 40:tester, 90:administrator';


Then I had to assign strings to these values. Seeing I only speak English, I only edited the English strings file. In lang/strings_english.txt I edited it to read:
$s_access_levels_enum_string = '10:viewer,20:client,30:developer,40:tester,90:administrator';


And with that the roles have been created, and you can now create new users who have those roles.

Past Experiences with Mantis 1.2:

Saturday, 25 July 2009

The Great Firewall of New Zealand

Firstly need to say that child pornography is bad and should be stopped.

Secondly sorry for this not being on testing but felt I needed to say something.

Here in NZ the DIA is setting up a system akin to the Great Firewall of China, with the aim of screening out child pornography. There is nothing wrong with that goal, as child pornography should be stopped.

What I see wrong:
  • This is being implemented under the radar, without any legislation governing its bounds and scope. With the infrastructure in place it is very easy to just add a new site in here or there
  • There is no formal tribunal and process to get sites unblocked
  • The DIA are deciding what is in and out, not NZ's Chief Censor. This being the case, there is nothing stopping the blocking of other sites.
  • It may slow sites down, as if there is an IP hit the request goes via the DIA's infrastructure to decide if the URL is OK. I am guessing all of Google's IPs will be placed in the check list, as Google has their cached version service.
  • They are not publishing a list of blocked sites, as they claim it would be a list of sites to visit for people who are sick and want to have a look. As a NZ citizen I would like to know what sites my Government is blocking. Look at the Chief Censor: they provide a list of everything they have rated, along with the ratings, and there is a process to get things re-evaluated.

Also, I don't think it will work that well. From what I know about the movie, TV and music piracy scenes, HTTP plays almost no role: people don't download things from an HTTP server, they all use other methods. BitTorrent, for example, generally uses HTTP for its trackers but can also use DHT. Then there are Tor, Freenet and onion routing, which can disguise and hide the traffic until it is somewhere else in the world. Then there is encryption and steganography. There are also old school things like NNTP, which will store and distribute it all around the world.

This being the case, I feel this web filtering is giving the naïve some warm fuzzies that the government is trying to stamp child pornography out, when it really won't do all that much, and it places an internet filter in NZ's internet infrastructure that has no legislation governing it and could also be used by the government to stamp out free speech that it doesn't like. So NZ is now no better than China when it comes to the internet, and one step closer to a fascist regime where the government controls everything and stamps out free speech and free choice.

More Info:
General FAQs
Technical FAQs

Sunday, 12 July 2009

Testing on the Bog - Testing Dates and Times

PDF version for placing in your office.
pdfreaders.org

What is Testing on the Bog?

The handling of dates and times is one area that commonly has issues when it comes to testing. This Testing on the Bog will give you some ideas for testing dates and times in your application. It doesn't tell you how to do it, as you need to know what your application should be doing in these situations.

Date Formats - Most of the world uses Day then Month, while the US for example uses Month then Day. Then there is also the ISO 8601 format for dates. Does the application format correctly, accept input correctly, store and convert correctly? Does it reject input when expected?

Two Digit Years - This was the whole issue around the Y2K problem. When the application is faced with a two digit year, does it do something sensible with it? Does it store it correctly by adding 19 or 20 to the front, or does it add 00? If you roll your clock forwards is it still adding the century correctly?
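For example, a common pivot-year rule looks something like this (the pivot value of 69 here is made up; check what rule your application actually claims to follow):

def expand_two_digit_year(yy, pivot=69):
    # Hypothetical pivot rule: 00-68 become 20xx, 69-99 become 19xx.
    if yy < pivot:
        return 2000 + yy
    return 1900 + yy

assert expand_two_digit_year(5) == 2005
assert expand_two_digit_year(85) == 1985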

Daylight Saving - Does the application handle the short day and the long day correctly? In the long day, can you tell the difference between the first hour and the second hour that is doubled up? For example, ':' may be used for the first hour and ';' for the second, e.g. 02:59 is followed by 02;00.
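If you have a time zone library handy you can probe that doubled-up hour directly. A sketch in Python using the pytz library and New Zealand's 2009 change-over (swap in your own zone and dates):

from datetime import datetime
import pytz

nz = pytz.timezone('Pacific/Auckland')
# NZ clocks went from 03:00 back to 02:00 on 2009-04-05, so 02:30
# happened twice that morning. is_dst says which of the two you mean
# (pytz can also raise AmbiguousTimeError if you pass is_dst=None).
first = nz.localize(datetime(2009, 4, 5, 2, 30), is_dst=True)
second = nz.localize(datetime(2009, 4, 5, 2, 30), is_dst=False)
print first.utcoffset(), second.utcoffset()   # 13:00:00 vs 12:00:00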

Daylight Saving Dates - Does your application use the correct library for figuring out the start and end dates? This is particularly important if the local rules have been recently changed.

Time Zones - Are time zones stored? Do they convert correctly? When no time zone is given, does the application correctly figure out what time zone it should be? What about transitions in and out of Daylight Saving time zones? If you manually ask the system (e.g. via SQL) to convert the time zone, will it? If you manually force a date to be inserted with a time zone which is different from the default, does the application do the correct thing with it?

Leap Years - Firstly, remember the rule is "Every year divisible by 4 except those divisible by 100, unless it is also divisible by 400": 1900 was not a leap year yet 2000 was. Do things work as expected on this day? Date range calculations? Is 365 hard coded anywhere for calculations?
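That rule is small enough to get wrong, so it is worth both implementing and cross-checking. A quick Python sketch:

import calendar

def is_leap_year(year):
    # "Every year divisible by 4 except those divisible by 100,
    # unless it is also divisible by 400"
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The interesting boundaries, not just any old years:
for year, expected in [(1900, False), (2000, True), (2004, True), (2100, False)]:
    assert is_leap_year(year) == expected
    # cross-check against the standard library
    assert calendar.isleap(year) == expected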

Leap Seconds - Leap seconds happen every so often and result in an extra second being added to the day (and in theory a second may also be removed). So the valid range of seconds in a minute is 59 to 61, and a leap second minute goes x:59, x:60, y:00. Can the application display a minute with 61 seconds? What happens if you insert a time with 60 as the value of seconds? What happens with a transaction submitted during the 61st second? Does the application assume there are always 86400 seconds in a day?
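As a taste of how inconsistently these get handled, even within one language's standard library (Python here; 2008-12-31 23:59:60 UTC was a real leap second):

import time
import datetime

# time.strptime allows a seconds value of 60 (and 61) precisely
# because of leap seconds, so the leap second parses fine:
time.strptime('2008-12-31 23:59:60', '%Y-%m-%d %H:%M:%S')

# datetime, on the other hand, rejects it outright:
try:
    datetime.datetime(2008, 12, 31, 23, 59, 60)
except ValueError:
    print 'datetime refused the 61st second'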

Intervals - With Leap Years, Leap Seconds and Daylight Savings do intervals take these into account correctly?

Year 2038 Problem - At 2038-01-19T03:14:07Z the 32-bit counter which has been counting the number of seconds since 1970-01-01T00:00:00Z will roll over to a negative integer, which may produce date/times in the past.
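You can see the wrap-around without any 32-bit hardware just by doing the arithmetic (a Python sketch):

import datetime

epoch = datetime.datetime(1970, 1, 1)
limit = 2 ** 31 - 1   # the largest value a signed 32-bit counter can hold
print epoch + datetime.timedelta(seconds=limit)     # 2038-01-19 03:14:07
# One second later the counter wraps to -2**31 and time appears to
# jump back to 1901:
print epoch + datetime.timedelta(seconds=-2 ** 31)  # 1901-12-13 20:45:52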

Introducing Testing on the Bog

I would like to introduce everyone to Testing on the Bog (TotB). TotB pays homage to Google's Testing on the Toilet. Seeing TotT isn't producing a new one each week and I am running out of past episodes to place on the toilet doors, I thought I would have a go at writing some. My background is different, so I will of course approach testing from a different angle. Hopefully you find these interesting and useful. Do provide me with feedback on them, and let me know if you are using them at your company. Also, if anyone would like to contribute one, or maybe just a topic idea, please do feel free to get in touch. I can't promise these will be produced weekly, but I will try to publish as often as I can.

I will be posting the first instalment very soon.

Saturday, 11 July 2009

Experiences with Mantis 1.2 - Part 1 - The Install

I have been looking at different defect tracking systems and I am liking the look of the Mantis Bugtracking System. I have experimented with the 1.1.x version and liked it, but I have only used it on a virtual machine at home, so I have never used it in anger so to speak. Seeing 1.2 of Mantis is now at the Release Candidate stage, I thought I would have another look at it. What I plan to do is write a series of blog posts as I get it set up in a demo form.

So, Part 1: the install. I put Ubuntu on a VM and made sure that Apache, MySQL and PHP were installed. Then I just downloaded Mantis, placed it in /var/www/mantis and fired up a web browser to point at it. It comes up with an install page; just type in the database connection details and it is installed in a matter of seconds. Totally painless and simple.

What I plan to do:
  • Set up some different user roles
  • Set up some custom statuses
  • Set up a customer work flow
  • Set up multiple clients so internal people (I work at an IT vendor company) can see more than clients and clients can only see things related to them and not other clients.

Sunday, 5 July 2009

Full Risk Based SDLC

In the testing sphere there is a lot about Risk Based Testing and methods about how to go about it. I have had a quick look on Google (it may be that I am just blind) but there doesn't seem to be much in the way of a similar thing for development.

The development side does have different things like pair programming, code reviews, code coverage, etc. which are expensive to run all the time, but development could use the same data coming out of the Risk Based Testing data gathering sessions to gear their work. They may also place a higher weighting/bias on items with a higher development risk, due to complexity for instance. An example could be:
  • Low Risk
    • Unit Tests with 95% branch and statement coverage
    • Static analysis using something like Sonar for complexity, copy and paste and coding standards
  • Medium Risk
    • Everything in Low Risk plus
    • A code review by a peer
  • High Risk
    • Everything in Low Risk plus
    • Pair Programming
    • A code review by the tech lead

That way, with Risk Based Development (RBD) and Risk Based Testing (RBT), the whole SDLC can be risk based. With development and test both using the risk data gathered, the time spent gathering the data would be easier to justify, as it is used twice, and it should produce a better end product, as the riskier areas of the product are better developed and tested.

Saturday, 13 June 2009

Explaining your testing and what you do

James Bach has a blog post, Putting Subtitles to Testing, which has an accompanying video. The video is of James and Jon Bach testing an Easy Button, and in the video they add subtitles along the way explaining the type of testing they are doing.

The point they make at the end of the video is about reporting what you are doing as a tester, and that you need to practise putting what you do into words. This becomes particularly important if you are a tester like me and deal mostly with testing things which don't have a GUI. If you have to explain to someone what testing you have done with a GUI, they will often find it easier to understand, as they can visualise or see the GUI. If you start talking about DBs, SOAP, web services, CSVs, etc. they have more difficulty understanding what you have done with your time. This also becomes more apparent when you are not working 100% from test scripts, as you can't just say "I executed these scripts"; when you are doing more exploratory testing, the need to explain what you have done is greater. Session notes which we write during our testing make sense to us and cover everything, so if there are defects we can go back and remember the full chain of events leading up to noticing the issue. But as the video shows, even subtitles staying at a high level (Requirements Analysis, Stress Testing, etc.) can still be very informative.

Card Walls - Mingle

Well, I have mentioned the idea of digital card walls in two previous posts (Can defect tracking systems be more helpful? and Visual Representation of Defect Queues), and apparently I am not the first to think about it. ThoughtWorks has a product called Mingle which does the Agile card wall along with a defect tracking one. Unfortunately Mingle isn't free or open source, so I doubt I will spend any more time looking at it unless I get a budget to spend on tools, which is something I don't see any time soon.

Spawner 1.6

I have talked about Spawner before here, here and here. Well, version 1.6 of this useful database population tool is out. What it adds is a masked string type which lets you make up strings however you like. For instance, it lets you make bank account numbers, IR numbers or phone numbers (outside of US ones), which is a handy feature to have.

Monday, 1 June 2009

Risk Based Testing

Last week I went to Matt Mansell's “What are the Risks of Risk Based Testing?” presentation at the Wellington TPN, which left me doing some thinking. I first came across Risk Based Testing at STANZ 2008 and have liked the idea of it since, but I am personally yet to put it into practise or join a project which is using it.

What is good about Risk Based Testing is that we all know testing everything is impossible given the budgets and time frames of most projects, and risk based testing gives you a framework which you can use to prioritise your testing. This needs to be balanced with teaching the customer that descoping via risk based testing means that things may NOT be tested; the customer has prioritised them lower, therefore if they break in production it is less of an issue. This also helps get the testers back in the correct position, in that they provide information for making the go live decision rather than making the go live decision themselves. That means you can be more pragmatic and no longer need to be the king/queen of quality, as your butt isn't on the line since you didn't make the decision to go live or not. It also helps with the common situation where dev gets congratulated if it goes well and test gets blamed if it doesn't go so well.

From what I have heard about risk based testing, it really needs to be started early (see my previous blog post), as at that time the client reps doing the specifications are still around and most likely still have enough time to help come up with the risks and rank them. There is no magic number for how many risks is optimal, though you may at least need to place them into some type of tree structure, so you can give different levels of detail to different audiences. Something like FreeMind may come in handy here for recording the risks which get brainstormed during the meeting of all the stakeholders. The risks for risk based testing are a subset of the whole project's risks, focusing on the quality related ones which are to be mitigated through testing of the deliverable. All the requirements should have one or more risks associated with them.

A requirement's first risk is: does it work or not? Then all the test cases focus on testing risks rather than requirements. This does cause a bit of an issue, as most test management systems link requirements directly with test cases, rather than requirements having risks and risks having test cases.

Before this presentation, my idea of risk based testing was to start at the highest risk and then just work your way down the list. But this isn't the only way, as Matt presented the idea of making sure you touch all the risks but not necessarily use all the test cases. For example, let's say that you have three test iterations and there are 100 test cases for each of your high, medium and low risks, so 300 test cases in total.





                           Iteration 1   Iteration 2   Iteration 3
High (# of Test Cases)              50            35            15
Medium (# of Test Cases)            35            50            15
Low (# of Test Cases)               15            15            70

This assumes that 15 test cases will touch all of the risks in one particular risk level. What this gives you is coverage such that every iteration you have touched all of your risks, so you should have found any show stoppers, and also, if you need to stop before all three iterations, you can demonstrate in the nice table what you have and have not done, which can help the customer make up their mind on whether they want to go live now or later. This does mean that you are leaving the running of some of your test cases for the high risks to the very end, so they may get dropped. There is no magic bullet proof vest; there are always drawbacks to one method or another of how to test the risks, but there are different ways to explain it so the customer better understands your approach.

Risk based testing may also be used to divide up the testing work. For instance, it may be decided that the testers do the high risks while the client testers work on the low risks. If this happens the test team needs to own the process and also have control over the client testers. This is because if, for instance, the client testers get called away to do their day job and things are not tested properly, it generally still gets blamed on the test team for not doing their job properly. If the test team owns the whole testing process, they can raise a flag when that happens and either do the testing themselves or get some type of time extension, so that the product does not ship at a standard lower than expected.

Sunday, 24 May 2009

Testing of Requirements... Is always a good thing™

I was having a read of The Impact of Requirements Issues on Testing, which is on the SoftEd Resources page. It says that 40% of software errors come from requirements errors. I would tend to agree with this on the whole, as there is plenty of possibility for ambiguity, and ambiguity leads to the business thinking it should do X, the developer coding Y and the tester testing it assuming it will do Z. Which leads to the standard inclusion of:



I'm sure everyone has seen this picture before; it is the standard diagram indicating misunderstandings in requirements.

What I see as the problem is that testers only get involved when it is time to test the output from development. This is almost always the case, and I doubt it will really change until the client can see the value in spending time and money on testers being engaged from day zero of the project, testing every output along the way before it goes onto the next stage in the project life cycle. For example, the project requirements are normally only signed off by the client. This isn't always the best, as the client doesn't always understand the technology fully, and when they described the system to the BAs, the gaps and ambiguity get filled in without anyone thinking about it. Having testers involved at this stage would bring another viewpoint alongside the people who have been involved in requirements gathering, and also bring the testing mindset of looking for holes and defects.

Why I don't think this will fly without a lot of re-education?:
  • Testing will slow it down
  • This will require people so therefore require more money to be spent

But I personally think these are false realities:
  • It may lengthen the time for requirements gathering, but the overall project will be shorter, as picking up these errors before development means that some pieces will not have to be redesigned, redeveloped and retested. This is the whole "the sooner in the cycle you detect a defect, the cheaper it is to fix" argument, though usually it is only looked at as Unit Test vs. System Test vs. User Acceptance Test vs. Production.
  • The second point, of needing more people and therefore money, is linked to what I just said above. You will just be spreading the testing effort out over the project, and will hopefully make back the time spent testing the requirements by spending less time retesting.

What is needed? Re-education, in that life cycles should have test team involvement the whole way through the project, not just post development. This will need to be taught in project management courses and made part of the methodologies used. Some of the Agiles are better at this, as the test team is at the kick off meetings etc. and everyone is co-located, so ambiguity can be sorted out quickly without needing to schedule meetings. Though seeing Agile doesn't scale to larger projects, it would be good if somehow some of these aspects could be worked into the methodologies the larger projects use.

Sunday, 17 May 2009

Can defect tracking systems be more helpful?

I wrote about Visual Representation of Defect Queues a little while ago, and the whole idea has been sitting in the back of my head churning away.

Most defect tracking systems are good at showing lists of defects. They may also be able to show you scoreboards/traffic lights or graphs. Lists are good for a person who has a ranked list of defects assigned to them to fix or retest. The graphs are good for management, as they can see the counts and whether they are increasing or decreasing. But there is more information than that stored in most defect tracking systems which could come in handy; there is a lot of information stored about the time things spend in each status. The above two examples are either a snapshot in time or a series of snapshots, with little information shared between the snapshots. So is there a way to use more of the information stored inside these defect tracking systems?

So far we have something which is good for management and the client, and something good for the developer fixing defects or the tester retesting, but what about the team lead? Is there a middle ground in between, using some of this extra information? How is their team's progress? Can they get a picture of their team in more detail than the managers get, but maybe a bit less than a developer's personal queue?

As mentioned before, in my Visualising Defect Queues post, there is the swim lane type approach. What it can give you is a 2D view of everyone's personal list: along the X axis you display the current status of the defect, and on the Y axis you show the severity/priority of the defects. Using index cards stuck to the wall you can convey some more information. A number on a scoreboard means something, but people are more visual than that, so adding the cards can give you more. For instance, one thing that you can more easily see is bottlenecks, as they are there in front of you; over the day/week you can just watch the cards moving along, and you can see bottlenecks as large groups of defects start to bunch together. That is extra information which you weren't getting before, just from displaying the information differently. But this still isn't really using the extra information which is stored in a defect tracking system's database.

If we take the swim lane idea, now we can give each person a colour for the defects assigned to them. Now you can get a view of bottlenecks for the team as a whole and also for the individual people in it. Is one person sinking? Is it the whole team generally? Could the defects be better distributed amongst the team members? Cool, this is now getting more useful, and we are using more of the information in the database to display something useful. What else can we display?

Well, maybe adjust the transparency of the colour by how long a defect has been sitting in its current status. Now the bottleneck view becomes more useful again: one person may have just the one defect, but they have had it for quite a while, so these more opaque cards might need some looking into. Maybe someone else has a whole lot of defects sitting with them, but they are the quicker turnaround defects, so all have the day 1 transparency on them. With just a scoreboard things would look fine, as numbers are moving well and the graph trends look fine, but that is masking information about things which could improve your team's performance.

Maybe defects that failed retest could have a border on them to identify them, as something failing retest more than once never looks too good and might be worth an extra code review.

Is this displaying more information? Yes.
Will this lead to information overload? No.
Why? Because when you display this visually, humans are quite good at picking up patterns from it.
How could this help? Well, hopefully team leads can better manage their team and start to see patterns and identify issues with the project sooner than when management asks about something in particular, or the client asks why the pet defect which they raised ages ago isn't fixed.

So what needs doing? Well, a defect tracking project needs to take this idea and run with it. It could be implemented in Flash, so you could just drag the little cards around the swim lanes. Then the people using an old box hooked up to a projector could have it continuously up on the wall, so the defect situation is always there, just like the story wall in agile. Or maybe MS would like to get in on the act and implement it on MS Surface.

If you were doing one of the Agiles with their swim lanes, you could use the same framework. And seeing in Agile things are time boxed, you could have the stories slowly moving across the screen in line with their complexity and expected time to do. Though maybe this would cause people to rush if they encountered some type of snag and saw things moving faster than they could actually do the work, which may decrease the quality of the product coming out.

Spawner revisited

Just an update on Spawner: Seth, the developer of Spawner, has already placed the idea of a generic text mask into SVN, so I am looking forward to the build which includes that feature. It is really cool when you make a suggestion to an open source product and you see your idea get implemented.

Sunday, 10 May 2009

Firewall port testing #3

Here is an update to my firewall port testing script. You can find more about it in my first two episodes about it (one and two). I have made some changes to it, nothing big: just some comments, and it should now get the host name on Linux systems for placing in the results file name. This code is released under a do-whatever-you-want-with-it license.

#!/usr/bin/python

"""PortTester will test to see if the machine that this script is running
on has access to another server on a given port. The servers to test are
outlined in input.txt and the results are placed in the
results_<servername>_<YYYYMMDD>-<HH24MMSS>.txt file. This output file is
CSV.

The input file should look like:
Info,Host,Port
Router Web Admin,10.0.0.1,80
Blog,blog.karit.geek.nz,80
Won't work,www.example.com,12345
"""

import telnetlib
import thread
import os
import datetime
import platform


class PortTester(object):

    thread_count = 0

    def __init__(self, input, log):
        for row in input:
            # Count the thread before starting it, otherwise a fast thread
            # could finish (and decrement) before we increment, letting the
            # wait loop below exit early.
            self.thread_count = self.thread_count + 1
            thread.start_new(self.testPort, (row['info'], row['host'], row['port'], log))
        # Simple busy-wait until every worker thread has reported back.
        while self.thread_count > 0:
            pass

    def testPort(self, info, host, port, log):
        print 'Testing %s. host %s on port %s' % (info, host, port)
        try:
            # A successful connect within 5 seconds counts as a pass.
            connection = telnetlib.Telnet(host, port, 5)
            connection.close()
            log.write('%s,%s,%s,pass\n' % (info, host, port))
            log.flush()
        except:
            # Any failure (refused, timed out, unresolvable) counts as a fail.
            log.write('%s,%s,%s,fail\n' % (info, host, port))
            log.flush()
        self.thread_count = self.thread_count - 1


def main():
    test_list = readCSV('input.txt', ',', 1)
    hostname = ''
    if os.name == 'posix':
        hostname = platform.node()
    else:
        hostname = os.getenv('COMPUTERNAME')
    output = open('results_%s_%s.txt' % (hostname, datetime.datetime.now().strftime('%Y%m%d-%H%M%S')), 'w')
    output.write('info,host,port,result\n')
    output.flush()
    PortTester(test_list, output)


def readCSV(path, delimiter, header_row):
    text = open(path, 'r').read()
    # Pad empty fields so split() keeps the right number of columns.
    text = text.replace('%s%s' % (delimiter, delimiter), '%s %s' % (delimiter, delimiter))
    lines = text.split('\n')
    rows = []
    if header_row:
        headers = []
        if lines[0] != '':
            headers = lines[0].split(delimiter)
        tmp = []
        for head in headers:
            tmp.append(head.strip().lower())
        headers = tmp
        del lines[0]
        # Each row becomes a dict keyed by the lower-cased header names.
        for line in lines:
            if line != '':
                values = line.split(delimiter)
                row = {}
                for i in range(0, len(headers)):
                    row[headers[i]] = values[i].strip()
                rows.append(row)
    else:
        for line in lines:
            if line != '':
                values = line.split(delimiter)
                rows.append(values)
    return rows


if __name__ == '__main__':
    main()

Tablets and Handwriting Recognition in Linux

As I have mentioned before, I have an eeePC 901 to write my notes on. One thing that is missing is the ability to draw on it, and seeing there currently aren't any netbook tablets which run Linux (preferably Ubuntu) here in New Zealand yet, I had to do the next best thing and bought myself a drawing tablet.



It is taking some getting used to, as you are drawing on a tablet and it is appearing on the screen; I'm sure this will just take some time. It is fine for drawing things in OpenOffice.org Draw, which means I can draw diagrams as I am going along. Drawing freehand on the computer is quite handy, as you can print it, email it etc., a bit like a printable whiteboard (the client site I am currently at has printable whiteboards in every meeting and breakout room). This means I can have that one directory which has all my notes and diagrams, and don't have to go thumbing through books to find things. It will take some getting used to before my drawing on the screen is as good as on a whiteboard or paper, but in the meantime it is really cool. I am using OpenOffice.org Draw because it comes standard in the Ubuntu 9.04 build, though there are some journal/notebook apps out there that I could try, e.g. Xournal, Jarnal, Gournal and NoteLab. These all look interesting, but I haven't found the need to use them yet, as gedit and Draw do what I need currently, and from a quick look at them none of them have handwriting recognition.

I have also installed CellWriter, which provides handwriting recognition, and after less than five minutes of training it was fairly accurately picking up characters. Currently the only characters I am having a lot of issues with are t and +. The only other issue I had in getting CellWriter to work was that I had to disable “extended input events”. It is fast, and I can write at a normalish speed on it. Though it does do some odd things when you reach the end of the input line: it sometimes moves the whole word onto the new line and sometimes doesn't, and likewise when you want a space at the end of the input line. I need to sit with it a bit more and figure out what I am seeing, and then what I expect it to be doing.

Saturday, 9 May 2009

How I track my session notes

Looking around the web, most session notes look something like what is outlined here. Mine follow pretty much along the same lines. My difference is that I don't separate things like issues out; I just put them in line in my notes section, which is more like a time line than notes per se. I mark the issues with '@issue' so I can grep through them later to find the issue bits.
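For example, pulling the issues back out later can be as simple as something like this (the notes path here is made up):

import glob

# Print every @issue line, with its file name and line number, from a
# directory of session note text files.
for path in glob.glob('/home/test/session-notes/*.txt'):
    for number, line in enumerate(open(path), 1):
        if '@issue' in line:
            print '%s:%d: %s' % (path, number, line.strip())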

Some of this difference may come down to the fact that I haven't tested a GUI in nearly two years. My style may have adapted to suit that, as digging into an issue may result in looking at multiple databases and a handful of log files. Sometimes when you notice an issue it is the chain leading up to it that matters, so it comes in handy to have the issue in line with the notes, with a time stamp on it.

My template current looks like this:
CHARTER
-------


AREAS
-----


ENVIRONMENT
-----------


START
-----


TESTER
------
Dave

NOTES
-----

Keeping tracks of testing notes and my eeePC

With posts like Exploratory Testing: Recording and Reporting by Michael Bolton, in which he links to an article and some slides of his about using notebooks to store your thoughts and notes in (in particular the Moleskine), I have tried keeping notes and such on paper in a book. My biggest problem is that my handwriting is shocking, and a lot of the time even I have issues deciphering it later on. So I have started to use my eeePC 901 to take notes on.






I am running Ubuntu 9.04 on it. I normally just use gedit to do my note taking in. gedit is just nice and simple, with nothing that gets in the way of writing down what you are doing. There are downsides to using a netbook for taking notes rather than paper:
  • Battery
  • Boot Time
  • Sketches and Diagrams
  • Saving
  • Small keyboard

Well, I am getting over five hours of battery on it with the WiFi off, which is fairly handy for doing what you need to do without going to a plug. With Ubuntu 9.04 I am getting about 35 seconds to the login screen and 10 seconds from the login screen to the desktop. I do still have a book next to me, so I can still draw while I am going along and just make a note that I have a diagram in my book. I have autosave turned on to save every minute, so the chances of losing something are fairly low. I do admit the small keyboard is a bit of a pain, especially seeing I am used to a MS Natural Ergonomic Desktop 7000, but I am just writing notes rather than long documents, so it is alright. If I am using it at a desk it is easy enough to plug a second keyboard into it.

Well there are some advantages of using a netbook over paper:
  • Easy to put in timestamps
  • Only my bad spelling to worry about, and not my handwriting

What is really nice about using a text editor is that it is easy to put timestamps into your testing notes. This comes in really handy when you need to come back and look at logs, as you can tie things together easily. Text is nice and easy to read, and you can spell check it, though that can be a bit of a pain seeing I use a lot of shorthand, acronyms and technical words which aren't in dictionaries.

One thing both approaches lack is pasting things like screenshots directly into your notes. I have always had a text file on the computer and also a directory for screenshots and logs.

So why don't I just use the computer I am testing on and write notes there? Well, I like the idea that the computer sitting to my right is my notebook and is for taking notes, while the computer directly in front of me is the one I am working on. Also, things like remote desktop can capture alt-tab, which makes switching between servers and local text files a bit of a pain.

Sunday, 3 May 2009

Populating Databases with random data using Spawner

Spawner is an open source tool for generating data to populate databases. There are Linux and Windows binaries for it.

First thing out of the box: the Linux version won't run on my Ubuntu 9.04 install, failing with the error message “./spawner: error while loading shared libraries: libglib-1.2.so.0: cannot open shared object file: No such file or directory”. Seeing I have Wine installed, I thought I would get the Windows version and try that. Under Wine it loaded up fine, so I won't spend any more time trying to get it to work natively under Linux. Though if anyone knows what I need to do to get it to work, please leave a comment.

Well Spawner opens up with a fairly simple UI.

The first tab is the main one, in which you describe the table that you want to populate. It can randomly make numbers, strings and dates, but it can also do some smarts with these, like sequences, ranges, number of words, number of characters, and human-like data such as names, emails, addresses and SSNs (shame this isn't tweakable to handle IRD numbers here in NZ, XXX-XXX-XXX, or maybe bank account numbers, XX-XXXX-XXXXXXXX-XXX). It will also make IPv4 addresses but not IPv6 addresses.

Once you have described the table, you set up your output options. It will make you a CSV file, a SQL file with inserts, or directly place the information into a MySQL database. Just select how many rows you would like, press the go button and voilà, there is a file with all your data in it, ready to load into your DB.

Some features that I would like to see in it are what I have mentioned above: more data types and/or maybe something where you could define your own format mask, which would come in handy for country-specific things like bank account and phone numbers. The other thing which could come in handy would be the ability to do parent-child tables, though thinking about it you can do that manually using number ranges on the ID column. It also doesn't seem to handle putting binary data into a column, though I guess for this it would need to be pointed at a directory with some images or documents in it. Another output type that would come in handy would be XML, where you could give it an XSD and tell Spawner how to populate each field, be it randomly or with a static value. With XML the child table feature would also come in handy, so you can do nesting of data.
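
Spawner doesn't do format masks, but to make concrete what I mean: a mask where X becomes a random digit and everything else passes through is only a few lines of Python (a sketch; the masks are the ones mentioned above):

import random
import string

def from_mask(mask):
    # Every 'X' in the mask becomes a random digit; all other
    # characters (dashes, spaces, etc.) are kept as-is.
    return ''.join(random.choice(string.digits) if c == 'X' else c
                   for c in mask)

print from_mask('XXX-XXX-XXX')          # IRD-number style
print from_mask('XX-XXXX-XXXXXXXX-XXX') # bank-account style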

Sunday, 26 April 2009

"What is testing?" The question that we all get asked

One question which I get asked, and I'm sure a lot of others do as well, is "What is testing?" or some derivation of that. Most of the time I will just say something like "I'm paid to break stuff", "preferably before the users do". This is somewhat the truth and, without spending too long explaining it all (as most people don't care that much), is accurate enough for them. And I do actually find it fun finding new and interesting ways to break the new piece of code I have been given to test. I prefer it when it is a challenge to break; it is not much fun when it breaks the moment you look at it oddly and dies terribly. Testing of course is more than just this: checking it meets requirements, is fit to deploy to production, etc. Then again, if I don't break something or find some cool or weird behaviour, I feel a little disappointed, like I am not doing my job properly.

While thinking about this I think back to an old blog post of mine where I was looking at the difference between QA and Testing. QA is really what the project auditors do, looking at processes, scope etc.; QA checks whether you are following a set process, to ensure the quality of the process which results in the delivery of the final product. I think this is more like what is done in, say, the food industry, where they ensure the processes are correct and only sample a small percentage of the output, while in software testing we try to cover as many combinations as possible, getting as close to 100% coverage as we can given the project constraints.

Saturday, 25 April 2009

soapUI and a few other tools

I have a bunch of tools which I want to have a good look at; I just need to find the time to do so. They normally sit in open tabs in Firefox, and I live in hope that Firefox doesn't lose them for any reason.

This week I did get some time to play with soapUI. I have used soapUI before, but that was a while ago, on an old version, and someone else had already set up the SOAP queries; I left it at that because I had a tight deadline to meet and just had to get the testing done, which meant back then I didn't have much time to learn the tool. This week I had to give a demo of soapUI to some people, so I had to do some playing with it and learn about it quickly.

Well, it is a lot more powerful than I thought. Up to now I knew it could make SOAP requests based on a WSDL and let you edit those requests to send to your servers. There is the one gotcha that you need both the WSDL and the XSD in the same directory if you are pulling them from a directory on your computer. That feature is all good and comes in handy, as it is an easy way to send requests in and get a response back from a web service.

So this week I found out it will also do proper testing, rather than being just a tool for ad hoc SOAP requests. You can set up test cases with assertions and whatnot, so it will tell you whether you get the correct response back or not. If the built-in assertions are not enough there is Groovy scripting, so you can do whatever validation you want. It can then run the test suite and make you a nice little report at the end of it. It can also go the next step and do load tests using your test suites. I haven't dug that much into load and performance testing, as recently the projects I have been on haven't needed it much in the areas I have been testing. It also looks like you can script this all up so that, rather than having to trigger the test runs from a GUI, you can put it on your automated build and validation server, have it run all the time and alert the developers as soon as a regression occurs, or use it for test-driven development, where all the tests are created ahead of time and they keep developing until all the tests pass.

The other neat feature I found is that soapUI can create mock web services for you. This will come in handy where you have an application which is a web service client, and the application which provides the web service isn't available yet but you need to start testing. So long as the WSDL is ready you can quickly make a mock web service which will respond to your requests with either hard-coded responses or, if you want, Groovy scripts to make your responses a little more dynamic.
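
soapUI generates the mock from the WSDL for you; the sketch below is only to show the shape of the idea, not how soapUI does it. A toy canned-response server in Python (the port and the fooResponse element are made up):

import BaseHTTPServer

# A canned response; fooResponse is an invented element for illustration.
CANNED = '''<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><fooResponse>42</fooResponse></soap:Body>
</soap:Envelope>'''

class MockService(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body, then reply with the canned response.
        length = int(self.headers.getheader('Content-Length', 0))
        self.rfile.read(length)
        self.send_response(200)
        self.send_header('Content-Type', 'text/xml')
        self.end_headers()
        self.wfile.write(CANNED)

BaseHTTPServer.HTTPServer(('', 8080), MockService).serve_forever()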

If you already have the client and server side built and want to build a test suite, migrate a suite of tests from another tool, or get some response timings, there is a proxy tool. You can set this up to sit between the client and the server; it will record everything that goes backwards and forwards and can convert this into a test case or a mock web service.

Best of all, it is an open source tool. There is a pro version which you will need to pay for; it adds things like coverage, access to data sources and sinks (which you can use other tools to populate or validate) and a better global store for your Groovy scripts, amongst other things.

The other tools which I would like to have a better look and haven't had a good chance to learn about yet are:
  • Mantis - A defect tracking tool. I have had a look at this and really like it. What I need to do now is get a proper workflow set up in it, plus some dummy projects which match some of our real projects, so people can get a real feel for it. I would like to get it used at least up to System Testing, while projects are still internal and haven't gone to the client, where on the whole we need to use the client's own corporate defect tracking system.
  • Sonar - A static analysis tool for code. I haven't looked much further than its website, but from that it looks quite interesting.
  • Testlink - A test case management tool. Like Sonar, I haven't looked much further than its website.
  • Session Tester - A tool for writing down your session notes. This is one to watch to see how it develops over time (it is only at version 0.2). It doesn't 100% match how I like to work, as I like my notes to be much more of a timeline with everything intertwined on timestamps, so when you come across an issue or bug you can easily see where it fits in with the actions you did, and have timestamps to aid looking in the logs.
  • Spawner - A tool which will generate data to populate a database, for when you need a bunch of data in your tables. I have had a look at it previously on this blog. It will do all your different data types and whatnot. It is far better than the way I previously did it, writing a Python script to make me a bunch of insert statements.

Sunday, 19 April 2009

Visual Representation of Defect Queues

One of our devs had the idea of placing our defect queue on the wall using a grid and catalogue cards. The Y axis is severity and priority; the X axis is status. It is simple and is working well for seeing where our defects are currently sitting and where any bottlenecks are. The downside is that it requires effort to keep the wall and our client's defect tracking system in sync.

Over some drinks on Friday night we were discussing how we like it and how it is helping, though of course also the pain of keeping the wall and the defect system in sync. We started thinking about Microsoft Surface. Surface would keep the tactile feel of moving defects around when changing their status. So it would be really cool if someone would make a defect tracking tool, or an add-on to one, that linked into Surface. Then you could get a big screen running Surface, mount it on the wall and manage the defect queue on it: changing statuses, filtering and so on. You also wouldn't need defect reports for your PM in the same way, as they could just walk past and visually see the defects progressing and where the bottlenecks are.

I am also thinking that on the next project, while in the system test stage before the client is involved, we could work solely off the wall for those defects, without using a defect tracking tool. Sometimes the low-tech solution works really well; because we are in IT we always go for the techy solution rather than the simple one that may work better.

Monday, 9 March 2009

Firewall port testing #2

Another week and some more tweaks to my firewall port testing Python script. I have added a new column to the input and results, so you can have an info field, and also added a header row. The results filename now has a date stamp and, if running under Windows, the host name, to help with ordering and whatnot.

PortTester.py
#! /usr/bin/python

import telnetlib
import thread
import os
import datetime


class PortTester(object):

    thread_count = 0

    def __init__(self, input, log):
        for row in input:
            # Count the thread before starting it, so a quick connection
            # can't drop the count to zero while rows are still queued.
            self.thread_count = self.thread_count + 1
            thread.start_new(self.testPort, (row['info'], row['host'], row['port'], log))
        # Crude busy-wait until every worker thread has finished.
        while self.thread_count > 0:
            pass

    def testPort(self, info, host, port, log):
        print 'Testing %s. host %s on port %s' % (info, host, port)
        try:
            connection = telnetlib.Telnet(host, port)
            log.write('%s,%s,%s,pass\n' % (info, host, port))
            log.flush()
        except:
            # Any failure to connect (refused, timed out, bad DNS) is a fail.
            log.write('%s,%s,%s,fail\n' % (info, host, port))
            log.flush()
        self.thread_count = self.thread_count - 1


def main():
    test_list = readCSV('input.txt', ',', 1)
    # COMPUTERNAME is only set on Windows; on *nix this comes out as 'None'.
    output = open('results_%s_%s.txt' % (os.getenv('COMPUTERNAME'),
                                         datetime.datetime.now().strftime('%Y%m%d-%H%M%S')), 'w')
    output.write('info,host,port,result\n')
    output.flush()
    PortTester(test_list, output)


def readCSV(path, delimiter, header_row):
    text = open(path, 'r').read()
    # Pad doubled delimiters so empty fields survive the split below.
    text = text.replace('%s%s' % (delimiter, delimiter), '%s %s' % (delimiter, delimiter))
    lines = text.split('\n')
    rows = []
    if header_row:
        headers = []
        if lines[0] != '':
            headers = lines[0].split(delimiter)
            tmp = []
            for head in headers:
                tmp.append(head.strip().lower())
            headers = tmp
        del lines[0]
        # Each data row becomes a dict keyed by the header names.
        for line in lines:
            if line != '':
                values = line.split(delimiter)
                row = {}
                for i in range(0, len(headers)):
                    row[headers[i]] = values[i].strip()
                rows.append(row)
    else:
        for line in lines:
            if line != '':
                values = line.split(delimiter)
                rows.append(values)
    return rows


if __name__ == '__main__':
    main()


input.txt
  Info      , Host    ,  Port   
Google 81, google.com,81
Google Work,google.com,80
Google 82,google.com,82
Google IP,74.125.67.100,80
Google IP 81 , 74.125.67.100 , 81
Blog Work,blog.karit.geek.nz,80
Blog Fail,blog.karit.geek.nz,81


results.txt
info,host,port,result
Google IP,74.125.67.100,80,pass
Google Work,google.com,80,pass
Blog Work,blog.karit.geek.nz,80,pass
Google IP 81,74.125.67.100,81,fail
Blog Fail,blog.karit.geek.nz,81,fail
Google 81,google.com,81,fail
Google 82,google.com,82,fail

Sunday, 1 March 2009

Firewall port testing

During the week I got the task of testing that some firewalls had been set up between the servers our application runs on and those at a third party we need to exchange information with. The firewall rules are pinholes: this source to this destination on this port. There are eight source boxes, four destination boxes and five ports on each of the destination boxes, so 20 destination-and-port combinations per source box, or 160 checks in all. Jumping on each of the source boxes and testing each of the 20 combinations by hand is rather tedious.

So I needed a tool. A quick look around turned up either online port scanners or tools like nmap. What I wanted was something that could take a CSV of IP and port and output a CSV of IP, port and result. Sure, I would have to run it on each of the source machines, but it would be far quicker and less manual, and I am sure the need will come up again.

I did it manually during the week, as I had to do it there and then, so this weekend I wrote a little Python script that takes some IPs and ports as input, tests them all and writes a little CSV as output. Though Python wasn't all I needed, as Python requires installing stuff on the box. That is where py2exe comes in: it lets you turn a .py into an exe plus some library files. So you can drop it onto a server, do your testing and then just delete the directory. This is a must-have, seeing I am having to do this on production servers before go-live.

This script is also threaded, so it will try to make connections to all the endpoints simultaneously. This comes in handy as the timeout for a connection can be quite long; this way you only have to wait for the timeout once, rather than N times.
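
As an aside, the same wait-once idea can be written with the higher-level threading module and join(), which avoids the busy-wait my script below uses; a minimal sketch (the hosts and ports are just examples):

import telnetlib
import threading

def test_port(host, port, results):
    # Any failure to connect (refused, timed out, bad DNS) counts as a fail.
    try:
        telnetlib.Telnet(host, port)
        results.append((host, port, 'pass'))
    except Exception:
        results.append((host, port, 'fail'))

def test_all(targets):
    results = []
    threads = [threading.Thread(target=test_port, args=(h, p, results))
               for (h, p) in targets]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # total wall time is roughly one timeout, not N timeouts
    return results

print test_all([('google.com', 80), ('google.com', 81)])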

To use on Windows:
  1. Have Python installed on your desktop
  2. Download and install py2exe
  3. Read the tutorial on how to turn a Python script into an exe
  4. Make your exe and zip up the dist directory
  5. Drop the files on the source computer
  6. Set up the input.txt file (see example below)
  7. Double click the .exe file
  8. Check the results
To use on *nix:
A lot of *nix boxes have Python installed out of the box, so you can run the .py file all by itself. You will just have to set up input.txt and run it (e.g. python PortTester.py).

PortTester.py
#! /usr/bin/python

import telnetlib
import thread

class PortTester(object):

    thread_count = 0

    def __init__(self, input, log):
        for row in input:
            # Count the thread before starting it, so a quick connection
            # can't drop the count to zero while rows are still queued.
            self.thread_count = self.thread_count + 1
            thread.start_new(self.testPort, (row[0], row[1], log))
        # Crude busy-wait until every worker thread has finished.
        while self.thread_count > 0:
            pass

    def testPort(self, host, port, log):
        print 'Testing %s on port %s' % (host, port)
        try:
            connection = telnetlib.Telnet(host, port)
            log.write('%s,%s,pass\n' % (host, port))
            log.flush()
        except:
            # Any failure to connect (refused, timed out, bad DNS) is a fail.
            log.write('%s,%s,fail\n' % (host, port))
            log.flush()
        self.thread_count = self.thread_count - 1

def main():
    test_list = readCSV('input.txt', ',')
    output = open('results.txt', 'w')
    PortTester(test_list, output)

def readCSV(path, delimiter):
    text = open(path, 'r').read()
    # Pad doubled delimiters so empty fields survive the split below.
    text = text.replace('%s%s' % (delimiter, delimiter), '%s %s' % (delimiter, delimiter))
    lines = text.split('\n')
    rows = []
    for line in lines:
        if line != '':
            values = line.split(delimiter)
            rows.append(values)
    return rows

if __name__ == '__main__':
    main()


setup.py (for py2exe)
from distutils.core import setup
import py2exe

setup(console=['PortTester.py'])
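
With that setup.py sitting next to PortTester.py, the build step is the standard py2exe invocation, which drops the exe and its support files into the dist directory:

python setup.py py2exe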


example input.txt
10.0.0.126,80
10.0.0.126,81
10.0.0.0,80
google.com,81
google.com,80
google.com,82
74.125.67.100,80
74.125.67.100,81
blog.karit.geek.nz,80
blog.karit.geek.nz,81


example results.txt
10.0.0.126,80,pass
10.0.0.126,81,fail
10.0.0.0,80,fail
74.125.67.100,80,pass
blog.karit.geek.nz,80,pass
google.com,80,pass
74.125.67.100,81,fail
blog.karit.geek.nz,81,fail
google.com,81,fail
google.com,82,fail

Tuesday, 17 February 2009

Google's Testing on the Toilet

I came across Google's Testing on the Toilet series about a month back. For those who haven't seen it, have a look at http://googletesting.blogspot.com/search?q=testing+on+the+toilet: it is a series of one-page unit testing articles designed for the back of the toilet door.

I have started to print them out and put them on the back of the toilet door, and I am getting positive feedback from the devs on it. If they are thinking about testing and making their unit tests better, that is good; I much prefer that they catch more at unit test time and run the tests every time they do a build. Also, being on the toilet, they have no choice but to read it. I had one guy say that he tried not to read it: he read the first paragraph and then stopped, telling himself he mustn't read more, but he finished it anyway because there just wasn't anything else to do. When you are in that type of situation you will just read it.

So thanks to the people behind TotT at Google I should hopefully be seeing some better unit testing in the future.

Sunday, 15 February 2009

I'm a Tester not a QA

I think we are somewhat lucky here in New Zealand, as the title QA is barely used, if at all. I test, I find issues and I check that they have been fixed. I, as a tester, am not and should not be the sole person responsible for the quality of the system. Everyone working on the project should have the goal of delivering a quality system: that includes BAs, Devs, Test, Management, Support/Help Desk, Infrastructure, etc.

Sunday, 11 January 2009

Mantis Defect Tracking Tool

I have been looking at some different defect tracking tools recently and came across Mantis, which is my favourite. I like it as it is simple to configure, with everything done via config files. I really like config files, and it is really easy to add more statuses and customise the workflow. I find a lot of open source defect tracking tools are built by developers for developers: once the defect is past dev it is over, with no System Testing, Regression Testing or User Testing.

I had it up and running on a VM within about 30 minutes, and after an afternoon of exploring I was really liking it and had it working how I want a defect tracking tool to work for me.

Pylot

A little Python tool that wants to take on JMeter, by the looks. Its config looks simpler than JMeter's, so it might be quite good, and I do have a soft spot for Python.

Database population Tool

I came across this a little while ago for populating DBs with data. I have had a quick play and it looks OK. I will have to have a proper play later, when I have a DB to populate.

Feeds I read

This is my current OPML of the stuff I read:

<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
<head>
<title>karit subscriptions in Google Reader</title>
</head>
<body>
<outline text="F1-Live.com" title="F1-Live.com" type="rss"
xmlUrl="http://en.f1-live.com/f1/en/xml/infos/f1en_rss.xml" htmlUrl="http://www.f1-live.com/f1/en/"/>
<outline text="Mantis Bug Tracker Blog"
title="Mantis Bug Tracker Blog" type="rss"
xmlUrl="http://www.mantisbt.org/blog/?feed=rss2" htmlUrl="http://www.mantisbt.org/blog"/>
<outline text="Outrageous Fortune - The official site"
title="Outrageous Fortune - The official site" type="rss"
xmlUrl="http://feeds.feedburner.com/Outrageous-Fortune" htmlUrl="http://www.outrageousfortune.co.nz"/>
<outline text="Urban Word of the Day"
title="Urban Word of the Day" type="rss"
xmlUrl="http://feeds.urbandictionary.com/UrbanWordOfTheDay" htmlUrl="http://www.urbandictionary.com/"/>
<outline title="Music" text="Music">
<outline text="BLABBERMOUTH.NET Latest News"
title="BLABBERMOUTH.NET Latest News" type="rss"
xmlUrl="http://www.roadrunnerrecords.com/blabbermouth.net/newsfeed.xml" htmlUrl="http://www.roadrunnerrecords.com/blabbermouth.net/"/>
<outline text="blog.Shihad.com" title="blog.Shihad.com"
type="rss" xmlUrl="http://blog.shihad.com/?feed=rss2" htmlUrl="http://blog.shihad.com"/>
<outline text="Day in Rock Report"
title="Day in Rock Report" type="rss"
xmlUrl="http://feeds.feedburner.com/dayinrock?format=xml" htmlUrl="http://www.antimusic.com/dayinrock/"/>
<outline text="FLAC News" title="FLAC News" type="rss"
xmlUrl="http://flac.sourceforge.net/feeds/news-atom1.xml" htmlUrl="http://flac.sourceforge.net/"/>
<outline text="Last.fm – the Blog"
title="Last.fm – the Blog" type="rss"
xmlUrl="http://blog.last.fm/atom/" htmlUrl="http://blog.last.fm/"/>
<outline text="MusicBrainz Blog" title="MusicBrainz Blog"
type="rss"
xmlUrl="http://blog.musicbrainz.org/?feed=rss2" htmlUrl="http://blog.musicbrainz.org"/>
<outline text="Shihad Wiki - Recent changes [en]"
title="Shihad Wiki - Recent changes [en]" type="rss"
xmlUrl="http://shihadwiki.com/w/index.php?title=Special:Recentchanges&amp;feed=rss" htmlUrl="http://shihadwiki.com/wiki/Special:RecentChanges"/>
</outline>
<outline title="Tech" text="Tech">
<outline text="A List Apart" title="A List Apart" type="rss"
xmlUrl="http://www.alistapart.com/feed/rss.xml" htmlUrl="http://www.alistapart.com/"/>
<outline text="Aardvark" title="Aardvark" type="rss"
xmlUrl="http://aardvark.co.nz/aardvark.rss" htmlUrl="http://aardvark.co.nz/"/>
<outline text="Advice and Opinion -"
title="Advice and Opinion -" type="rss"
xmlUrl="http://feeds.feedburner.com/cio/feed/solutions/1373" htmlUrl="http://advice.cio.com"/>
<outline text="AppleInsider" title="AppleInsider" type="rss"
xmlUrl="http://www.appleinsider.com/appleinsider.rss" htmlUrl="http://www.appleinsider.com/"/>
<outline text="Ars Technica" title="Ars Technica" type="rss"
xmlUrl="http://arstechnica.com/index.ars/rss" htmlUrl="http://arstechnica.com/index.ars"/>
<outline text="CIO.com - News" title="CIO.com - News"
type="rss"
xmlUrl="http://feeds.feedburner.com/cio/feed/solutions/1375" htmlUrl="http://www.cio.com/"/>
<outline text="CIO.com - Research &amp; Analysis"
title="CIO.com - Research &amp; Analysis" type="rss"
xmlUrl="http://feeds.feedburner.com/cio/feed/solutions/1374" htmlUrl="http://www.cio.com/"/>
<outline text="Computerworld" title="Computerworld"
type="rss"
xmlUrl="http://computerworld.co.nz/rss/computerworld.xml" htmlUrl="http://computerworld.co.nz/"/>
<outline text="Data Center Knowledge"
title="Data Center Knowledge" type="rss"
xmlUrl="http://feeds.feedburner.com/DataCenterKnowledge" htmlUrl="http://www.datacenterknowledge.com"/>
<outline text="Delicious/twit" title="Delicious/twit"
type="rss" xmlUrl="http://del.icio.us/rss/twit" htmlUrl="http://delicious.com/twit"/>
<outline text="Digg" title="Digg" type="rss"
xmlUrl="http://digg.com/rss/index.xml" htmlUrl="http://digg.com/"/>
<outline text="Digg / Technology" title="Digg / Technology"
type="rss"
xmlUrl="http://digg.com/rss/containertechnology.xml" htmlUrl="http://digg.com/"/>
<outline text="Full Circle Magazine"
title="Full Circle Magazine" type="rss"
xmlUrl="http://fullcirclemagazine.org/feed/" htmlUrl="http://fullcirclemagazine.org"/>
<outline text="LinuxDevices.com" title="LinuxDevices.com"
type="rss"
xmlUrl="http://www.linuxdevices.com/backend/headlines.rss" htmlUrl="http://www.linuxdevices.com/?kc=rss"/>
<outline text="MacRumors : Mac News and Rumors"
title="MacRumors : Mac News and Rumors" type="rss"
xmlUrl="http://www.macrumors.com/macrumors.xml" htmlUrl="http://www.macrumors.com"/>
<outline text="mozillaZine feedHouse"
title="mozillaZine feedHouse" type="rss"
xmlUrl="http://feedhouse.mozillazine.org/rss20.xml" htmlUrl="http://feedhouse.mozillazine.org/"/>
<outline text="Netcraft" title="Netcraft" type="rss"
xmlUrl="http://news.netcraft.com/index.rdf" htmlUrl="http://news.netcraft.com/"/>
<outline text="OSNews" title="OSNews" type="rss"
xmlUrl="http://osnews.com/files/recent.xml" htmlUrl="http://www.osnews.com"/>
<outline text="Phoronix" title="Phoronix" type="rss"
xmlUrl="http://feedproxy.google.com/Phoronix" htmlUrl="http://www.phoronix.com/"/>
<outline text="Planet Mozilla" title="Planet Mozilla"
type="rss" xmlUrl="http://planet.mozilla.org/rss20.xml" htmlUrl="http://planet.mozilla.org/"/>
<outline text="Reviews Tom's Hardware US"
title="Reviews Tom's Hardware US" type="rss"
xmlUrl="http://www.pheedo.com/f/toms_hardware" htmlUrl="http://www.tomshardware.com"/>
<outline text="Slashdot" title="Slashdot" type="rss"
xmlUrl="http://rss.slashdot.org/Slashdot/slashdot" htmlUrl="http://slashdot.org/"/>
<outline text="SmallNetBuilder Full Feed"
title="SmallNetBuilder Full Feed" type="rss"
xmlUrl="http://www.smallnetbuilder.com/index.php?option=com_rd_rss&amp;id=3" htmlUrl="http://www.smallnetbuilder.com"/>
<outline text="Songbird Blog" title="Songbird Blog"
type="rss"
xmlUrl="http://feeds.songbirdnest.com/songbird-blog" htmlUrl="http://blog.songbirdnest.com"/>
<outline
text="SourceForge.net: Project File Releases: FreeMind"
title="SourceForge.net: Project File Releases: FreeMind"
type="rss"
xmlUrl="http://sourceforge.net/export/rss2_projfiles.php?group_id=7118" htmlUrl="http://sourceforge.net/projects/freemind/"/>
<outline
text="Spread Firefox - The Home of Firefox Community Marketing"
title="Spread Firefox - The Home of Firefox Community Marketing"
type="rss" xmlUrl="http://www.spreadfirefox.com/rss.xml" htmlUrl="http://www.spreadfirefox.com"/>
<outline text="Tech noise from CityLink"
title="Tech noise from CityLink" type="rss"
xmlUrl="http://news.clnz.net/index.rss" htmlUrl="http://news.clnz.net/index.rss"/>
<outline text="TG Daily - All News"
title="TG Daily - All News" type="rss"
xmlUrl="http://www.tgdaily.com/feed/allsections.php" htmlUrl="http://www.tgdaily.com"/>
<outline text="The Register" title="The Register" type="rss"
xmlUrl="http://www.theregister.co.uk/headlines.rss" htmlUrl="http://www.theregister.co.uk/"/>
<outline text="ThinkGeek :: What's New"
title="ThinkGeek :: What's New" type="rss"
xmlUrl="http://www.thinkgeek.com/thinkgeek.rss" htmlUrl="http://www.thinkgeek.com/"/>
<outline text="Tom's Guide" title="Tom's Guide" type="rss"
xmlUrl="http://www.pheedo.com/f/denguru" htmlUrl="http://www.tomsguide.com/"/>
<outline text="TorrentFreak" title="TorrentFreak" type="rss"
xmlUrl="http://feeds.feedburner.com/Torrentfreak" htmlUrl="http://torrentfreak.com"/>
<outline
text="TWiT.TV - Netcasts you love from people you trust"
title="TWiT.TV - Netcasts you love from people you trust"
type="rss" xmlUrl="http://www.twit.tv/node/feed" htmlUrl="http://www.twit.tv"/>
</outline>
<outline title="Testing" text="Testing">
<outline text="All StickyMinds.com Feeds"
title="All StickyMinds.com Feeds" type="rss"
xmlUrl="http://feeds.feedburner.com/AllStickyMindsFeeds" htmlUrl="http://www.stickyminds.com/"/>
<outline
text="Association for Software Testing - Advancing the understanding and practice of software testing"
title="Association for Software Testing - Advancing the understanding and practice of software testing"
type="rss"
xmlUrl="http://www.associationforsoftwaretesting.org/drupal/rss.xml" htmlUrl="http://www.associationforsoftwaretesting.org/drupal"/>
<outline text="Collaborative Software Testing"
title="Collaborative Software Testing" type="rss"
xmlUrl="http://www.kohl.ca/blog/index.rdf" htmlUrl="http://www.kohl.ca/blog/"/>
<outline text="DevelopSense Blog" title="DevelopSense Blog"
type="rss"
xmlUrl="http://www.developsense.com/blog/atom.xml" htmlUrl="http://www.developsense.com/blog.html"/>
<outline text="Google Testing Blog"
title="Google Testing Blog" type="rss"
xmlUrl="http://feedproxy.google.com/blogspot/RLXA" htmlUrl="http://googletesting.blogspot.com/"/>
<outline text="James Bach's Blog" title="James Bach's Blog"
type="rss" xmlUrl="http://www.satisfice.com/blog/feed" htmlUrl="http://www.satisfice.com/blog"/>
<outline text="Planet LDTP" title="Planet LDTP" type="rss"
xmlUrl="http://ldtp.freedesktop.org/planet/atom.xml" htmlUrl="http://ldtp.freedesktop.org/planet"/>
<outline
text="testingReflections.com - The mind-share information resource for software testing, agile testing and test-first/test-driven development"
title="testingReflections.com - The mind-share information resource for software testing, agile testing and test-first/test-driven development"
type="rss"
xmlUrl="http://www.testingreflections.com/node/feed" htmlUrl="http://www.testingreflections.com"/>
</outline>
<outline title="Friends" text="Friends">
<outline text="J's Pixel Life" title="J's Pixel Life"
type="rss"
xmlUrl="http://renderharmless.blogspot.com/feeds/posts/default" htmlUrl="http://renderharmless.blogspot.com/"/>
</outline>
<outline title="Comic" text="Comic">
<outline text="Diesel Sweeties by R Stevens"
title="Diesel Sweeties by R Stevens" type="rss"
xmlUrl="http://www.dieselsweeties.com/ds-unifeed.xml" htmlUrl="http://dieselsweeties.com"/>
<outline text="OK/Cancel" title="OK/Cancel" type="rss"
xmlUrl="http://feeds.feedburner.com/ok-cancel" htmlUrl="http://www.ok-cancel.com"/>
<outline text="User Friendly RSS Feed"
title="User Friendly RSS Feed" type="rss"
xmlUrl="http://www.userfriendly.org/rss/uf.rss" htmlUrl="http://www.userfriendly.org/static/"/>
<outline text="xkcd.com" title="xkcd.com" type="rss"
xmlUrl="http://www.xckd.com/atom.xml" htmlUrl="http://xkcd.com/"/>
</outline>
<outline title="T-Shirts" text="T-Shirts">
<outline text="Design By Humans - Shirts"
title="Design By Humans - Shirts" type="rss"
xmlUrl="http://www.designbyhumans.com/shop/rss" htmlUrl="http://www.designbyhumans.com/shop"/>
<outline text="T-Shirt Watch" title="T-Shirt Watch"
type="rss"
xmlUrl="http://feeds.feedburner.com/T-shirtWatch" htmlUrl="http://www.tshirtwatch.com"/>
<outline text="Threadless Weekly" title="Threadless Weekly"
type="rss"
xmlUrl="http://feeds.feedburner.com/ThreadlessWeekly" htmlUrl="http://www.threadless.com/"/>
<outline text="TShirtReview.com" title="TShirtReview.com"
type="rss"
xmlUrl="http://www.tshirtreview.com/?feed=rss2" htmlUrl="http://www.tshirtreview.com"/>
</outline>
</body>
</opml>