Sunday, 30 August 2009

Getting soapUI Test Cases to check Databases

I have been investigating how to connect to a database from soapUI and run a query to check that the data from a SOAP call has made it all the way to the DB.

Firstly, ensure you have downloaded the relevant JDBC driver jar file (for MySQL, the Connector/J jar) and placed it in $SOAPUI_HOME/bin/ext, then restart soapUI, as it only seems to pick up new jars on restart.

Below is a simple Groovy script which I have put in as a Groovy Script test step. It connects to the database and checks that the data you want is there. For real use it will need some additions, such as a WHERE clause (see the sketch after the script). If the count (or the data, for that matter) is wrong, the assert fails and the failure shows up in the results.

import groovy.sql.Sql

def ERROR_MSG = "The count is not 1"
// Open a connection; the driver class must be on soapUI's classpath (bin/ext)
def sql = Sql.newInstance("jdbc:mysql://localhost:3306/demo", "demo", "password", "com.mysql.jdbc.Driver")
// firstRow returns the first result row; "count" is the column alias
def row = sql.firstRow("SELECT count(*) count FROM foo")
log.info("Count: ${row.count}")
// Groovy's assert takes an optional message after the colon
assert (1 == row.count) : ERROR_MSG
sql.close()
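
If you need the WHERE clause, here is a minimal sketch of the same check against a specific row. The orders table, customer_id column and the value 42 are all hypothetical; the ? placeholder keeps the value parameterised rather than pasted into the SQL string:

import groovy.sql.Sql

def sql = Sql.newInstance("jdbc:mysql://localhost:3306/demo", "demo", "password", "com.mysql.jdbc.Driver")
// firstRow with a parameter list: each ? is filled from the list, in order
def row = sql.firstRow("SELECT count(*) count FROM orders WHERE customer_id = ?", [42])
assert (1 == row.count) : "Expected exactly one row for customer 42 but found ${row.count}"
sql.close()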

Also remember, before running these tests, to ensure that the database is truncated, or that your SQL only looks at the results you want it to look at and not ones left over from earlier runs. If you are testing automatically with Maven you could get Maven to do the cleaning up and prep work, or you could keep it inside soapUI with a Groovy script step like the sketch below.
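
A minimal cleanup sketch (assuming it is safe to empty the whole table) that could run as a setup script or an early test step; the foo table matches the query above:

import groovy.sql.Sql

def sql = Sql.newInstance("jdbc:mysql://localhost:3306/demo", "demo", "password", "com.mysql.jdbc.Driver")
// Start every run from a known, empty state so counts are not polluted by earlier runs
sql.execute("TRUNCATE TABLE foo")
sql.close()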

Using soapUI to do testing as part of a Maven2 build

soapUI has a Maven plugin. soapUI has a getting started guide for Maven, and Maven has a getting started guide of its own. Well, I thought I would have a go at joining them together, so I can have Maven automatically run my soapUI tests and output a JUnit-compatible results file.

Pre Reqs:
  • Maven 2 installed
  • soapUI with a project that already has some tests set up

Steps:
  1. Set up a new project. This will create a new project in a new directory called app. To do this run "mvn archetype:create -DgroupId=nz.geek.karit.app -DartifactId=app"
  2. To pom.xml add:
    <!--Add the repository where Maven can find the soapUI plugin-->
    <pluginRepositories>
      <pluginRepository>
        <id>eviwarePluginRepository</id>
        <url>http://www.eviware.com/repository/maven2/</url>
      </pluginRepository>
    </pluginRepositories>
    <build>
      <plugins>
        <plugin>
          <groupId>eviware</groupId>
          <artifactId>maven-soapui-plugin</artifactId>
          <!--This is the version of soapUI to grab from the plugin repo-->
          <!--At the time of writing the 3.0.1 plugin had not been created-->
          <version>2.5.1</version>
          <configuration>
            <!--The location of your soapUI project file-->
            <projectFile>/home/test/test.xml</projectFile>
            <!--Where to place the output of the run-->
            <outputFolder>/home/test/output/</outputFolder>
            <!--Make the jUnit results file-->
            <junitReport>true</junitReport>
          </configuration>
          <executions>
            <execution>
              <id>soapUI</id>
              <!--Run as part of the test phase in the Maven lifecycle-->
              <phase>test</phase>
              <goals>
                <!--Run the test goal of eviware:maven-soapui-plugin-->
                <goal>test</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

  3. You can now run the soapUI tests by calling "mvn eviware:maven-soapui-plugin:test"
  4. It will also run as part of the normal Maven lifecycle, for example with "mvn test"

For all the different properties you can set see: http://www.soapui.org/plugin/maven2/properties.html

Along with running the tests you can also run the load tests and start the mock web services, all from Maven, by calling a goal other than test. For the details see: http://www.soapui.org/plugin/maven2/goals.html
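
For example, assuming your project file actually contains load tests and MockServices to run, the other goals are invoked the same way (goal names as per the goals page above):

"mvn eviware:maven-soapui-plugin:loadtest"
"mvn eviware:maven-soapui-plugin:mock"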

My full pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>nz.geek.karit.app</groupId>
  <artifactId>app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>app</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <!--Add the repository where Maven can find the soapUI plugin-->
  <pluginRepositories>
    <pluginRepository>
      <id>eviwarePluginRepository</id>
      <url>http://www.eviware.com/repository/maven2/</url>
    </pluginRepository>
  </pluginRepositories>
  <build>
    <plugins>
      <plugin>
        <groupId>eviware</groupId>
        <artifactId>maven-soapui-plugin</artifactId>
        <!--This is the version of soapUI to grab from the plugin repo-->
        <!--At the time of writing the 3.0.1 plugin had not been created-->
        <version>2.5.1</version>
        <configuration>
          <!--The location of your soapUI project file-->
          <projectFile>/home/test/test.xml</projectFile>
          <!--Where to place the output of the run-->
          <outputFolder>/home/test/output/</outputFolder>
          <!--Make the jUnit results file-->
          <junitReport>true</junitReport>
        </configuration>
        <executions>
          <execution>
            <id>soapUI</id>
            <!--Run as part of the test phase in the Maven lifecycle-->
            <phase>test</phase>
            <goals>
              <!--Run the test goal of eviware:maven-soapui-plugin-->
              <goal>test</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Sunday, 23 August 2009

The Missing Grinder 3 Getting Started Guide

I have been looking at The Grinder to see what it is like at performance testing. I found the getting started section covers the tool itself, but it didn't really cover the "I want to click here, here and here and have it just request a page from my web server" part, so hopefully this will get you past that. I have always found the first five minutes with a tool the most complicated; you need just enough to get going and over that first hurdle.



Pre Reqs:
  • A web server that you have permission to test with. For me it is localhost and running on port 80.
  • I am doing this on Ubuntu; doing something similar on Windows should do the trick.

The steps:
  1. Download The Grinder 3
  2. Unzip it. I unzipped it to /home/test/Desktop/grinder-3.2
  3. Create a new directory /home/test/Desktop/grinder-3.2/demo. All files created, edited etc should be in here.
  4. Create a new file in demo called grinder.properties. I copied the one below from the examples directory and changed grinder.script to simple.py
    • grinder.properties:
      #
      # Sample grinder.properties
      #
      #
      # The properties can be specified in three ways.
      #  - In the console. A properties file in the distribution directory
      #    can be selected in the console.
      #  - As a Java system property in the command line used to start the
      #    agent. (e.g. java -Dgrinder.threads=20 net.grinder.Grinder).
      #  - In an agent properties file. A local properties file named
      #    "grinder.properties" can be placed in the working directory of
      #    each agent, or a different file name passed as an agent command
      #    line argument.
      #
      # Properties present in a console selected file take precedence over
      # agent command line properties, which in turn override those in
      # an agent properties file.
      #
      # Any line which starts with a ; (semi-colon) or a # (hash) is a
      # comment and is ignored. In this example we will use a # for
      # commentary and a ; for parts of the config file that you may wish to
      # enable
      #
      # Please refer to
      # http://net.grinder.sourceforge.net/g3/properties.html for further
      # documentation.
      
      
      #
      # Commonly used properties
      #
      
      # The file name of the script to run.
      #
      # Relative paths are evaluated from the directory containing the
      # properties file. The default is "grinder.py".
      grinder.script = simple.py
      
      # The number of worker processes each agent should start. The default
      # is 1.
      grinder.processes = 1
      
      # The number of worker threads each worker process should start. The
      # default is 1.
      grinder.threads = 1
      
      # The number of runs each worker process will perform. When using the
      # console this is usually set to 0, meaning "run until the console
      # sends a stop or reset signal". The default is 1.
      grinder.runs = 0
      
      # The IP address or host name that the agent and worker processes use
      # to contact the console. The default is all the network interfaces
      # of the local machine.
      ; grinder.consoleHost = consolehost
      
      # The IP port that the agent and worker processes use to contact the
      # console. Defaults to 6372.
      ; grinder.consolePort
      
      
      
      #
      # Less frequently used properties
      #
      
      
      ### Logging ###
      
      # The directory in which worker process logs should be created. If not
      # specified, the agent's working directory is used.
      grinder.logDirectory = log
      
      # The number of archived logs from previous runs that should be kept.
      # The default is 1.
      grinder.numberOfOldLogs = 2
      
      # Overrides the "host" string used in log filenames and logs. The
      # default is the host name of the machine running the agent.
      ; grinder.hostID = myagent
      
      # Set to false to disable the logging of output and error steams for
      # worker processes. You might want to use this to reduce the overhead
      # of running a client thread. The default is true.
      ; grinder.logProcessStreams = false
      
      
      ### Script sleep time ####
      
      # The maximum time in milliseconds that each thread waits before
      # starting. Unlike the sleep times specified in scripts, this is
      # varied according to a flat random distribution. The actual sleep
      # time will be a random value between 0 and the specified value.
      # Affected by grinder.sleepTimeFactor, but not
      # grinder.sleepTimeVariation. The default is 0ms.
      ; grinder.initialSleepTime=500
      
      # Apply a factor to all the sleep times you've specified, either
      # through a property or in a script. Setting this to 0.1 would run the
      # script ten times as fast. The default is 1.
      ; grinder.sleepTimeFactor=0.01
      
      # The Grinder varies the sleep times specified in scripts according to
      # a Normal distribution. This property specifies a fractional range
      # within which nearly all (99.75%) of the times will lie. E.g., if the
      # sleep time is specified as 1000 and the sleepTimeVariation is set to
      # 0.1, then 99.75% of the actual sleep times will be between 900 and
      # 1100 milliseconds. The default is 0.2.
      ; grinder.sleepTimeVariation=0.005
      
      
      ### Worker process control ###
      
      # If set, the agent will ramp up the number of worker processes,
      # starting the number specified every
      # grinder.processIncrementInterval milliseconds. The upper limit is
      # set by grinder.processes. The default is to start all worker
      # processes together.
      ; grinder.processIncrement = 1
      
      # Used in conjunction with grinder.processIncrement, this property
      # sets the interval in milliseconds at which the agent starts new
      # worker processes. The value is in milliseconds. The default is 60000
      # ms.
      ; grinder.processIncrementInterval = 10000
      
      # Used in conjunction with grinder.processIncrement, this property
      # sets the initial number of worker processes to start. The default is
      # the value of grinder.processIncrement.
      ; grinder.initialProcesses = 1
      
      # The maximum length of time in milliseconds that each worker process
      # should run for. grinder.duration can be specified in conjunction
      # with grinder.runs, in which case the worker processes will terminate
      # if either the duration time or the number of runs is exceeded. The
      # default is to run forever.
      ; grinder.duration = 60000
      
      # If set to true, the agent process spawns engines in threads rather
      # than processes, using special class loaders to isolate the engines.
      # This allows the engine to be easily run in a debugger. This is
      # primarily a tool for debugging The Grinder engine, but it might also
      # be useful to advanced users. The default is false.
      ; grinder.debug.singleprocess = true
      
      
      ### Java ###
      
      # Use an alternate JVM for worker processes. The default is "java" so
      # you do not need to specify this if java is in your PATH.
      ; grinder.jvm = /opt/jrockit/jrockit-R27.5.0-jdk1.5.0_14/bin/java
      
      # Use to adjust the classpath used for the worker process JVMs.
      # Anything specified here will be prepended to the classpath used to
      # start the Grinder processes.
      ; grinder.jvm.classpath = /tmp/myjar.jar
      
      # Additional arguments to worker process JVMs.
      ; grinder.jvm.arguments = -Dpython.cachedir=/tmp
      
      
      ### Console communications ###
      
      # (See above for console address properties).
      
      # If you are not using the console, and don't want the agent to try to
      # contact it, set grinder.useConsole = false. The default is true.
      ; grinder.useConsole = false
      
      # The period at which each process sends updates to the console. This
      # also controls the frequency at which the data files are flushed.
      # The default is 500 ms.
      ; grinder.reportToConsole.interval = 100
      
      
      ### Statistics ###
      
      # Set to false to disable reporting of timing information to the
      # console; other statistics are still reported. See
      # http://grinder.sourceforge.net/faq.html#timing for why you might
      # want to do this. The default is true.
      ; grinder.reportTimesToConsole = false
      
      # If set to true, System.nanoTime() is used for measuring time instead
      # of System.currentTimeMillis(). The Grinder will still report times in
      # milliseconds. The precision of these methods depends on the JVM
      # implementation and the operating system. Setting to true requires
      # J2SE 5 or later. The default is false.
      ; grinder.useNanoTime = true
      
  5. Following the section at the very end of the getting started guide, set up the three .sh scripts (setGrinderEnv.sh, startAgent.sh, startConsole.sh) as outlined and place them in the demo directory. I changed them to use bash instead of ksh, as I have bash installed but not ksh.
    • setGrinderEnv.sh:
      #!/bin/bash
      GRINDERPATH=/home/test/Desktop/grinder-3.2/
      GRINDERPROPERTIES=/home/test/Desktop/grinder-3.2/demo/grinder.properties
      CLASSPATH=$GRINDERPATH/lib/grinder.jar:$CLASSPATH
      #How I found it
      #ls -l `which java`
      #The above gave me: /usr/bin/java -> /etc/alternatives/java
      #ls -l /etc/alternatives/java
      #Which gave me: /etc/alternatives/java -> /usr/lib/jvm/java-6-sun/jre/bin/java
      JAVA_HOME=/usr/lib/jvm/java-6-sun/jre/
      PATH=$JAVA_HOME/bin:$PATH
      export CLASSPATH PATH GRINDERPROPERTIES
      
    • startConsole.sh:
      #!/bin/bash
      . ./setGrinderEnv.sh
      java -cp $CLASSPATH net.grinder.Console
      
    • startAgent.sh:
      #!/bin/bash
      . ./setGrinderEnv.sh
      java -cp $CLASSPATH net.grinder.Grinder $GRINDERPROPERTIES
      
  6. Make the three .sh scripts executable: chmod +x *.sh
  7. Run ./startConsole.sh
  8. Run ./startAgent.sh
  9. In the Grinder Console, under the Processes tab, you should see that the agent has connected to the console

  10. Create a new file called simple.py and place the following in it:
    • simple.py:
      from net.grinder.script.Grinder import grinder
      from net.grinder.script import Test
      from net.grinder.plugin.http import HTTPRequest
      
      # A Test gives the timed operation a number and a description for the console
      test1 = Test(1, "Request resource")
      request1 = test1.wrap(HTTPRequest())
      
      class TestRunner:
          def __call__(self):
              # Each run performs a single GET against the local web server
              result = request1.GET("http://localhost:80/")
      
  11. In the Script tab, select simple.py and click the "send changed files to worker processes" button
  12. I found that the console could not send the files to the agent because one of the threads died with a "java.lang.OutOfMemoryError: Java heap space" error. To get around this, in startConsole.sh I added -Xmx1024m to the java command, like below
    • startConsole.sh:
      #!/bin/bash
      . ./setGrinderEnv.sh
      java -Xmx1024m -cp $CLASSPATH net.grinder.Console
      
  13. Turn the agent and console off, then start the console and the agent again
  14. Click the start processes button
     
  15. Yay! You are now testing against your web server
      
  16. When you are done click the stop button and you can have a look at the results
     
  17. So there you go: the Missing Grinder Getting Started Guide.

Where to from here? Well you can have multiple agents, talk more than just HTTP (look in the examples directory for JDBC, JMS, SMTP, etc) and record TCP streams via a proxy.
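
As a sketch of a next step beyond simple.py, here is a script with a second timed page and a think time between requests. grinder.sleep() is part of The Grinder's script API, while the test names and URLs are just placeholders to change for your own server:

from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# Two timed operations, reported as separate tests in the console
home_test = Test(1, "Home page")
about_test = Test(2, "About page")
home = home_test.wrap(HTTPRequest())
about = about_test.wrap(HTTPRequest())

class TestRunner:
    def __call__(self):
        home.GET("http://localhost:80/")
        grinder.sleep(500)  # mean think time in milliseconds, varied by The Grinder
        about.GET("http://localhost:80/about.html")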

Something I found out later: the reason I was getting the heap space error was that the log directory was one of the directories being synced up to the agents, and it had grown too large, which crashed the console. I will have to figure out how to make it save the logs somewhere else.

Sunday, 16 August 2009

Testing on the Bog - Pushing the Boundaries

PDF version for placing in your office.

What is Testing on the Bog?

Code has “if” statements in it. When testing you need to check the different conditions of the if statement. Let's take the following code snippet:

if input_date <= current_date:
    print('Date not in the future')
else:
    print('Date in the future')


The obvious values to test are a date less than or equal to today and one in the future. Testing with 2009-08-08 and 2009-08-29 would give you 100% statement and branch coverage when you get your coverage statistics from your unit tests. Well, that was easy, wasn't it?

“High Code Coverage” ≠ “Well tested Code”

Firstly, it is “<=”, so both the “<” and the “=” need to be tested separately (see the sketch below). Also, as mentioned in the previous Testing on the Bog, dates can be complicated beasts as well.
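
A minimal sketch of what that looks like, assuming the snippet above is wrapped in a function and current_date is pinned to 2009-08-29:

from datetime import date

current_date = date(2009, 8, 29)

def classify(input_date):
    # Same logic as the snippet above
    if input_date <= current_date:
        return 'Date not in the future'
    return 'Date in the future'

# Exercise "<", "=" and ">" around the boundary, not just one value on each side
assert classify(date(2009, 8, 28)) == 'Date not in the future'  # strictly less
assert classify(date(2009, 8, 29)) == 'Date not in the future'  # equal: the boundary itself
assert classify(date(2009, 8, 30)) == 'Date in the future'      # strictly greater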

So as well as the clear boundaries to test, there are also the not so obvious ones. An example of this could be:
  • Time is often stored as an integer and only formatted in the interface layer, so can you put in a date so big that it wraps the integer around and becomes less than the current date? And on the flip side, do very negative dates get confused as well?
The explicit boundaries, which are normally documented as functional requirements, are generally easy enough to find; it is the other ones, which may be non-functional requirements or just not documented at all, that aren't obvious and will catch you out some of the time. It is impossible to think of every boundary along the way, but remember to look beyond the obvious ones. Some of these may seem a bit crazy, but what users will input into a system can sometimes be very bizarre. The sketch below gives a quick illustration of the wraparound idea.
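
A minimal sketch, assuming dates are stored as signed 32-bit seconds since the Unix epoch (converting negative timestamps can fail on some platforms, such as Windows):

from datetime import datetime, timezone

# A signed 32-bit seconds-since-epoch counter runs out on 19 January 2038
max_32bit = 2**31 - 1
print(datetime.fromtimestamp(max_32bit, tz=timezone.utc))   # 2038-01-19 03:14:07+00:00

# One second later the value wraps to the most negative 32-bit integer,
# which lands in 1901: a date meant to be far in the future now compares as long past
wrapped = (max_32bit + 1) - 2**32
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))     # 1901-12-13 20:45:52+00:00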

Rigour in Testing

This week I went to a presentation by James Bach on the Myths of Rigour at the Wellington Testing Professional Network.

The biggest thing that I took away, or had reaffirmed, was that rigour and process should not be blindly followed. The person executing the process needs to understand why they are doing what they are doing. If something is just being blindly followed, aspects of it may be missed: instructions only cover what the instruction writer knows to write down, and there are a lot of steps that are implied, or just done because not doing them would be stupid or things would not work right, so the writer doesn't think to write them down. If the background knowledge and the understanding of the process are there, these implied steps should be picked up naturally and issues will be detected better. Also, a tester's personal attributes should include being inquisitive and personally wanting an understanding and an in-depth knowledge of how things work and why they work that way. Otherwise you might as well just believe the developer when they say yes, it is tested and works fine.

If you just follow a process, how do you know if the outputs you are producing are correct or helpful? This applies to testing itself, but also to the documentation that surrounds testing. There are templates which say a test plan should look like X and a test case should look like Y. But if you don't know the why behind all the headings, are they helpful? Also, an organisation's test plan template may be 25 pages, which is fine on a project with multiple testers and six months plus of testing work, but what about when one person needs to do some testing that is only going to take a couple of days to complete? Is the 25 page template useful then? If you don't understand it, you will have to just fill it all out, and that is going to take longer than the testing itself. If you do understand it, one would hope you can fill out only the sections that make sense and quickly explain why you haven't filled in the others, seeing they are not applicable to what you need to do.

One example I have seen of this is where someone who understands something forgets to teach all the basics along the way. I was giving some business users help with some basic SQL queries. They were saying SQL was hard and they just didn't understand what they were doing. They said that another person had spent a day with them teaching them SQL, and that they knew about selects, updates, inserts, deletes, wheres and joins. So I was a little perplexed that they were having issues with a SELECT * FROM foo WHERE bar = foobar. But after some talking with them I found out they didn't actually know what a database really was, or what tables were. I took a step back and explained that a table is like a spreadsheet with columns and rows, that a where is like a filter, that a database is a collection of tables, and covered the basics of joins using a person spreadsheet and an address spreadsheet (where an address could be of type street or postal), and they were away running.
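
For what it is worth, the join example looked something like the sketch below; the person and address tables and their columns are hypothetical:

SELECT person.name, address.line1
FROM person
JOIN address ON address.person_id = person.id
WHERE address.type = 'postal';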

Coming back to the rigour aspect, I go back to my time at university. Seeing as I did honours there, I spent a lot of time involved in academic rigour. One of the big things with that type of rigour was that someone with a similar background should be able to follow your method and reach the same results. They mightn't draw the same conclusions from the results, but the results themselves should be the same. This aspect of rigour is applicable to what we do in the software testing world. Our testing of systems should be repeatable by people other than ourselves, and repeatability becomes especially important with regression testing and test automation, particularly when it is being run every night or on every check-in. Even when it is automated, the person receiving the result needs to be able to understand the process, even do it manually, so that they can fully understand the results and draw accurate conclusions from it all.