Wednesday, 7 May 2008

Who tests the testers?

Got an interesting situation at my current contract. The application I'm working on sources data through various ETL (extract, transform and load) processes, across several datamarts, and into our application database.

In the application, various calculations drive off the data in the database, and some of these can get rather hairy. As testers, we have to prove the calculations on the UI are correct based on what's in the database (as we can't go any further back to enter or source the data), so we write rather large, ugly SQL scripts that possibly make incorrect assumptions, or may be just plain wrong.
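One way to keep those verification scripts honest is to run them first against a tiny, hand-checked data set, where the expected answer can be worked out on paper before trusting the script against real data. Here's a minimal sketch of that idea in Python with sqlite3; the table, columns and the weighted-average calculation are all invented for illustration, not the actual datamart schema.

```python
# Hypothetical sketch: verify a calculation the UI is supposed to show,
# by recomputing it directly from base rows in a database.
# Table and column names are made up for illustration.
import sqlite3

def expected_weighted_average(conn):
    """Recompute the derived figure straight from the base data."""
    row = conn.execute(
        "SELECT SUM(amount * weight) * 1.0 / SUM(weight) FROM positions"
    ).fetchone()
    return row[0]

# Build an in-memory database with a tiny, hand-verified data set,
# so the expected answer can be checked by hand before the script
# is pointed at real data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (amount REAL, weight REAL)")
conn.executemany(
    "INSERT INTO positions VALUES (?, ?)",
    [(100.0, 1.0), (200.0, 3.0)],
)
conn.commit()

# Hand calculation: (100*1 + 200*3) / (1 + 3) = 700 / 4 = 175.0
assert abs(expected_weighted_average(conn) - 175.0) < 1e-9
```

If the script gives the wrong answer on two rows you can add up yourself, it will certainly give the wrong answer on a million rows you can't.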

As the test lead, I need to come up with a method to minimise mistakes made by the testers, so we're not creating too much 'noise' in the defect list. I have some possible solutions, none of which are perfect, but they should suffice in the short term. It will be interesting to see whether the development team have made the same assumptions and conclusions regarding how to derive the answers, based on the same Use Cases as we have.

I guess this is a bit of a question of 'who tests the testers' - how far down the line of testing the test scripts do we want to go? Should the project team just bite the bullet and suck up the extra analysis time needed from everyone involved the first few times these queries are run, to get them correct based on the requirements documentation?

Image courtesy of post406's photostream (creative commons licence)


  1. Can you create 'base' scenario data that has been proven, and then create test cases off that? Perhaps have a test database that you can test against - one that can be reset daily, hourly or at the users' request? It sounds like the problem could be testers generating valid data to test against?

    Interesting topic - especially for the test cases where you actually need to test against real data, but the data needs to be valid too...

  2. Sadly, on this project, the approach of testing against a 'known' set of test data (before later retesting integration against the 'real' live data coming in via the ETL) was never taken. So there isn't time to create the test data to cover the scenarios needed, and get that test data verified as well as the SQL...

    It's a bit of a thinker this one :)
