Tuesday, 30 May 2006

Mirror Image


Tuesday, 23 May 2006

Jobserve rant

Jobserve is, IMHO, the UK's best job search site. That has nothing to do with the technology they use; it's simply because so many agencies post there and search the CV database. I think every contract I've ever had came from applying on Jobserve, or from an agent seeing my CV there.
However, it has a few shortcomings....

1. No spellchecker in job ads -> I've used the job posting functionality as a
'recruiter' before, and noticed this. It drives me crazy seeing basic technical terms spelt incorrectly in job ads - it could be laziness, a genuine mistake on the agent's behalf, or the agent not really knowing what they're writing. Today I saw 'Mogzilla'... If only they ran a basic spellcheck with a dictionary of standard IT industry terms! I had to write my job ads up in Word to get them spellchecked, then copy and paste them into the posting form. I guess that won't help with agents who get the context totally wrong altogether though - next time I see one of those I'll post it here.
2. CV database - I often get completely random mailshots from agents who've just indexed some incidental term in my CV (e.g. I once tested a product written in Delphi), and so I start getting totally unrelated ads sent to me. The permanent/contract mix-up is especially frustrating, as the Jobserve profile specifies what job titles you want and whether you're after perm/contract/both. I don't know if it's the agencies re-indexing CVs and doing their own mailshots, or Jobserve's CV indexing, but somewhere the job title and type are being totally ignored.
3. Random job results in the search - often I just get rubbish back... even though it's filtered to the UK, I often get Australian jobs (not a huge issue, but annoying).
4. Links in the RSS feed are often broken, or run another search rather than linking to the referenced job.

I do like the RSS feed feature, but it seems to return different results to searching on the site - I use the same search terms, so perhaps they have a bug somewhere.

posted via email

Friday, 19 May 2006

Issue reporting styles in the real(?) world

My first real <rant/>...

So I've come across two major schools of thought when it comes to issue reporting...
School 1: for every bug/change/improvement, log a separate issue
School 2: for related bugs/changes/improvements, log a single issue that contains all of them

My personal preference is for School 1. Why?

1. Bug Maintenance Time
Logging one issue containing lots of problems is faster to administer - it's easier to close one bug, and to assign one bug to a developer.
Winner: School 2

2. Reporting
Logging a separate issue for each problem obviously means your reports will be accurate. If parts of a combined issue are fixed in a version and others aren't, it's impossible to accurately tally the opened/closed counts against a version or a person (i.e. bugs assigned per developer, bugs closed per developer etc). It's even worse if the issue affects multiple 'components' or projects - then you can't tell which areas have the most problems. Some systems enable time to be logged against tasks (estimates and actual time), and for future learning and planning, each problem should have its time logged separately. I've also come across sites where multiple issues that aren't even particularly related are logged as a single issue (yeah, maybe they all happened during installation, but some were really installation bugs while others were nothing to do with installation - they were build issues instead).
Winner: School 1
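To illustrate the reporting point, here's a minimal sketch (field names and data are made up for illustration, not from any real tracker): when each problem is its own record, per-developer and per-component tallies fall straight out of a count, which is exactly what School 2's combined issues make impossible.

```python
from collections import Counter

# Hypothetical issue records, one logged issue per real problem (School 1).
issues = [
    {"id": 1, "component": "installer", "assignee": "alice", "status": "closed"},
    {"id": 2, "component": "installer", "assignee": "bob",   "status": "open"},
    {"id": 3, "component": "build",     "assignee": "alice", "status": "closed"},
]

# Because each problem is a separate record, the reports are simple counts.
closed_per_assignee = Counter(
    i["assignee"] for i in issues if i["status"] == "closed"
)
open_per_component = Counter(
    i["component"] for i in issues if i["status"] == "open"
)

print(dict(closed_per_assignee))  # alice has closed two issues
print(dict(open_per_component))   # the installer still has one open issue
```

If issues 1 and 3 had been lumped into one "installation problems" record, neither tally could be computed correctly once only part of it was fixed.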

3. Accuracy
Often, if there's a large list of problems in one issue, some get missed (at the spec, dev or test phase).
Winner: School 1

4. Parallel workflow
Assigning an issue containing lots of problems to one person means it won't show up in anyone else's work list until the first person is done or reassigns it. Separating each problem into its own report means they can be worked on in parallel where possible, without having to be passed from one person to another.
Winner: School 1

Tools like JIRA support 'sub-tasks' - many small tasks logged separately but grouped under a larger task, e.g. a group task called 'fix layout headers in all apps' with a small sub-task for each application. This allows each sub-task to be completed and reported on separately, while also making it easy to see progress on the entire group of issues as a whole. Tools that allow bulk editing also make life easier for managers and developers to update multiple issues at a time, removing a lot of the administration time required.
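The parent/sub-task idea can be sketched in a few lines (the class and field names here are my own, not JIRA's API): each sub-task closes independently, yet the parent can still report progress on the group as a whole.

```python
class Issue:
    """A toy issue that may contain independently-trackable sub-tasks."""

    def __init__(self, title):
        self.title = title
        self.done = False
        self.subtasks = []

    def progress(self):
        """Fraction of sub-tasks completed (falls back to own status)."""
        if not self.subtasks:
            return 1.0 if self.done else 0.0
        return sum(s.done for s in self.subtasks) / len(self.subtasks)


parent = Issue("Fix layout headers in all apps")
for app in ("billing", "reports", "admin"):
    parent.subtasks.append(Issue(f"Fix layout header in {app}"))

# One developer finishes their app without blocking the others.
parent.subtasks[0].done = True
print(f"Group progress: {parent.progress():.0%}")
```

Each sub-task stays a first-class record for the counts discussed above, while the parent gives managers the single at-a-glance view School 2 was after.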

Of course there needs to be some flexibility (e.g. if there are 5 spelling mistakes on one web page, drop those into a single issue, but spelling mistakes in different parts of the website should be separated out), but for the sake of simplicity and accuracy [speaking as someone who's been responsible for reporting on and managing a change request system], logging separate issues for each change is the way I would go. For a few extra minutes a day of administration time from your developers and managers, you'll be rewarded with a more accurate and realistic view of what's really going on in your change management process, and of the overall quality of your products - assuming quality is something that's considered important in your product, of course!

Another thing that's been bugging me is issue ownership. I've worked at some sites where the Test Manager isn't particularly involved with issues after they've been logged. My philosophy is that the test team are responsible for saying yay or nay to whether they consider a release *OK*, so they own the issues raised and should be involved in any planning around getting those issues fixed, changed, postponed, deleted etc. There are also sites where anyone (support, marketing, sales and other non-testers) has access to the issue/change management tools and can raise issues, but then isn't considered responsible for ensuring those issues are dealt with, or for confirming they are correctly resolved. If a bug is raised by a non-test team member (i.e. it's usually been found outside a normal test cycle), it's my belief that someone from that team should be ultimately responsible for verifying that the issue is resolved to their satisfaction.