Saturday, March 10, 2012

What do you do when you "hit a wall" as a tester?

Further to the thoughts I shared in the post Great bugs exist "Beyond the Obvious", I wanted to share a slightly different perspective.
The earlier post talked about how to challenge the notion of stable components and redefine your testing challenge around them. It brought into focus the point that in testing, the important things aren't always in front of us; often they are hidden. Below is one more perspective from my experience: challenging the things that are visible and obvious, and going a few levels down to hunt the invisible and unobvious (read: bugs!).

In my experience, when a tester focuses on testing a software application as a black box, the focus often stays on two things: the user interface in front of them, and the test case document being followed. Even when a tester does not follow a formal test case document, the focus invariably tends to remain on how the application's UI behaves under different functional conditions. There is nothing wrong with such a focus; it helps find relevant bugs and simulate user behaviour. However, I have seen that a limited focus on software only as a black box sometimes tends to narrow the perspective and cause a shortage of test ideas.

In running parlance, such a situation is sometimes called "hitting the wall", i.e. when a runner has spent all his energy (because of the depletion of glycogen stores in the liver and muscles) but still has a long distance to cover. When that happens, a runner seeks inspiration from other sources to reach the finish line. Similarly, a tester "hits the wall" when he feels consumed, when he feels that all the ideas left to be tried have been tried and tested. I call this a trap of sorts, because software usually offers an innumerable number of ways in which it could be tested, all of which reside in the tester's mind.

In such a situation of despair, when the bugs seem to have dried up, the idea is to find the key that unlocks the tester's mind. It's actually the time to look "beyond the obvious". One thing I have found useful in such situations is to gather information about how the source code logic is built. Looking at a product only from the black box perspective, it is usually hard to know how the source code works. One approach I have found helpful is creating a flowchart (or mind map) of how the internal logic works, with the starting point being how the user talks to the application, and then probing what checks or logic would be built into the source code. As a tester, ask probing questions about how the internal logic works, and don't be satisfied until you get an answer that makes sense. (That's where a good relationship with developers helps!) In doing so, a lot of uncomfortable questions about the logic, and thus unobvious bugs, get revealed. Having such conversations with developers helps in more ways than one: it enhances your credibility, and many times I have seen it give developers ideas on how to improve the code.
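To make this concrete, here is a minimal sketch (all names and logic are hypothetical, invented for illustration) of what such probing can reveal. The UI might show only an "OK" or "Invalid" message, while the underlying validation logic contains branches, trimming, case folding, a length cap, that a tester working purely from the UI might never think to exercise:

```python
# Hypothetical sketch: internal validation logic hidden behind a simple
# pass/fail UI message. Each branch below is a test idea that a flowchart
# of the internal logic would surface.

def validate_username(raw: str) -> str:
    """Return a normalized username, or raise ValueError if invalid."""
    name = raw.strip()          # hidden branch: whitespace is silently trimmed
    if not name:
        raise ValueError("empty username")
    if len(name) > 20:          # hidden branch: a length cap the UI never mentions
        raise ValueError("username too long")
    return name.lower()         # hidden branch: case is folded before storage

# Probing questions the flowchart raises:
# - Is "  Admin " the same account as "admin"? (trim + case-fold branches)
# - What happens at exactly 20 vs. 21 characters? (boundary of the cap)
print(validate_username("  Admin "))        # trimmed and lower-cased
print(len(validate_username("a" * 20)))     # boundary value accepted
try:
    validate_username("a" * 21)
except ValueError as e:
    print("rejected:", e)
```

Mapping even a toy flow like this turns one UI message into several distinct test ideas, which is exactly the kind of unlocking the conversation with developers is meant to produce.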

One more approach, or heuristic (a widely used term these days), is analyzing source-code check-ins to derive more meaningful tests. I will talk about it in coming posts.

At this stage, I remember one more story from Sherlock Holmes:

This point is also made in the oft-quoted Sherlock Holmes short story "Silver Blaze", about the disappearance of a championship racehorse. During the investigation, a detective asked Holmes: "Is there any point to which you would wish to draw my attention?" Holmes replied, "To the curious incident of the dog in the night-time." "The dog did nothing in the night-time," the detective objected. "That was the curious incident," remarked Sherlock Holmes.
For our fictional sleuth, at least, sometimes the most important things are those that don't happen. In this case, a non-barking dog provided a clue that the thief was probably someone the dog knew, and that narrowed the list of possible culprits.

If we are going to be resourceful as testers, we should also take note of what's obviously not present (or not happening).
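Tests can encode this "dog that didn't bark" idea directly. Here is a minimal sketch (names and scenario are hypothetical) that asserts something does NOT happen: a non-admin delete request should leave both the storage layer and the audit log untouched.

```python
# Hypothetical sketch: testing for the absence of behaviour.
# A non-admin delete must trigger no storage call and no audit entry.

from unittest.mock import Mock

def delete_record(record_id: int, user_role: str, storage, audit_log: list) -> bool:
    """Delete only when the caller is an admin; silently refuse otherwise."""
    if user_role != "admin":
        return False                  # the interesting path: nothing should happen
    storage.delete(record_id)
    audit_log.append(f"deleted {record_id}")
    return True

storage = Mock()
audit_log = []

assert delete_record(42, "guest", storage, audit_log) is False
storage.delete.assert_not_called()    # the dog that didn't bark
assert audit_log == []                # and no spurious audit entry either
print("non-admin delete left no trace")
```

Assertions like `assert_not_called()` or an empty-log check are easy to forget, precisely because they describe events that never show up on screen.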

I know this post represents just one modest idea. Do you have more ideas on how to come out successfully when a tester "hits the wall"?
