I was recently reading the book "A Whack on the Side of the Head" and came across an interesting Sherlock Holmes story-
It seems that Sherlock Holmes and Dr. Watson went on a camping trip. They pitched their tent under the stars and then went to sleep. In the middle of the night Holmes awakened and exclaimed, "Watson, look up and tell me what you deduce." Watson opened his eyes and said, "I see billions and billions of stars. It's likely that some of these stars have planetary systems. Furthermore, I deduce that there is probably oxygen on some of these planets, and it's possible that life has developed on a few of them. Is that what you see?"
Holmes replied, "No, you idiot. Somebody stole our tent."
I would like to share some perspective on this story, based on my experience in software testing-
The main point of the story is that the most important observation is sometimes not about what is in front of us, but about what is missing. When we test a software product or application, the usual tendency is to focus on the newly developed areas, since these are the most likely to contain bugs. This approach is valid: it helps us attack the new code and find as many bugs as possible. Over time, as code changes taper off, the number of bugs also tends to go down. What I have observed is that testing teams start labelling the different modules of a product as "stable", "not-so-stable", or "unstable" based on defect trend analysis of the bugs found. If the trends show a module to be stable, the usual strategy is to cut down on its tests to save time and effort. While this approach may seem obvious, taken in the overall view it can prove grossly inadequate. Here's why-
The stability of a software component depends on a myriad of factors beyond just the bug trends. Some of them are-
- Have I really tested the component adequately? Are there more heuristics I can apply?
- Do I base my test coverage only on written test cases, or do I also account for exploratory testing efforts? How do I measure those exploratory efforts?
- Has the code changed in the recent past? How do I know whether it has? Did I follow the code check-ins?
- How is my component affected by integration with other "nearby" components? Are those neighbouring components stable too?
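To make the checklist above concrete, here is a minimal sketch of a stability check that weighs several of these signals together instead of defect counts alone. All the names and thresholds are hypothetical, purely for illustration-

```python
from dataclasses import dataclass

@dataclass
class ModuleSignals:
    """Hypothetical per-module signals; field names and thresholds are
    illustrative, not taken from any real tool or process."""
    recent_defects: int        # defects found in the last few test cycles
    recent_code_churn: int     # check-ins touching this module recently
    scripted_coverage: float   # fraction of written test cases executed (0.0-1.0)
    exploratory_sessions: int  # exploratory testing sessions run recently
    unstable_neighbours: int   # integrated "nearby" components currently unstable

def is_really_stable(m: ModuleSignals) -> bool:
    """A low defect count alone does not imply stability: the module must
    also show little churn, adequate coverage (scripted and exploratory),
    and stable neighbouring components."""
    return (m.recent_defects == 0
            and m.recent_code_churn == 0
            and m.scripted_coverage >= 0.9
            and m.exploratory_sessions >= 2
            and m.unstable_neighbours == 0)

# A module with zero recent defects but heavy recent churn should NOT be
# labelled stable on the strength of its bug trend alone:
churned = ModuleSignals(0, 15, 0.95, 3, 0)
print(is_really_stable(churned))  # False: churn invalidates the "no bugs" signal
```

The point of the sketch is simply that "stable" becomes a conjunction of several conditions, so a clean defect trend by itself can never flip a module to stable.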
As is evident, the stability of a module is a function of many things, of which defect density and defect trends are just one part. It is grossly inadequate to analyse only the bug trends without also having in front of you the trends in code changes, test coverage (including exploratory testing), and so on. As in farming, testing follows the principle of "the more you sow, the more you reap": the more test coverage and testing time you give a module, the more chances you have to find defects. And as in the Sherlock Holmes story, don't overlook the obvious. It is risky to fall into the trap of believing a "stable" module has no bugs; it may eventually turn out to be your goldmine of bugs.
One thing I have learned over the years: when software starts to stabilise, treat that as the time to redefine your testing challenge and focus. It often helps to go beyond the obvious and attack the stable areas with a renewed approach.
Which obvious aspects of your work did you challenge today?