How to choose a regression testing strategy



"It's a funny thing about testing - the worse software is when it gets to us, the longer it takes to get it out."

Summary: In my experience, regression testing is the most time-consuming part of testing any type of application, and it sometimes becomes a bottleneck. In this article I summarize my research and experience in choosing a strategy for, and running, regression test suites.

If your SDLC is iterative, fitting regression testing into the schedule already becomes a problem in the second iteration if no additional resources are available (my assumption is that the number of test cases may double from the first iteration to the second). In my view, the second iteration is also not the right time to automate regression testing, and I will not discuss automation aspects of regression testing in this article. To survive, you will have to automate regression testing after the application has been developed and stabilized anyway. Even if you reach 80% automation coverage of the existing TC (test cases) - an excellent result, since in my experience most automation efforts cover about 50% of the TC - you will still need to run the remaining TC manually. Even the vendors of automated tools expect manual TC; Rational, for example, included a manual TC option in the 2002 version of Test Manager.

So let us assume that we need to run a set of TC manually. The set of TC grows dramatically during development, so you cannot allocate the time and resources to run every existing TC each time you make a change in the application. To maintain and deliver good software to the client, you must define from the beginning a regression testing strategy that matches your goals and targets. If you are involved in the development process from the start and can reduce the variability of the system, so much the better, but that is a subject for a different article.


Assumptions:
a. If you have a test case, you must run it.
b. The number of TC for any non-trivial software system tends to infinity.
c. Some test cases are more important than others.
d. This is not safety-critical software.


Let us begin with some popular definitions of software regression testing:
1. Software regression testing - testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine whether the change has regressed other aspects of the program. [Myers, 1979]
2. Software regression testing - any repetition of tests (usually after a software or data change) intended to show that the software's behavior is unchanged, except insofar as required by the change to the software or data. [B. Beizer, 1990]
3. Software regression testing - testing conducted to evaluate whether a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.
4. Software regression testing - retesting after fixes or modifications of the software or its environment. It can be difficult to determine how much retesting is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
5. Software regression testing - rerunning test cases that a program has previously executed correctly, in order to detect errors spawned by changes or corrections made during software development and maintenance.
6. Software regression testing - selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system still complies with its specified requirements. [IEEE 610]



There is no need to explain the above definitions any further, so let us move on.
Let us define the most popular strategies for selecting regression test suites:
1. Retest all. Rerun all existing test cases. Simple, but impossible in the time we have in everyday practice.
2. Retest risky use cases. Choose baseline tests to rerun using risk heuristics. This comes from RUP (Rational Unified Process) development activities; for details see http://www.rational.com/products/rup/
3. Retest by profile. Choose baseline tests to rerun by allocating time in proportion to the operational profile.
4. Retest changed segments. Choose baseline tests to rerun by comparing code changes (a white-box regression testing strategy).
5. Retest within firewall. Choose baseline tests to rerun by analyzing dependencies (also a white-box regression testing strategy); a sketch of this idea follows the list. [R. V. Binder, 1999]
6. Apply hierarchical incremental testing (HIT), which is close to retest within firewall. [John D. McGregor, 2001]
7. Apply black-box monkey testing. [Thomas R. Arnold, 1998]
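
To make the white-box strategies (4 and 5) a little more concrete, here is a minimal sketch, in Python, of dependency-based test selection in the spirit of retest within firewall. The module names, the dependency graph and the test-to-module mapping are hypothetical; a real implementation would derive them from the code base and the configuration management data.

```python
# Hypothetical module dependency graph: module -> modules it depends on.
depends_on = {
    "ui":      ["billing", "auth"],
    "billing": ["db"],
    "auth":    ["db"],
    "db":      [],
}

# Hypothetical mapping of baseline tests to the modules they exercise.
tests_touching = {
    "test_login":   ["auth", "ui"],
    "test_invoice": ["billing", "ui"],
    "test_backup":  ["db"],
}

def affected_modules(changed):
    """Changed modules plus everything that directly or transitively depends on them."""
    affected = set(changed)
    grew = True
    while grew:
        grew = False
        for module, deps in depends_on.items():
            if module not in affected and affected.intersection(deps):
                affected.add(module)
                grew = True
    return affected

def select_tests(changed):
    """Rerun only the baseline tests that touch at least one affected module."""
    affected = affected_modules(changed)
    return sorted(test for test, modules in tests_touching.items()
                  if affected.intersection(modules))

print(select_tests({"billing"}))  # ['test_invoice', 'test_login'] - ui depends on billing
print(select_tests({"db"}))       # every test, because everything depends on db
```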

A separate article could be written about each of these strategies, and a separate book about comparing them. If you are interested in the details, please refer to the Binder and McGregor books. I try to follow the KISS principle (keep it simple, stupid): whenever a choice depends on your own or somebody else's opinion, mistakes can happen. In my regression testing strategy, I try to minimize the human factor.


Suggested strategy for software regression testing

In my practice I simply combine three popular strategies. Any one of the existing regression testing strategies may be good on its own, but in the real world a combination may be a better decision. We assume that the change (fix) itself is tested first by running all related TC.
1. Rerun the TC that carry a high risk or whose failure would affect the system from the business perspective.
2. Run a continuous regression testing cycle (keep running the remaining test cases until you finish the cycle).
3. Use exploratory testing, or any other type of testing, to keep your test cases up to date.

  • Retest the risky TC - 30% of the allowed time. By risky I mean risky from the business point of view. To set priority I use two components: business risk and how frequently the customer uses the scenario. The easiest way to define two levels (high and normal) is to add two columns to the test log or whatever document you use and sort your TC by them (do not forget to rerun all of these test cases in every iteration and to exclude them from the full-suite runs; some redundancy will remain anyway). Yes, it is sometimes difficult to choose the risky TC after various changes and fixes in the system.

  • Continuous-cycle regression testing of the existing test suite - 50% of the allowed time. Rerun all existing test cases until you have finished them; this can take two or even three iterations. You begin the next continuous cycle after finishing the first.

  • Exploratory testing - 20% of the allowed time. Testing is a creative and innovative process, and if you do not continuously improve the test data and TC and do not create new test cases, believe me, something is wrong. Do not forget to document the results of exploratory testing properly; as a minimum, update the test log. If you do not like the sound of 'exploratory testing', use this time in the schedule to improve your understanding of the requirements and the system, and the logical and architectural coverage of the application by your TC. You must allocate time and resources for this task anyway.
The 50%, 30% and 20% split is my suggestion; you can adjust it as you like. What matters most is that all of your existing TC are run, that you have a priority order for running test cases, and that you allocate time to keep your TC suite under continuous improvement. A minimal scheduling sketch along these lines follows.
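
Here is a minimal sketch, in Python, of how one iteration's manual testing time could be split 30/50/20 between the risky TC, the continuous cycle and exploratory testing. The TestCase and plan_iteration names, the two-level risk attribute and the per-TC time estimates are assumptions of mine, not part of any tool mentioned in this article.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    business_risk: str = "normal"  # "high" or "normal", as in the two-column test log
    minutes: int = 10              # rough manual execution time

@dataclass
class IterationPlan:
    risky: list = field(default_factory=list)       # rerun every iteration (30% of the time)
    continuous: list = field(default_factory=list)  # this iteration's slice of the full cycle (50%)
    exploratory_minutes: int = 0                    # reserved for exploratory testing (20%)

def plan_iteration(suite, cursor, total_minutes):
    """Fill the 30/50/20 buckets for one iteration; `cursor` remembers where the continuous cycle stopped."""
    risky_budget = int(total_minutes * 0.30)
    cycle_budget = int(total_minutes * 0.50)
    plan = IterationPlan(exploratory_minutes=total_minutes - risky_budget - cycle_budget)

    # 1. High-risk TC are rerun every iteration, up to the 30% budget.
    spent = 0
    for tc in (t for t in suite if t.business_risk == "high"):
        if spent + tc.minutes > risky_budget:
            break
        plan.risky.append(tc)
        spent += tc.minutes

    # 2. Continuous cycle: keep walking through the normal TC across iterations.
    normal = [t for t in suite if t.business_risk == "normal"]
    spent, picked = 0, 0
    while normal and picked < len(normal):
        tc = normal[cursor % len(normal)]
        if spent + tc.minutes > cycle_budget:
            break
        plan.continuous.append(tc)
        spent += tc.minutes
        cursor, picked = cursor + 1, picked + 1

    return plan, cursor  # carry the cursor into the next iteration's continuous cycle
```

When the cursor wraps around the list, one continuous cycle is finished and the next one starts, which may well take two or three iterations, as described above.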
Now you can begin your own experiments, but remember: do not implement everything at once. Start with the ideas you like best and go from there.

Some options for future consideration

"When you have eliminated the impossible, whatever remains, however improbable, must be the truth."
Sir Arthur Conan Doyle (Sherlock Holmes)
  • Use orthogonal array testing to reduce variation [Elfriede Dustin, 2001].

    The orthogonal array testing (OAT) technique can be used to reduce the number of combinations and to provide maximum coverage with a minimum number of TC. Note that this is an old and proven technique: orthogonal arrays were first introduced by Plackett and Burman in 1946 and were applied to quality engineering by Taguchi [G. Taguchi, 1987]. A small sketch of the combination reduction follows.
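
As a flavor of the reduction, here is a small greedy pairwise sketch in Python. It is a simplification of true orthogonal arrays (which also guarantee balanced coverage), and the parameter names and values are made up for the example.

```python
from itertools import combinations, product

# Hypothetical configuration parameters; three parameters with three values
# each give 27 exhaustive combinations.
parameters = {
    "browser":  ["IE", "Netscape", "Opera"],
    "os":       ["Win98", "Win2000", "WinXP"],
    "database": ["Oracle", "SQLServer", "DB2"],
}

def new_pairs(row, names, covered):
    """Value pairs exercised by this row that are not yet covered."""
    pairs = set()
    for (i, a), (j, b) in combinations(enumerate(row), 2):
        key = ((names[i], a), (names[j], b))
        if key not in covered:
            pairs.add(key)
    return pairs

def greedy_pairwise(parameters):
    """Pick rows from the full cartesian product until every pair of values is covered."""
    names = list(parameters)
    rows = list(product(*parameters.values()))
    covered, chosen = set(), []
    while True:
        best = max(rows, key=lambda r: len(new_pairs(r, names, covered)))
        gain = new_pairs(best, names, covered)
        if not gain:  # nothing new to cover - we are done
            break
        covered |= gain
        chosen.append(dict(zip(names, best)))
    return chosen

tests = greedy_pairwise(parameters)
print(len(list(product(*parameters.values()))))  # 27 exhaustive combinations
print(len(tests))                                # around 9-10 TC cover every pair of values
```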

  • Look into model-based testing.

    Model-based testing is a technique for generating a suite of test cases from a model of the requirements. Testers using this approach concentrate on the data model and the generation infrastructure instead of hand-crafting individual tests. [Online papers] A tiny sketch of the idea follows.
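
To give a flavor of generation from a model, here is a tiny sketch in Python that derives test sequences from a small state-machine model. The login model, its states and its events are hypothetical and serve only to illustrate the idea.

```python
from collections import deque

# Hypothetical state-machine model: (state, event) -> next state.
model = {
    ("logged_out", "login_ok"):   "logged_in",
    ("logged_out", "login_fail"): "logged_out",
    ("logged_in",  "logout"):     "logged_out",
    ("logged_in",  "timeout"):    "logged_out",
}

def generate_tests(model, start="logged_out"):
    """For every transition, produce a shortest event sequence that reaches and fires it."""
    tests = []
    for (src, event), _ in model.items():
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if state == src:
                tests.append(path + [event])  # reach `src`, then fire the transition under test
                break
            for (s, e), d in model.items():
                if s == state and d not in seen:
                    seen.add(d)
                    queue.append((d, path + [e]))
    return tests

for sequence in generate_tests(model):
    print(" -> ".join(sequence))  # e.g. "login_ok -> logout"
```

Regenerating the suite after the model changes is what makes this approach attractive for keeping regression TC up to date.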



Conclusion:
1. Make sure that all of your existing TC are run within some period of time (not necessarily for every delivery).
2. Try to manage the risky test cases.
3. Always spend some time improving your test case suite.
OK, that is my two cents.
Good luck.
Alex Samurin
Bibliography:
1. McGregor, John D. and Sykes, David A. A Practical Guide to Testing Object-Oriented Software. Addison-Wesley, 2001.
2. Binder, Robert V. Testing Object-Oriented Systems: Models, Patterns, and Tools (The Addison-Wesley Object Technology Series). Addison-Wesley, 1999.
3. Dustin, Elfriede. "Orthogonally Speaking." STQE Magazine, vol. 3, issue 5, September/October 2001, p. 46.
4. Myers, Glenford J. The Art of Software Testing. 1979.
5. Beizer, Boris. Software Testing Techniques. 1990.
6. Online papers about model-based testing: http://www.geocities.com/model_based_testing/online_papers.htm
7. Arnold, Thomas R. "Black Box Monkey Testing." Chapter 14 in Visual Test 6 Bible. 1998.
8. Taguchi, G. Orthogonal Arrays and Linear Graphs. 1987.
9. ANSI/IEEE Standard 610.12-1990, Glossary of Software Engineering Terminology.
Contributed to StickyMinds.com, an online resource for helping you produce better software, on March 10, 2002.
Republished by the Southern California Quality Assurance Association (SCQAA), Los Angeles Chapter, on July 1, 2004.

    © 2005 Alex Samurin geocities.com/xtremetesting/