graphs of RE(ri) exhibit consistent slopes for all programs. Thus, analyzing the RE(ri) graph allows testers to roughly predict when testing will terminate.
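As a rough illustration of this prediction step (not taken from the paper), the sketch below assumes the RE(ri) graph plots cumulative risk exposure against the number of executed test cases; the data points, the target_re threshold, and the linear-fit extrapolation are all hypothetical.

```python
# Minimal sketch: extrapolating a testing-termination point from the slope
# of a cumulative RE(ri) curve. Data and threshold are illustrative only.
import numpy as np

def predict_termination(executed, cumulative_re, target_re):
    """Fit a line to (tests executed, cumulative risk exposure) points and
    extrapolate how many test executions reach the target exposure."""
    slope, intercept = np.polyfit(executed, cumulative_re, 1)
    return (target_re - intercept) / slope

# Example: after 40 tests the cumulative RE curve looks roughly linear.
executed = np.array([10, 20, 30, 40])
cumulative_re = np.array([120.0, 230.0, 345.0, 460.0])
print(predict_termination(executed, cumulative_re, target_re=900.0))
```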
4.3. Threats to Validity
Risk is commonly divided into project risk, process risk, and product risk. We evaluated test case priority using the risk exposure values of risk items for product risks only. Our evaluation therefore assumes that no project or process risks are associated with the test cases.
The Siemens programs that are the subjects of our empirical studies are small in source size. Future work is therefore needed to validate whether the experimental results hold in other settings, including programs larger than the Siemens programs or programs of other types. Nevertheless, our experiment retains relative objectivity in evaluating the number of faults detected, the fault detection rate, and the severity of faults, since the Siemens programs come with a set of faulty versions. Moreover, these programs are advantageous in that they can also serve as subjects for other prioritization techniques to be compared.
5. Conclusions
We developed a technique to prioritize test cases by employing risk exposure values calculated for each requirement, and we described the proposed prioritization technique through a comparative analysis between our method and several existing methods. The characteristics of our method are as follows.
First, our method does not require pre-executed test results, unlike other existing techniques. Instead, we develop and use a metric for risk item evaluation. The method can therefore be applied without previous test execution results and is expected to have a wide range of applications. In addition, we specifically defined product risk items, which are expected to be useful in the risk identification process.
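To make this first characteristic concrete, here is a minimal sketch of prioritization by risk exposure, assuming a Boehm-style RE(r) = likelihood × impact per requirement; the requirement names, scores, and test-to-requirement mapping below are invented for illustration, not drawn from the paper's experiments.

```python
# Minimal sketch of risk-exposure-based prioritization. All names and
# numbers are hypothetical.

# Estimated (likelihood, impact) for each requirement's risk items.
risk = {
    "R1": (0.8, 9),
    "R2": (0.3, 4),
    "R3": (0.6, 7),
}
re_value = {r: p * c for r, (p, c) in risk.items()}  # RE(r) = P x C

# Which requirements each test case exercises.
covers = {
    "t1": ["R1"],
    "t2": ["R2", "R3"],
    "t3": ["R1", "R3"],
}

# A test case inherits the summed exposure of the requirements it covers;
# the riskiest tests run first. No prior execution results are needed.
priority = {t: sum(re_value[r] for r in rs) for t, rs in covers.items()}
order = sorted(priority, key=priority.get, reverse=True)
print(order)  # ['t3', 't1', 't2']
```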
Second, we presented an empirical study comparing the effectiveness of our approach with other prioritization approaches. Our empirical study shows that our prioritization technique using risk exposure is promising in terms of effectiveness in detecting severe faults and offers benefits in time and cost efficiency.
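For reference, the APFD metric reported in Figures 3 and 4 is conventionally computed as APFD = 1 − (TF1 + ... + TFm)/(nm) + 1/(2n), where n is the number of test cases, m the number of faults, and TFi the position of the first test revealing fault i. The sketch below implements this standard formula with a hypothetical fault matrix.

```python
# Minimal sketch of the APFD metric. The fault matrix is hypothetical.
def apfd(order, detects):
    """order: test ids in execution order.
    detects: maps each fault id to the set of test ids that reveal it."""
    n, m = len(order), len(detects)
    pos = {t: i + 1 for i, t in enumerate(order)}  # 1-based positions
    # TF_i: position of the first test that reveals fault i.
    tf = [min(pos[t] for t in tests) for tests in detects.values()]
    return 1 - sum(tf) / (n * m) + 1 / (2 * n)

detects = {"f1": {"t3"}, "f2": {"t1", "t2"}}
print(apfd(["t3", "t1", "t2"], detects))  # earlier detection -> higher APFD
```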
The risk-based testing approach we propose focuses mainly on functional testing. We plan to expand our study of test case prioritization by employing a risk metric for performance features.
Figure 3. APFD.

Figure 4. APFD box-plot (vertical axis is APFD score).

Table 9. Data of all in Figure 4.

        No prioritization   RE(ri)   Safety tests   Optimal prioritization   FET-total
Max.    0.97                0.97     0.97           0.98                     0.97
70%     0.96                0.96     0.96           0.96                     0.96
50%     0.96                0.95     0.96           0.96                     0.96
30%     0.70                0.94     0.91           0.94                     0.92
Min.    0.01                0.19     0.17           0.20                     0.16

Figure 5. Severity of fault.
6. Acknowledgements
This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2012-(H0301-12-3004)).
Appendix
Comparison between the proposed risk items and the referenced risk items.