In performance testing, the criteria for assessing test results are often insufficiently defined because performance tests lack the well-structured test-development methods found in functional testing. By extending the established workflow structure, this approach concentrates on tradeoffs within the T-workflow and develops tests based on it. Monitoring and tuning points are also investigated to understand the validity and performance of software. Finally, a case study shows that a better assessment of software performance can be obtained with tests developed from the T-workflow and by locating its monitoring and tuning points.
The quality of software (SW) is directly related to its performance, which performance testing assesses in terms of the system's efficiency and reliability. A performance test measures speed under given load conditions and uncovers bottlenecks within a system's functions. It is conducted primarily to verify that a system satisfies its performance objectives [
SW performance is validated by performance evaluation before SW development and by performance testing after development. Performance evaluations are built on performance models, and the most frequently used models are based on the software architecture (SA) [2,3]. Because most such models assess the performance of the SA rather than of the SW itself, there is an inevitable gap between the performance results analyzed with the models and the realized SW performance. In other words, a performance evaluation designed only with performance models has inherent limitations.
On the other hand, a performance test is built from performance requirements and workload models. Many studies have stressed the importance of clarifying performance requirements when developing dependable performance tests, because most tests frame their scenarios on those requirements. Other performance tests use more realistic workload models derived by analyzing user behavior patterns. Whether test cases are developed from performance requirements or from workload models, however, the test items can only verify that the performance requirements are achieved. The complex relationships between performance attributes are therefore not reflected, despite their importance. In this paper, a performance test coverage is defined for analyzing the side-effects among performance attributes, and test cases satisfying the suggested coverage are developed.
It is generally believed that SW performance is determined at the SA development stage. Before SW development, SAs are mostly used for performance assessment [
Following this introduction, Section 2 examines existing architecture-based performance evaluations, performance analyses, and performance tests. Section 3 explains the suggested methods and the process of developing such test cases. Section 4 presents a case study applying the suggested methods to a NAND flash memory file system. Finally, Section 5 concludes this study with future plans.
Performance is a property of the system as a whole, reflecting its overall functionality. A performance test is usually conducted at the system test level after system development is complete. Commonly employed performance test methods and tools use the scenario-based black-box technique, which develops test scenarios from performance requirements [4,6] or measures workloads by analyzing existing usage data [
Studies on existing model-based performance testing are mostly aimed at constructing more realistic workloads [
Software architecture is the set of important decisions made about the structure of SW; it illustrates SW structure at a high level of abstraction [
There are various studies on SW performance analysis that use SAs at the early SW development stage through performance prediction and evaluation. By analyzing SW performance during development, weaknesses can be discovered early enough to be supplemented or adjusted, improving SW quality. In Software Performance Engineering (SPE) [
Recently, performance models built from the SA and requirement models have been suggested for analyzing SW performance. Such models can help select the SA with optimum performance [2,3]. However, these methods are basically used for selecting the optimal architecture at the development stage, because of the inevitable gap between SAs and the realized SW. Therefore, additional performance tests are required after SW development.
In this paper, utilizing the analyzed tradeoffs of architectural decisions is suggested for developing performance test cases. Architectural decisions are major SA solutions that directly influence the establishment of performance attributes and their tradeoff relationships; a single decision can influence more than one quality attribute. Through analysis of the tradeoffs within architectural decisions, four methods are suggested for performance testing: 1) setting performance evaluation indices; 2) developing test cases that apply the tradeoff-based workflow design as a test coverage; 3) identifying a monitoring point and using the monitored performance-affecting data to interpret the performance test results; and 4) identifying a tuning point. Terms such as tradeoff-based workflow, monitoring point, and tuning point are defined and explained in detail in Section 3.2. Beyond the four proposed methods, the study also aims at analyzing the side-effects of performance attributes through a performance test in which performance indices are set and test cases are built based on the tradeoffs.
To build performance test cases more effectively, the study addresses two major test issues. The first is "what should be tested in the performance test", for which SA tradeoffs are used in building performance tests. Existing test methods evaluate only one performance index at a time. However, if a performance attribute in a tradeoff relationship with another is selected as a performance index, the attributes on the other side of the tradeoff are also selected as performance indices. By evaluating two or more performance indices simultaneously, the test results can focus on analyzing the side-effects among performance attributes.
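As a minimal illustration of evaluating both sides of a tradeoff pair in a single test case, the sketch below measures response time and peak memory together for a bounded-cache workload. The workload, cache sizes, and the specific index pair are hypothetical examples, not taken from the paper's case study.

```python
import time
import tracemalloc

def run_workload(cache_size):
    """Process items through a bounded cache; return (elapsed_seconds, peak_bytes)."""
    tracemalloc.start()
    start = time.perf_counter()
    cache = {}
    for i in range(20_000):
        key = i % (cache_size * 2)            # half of the keys miss the cache
        if key not in cache:
            if len(cache) >= cache_size:
                cache.pop(next(iter(cache)))  # evict the oldest entry
            cache[key] = key * key            # simulated expensive computation
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

# One test case evaluates BOTH indices of the tradeoff pair at once:
# response time (speed) and peak memory (resource usage).
small = run_workload(cache_size=64)
large = run_workload(cache_size=4096)
print(f"small cache: {small[0]:.4f}s, {small[1]} bytes peak")
print(f"large cache: {large[0]:.4f}s, {large[1]} bytes peak")
```

Reporting both indices from the same run is what exposes the side-effect: a larger cache may improve speed while raising memory consumption, which a single-index test would miss.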
The second issue is "what should be selected as input variables for each test case". Performance problems are usually caused by complex functions with many interacting factors, which makes their causes difficult to discover. However, if a test case is built with selected key variables of SW performance, the cause of a performance problem can be understood more easily, and analyzing the actual values of the input variables becomes more tractable. In this paper, a workflow that illustrates the tradeoff relationships between performance attributes, called the T-workflow, is drawn, and test cases are built to cover this T-workflow so that appropriate input variables are selected. Furthermore, new methods for locating a monitoring point and a tuning point are introduced. A monitoring point is a performance-affecting point at which the causes of performance degradation can be better analyzed; a tuning point is a point at which performance can be adjusted to find the optimal performance state.
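The monitoring-point and tuning-point ideas can be sketched as follows, assuming a hypothetical buffered writer: the flush threshold acts as a tuning point (an adjustable knob that shifts the performance state), while counters recorded on the costly flush path act as a monitoring point for interpreting test results.

```python
class BufferedWriter:
    """Illustrative component with one tuning point and one monitoring point."""

    def __init__(self, flush_threshold):
        self.flush_threshold = flush_threshold  # tuning point: adjustable knob
        self.buffer = []
        # Monitoring point data: performance-affecting counters collected
        # so degraded results can be traced back to their cause.
        self.monitor = {"flushes": 0, "max_buffered": 0}

    def write(self, record):
        self.buffer.append(record)
        self.monitor["max_buffered"] = max(self.monitor["max_buffered"],
                                           len(self.buffer))
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Monitoring point: count how often the costly flush path is taken.
        self.monitor["flushes"] += 1
        self.buffer.clear()

writer = BufferedWriter(flush_threshold=8)
for i in range(100):
    writer.write(i)
print(writer.monitor)  # prints {'flushes': 12, 'max_buffered': 8}
```

Re-running the same test with a different `flush_threshold` and comparing the monitored counters shows how adjusting the tuning point moves the system toward a better performance state.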