International Journal of Performability Engineering, 2006, Vol. 2, No. 3
  • Original articles
    A Systematic-Testing Methodology for Software Systems
    2006, 2(3): 205-221.  doi:10.23940/ijpe.06.3.p205.mag

    In this paper, a software testing methodology called two-level testing is developed to improve testing effectiveness by reducing testing effort while ensuring a predetermined quality level for software products. A testing procedure, including criteria for alternating between 100% testing and sampling testing, is proposed by incorporating the characteristics of testing behavior into the well-known sampling method. Metrics of testing performance are derived based on transition probabilities. Various combinations of controllable parameters that ensure equivalent quality are also provided for effective application. A numerical example illustrates the effectiveness of the proposed testing method.
    Received on January 25, 2006
    References: 22

    A Human Factor Analysis for Software Reliability in Design-Review Process
    2006, 2(3): 223-232.  doi:10.23940/ijpe.06.3.p223.mag

    Software faults introduced by human development work greatly influence the quality and reliability of the final software product. Design-review work can improve final quality by reviewing the design specifications and by detecting and correcting many design faults.
    In this paper, we conduct an experiment to clarify the human factors and interactions affecting software reliability, assuming a model of human factors consisting of inhibitors and inducers. Finally, extracting the significant human factors with a quality-engineering approach based on the orthogonal array L18(2^1 x 3^7) and the signal-to-noise ratio, we discuss the relationships among them and the classification of detected faults, i.e., descriptive-design faults and symbolic-design faults, in the design-review process.
    Received on December 16, 2005
    References: 11

    Transient Cost Analysis of Non-Markovian Software Systems with Rejuvenation
    2006, 2(3): 233-243.  doi:10.23940/ijpe.06.3.p233.mag

    In this paper, we perform a transient analysis of software cost models with periodic and non-periodic rejuvenation. We derive the Laplace-Stieltjes transforms of the ergodic probabilities for the respective semi-Markov and Markov regenerative process models, and numerically evaluate the expected cumulative cost experienced by an arbitrary time, and its time average, using a Laplace inversion technique based on an improved version of the classical Dubner-Abate algorithm. Numerical examples suggest that the optimal software rejuvenation policy minimizing the expected cumulative cost differs considerably from the steady-state solution minimizing the long-run average cost.
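    As a rough sketch of the Laplace inversion step, the snippet below implements the Fourier-series discretization of the Bromwich integral with Euler summation, the Abate-Whitt refinement of the classical Dubner-Abate algorithm. It is an illustrative stand-in, not the paper's own improved variant, and the parameters A, m, and n are conventional defaults rather than values from the paper.

```python
from math import comb, exp, pi

def invert_laplace(F, t, A=18.4, m=11, n=15):
    """Invert the Laplace transform F(s) at time t > 0 using the
    Fourier-series discretization of the Bromwich integral with
    Euler summation (Abate-Whitt form of the Dubner-Abate method)."""
    def partial(k_max):
        # Alternating series from the trapezoidal rule along Re(s) = A/(2t).
        s = 0.5 * F(complex(A / (2.0 * t), 0.0)).real
        for k in range(1, k_max + 1):
            s += (-1) ** k * F(complex(A, 2.0 * k * pi) / (2.0 * t)).real
        return s
    # Binomial (Euler) averaging of partial sums n .. n+m accelerates
    # convergence of the alternating series.
    avg = sum(comb(m, j) * partial(n + j) for j in range(m + 1)) / 2.0 ** m
    return exp(A / 2.0) / t * avg

# Example: F(s) = 1/(s+1) inverts to f(t) = exp(-t).
```

    For smooth transforms such as 1/(s+1) this recovers the time-domain function to several significant digits, which is the kind of accuracy needed to trace cumulative-cost curves over time.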
    Received on March 31, 2006
    References: 20

    Testing Effort Control using Flexible Software Reliability Growth Model with Change Point
    2006, 2(3): 245-263.  doi:10.23940/ijpe.06.3.p245.mag

    Various Software Reliability Growth Models (SRGMs) in the software reliability engineering literature assume diverse testing environments, such as a distinction between the failure and removal processes, learning by the testing personnel, and the possibility of imperfect debugging and error generation. Most of them, however, are based on a constant or monotonically increasing Fault Detection Rate (FDR). In practice, as testing progresses, so do the skill and efficiency of the testers, and with the introduction of new testing strategies and new test cases the FDR changes. The time point at which the change in the removal curve appears is termed the 'change point'. In this paper we incorporate the concept of a change point into a flexible SRGM with testing effort, and further extend it to the problem of 'testing effort control': when testing is in its late stage and the product release date is approaching, the progress of testing is reviewed and the additional effort required to meet pre-specified reliability targets is worked out. The proposed model is further extended to yield a trade-off analysis with respect to the aspiration level for product reliability. The predictive power and accuracy of the model have been evaluated on two real failure datasets; the results show remarkable improvements and are fairly encouraging.
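    The change-point idea can be illustrated with a minimal sketch. The mean value function below uses a simple exponential (Goel-Okumoto type) SRGM whose fault detection rate switches from b1 to b2 at the change point tau; the paper's actual model is the flexible, testing-effort-based SRGM, and all parameter values here are illustrative assumptions, not fitted values from the paper's datasets.

```python
import math

def mean_failures(t, a=100.0, b1=0.05, b2=0.12, tau=20.0):
    """Expected cumulative failures m(t) for an exponential-type SRGM
    whose Fault Detection Rate changes from b1 to b2 at the change
    point tau; the curve is continuous at t = tau by construction."""
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    return a * (1.0 - math.exp(-b1 * tau - b2 * (t - tau)))
```

    Fitting a, b1, b2, and tau to observed failure data and comparing m(t) at the planned release date with the reliability target is, in essence, the assessment step that testing effort control automates.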
    Received on March 10, 2006
    References: 18

    Integrated Product and Process Attribute - Quantitative Model for Software Quality
    2006, 2(3): 265-276.  doi:10.23940/ijpe.06.3.p265.mag

    Software size and complexity are increasing, and software project management has become crucial. The industry demands consistent software quality within stipulated cost and time frames. To achieve quality, cost, and schedule targets, quantifying and predicting these attributes early in the development life cycle has become an important problem. Software does not manifest its quality properties directly; instead, they are exhibited through certain contributory measures of the process steps and intermediate work products.
    Various researchers have attempted in the past to correlate product and process attributes; however, such modeling has been done only for a subset of the attributes, spanning one or two software development phases [1]. In the present paper, the Integrated Product and Process Attribute - Quantitative Model (IPPA-QM) is proposed, based on the relationships of elemental product and process attributes. Prediction equations are developed for the software development phases, and the requirements-phase equations are substantiated using quantitative techniques.
    IPPA-QM provides a holistic view of the product and process attributes throughout the software development life cycle by applying various quantitative techniques. It enables prediction-based planning and corrective action early in the development life cycle, thereby improving the execution capability of the IT organization and achieving quality, cost, and schedule targets.
    Received on March 10, 2006
    References: 8

    Dependability Benchmarks for Operating Systems
    2006, 2(3): 277-289.  doi:10.23940/ijpe.06.3.p277.mag

    Dependability evaluation is playing an increasing role in system and software engineering, alongside performance evaluation. Performance benchmarks are widely used to evaluate system performance, while dependability benchmarks are only just emerging. A dependability benchmark for operating systems is intended to objectively characterize the operating system's behavior in the presence of faults, through dependability- and performance-related measures obtained by means of controlled experiments. This paper presents a dependability benchmark for general-purpose operating systems and its application to three versions of the Windows operating system and four versions of the Linux operating system. The benchmark measures are: operating system robustness (with respect to possibly erroneous inputs provided by the application software to the operating system via the application programming interface), and operating system reaction and restart times in the presence of faults. The workload is the JVM (Java Virtual Machine), a software layer on top of the operating system that allows applications written in Java to be platform independent.
    Received on April 18, 2006
    References: 16

    Generalized Exponential Poisson Model for Software Reliability Growth
    2006, 2(3): 291-301.  doi:10.23940/ijpe.06.3.p291.mag

    Software reliability modeling is challenging, since no single Software Reliability Growth Model (SRGM) is suitable in all situations, owing to poor goodness of fit, lack of predictive validity, and sensitivity to fluctuations in the number of failures in the data sets. In this paper, we propose a Non-Homogeneous Poisson Process model whose failure intensity function has the same mathematical form as the probability density function (pdf) of a generalized exponential distribution. The performance of the proposed model was verified and compared with six chosen SRGMs using failure data from 18 software systems; the model is found to be adequate in terms of goodness-of-fit statistics and predictive validity, and is also less sensitive to fluctuations in the data.
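    Since the abstract states that the intensity has the form of a generalized exponential pdf, the mean value function is proportional to the generalized exponential cdf. The sketch below writes this out; the parameter values are illustrative assumptions, and the symbols alpha and lam are our own naming for the shape and scale parameters rather than the paper's notation.

```python
import math

def mean_value(t, a=150.0, alpha=2.0, lam=0.1):
    """m(t) = a * (1 - exp(-lam*t))**alpha: expected number of failures
    observed by time t, where a is the expected total fault content and
    (1 - exp(-lam*t))**alpha is the generalized exponential cdf."""
    return a * (1.0 - math.exp(-lam * t)) ** alpha

def intensity(t, a=150.0, alpha=2.0, lam=0.1):
    """Failure intensity, the derivative of mean_value: a times the
    generalized exponential pdf
    alpha*lam*exp(-lam*t)*(1 - exp(-lam*t))**(alpha - 1)."""
    u = math.exp(-lam * t)
    return a * alpha * lam * u * (1.0 - u) ** (alpha - 1.0)
```

    For alpha > 1 the intensity first rises and then decays, giving the S-shaped growth curve that helps such models fit data a plain exponential SRGM handles poorly.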
    Received on April 14, 2006
    References: 18

ISSN 0973-1318