Volume 16, No 2
Special Section on Dependable Systems & Applications
Background
International Journal of Performability Engineering (IJPE), in collaboration with the 6th International Conference on Dependable Systems and Their Applications (DSA, an IEEE technically sponsored conference), features a special section focusing on innovative methodologies, techniques, tools, and management practices for producing dependable and trustworthy systems and their applications in a more cost-effective way. It provides [Detail] ...

■  Cover Page (JPG 4.81 MB) ■ Editorial Board (PDF 72.8 KB) ■ Table of Contents, Feb 2020 (PDF 304 KB)

  
  • Original Articles
    Using Cost-Effectiveness Acceptability Curves as a Basis for Prioritizing Investments in Safety Measures in the Offshore Oil and Gas Industry
    Eirik Bjorheim Abrahamsen, Jon Tømmerås Selvik, and Håkon Bjorheim Abrahamsen
    2020, 16(2): 163-170.  doi:10.23940/ijpe.20.02.p1.163170
    Abstract    PDF (339KB)   
    References | Related Articles

    This paper reviews and discusses cost-effectiveness acceptability curves as a basis for prioritising investments in safety measures in the offshore oil and gas industry. We conclude that such curves should be used with caution, as cost-effectiveness acceptability curves focus mainly on the probability that safety measures are cost-effective. Consequently, safety measures with a small probability of being cost-effective, but with a potential of being highly cost-effective in actual demand situations, could then be given low priority compared to other safety measures. To improve the basis for prioritising safety measures, we recommend including assessments of the cost-effectiveness given an actual demand situation, in addition to the probability that the safety measure is cost-effective. We also highlight the importance of reflecting the strength of knowledge on which the probability and the cost-effectiveness assignments are based.
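
    As an illustration of the kind of curve discussed above (a generic sketch, not the authors' model): a cost-effectiveness acceptability curve is typically computed from Monte Carlo samples of a measure's cost and effect, as the fraction of samples in which the net monetary benefit is positive at each willingness-to-pay threshold. All distributions and parameters below are hypothetical.

```python
import numpy as np

def ceac(costs, effects, thresholds):
    """For each willingness-to-pay threshold t, return the fraction of Monte
    Carlo samples in which net monetary benefit t * effect - cost > 0."""
    costs = np.asarray(costs, dtype=float)
    effects = np.asarray(effects, dtype=float)
    return np.array([np.mean(t * effects - costs > 0.0) for t in thresholds])

rng = np.random.default_rng(0)
costs = rng.normal(100.0, 20.0, 10_000)    # hypothetical cost of the measure
effects = rng.normal(0.8, 0.3, 10_000)     # hypothetical risk-reduction effect
thresholds = [50.0, 125.0, 250.0, 500.0]   # willingness to pay per unit effect
curve = ceac(costs, effects, thresholds)
```

    Because such a curve reports only the probability of cost-effectiveness, a measure with a rare but very large benefit can still score low, which is exactly the caution the paper raises.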

    System Dynamics Simulation of Global Software Development Process
    Jiamei Niu, Xuan Zhang, Ziqi Tang, Jingzhuan Zhao
    2020, 16(2): 171-184.  doi:10.23940/ijpe.20.02.p2.171184
    Abstract    PDF (359KB)   
    References | Related Articles

    With the wide application of computer systems in various fields, the complexity and importance of software systems are increasing, and it is difficult to develop a dependable software system. In this paper, we use the system dynamics (SD) simulation method to simulate the global software development (GSD) process, in order to help project teams control the degree of temporal and geographical dispersion in the software development process. If this dispersion is properly controlled, more dependable software can be developed. First, a simulation modeling framework for the GSD process is proposed. Then, the SD simulation subsystem models for GSD are built and tested. Finally, through the simulation analysis of the Apache Hadoop and Ambari projects, we summarize the impact of temporal and geographical dispersion on the quality and schedule of global software development projects. The feasibility of the simulation model is also verified.
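
    The stock-and-flow structure underlying any SD model can be sketched in a few lines (a toy single-stock model with a hypothetical dispersion penalty, not the paper's GSD subsystem models):

```python
def simulate_gsd(total_work, base_rate, dispersion_penalty, dt=1.0, steps=200):
    """Minimal system-dynamics loop: one stock (remaining work) drained by one
    flow (the effective development rate). The dispersion penalty is a
    hypothetical factor in [0, 1) that slows work down."""
    remaining = total_work
    rate = base_rate * (1.0 - dispersion_penalty)   # effective flow
    history = []
    for _ in range(steps):
        remaining = max(0.0, remaining - rate * dt)  # Euler integration step
        history.append(remaining)
    return history

co_located = simulate_gsd(100.0, 1.0, 0.0)   # no dispersion
dispersed = simulate_gsd(100.0, 1.0, 0.4)    # 40% hypothetical slowdown
```

    Comparing the two trajectories shows the schedule impact of dispersion: the dispersed run still has work remaining when the co-located run is done.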

    Fast Interprocess Communication Algorithm in Microkernel
    Xinghai Peng, Kun Xiao, Yun Li, Lirong Chen, and Wen Zhang
    2020, 16(2): 185-194.  doi:10.23940/ijpe.20.02.p3.185194
    Abstract    PDF (424KB)   
    References | Related Articles

    Since most of the system services of a microkernel run in user space, user processes must request these services through the kernel's communication mechanism. Therefore, the frequency and volume of IPC (inter-process communication) in a microkernel are much higher than those in a monolithic kernel, causing the microkernel's performance to be poor. In this paper, two exchange-based IPC algorithms are proposed to optimize the replication-based communication algorithm commonly used in microkernels. One is based on exchanging physical pages in the MMU (memory management unit), and the other is based on exchanging segment base addresses in the MMU. These two exchange algorithms achieve efficient transmission while keeping address spaces independent to ensure the correctness and security of the transmitted data, greatly improving the efficiency of communication in the microkernel.

    Analysis of Local Government Behaviors and Technology Decomposition of Carbon Emission Reduction under Hard Environmental Protection Constraints
    Wei Cui, An-Wei Wan, Jun-Chao Feng, and Zhi-Yuan Zhu
    2020, 16(2): 195-202.  doi:10.23940/ijpe.20.02.p4.195202
    Abstract    PDF (441KB)   
    References | Related Articles

    In this study, the carbon emission reduction (CER) behaviors of China's local governments were analyzed under hard environmental protection constraints from the perspective of the CER policies formulated by China's central government. A game-theoretic analysis showed that local governments will concentrate superior resources to meet CER requirements if CER results are uncertain. The Malmquist index also empirically confirmed that China had low carbon emission efficiency before the introduction of a series of hard environmental protection systems in 2015, but the efficiency showed a leap-like increase in 2016 and 2017. Decomposition analysis of the technology used in CER revealed that the enhanced CER was attributable to the efficiency improvement of carbon emission technology. In summary, the implementation of the hard environmental protection system was shown to have an immediate effect on CER, and technical efficiency was confirmed to be the main factor behind the enhanced CER.

    LAL: Meta-Active Learning-based Software Defect Prediction
    Yubin Qu, Fang Li, and Xiang Chen
    2020, 16(2): 203-213.  doi:10.23940/ijpe.20.02.p5.203213
    Abstract    PDF (842KB)   
    References | Related Articles

    Software defect prediction plays an important role in improving the quality of software systems. Active learning can be used to choose unlabeled instances for constructing a defect prediction classifier, so that fewer labeled instances and lower costs are needed. However, in the real software quality assurance process, only a few labeled instances are available in the initial stage of software development. Moreover, there is a natural class imbalance in gathered software modules, because most software modules are defect-free. Therefore, a meta-active learning method is introduced to resolve this problem. First, the target dataset distribution is learned via learning active learning (LAL) from historical datasets using random forests; the regression model is learned from the imbalanced dataset with a Gaussian distribution. Finally, the model is used to calculate the loss gain of each unlabeled software module, and the sample with the maximum loss gain is labeled. In our empirical study, we conduct experiments on the AEEEM, MORPH, and NASA datasets, which are gathered from real open source projects. We first analyze the influence of different query strategies and find that LAL achieves the best performance on the three datasets when the proportion of labeled data is low. We then compare the LAL query strategy with five state-of-the-art query strategies as the initial labeled instances ratio varies over 1%, 5%, and 10%, and find that LAL achieves the best performance.
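
    The querying step described above can be sketched as follows (a simplified illustration: `ToyGainModel` is a hypothetical stand-in for the trained random-forest regressor, and the feature values are invented):

```python
import numpy as np

class ToyGainModel:
    """Hypothetical stand-in for the trained random-forest regressor: here
    the predicted loss gain is simply the first feature value."""
    def predict(self, X):
        return np.asarray(X)[:, 0]

def query_by_predicted_gain(gain_model, unlabeled_features):
    """LAL-style querying: predict, for every unlabeled software module, how
    much the classifier's loss would drop if that module were labeled, then
    query the module with the largest predicted gain."""
    gains = gain_model.predict(unlabeled_features)
    return int(np.argmax(gains))

X_unlabeled = np.array([[0.2, 1.0], [0.9, 0.1], [0.5, 0.5]])
chosen = query_by_predicted_gain(ToyGainModel(), X_unlabeled)  # index 1
```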

    Lightweight Fault Localization using Weighted Dynamic Control Flow Subgraph
    Yong Wang, SanMing Liu, Jun Li, Xiangyu Cheng, and Wan Zhou
    2020, 16(2): 214-222.  doi:10.23940/ijpe.20.02.p6.214222
    Abstract    PDF (549KB)   
    References | Related Articles

    Lightweight fault localization (LFL) techniques, which identify fault locations in a buggy program by comparing the execution statistics of program spectra between passed and failed executions, are popular automatic debugging techniques. They assume "perfect fault understanding", meaning that giving a fault location suffices for programmers to understand the root cause of a failure. However, this assumption is unrealistic in practice: many user studies have shown that programmers need some context or explanation to understand a fault before being able to recognize it. To address this issue, we propose an LFL approach to expedite software debugging. In our approach, we first perform module-level fault localization and then generate a weighted dynamic control flow subgraph (WDCFS) for a hypothesized faulty module, which weights the suspicious nodes (basic blocks) to further localize the root cause of the failure. To evaluate the effectiveness of our approach, we conduct a controlled experiment comparing two different module-level LFL methods and validate the effectiveness of WDCFS. According to our preliminary experiments, the results are promising.
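
    For readers unfamiliar with LFL scoring, a common spectrum-based suspiciousness metric is Ochiai, shown here as a generic example (the abstract does not specify which formula the paper uses):

```python
import math

def ochiai(failed_cover, failed_total, passed_cover):
    """Ochiai suspiciousness for one program element from spectra counts:
    failed_cover  - failing runs that executed the element
    failed_total  - all failing runs
    passed_cover  - passing runs that executed the element"""
    denom = math.sqrt(failed_total * (failed_cover + passed_cover))
    return failed_cover / denom if denom else 0.0

# An element executed by all 4 failing runs and 1 passing run scores higher
# than one executed by 2 failing runs and 6 passing runs.
s1 = ochiai(4, 4, 1)
s2 = ochiai(2, 4, 6)
```

    Elements are then ranked by suspiciousness for inspection; the WDCFS in the paper adds control-flow context around such ranked nodes.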

    Integrated Modeling Method of Complex Embedded System with SAVI Framework
    Ning Zhang, Yunwei Dong, and Feng Xue
    2020, 16(2): 223-237.  doi:10.23940/ijpe.20.02.p7.223237
    Abstract    PDF (1836KB)   
    References | Related Articles

    SAVI is well known for addressing cost and schedule overruns caused by the growing complexity of embedded systems. There are a number of SAVI frameworks for model-based system integration, each with different characteristics and capabilities. In order to choose an appropriate framework for complex embedded system design, modeling methods should be carefully considered. This paper presents an integrated modeling method for complex embedded systems based on the SAVI framework. To address the high abstraction and complex interoperability of models, the method adopts view-based model representation, multi-model transformation based on binary transformation combinations, and a service-oriented model bus implementation, providing a feasible way to promote the consistency and integrity of heterogeneous models. In addition, a supporting tool, ESMEAT, is implemented and applied. The proposed method is of great significance to the development and automation of complex embedded systems.

    Model-based Safety Analysis for an Aviation Software Specification
    Jun Hu, Shuo Chen, Defeng Chen, Jiexiang Kang, and Hui Wang
    2020, 16(2): 238-254.  doi:10.23940/ijpe.20.02.p8.238254
    Abstract    PDF (1247KB)   
    References | Related Articles

    Model-based safety analysis (MBSA) is a safety analysis technology that combines system fault models with formal analysis methods. In this paper, a real flight guidance subsystem (FGS) in the aviation domain is studied, and an example of safety modeling and formal analysis of a high-level software requirement specification is given. A model transformation framework is established, which can transform a high-level FGS software requirement model described in the Requirement State Machine Language (RSML-e) into a formal NuSMV model. Then, according to the real system requirements and engineering experience, the relevant failure modes and the safety properties to be verified are designed. Finally, formal safety analysis and verification based on NuSMV are implemented on the xSAP platform. This case study shows that the MBSA method can be used effectively for the safety analysis of real aviation systems.

    Survey on Methods for Automated Measurement of the Software Scale
    Jing Zhu, Song Huang, Yaqing Shi, Mingyu Chen, Jialuo Liu, and Erhu Liu
    2020, 16(2): 255-264.  doi:10.23940/ijpe.20.02.p9.255264
    Abstract    PDF (374KB)   
    References | Related Articles

    Automated measurement of software scale plays an increasingly important role in the early stage of software development. Mainstream software scale measurement methods are mainly based on function point methods, which depend heavily on manual effort. Therefore, software scale measurement based on the automated acquisition of function points is an important direction for future research. In this paper, software scale measurement is discussed, and the main algorithms based on the IFPUG function point method are introduced. These algorithms are compared and analyzed in order to summarize their advantages and disadvantages. Then, four steps for automatically obtaining function points are proposed, and the technical challenges of each step are pointed out. Finally, future research directions are given.
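
    The IFPUG method referenced above sums counts of five function types weighted by complexity. A minimal sketch of the unadjusted function point (UFP) calculation (the weight table below is the standard IFPUG one; the example counts are hypothetical):

```python
# Standard IFPUG weights: function type -> (low, average, high) complexity.
IFPUG_WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}
LEVEL = {"low": 0, "average": 1, "high": 2}

def unadjusted_function_points(counts):
    """counts: {(function_type, complexity): number}, e.g. {("EI", "low"): 3}."""
    return sum(n * IFPUG_WEIGHTS[t][LEVEL[c]] for (t, c), n in counts.items())

ufp = unadjusted_function_points({
    ("EI", "low"): 3, ("EO", "average"): 2, ("ILF", "high"): 1,
})  # 3*3 + 2*5 + 1*15 = 34
```

    The hard part, which the paper's four steps target, is deriving these counts and complexity ratings automatically rather than by hand.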

    Repeatedly Coding Inter-Packet Delay for Tracking Down Network Attacks
    Lian Yu, Lei Zhang, Cong Tan, Bei Zhao, Chen Zhang, and Lijun Liu
    2020, 16(2): 265-283.  doi:10.23940/ijpe.20.02.p10.265283
    Abstract    PDF (696KB)   
    References | Related Articles

    Attacks against Internet service provider (ISP) networks inevitably lead to huge social and economic losses. As an active traffic analysis method, network flow watermarking can effectively track attackers with high accuracy and a low false rate. Among such methods, inter-packet delay (IPD) watermarking embeds and extracts watermarks relatively easily and effectively, and it has attracted much attention. However, the performance of IPD degrades badly when networks suffer perturbations with a high packet loss rate or packet splitting. This paper provides an approach to improve the robustness of IPD by repeatedly coding the inter-packet delay (RCIPD), which smoothly handles packet splitting and merging. The paper proposes applying the Viterbi algorithm to obtain the convolutional code of a watermark, so that the impact of network perturbation on the watermark can be offset; applying the harmony schema, which controls the rhythm and embeds RCIPD bits into the network flow, to improve the invisibility of watermarking; and applying K-means to dynamically identify watermark bits whose intervals may change due to network latency. A cyclic-similarity algorithm (CSA) is designed to separate the repeated coding and eventually recover the watermark. Experiments compare RCIPD with three other schemes. The results show that the proposed approach is more robust, especially in the case of packet splitting.
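
    The basic IPD embedding idea can be illustrated with a toy threshold scheme (a deliberately simplified sketch; the paper's RCIPD additionally adds convolutional coding, harmony-based rhythm control, and K-means-based extraction):

```python
def embed_ipd(base_delays, bits, delta):
    """Toy inter-packet-delay embedding: add `delta` seconds to the delay
    for a 1 bit, leave the delay unchanged for a 0 bit."""
    return [d + (delta if b else 0.0) for d, b in zip(base_delays, bits)]

def extract_ipd(delays, threshold):
    """Recover bits by thresholding the observed inter-packet delays."""
    return [1 if d > threshold else 0 for d in delays]

base = [0.010, 0.012, 0.011, 0.010]      # seconds between successive packets
marked = embed_ipd(base, [1, 0, 1, 1], 0.020)
recovered = extract_ipd(marked, 0.020)   # [1, 0, 1, 1]
```

    Packet loss or splitting shifts which delay carries which bit, which is why the paper layers repetition and convolutional coding over this basic channel.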

    Defect Prediction of Radar System Software based on Bug Repositories and Behavior Models
    Xi Liu, Zhiyong Zhao, Haifeng Li, Chang Liu, and Shengli Wang
    2020, 16(2): 284-296.  doi:10.23940/ijpe.20.02.p11.284296
    Abstract    PDF (428KB)   
    References | Related Articles

    Software plays an important role in radar products. Software quality has become one of the key factors of radar quality. The application of defect prediction may help understand the possible distribution of defects and therefore gain confidence regarding radar software quality. With a repository of software bugs and behavior models, a defect prediction approach based on the system-theoretic accident modeling process (STAMP) is proposed for radar system software. Firstly, a radar system software control model is built based on STAMP, the bug repository, and behavior models. A Bayesian network learning model is then constructed on process control models, and a training process is conducted on bug repositories to obtain defect prediction rules. Finally, the rules are applied on targeted radar software to predict possible defects. To verify the effectiveness and applicability of the proposed approach, a case study is also given on some typical radar system software.
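
    A single edge of such a Bayesian network amounts to Bayes' rule; a minimal sketch with hypothetical numbers (not taken from the paper's bug repositories):

```python
def posterior_defect(prior, p_ev_given_defect, p_ev_given_ok):
    """Bayes' rule for one binary evidence node:
    P(defect | evidence observed)."""
    num = p_ev_given_defect * prior
    return num / (num + p_ev_given_ok * (1.0 - prior))

# Hypothetical numbers: a 10% prior defect rate; an unsafe control action
# appears in 60% of defective modules but only 5% of defect-free ones.
p = posterior_defect(0.10, 0.60, 0.05)   # roughly 0.57
```

    The learned prediction rules in the paper are, in effect, many such conditional probabilities trained from the bug repository over the STAMP control model.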

    Metamorphic Relation Generation for Physics Burnup Program Testing
    Meng Li, Lijun Wang, Shiyu Yan, and Xiaohua Yang
    2020, 16(2): 297-306.  doi:10.23940/ijpe.20.02.p12.297306
    Abstract    PDF (522KB)   
    References | Related Articles

    Due to the high complexity of physics burnup calculations, their output is extremely difficult to predict, so traditional testing methods are ineffective. Metamorphic testing (MT) is an effective method to address the oracle problem; however, the manual identification of metamorphic relations (MRs) significantly hinders its application. We propose a novel automatic MR generation framework using gene expression programming (GEP), in which the MR identification problem is converted into a symbolic expression regression problem. Specifically, the explicit MRs and the domain-related function operators are inferred by static analysis from nuclear background knowledge. Each MR is encoded as a gene expression using function operators, symbolic variables, and constants. After a series of genetic operations, we obtain the implicit MRs by decoding the optimal solution of the expression. Moreover, the correctness of each MR is evaluated according to reliability verification, logical conflict, and logical redundancy. The effectiveness is automatically verified by comparing the consistency of output from a group of burnup calculation programs.
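
    The core of metamorphic testing is checking that outputs of related inputs satisfy a relation, with no expected-output oracle. A minimal generic check using a classic textbook MR (not one of the burnup-specific MRs the framework generates):

```python
import math
import random

def check_metamorphic_relation(f, transform, inputs, tol=1e-9):
    """Generic MT check: for every source input x, the follow-up input
    transform(x) must yield the same output; returns violating inputs."""
    return [x for x in inputs if abs(f(x) - f(transform(x))) > tol]

# Classic MR: sin(x) == sin(pi - x); no expected-output oracle is needed.
random.seed(0)
xs = [random.uniform(-10.0, 10.0) for _ in range(100)]
violations = check_metamorphic_relation(math.sin, lambda x: math.pi - x, xs)
```

    The paper's contribution is generating the `transform` (and the relation between outputs) automatically with GEP, instead of writing it by hand as done here.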

    Lithium-ion Battery Performance Degradation Recognition Method based on SOC Estimation
    Jin Tao, Hao Gang
    2020, 16(2): 307-313.  doi:10.23940/ijpe.20.02.p13.307313
    Abstract    PDF (638KB)   
    References | Related Articles

    The development of ship electric propulsion technology poses new challenges for ship energy storage, intelligence, and electrification. Green new energy applications represented by lithium batteries will drive the technological innovation of green ships and smart ships. Lithium-ion batteries suffer residual capacity attenuation during cyclic charging and discharging, and the identification of performance degradation has important implications for their operation and maintenance. In this paper, state-of-charge (SOC) estimates computed from real-time data collected during lithium-ion battery cycle charging and discharging are adopted. After denoising with an outlier algorithm, the fuzzy c-means (FCM) algorithm is used to establish a performance degradation model. Finally, the change in the membership value with respect to the normal cluster center is used as the lithium-ion battery performance degradation assessment result. Results show that the method can effectively and intuitively evaluate the performance degradation of lithium-ion batteries.
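
    A generic fuzzy c-means implementation, to show where the membership values come from (a minimal sketch, not the paper's exact degradation model; the toy SOC-like data are invented):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership
    matrix U, where U[i, k] is the degree to which sample i belongs to
    cluster k (each row of U sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                    # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        inv = np.maximum(d, 1e-12) ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)      # standard FCM update
    return centers, U

# Toy SOC-like feature: a "normal" cluster near 1.0, a "degraded" one near 0.6.
X = np.array([[1.00], [0.98], [0.99], [0.62], [0.60], [0.58]])
centers, U = fuzzy_c_means(X)
```

    Tracking how a cell's membership in the "normal" cluster drops over cycles gives the degradation indicator the abstract describes.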

    An Adaptive Traffic-Aware Migration Algorithm Selection Framework in Live Migration of Multiple Virtual Machines
    Yong Cui, Liang Zhu, Zengyu Cai, and Ying Hu
    2020, 16(2): 314-324.  doi:10.23940/ijpe.20.02.p14.314324
    Abstract    PDF (367KB)   
    References | Related Articles

    In IaaS cloud computing platforms, live migration of multiple virtual machines plays a dominant role in the dynamic scheduling, optimization, and management of IT resources. Although Pre-copy and Post-copy are the prevalent live migration algorithms for a single virtual machine, each with its own pros and cons, only one of them is uniformly adopted when a group of virtual machines is migrated together. Such a scheme cannot choose the best migration algorithm for each virtual machine according to its business traffic and causes traffic contention. This paper proposes an adaptive traffic-aware live migration algorithm selection framework, which leverages a fuzzy clustering method to classify the virtual machines to be migrated according to their business traffic and migrates them with the best-fitting migration algorithms. Experimental results show that the proposed framework can effectively choose suitable migration algorithms for the classified virtual machines and eventually improve the overall live migration performance while avoiding degradation of business performance.
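
    The per-VM selection idea can be illustrated with a deliberately simplified rule (a hypothetical threshold stand-in for the paper's fuzzy-clustering classifier; the VM names, rates, and threshold are all invented):

```python
def choose_migration_algorithm(dirty_page_rate, threshold=0.5):
    """Hypothetical rule: VMs that dirty memory slowly converge well under
    Pre-copy, while write-heavy VMs finish faster under Post-copy.
    `dirty_page_rate` is the fraction of pages dirtied per copy round."""
    return "pre-copy" if dirty_page_rate < threshold else "post-copy"

traffic = {"web": 0.1, "db": 0.8, "cache": 0.45}   # invented per-VM rates
plan = {vm: choose_migration_algorithm(r) for vm, r in traffic.items()}
# {'web': 'pre-copy', 'db': 'post-copy', 'cache': 'pre-copy'}
```

    The paper replaces this crisp threshold with fuzzy clustering over traffic features, so borderline VMs are assigned by degree of membership rather than a hard cutoff.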

ISSN 0973-1318