Volume 10, No. 2
  • Editorial
    March 2014 Editorial
    KRISHNA B. MISRA
    2014, 10(2): 121.  doi:10.23940/ijpe.14.2.p121.mag
    Abstract   
    Related Articles

    This is the second issue of the year 2014, and with it the International Journal of Performability Engineering (IJPE) enters the 10th year of its publication. We often receive queries about the impact factor of IJPE from students, teachers expecting promotions, researchers, and sometimes librarians, particularly from India. For the benefit of our readers, we have explained on page 196 of this issue what an impact factor signifies. We also bring to our readers a note, on pages 234-236, on the visibility factor of IJPE, which shows how IJPE has fared since its inception in 2005. I must say here that we are not in competition with any other journal; we simply want to provide the best from the profession we are engaged in, and after reading these two notes I leave it to our readers to judge and assess the value of the contributions that this journal has made to the scientific world over the years.
    In this issue, we present nine full papers and two short communications from diverse topics and areas of performability engineering. In the first paper, the author presents a novel approach to structuring mission-critical systems with an emphasis on intrusion tolerance, recognizing that it is virtually impossible today to completely prevent intrusion attacks from penetrating mission-critical systems and that efforts are therefore being made to build intrusion tolerant systems. The second paper emphasizes the importance of the reliability of a surveillance system in enhancing the security level of a protected area: the breakdown of such a system would leave the monitored area unobserved and exposed to much higher risk of attack. The paper provides a model which considers, among other factors, the environment and the skill of the intruder for predicting the reliability of such a system.
    The third paper utilizes the powerful mathematical and statistical capabilities of Excel to propose a method of computing the reliability of multi-phase mission systems, which should prove useful to reliability engineers. The fourth paper is from the manufacturing area and illustrates the use of the robust design technique to tackle some of the challenges of job-shop manufacturing and to establish it as a formal mechanism for 'parameter design' for carrying out machining operations efficiently in job-shop type manufacturing firms. The technique is illustrated using a case study. The fifth paper suggests a special Monte Carlo algorithm based on importance sampling and extends the conventional network terminal connectivity criterion to develop a network reliability estimation methodology with unreliable nodes.
    In the sixth paper, the authors present a probability voting strategy to combine several binary Support Vector Machines (SVMs) into multi-class classifiers, which are then applied to the problem of fault diagnosis of a gearbox. The seventh paper has an application from the civil engineering area and demonstrates the use of the Least Square Support Vector Machine (LSSVM) and Multivariate Adaptive Regression Spline (MARS) for determination of the uniaxial compressive strength of Oporto granite, a key parameter for determining the deformation behaviour of a rock mass.
    The eighth paper provides a method of monitoring the coast-down time (CDT), the time elapsed between the instant the power is switched off and the instant the rotor system comes to rest. The paper demonstrates that the CDT helps detect defects in the shaft assembly. Experiments were conducted on a specifically fabricated rig, and the authors define a defect identification parameter (DIP) that is found to correlate uniquely with the unbalance and radial off-set defects in a shaft assembly. The ninth paper addresses the problem of managing availability requirements for systems that include prognostics and health management (PHM) strategies; PHM methods are incorporated into systems to avoid unanticipated failures that can potentially impact system safety and operation. Lastly, two short communications, one on Grid computing and one on a Greenhouse Gas Baseline Emission Level Reporting System, are included in this issue.
    I would like to thank all the authors who have contributed to this issue, with the assurance that, in the time to come, we will continue to present new research and various applications and aspects of performability engineering across engineering disciplines to our readers through this journal.

    Original articles
    A Novel Approach to Building Intrusion Tolerant Systems
    WENBING ZHAO
    2014, 10(2): 123-132.  doi:10.23940/ijpe.14.2.p123.mag
    Abstract    PDF (276KB)   
    Related Articles

    A novel approach of structuring mission-critical systems with an emphasis on intrusion tolerance is described. Key components in the proposed system include traffic regulation, application request processing, state protection, integrity checking, and process/node health monitoring. In particular, the separation of execution and state management enables the use of a single process to manage application requests, thereby reducing run-time overhead and enabling highly concurrent execution. Furthermore, intrusion attacks are mitigated by two means: (1) append-only state logging, so that a compromised execution node cannot corrupt state updates from other nodes; and (2) acceptance testing as a way to verify the integrity of the execution of application requests. When an attack is detected, the malformed requests that materialized the attack are quarantined, and such requests (current and future ones) are rejected.


    Received on March 08, 2013, revised on October 31, 2013
    References: 12
    A Dual-Stochastic Process Model for Surveillance Systems with the Uncertainty of Operating Environments Subject to the Incident Arrival and System Failure Processes
    YAO ZHANG and HOANG PHAM
    2014, 10(2): 133-142.  doi:10.23940/ijpe.14.2.p133.mag
    Abstract    PDF (272KB)   
    Related Articles

    Surveillance systems are widely used today to enhance the security level of protected areas. The reliability of the entire surveillance system is a critical issue, since the breakdown of such a system would leave the monitored area unobserved and exposed to much higher risk of attack. This paper presents a dual stochastic-process model for predicting the reliability of surveillance systems consisting of many subsystems (units), with consideration of the environmental factors, the skill of the intruder in avoiding detection, the intrusion/incident arrival process, and the subsystem failure process. Several numerical examples are presented to illustrate the proposed model.


    Received on April 30, 2013 and revised on August 29, 2013
    References: 17
    Multi-Phase System Reliability Analysis using Excel
    P. S. SARMA BUDHAVARAPU, S. K. CHATURVEDI, DAMODAR GARG, and SUDHANGSHU CHAKRAVORTY
    2014, 10(2): 143-154.  doi:10.23940/ijpe.14.2.p143.mag
    Abstract    PDF (233KB)   
    Related Articles

    This paper proposes an Excel-based algorithm for evaluating phased mission systems. The algorithm estimates the system reliability for a given target operating period and has been developed using a hybrid methodology that utilizes the powerful mathematical and statistical capabilities of Excel. In the proposed approach, the phase-wise functional Reliability Block Diagram (RBD) of the system under investigation is first transformed into a table in an Excel spreadsheet, where each cell corresponds to a specific connection in the RBD. The failure distributions of the components are also taken as input through the spreadsheet, and Excel's macro feature enables automatic execution of the phase-algebra based calculations. The analysis time for a given system depends on its complexity, the computer configuration, and the accuracy desired, and may vary from a few seconds to a few minutes. The algorithm has been successfully applied to several examples taken from the literature.
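    The phase-algebra idea behind such tools, that a component's state carries over from one phase to the next while the success logic changes per phase, can be sketched with a crude Monte Carlo check. The two-phase mission, component names, and failure rates below are invented for illustration and are not taken from the paper, which uses an Excel/RBD workflow rather than code.

    ```python
    import random

    # Toy two-phase mission (hypothetical): components A and B with assumed
    # exponential lifetimes; phase 1 (0-10 h) needs A AND B in series,
    # phase 2 (10-25 h) needs A OR B in parallel. A component's state
    # carries across phases, which is what phase algebra accounts for.
    RATE = {"A": 0.01, "B": 0.02}          # assumed failure rates per hour
    PHASES = [(10.0, all), (25.0, any)]    # (phase end time, success logic)

    def mission_success(lifetimes):
        """A phase succeeds if its logic holds for components alive at phase end."""
        for end, logic in PHASES:
            if not logic(lifetimes[c] > end for c in lifetimes):
                return False
        return True

    def mc_mission_reliability(trials=50000, seed=7):
        """Crude Monte Carlo estimate of whole-mission reliability."""
        random.seed(seed)
        ok = 0
        for _ in range(trials):
            lifetimes = {c: random.expovariate(r) for c, r in RATE.items()}
            ok += mission_success(lifetimes)
        return ok / trials

    print(round(mc_mission_reliability(), 3))
    ```

    A spreadsheet implementation replaces the sampling loop with phase-algebra formulas over the failure distributions, but the mission-success logic is the same.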


    Received on April 01, 2013, revised on September 20, 2013
    References: 17
    Use of Robust Design Technique in Job Shop Manufacturing: A Case Study of Die-Sinking Electro Discharge Machining
    R. M. CHANDIMA RATNAYAKE and I. VALBO
    2014, 10(2): 155-162.  doi:10.23940/ijpe.14.2.p155.mag
    Abstract    PDF (274KB)   
    Related Articles

    Job shops are typically small manufacturing businesses handling job production; in general, they move on to a different job when each job is completed. The nature of job-shop manufacturing means that it usually requires different skills, expert knowledge, machine settings, materials, and processes for each job. In this context, when the die-sinking electro discharge machining (EDM) process is used in a job shop, the settings of its several parameters have to be pre-determined to achieve optimized manufacturing and quality performance. This can be accomplished via the 'parameter design' approach suggested in the robust design technique (RDT), which focuses on designing a process so that its performance is minimally sensitive to the various causes of variation. This manuscript illustrates the use of RDT in optimizing the performance of die-sinking EDM and verifies the reliability of the approach using a verification experiment.
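    As a hedged illustration of what 'parameter design' optimizes, the sketch below computes Taguchi-style signal-to-noise (S/N) ratios for a smaller-the-better response and picks, for each factor, the level with the highest mean S/N. The factors (discharge current, pulse duration), levels, and response values are hypothetical, not data from the case study.

    ```python
    from math import log10

    # Hypothetical 2-factor, 2-level experiment with two replicates per run;
    # "y" is a smaller-the-better response such as surface roughness.
    runs = [
        {"current": 1, "pulse": 1, "y": [3.2, 3.4]},
        {"current": 1, "pulse": 2, "y": [2.8, 3.0]},
        {"current": 2, "pulse": 1, "y": [4.1, 4.3]},
        {"current": 2, "pulse": 2, "y": [3.6, 3.9]},
    ]

    def sn_smaller_better(ys):
        """Taguchi smaller-the-better S/N ratio: -10 * log10(mean(y^2))."""
        return -10.0 * log10(sum(y * y for y in ys) / len(ys))

    def mean_sn_by_level(factor):
        """Average the per-run S/N ratios over each level of one factor."""
        levels = {}
        for run in runs:
            levels.setdefault(run[factor], []).append(sn_smaller_better(run["y"]))
        return {lvl: sum(v) / len(v) for lvl, v in levels.items()}

    def best_level(factor):
        sn = mean_sn_by_level(factor)
        return max(sn, key=sn.get)

    best = {f: best_level(f) for f in ("current", "pulse")}
    print(best)  # the level combination least sensitive to noise
    ```

    In a real parameter-design study the chosen settings would then be checked with a verification experiment, as the paper does.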


    Received on April 05, 2013, revised on November 07, 2013
    References: 12
    Network Reliability Monte Carlo With Nodes Subject to Failure
    ILYA GERTSBAKH, YOSEPH SHPUNGIN, and R. VAISMAN
    2014, 10(2): 163-172.  doi:10.23940/ijpe.14.2.p163.mag
    Abstract    PDF (249KB)   
    Related Articles

    We extend the network reliability estimation methodology based on evolution (creation) Monte Carlo in four directions: (i) introducing unreliable nodes; (ii) adjusting the evolution process with merging to a "closure" operation suitable for unreliable nodes; (iii) suggesting a special Monte Carlo algorithm based on importance sampling for cases of numerical instability in computing convolutions; and (iv) extending the traditional network terminal connectivity criterion to criteria describing network disintegration into a critical number of clusters, or the critical size of the largest component.
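    For readers unfamiliar with the baseline being improved upon, a crude (non-importance-sampled) Monte Carlo estimate of terminal connectivity with unreliable nodes can be sketched as follows. The 5-node bridge network and the reliability values are invented for illustration; the authors' evolution/merging and importance-sampling machinery is far more efficient than this naive version, especially for highly reliable networks.

    ```python
    import random
    from collections import deque

    # Hypothetical 5-node bridge network; terminals are nodes 0 and 4.
    EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
    P_NODE = 0.95   # assumed node reliability
    P_EDGE = 0.90   # assumed edge reliability

    def terminals_connected(up_nodes, up_edges, s=0, t=4):
        """BFS from s to t using only surviving nodes and edges."""
        if s not in up_nodes or t not in up_nodes:
            return False
        adj = {n: [] for n in up_nodes}
        for u, v in up_edges:
            if u in up_nodes and v in up_nodes:
                adj[u].append(v)
                adj[v].append(u)
        seen, queue = {s}, deque([s])
        while queue:
            n = queue.popleft()
            if n == t:
                return True
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        return False

    def crude_mc_reliability(trials=20000, seed=1):
        """Sample node and edge states independently; count connected trials."""
        random.seed(seed)
        hits = 0
        for _ in range(trials):
            up_nodes = {n for n in range(5) if random.random() < P_NODE}
            up_edges = [e for e in EDGES if random.random() < P_EDGE]
            hits += terminals_connected(up_nodes, up_edges)
        return hits / trials

    print(round(crude_mc_reliability(), 3))
    ```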


    Received on June 04, 2013, revised on November 15, 2013
    References: 11
    A Novel Multi-class Support Vector Machines Using Probability Voting Strategy and Its Application on Fault Diagnosis of Gearbox
    DEHUI WU, CHAO LI, JUN CHEN, DEHAI YOU, and XIAOHAO XIA
    2014, 10(2): 173-186.  doi:10.23940/ijpe.14.2.p173.mag
    Abstract    PDF (286KB)   
    Related Articles

    In the divide-and-combine approach, a multi-class support vector machine (SVM) problem is divided into several binary SVMs, and the binary SVMs are then combined to obtain a multi-class classifier. In this paper, a new probability voting strategy is presented for combining several binary SVMs. The method not only resolves the unclassifiable regions that exist under conventional strategies but also yields decision results that satisfy a probability distribution. Firstly, the two most common combining strategies, MaxWins and FSVM, are discussed and their performances compared through the posterior probability distribution. Secondly, an estimate function for the prior probability in a binary classification problem is defined, and an adjustment function satisfying the prior probability is normalized to the range 0~1. Thirdly, a novel probability voting strategy is derived by combining conventional voting with the adjustment function. Finally, 5-class SVM-based fault diagnosis models for a gearbox, using MaxWins majority voting, FSVM, and the presented strategy respectively, are tested. All the tests and data indicate that the multi-class SVM combined by the probability voting strategy is more reliable and robust, and is suitable for fault diagnosis of gearboxes.
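    To make the 'unclassifiable region' concrete: under MaxWins, each pairwise classifier casts one hard vote, and cyclic pairwise decisions can leave several classes tied. A probability-weighted vote breaks such ties by classifier confidence. The pairwise scores and the logistic margin-to-probability mapping below are illustrative assumptions, not the estimate and adjustment functions defined in the paper.

    ```python
    from math import exp

    # Hypothetical pairwise decision scores for a 3-class problem:
    # score[(i, j)] > 0 means the (i, j) binary classifier favors class i.
    # These particular scores are cyclic, so MaxWins ties all three classes.
    pairwise_scores = {(0, 1): 0.8, (0, 2): -0.3, (1, 2): 0.6}

    def maxwins(scores, n_classes=3):
        """Conventional MaxWins: each binary classifier casts one hard vote."""
        votes = [0] * n_classes
        for (i, j), s in scores.items():
            votes[i if s > 0 else j] += 1
        return votes

    def probability_voting(scores, n_classes=3):
        """Soft variant: each vote is weighted by a probability estimated
        from the decision margin, so ties are broken by confidence."""
        votes = [0.0] * n_classes
        for (i, j), s in scores.items():
            p = 1.0 / (1.0 + exp(-s))  # crude margin-to-probability mapping
            votes[i] += p
            votes[j] += 1.0 - p
        return max(range(n_classes), key=votes.__getitem__)

    print(maxwins(pairwise_scores))             # one hard vote per class: a tie
    print(probability_voting(pairwise_scores))  # soft votes single out a winner
    ```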


    Received on April 11, 2013, revised on July 30, and on November 28, 2013
    References: 20
    Machine Learning Techniques Applied to Uniaxial Compressive Strength of Oporto Granite
    MANOJ KUMAR, BHAIREVI. G. AIYER, and PIJUSH SAMUI
    2014, 10(2): 189-195.  doi:10.23940/ijpe.14.2.p189.mag
    Abstract    PDF (126KB)   
    Related Articles

    This article employs two machine learning techniques, viz., the Least Square Support Vector Machine (LSSVM) and the Multivariate Adaptive Regression Spline (MARS), for determination of the uniaxial compressive strength (σc) of Oporto granite. LSSVM uses a quadratic cost function; MARS is a nonparametric regression technique. Free porosity (N48), dry bulk density (d), and ultrasonic velocity (v) have been used as inputs of the LSSVM and MARS models, and the output of both is σc. The developed LSSVM and MARS models give equations for prediction of σc. A comparative study has been carried out between the developed LSSVM and MARS models and Support Vector Machine (SVM) and Artificial Neural Network (ANN) models. The results show that the developed LSSVM and MARS models are efficient tools for determination of σc of Oporto granite.


    Received on March 21, 2013, revised on April 12 and November 07, 2013
    References: 21
    Coast-Down Time Monitoring for Defect Detection in Rotating Equipment
    PIYUSH GUPTA and O. P. GANDHI
    2014, 10(2): 197-210.  doi:10.23940/ijpe.14.2.p197.mag
    Abstract    PDF (838KB)   
    Related Articles

    Rotating equipment, under the action of dynamic forces, is prone to defects such as misalignment, unbalance, change of rotor slope, skewed bearings, etc. These defects, if ignored for prolonged periods, can cause sudden outages that may have serious consequences. Therefore, application of an appropriate condition monitoring technique is desirable to assess the health of the equipment and plan its maintenance. In this paper, monitoring of the coast-down time (CDT) is undertaken to meet this objective. The CDT is the time elapsed between the instant the power is switched off and the instant the rotor system comes to rest. The work demonstrates that the CDT does detect defects in the shaft assembly. Experiments were conducted on a specifically fabricated rig. The results revealed that the speed decay pattern followed a second-order fit of percentage speed reduction as a function of time. A defect identification parameter (DIP) is defined as the ratio of the polynomial coefficients of the first- and second-order terms. The DIP values were found to correlate uniquely with the unbalance and radial off-set defects in the shaft assembly.
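    The DIP definition lends itself to a simple numeric sketch: fit the percentage speed reduction with a quadratic through the origin and take the ratio of the first- to second-order coefficients. The coast-down record below is synthetic, generated from assumed coefficients; it demonstrates only the arithmetic, not the rig data.

    ```python
    # Illustrative sketch (not the authors' data): percentage speed reduction
    # modeled as r(t) = a1*t + a2*t^2, with DIP = a1 / a2 per the abstract's
    # definition (ratio of first- to second-order polynomial coefficients).

    def fit_quadratic_through_origin(samples):
        """Least-squares fit of r(t) = a1*t + a2*t^2 (r(0) = 0 at switch-off),
        solved via the 2x2 normal equations."""
        s2 = sum(t * t for t, _ in samples)
        s3 = sum(t ** 3 for t, _ in samples)
        s4 = sum(t ** 4 for t, _ in samples)
        b1 = sum(t * r for t, r in samples)
        b2 = sum(t * t * r for t, r in samples)
        det = s2 * s4 - s3 * s3
        a1 = (b1 * s4 - b2 * s3) / det
        a2 = (s2 * b2 - s3 * b1) / det
        return a1, a2

    # Synthetic coast-down record: r(t) = 4.0*t + 0.5*t^2 (assumed numbers).
    samples = [(t / 2, 4.0 * (t / 2) + 0.5 * (t / 2) ** 2) for t in range(1, 11)]
    a1, a2 = fit_quadratic_through_origin(samples)
    dip = a1 / a2
    print(round(dip, 3))  # recovers the assumed ratio 4.0 / 0.5 = 8.0
    ```

    On real measurements the fitted coefficients absorb noise, and it is the shift in DIP between healthy and defective runs that carries the diagnostic information.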


    Received on March 10, 2013, revised on July 22, and November 11, 2013
    References: 21
    A Direct Method for Determining Design and Support Parameters to Meet an Availability Requirement
    T. JAZOULI, P. SANDBORN, and A. KASHANI-POUR
    2014, 10(2): 211-225.  doi:10.23940/ijpe.14.2.p211.mag
    Abstract    PDF (280KB)   
    Related Articles

    Discrete event simulation is usually the preferred approach for modeling and predicting the life-cycle characteristics (cost and availability) of large populations of complex real systems managed over long periods of time with significant uncertainties. However, while using discrete event simulation to predict the availability of a system or a population of systems from known or predicted design and support parameters is relatively straightforward, determining the design and support parameters that result in a desired availability is generally performed using search-based methods, which can become impractical for systems with more than a few variables or when significant uncertainties are present. This paper presents a direct method that is not search based: it uses an availability requirement to predict the required logistics, design, and operation parameters using discrete event simulation in a time-forward direction. The paper also addresses managing availability requirements for systems that include prognostics and health management (PHM) strategies. PHM methods are incorporated into systems to avoid unanticipated failures that can potentially impact system safety and operation, result in additional life-cycle cost, and/or adversely affect system availability.


    Received on February 27, 2013, revised on November 21, and 26, 2013
    References: 29
    Short Communications
    Multi-Performance Optimization for MAS Based Grid Computing
    SHENGJI YU, YANPING XIANG, ZONGYI XU, SA MENG, and QIANG LI
    2014, 10(2): 226-229.  doi:10.23940/ijpe.14.2.p226.mag
    Abstract    PDF (75KB)   
    Related Articles

    The challenge of multi-performance optimization has been extensively addressed in the literature on the basis of deterministic parameters. In Grid Computing platforms, since resources are geographically separated and heterogeneous, it is rather difficult to apply a uniform distribution algorithm to achieve various optimization goals. This paper proposes a multi-agent system (MAS) based approach for optimal network resource distribution that satisfies the requirements of both users and service providers. Moreover, agent communication is discussed and a simulation is described.


    Received on August 22, 2013; revised on September 13, 2013
    References: 3
    A Greenhouse Gas Baseline Emission Level Reporting System
    XIUQUAN WANG and GUOHE HUANG
    2014, 10(2): 230-234.  doi:10.23940/ijpe.14.2.p230.mag
    Abstract    PDF (190KB)   
    Related Articles

    A web-based reporting system is developed in this study to serve the purpose of establishing facility-based greenhouse gas (GHG) baseline emission levels (BEL). The system, called BELDBS, integrates annual emissions reporting, base year selection, third-party verification, and BEL application submission and review processes into a general framework. It is equipped with ancillary functions to facilitate the management of BEL reports, such as searching, exporting, map viewing, etc. The BELDBS has been customized for the Saskatchewan Ministry of Environment to support the determination of reduction targets for all regulated facilities.

Online ISSN 2993-8341
Print ISSN 0973-1318