Volume 9, No 2
  
    Editorial
    K. B. Misra
    2013, 9(2): 121.  doi:10.23940/ijpe.13.2.p121.mag
    Abstract   

    As usual, in this first issue of the ninth year of publication of the International Journal of Performability Engineering, we bring to our readers 10 new papers and one short communication, which together provide several new techniques and applications in the emerging areas of performability engineering.

    The first paper of this issue provides a new sampling plan for censored life testing when the lifetime follows a Weibull distribution, whereas the next paper proposes a new approach to software performability that considers the collusion status during the binding phase of open source software in an open source solution; this is a new paradigm for reducing cost and achieving quick delivery. The next paper provides an important tool for assessing the condition of a power transformer using Dissolved Gas Analysis and Partial Discharge measurement. This technique would help predict incipient breakdown of the power transformer insulation well in advance and thereby avoid failure of the transformer. The next paper is on energy-saving strategies for a building with a glazed façade exposed to the sun. This study can help evolve environmentally favourable and sustainable solutions for energy saving in buildings, particularly in the tropical zone. The next paper of the issue compares the efficacy of two pattern recognition approaches for fault detection, viz., Artificial Neural Networks (ANN) and Support Vector Machines (SVM), as such a comparison is crucial for developing an efficient approach to fault diagnosis.

    The sixth paper of this issue discusses the effect of network size, transmission range, and network coverage area on the reliability measures of MANETs by modeling them as geometric random graphs. The reliability of these ad hoc networks is a most challenging and interesting area because of their changing topology, particularly during emergency situations such as human-induced disasters, military conflicts, emergency operations, and commercial applications. The next paper discusses advances in clustering: clustering is a very important business strategy, as propagated originally by Porter, and is extended in this paper by the authors to enhance a firm’s performability; the paper demonstrates the usefulness of clustering in the case of Samsung Electric Corporation of South Korea. The next paper describes a four-quadrant framework for characterizing print shops in the printing industry based on workflow complexity and utilization of resources. The formalism of heavy-tailed distributions is used to characterize the extreme levels of variability observed in job size, and methods of quantifying and characterizing this variability are presented. The implementation of the LDP (Lean Document Production) solution in a large transaction shop is presented and shown to deliver substantial productivity benefits. In the next paper, a new application of FMEA is made to an off-shore floating desalination plant run by an autonomous variable power source such as a wind generator; this is a typical application paper. The last paper of the issue is also an application paper, one that optimizes the system availability of a coal handling plant using genetic algorithms.

    In addition to the above papers, we also present a short communication that offers a new approach for evaluating the reliability of a 1-out-of-(n+1) warm standby system subject to fault-level coverage. It is sincerely hoped that this issue will generate considerable interest among our readers, who are looking for new ideas and applications in the field of performability engineering.

    Besides these papers, we have also included in this issue two important announcements: the first is about a special issue of IJPE on Performance of Space Vehicles being guest-edited by Dr. William E. Vesely of NASA Headquarters, who is well known for his contributions to reliability engineering, and the other is about the start of a book series on Performability Engineering, for which four books from authors in various countries have already been contracted, with more in the pipeline.


    Original articles
    On the Role of Weibull-type Distributions in NHPP-based Software Reliability Modeling
    XIAO XIAO and TADASHI DOHI
    2013, 9(2): 123-132.  doi:10.23940/ijpe.13.2.p123.mag
    Abstract    PDF (131KB)   

    Non-homogeneous Poisson process (NHPP)-based software reliability models (SRMs) have gained much popularity in actual software testing phases for assessing software reliability, the number of remaining software faults, and software release scheduling. It is well known that the Weibull distribution plays an important role in reliability applications because of its flexibility in representing various patterns of failure rate functions. In this paper, we introduce some recent generalizations of the Weibull distribution to represent the underlying software fault-detection time distribution of NHPP-based SRMs. We study the effectiveness of Weibull-type distributions in software reliability modeling through goodness-of-fit tests and prediction analysis.
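
    To make the modeling structure concrete, the following minimal sketch (not taken from the paper) shows a finite-failure NHPP whose fault-detection time follows a two-parameter Weibull distribution; the parameter values are hypothetical, and the recent Weibull-type generalizations studied in the paper are not reproduced here.

        import numpy as np

        def weibull_cdf(t, scale, shape):
            # CDF of the two-parameter Weibull fault-detection time distribution.
            return 1.0 - np.exp(-(t / scale) ** shape)

        def mean_value_function(t, omega, scale, shape):
            # Expected cumulative number of faults detected by time t:
            # m(t) = omega * F(t), the classical finite-failure NHPP form.
            return omega * weibull_cdf(t, scale, shape)

        def intensity(t, omega, scale, shape):
            # Fault-detection intensity lambda(t) = d m(t) / dt.
            return omega * (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

        # Hypothetical parameters: omega = expected total faults; scale, shape of the Weibull law.
        omega, scale, shape = 120.0, 40.0, 1.5
        for t in (10, 40, 80):
            m = mean_value_function(t, omega, scale, shape)
            print(f"t = {t:3d} weeks: detected ~ {m:5.1f}, remaining ~ {omega - m:5.1f}")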


    Received on February 27, 2012, revised on January 04, 2013
    References: 7
    On Selection of Importance Measures in Risk and Reliability Analysis
    THOR ERIK NØKLAND and TERJE AVEN
    2013, 9(2): 133-147.  doi:10.23940/ijpe.13.2.p133.mag
    Abstract    PDF (204KB)   

    In risk and reliability analysis a number of importance measures are used, including traditional measures such as Birnbaum’s measure, the improvement potential and the risk achievement worth, as well as uncertainty importance measures reflecting how uncertainties at the component level influence uncertainties at the system level. Two examples of the latter type of measure are the correlation coefficient and the change in the variance of the system reliability when the uncertainties in the component reliabilities are ignored. In practice it is a challenge to select the appropriate importance measure for different applications of risk and reliability analysis. The purpose of the present paper is to provide a guideline for this selection. A structure for the guidance is established based on three areas of application: design, operation and maintenance (testing). For each of these areas some specific types of importance measures are recommended, covering both traditional measures and uncertainty importance measures. An example is presented to show the applicability of the guideline.
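
    The traditional measures named above can be illustrated with a small sketch. The structure function and component reliabilities below are hypothetical (they are not the paper's example); the sketch only shows how Birnbaum's measure, the improvement potential and the risk achievement worth are computed from a system reliability function.

        def system_reliability(p):
            # Hypothetical structure: component 1 in series with a parallel pair (2, 3).
            p1, p2, p3 = p
            return p1 * (1.0 - (1.0 - p2) * (1.0 - p3))

        def importance_measures(p, i):
            # Birnbaum's measure, improvement potential and risk achievement worth for component i.
            base = system_reliability(p)
            up = list(p); up[i] = 1.0      # component i assumed to function
            down = list(p); down[i] = 0.0  # component i assumed failed
            birnbaum = system_reliability(up) - system_reliability(down)
            improvement_potential = system_reliability(up) - base
            raw = (1.0 - system_reliability(down)) / (1.0 - base)
            return birnbaum, improvement_potential, raw

        p = [0.95, 0.90, 0.85]  # hypothetical component reliabilities
        for i in range(3):
            b, ip, raw = importance_measures(p, i)
            print(f"component {i + 1}: I_B = {b:.4f}  IP = {ip:.4f}  RAW = {raw:.2f}")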


    Received on March 06, 2012, revised on January 05, 2013
    References: 24
    An Interface for Enhancing Repeatability in Human Reliability Analysis
    P. BARALDI, A. SOGARO, M. KONSTANDINIDOU, and Z. NIVOLIANITOU
    2013, 9(2): 149-161.  doi:10.23940/ijpe.13.2.p149.mag
    Abstract    PDF (239KB)   

    In Human Reliability Analysis (HRA) methods, the estimation of Human Error Probabilities (HEPs) usually requires the assessment of the Performance Shaping Factors (PSFs) characterizing the contextual scenario in which the tasks are performed. The objective of this work is to develop a visual interface to help safety analysts in the PSF assessment. Aiming at increased repeatability of the PSF assessment, the proposed methodology is based on the use of anchor points that represent prototype conditions of the PSFs. Furthermore, a detailed description of the anchor points in terms of sub-items characterizing the PSFs is adopted to facilitate the assessment and make the whole process more transparent. The interface is proposed for a previously developed tool for the estimation of HEPs based on the combination of the Cognitive Reliability And Error Analysis Method (CREAM) with fuzzy logic principles.


    Received on February 15, 2012, revised on December 04, 2012, and January 14, 2013
    References: 15
    Proportional Intensity Model considering Imperfect Repair for Repairable Systems
    YUAN FUQING and UDAY KUMAR
    2013, 9(2): 163-174.  doi:10.23940/ijpe.13.2.p163.mag
    Abstract    PDF (521KB)   

    The Proportional Intensity Model (PIM) extends the classical Proportional Hazard Model (PHM) in order to deal with repairable systems. This paper develops a more general PIM that uses an imperfect repair model as its baseline function. By using the imperfect repair model, the effectiveness of repair is taken into account without assuming an “as-bad-as-old” or an “as-good-as-new” scheme. Moreover, the effect of other factors, such as the environmental conditions and the repair history, is considered through covariates in this PIM. To deal with the estimation of the large number of parameters, a Bayesian inference method is proposed, and the Markov Chain Monte Carlo (MCMC) method is used to compute the posterior distribution. The Bayesian Information Criterion (BIC) is employed to perform model selection, namely, selecting the baseline function and removing nuisance factors. Finally, a numerical example is provided to demonstrate the proposed model and method.
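
    For orientation, the sketch below evaluates a proportional intensity of the Cox type with a power-law baseline and two covariates under assumed parameter values; it does not reproduce the paper's imperfect-repair baseline or the Bayesian/MCMC estimation.

        import numpy as np

        def pim_intensity(t, z, beta, alpha, theta):
            # Proportional intensity: lambda(t | z) = lambda_0(t) * exp(beta . z),
            # here with a power-law baseline lambda_0(t) = (alpha / theta) * (t / theta) ** (alpha - 1).
            baseline = (alpha / theta) * (t / theta) ** (alpha - 1.0)
            return baseline * np.exp(np.dot(beta, z))

        # Hypothetical covariates (e.g., an environment indicator and a repair-history indicator).
        beta = np.array([0.4, -0.2])
        z = np.array([1.0, 0.0])
        for t in (100.0, 500.0, 1000.0):
            lam = pim_intensity(t, z, beta, alpha=1.3, theta=800.0)
            print(f"t = {t:6.0f} h  intensity ~ {lam:.6f} failures/h")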


    Received on February 25, 2012, revised on January 07, 2013
    References: 22
    Sustainable Designs of Products and Systems: A Possibility
    KRISHNA B. MISRA
    2013, 9(2): 175-190.  doi:10.23940/ijpe.13.2.p175.mag
    Abstract    PDF (292KB)   

    The performance of systems, products and services has been a concern of designers and of operations and maintenance engineers since the beginning of the past century. This concern has been addressed over the last century by devising several performance attributes, such as quality, reliability, maintainability and safety/risk, which together can be called attributes of dependability. This was indeed required in order to promote economic and efficient utilization of resources and to optimize performance. But with environmental concerns becoming all-pervading, the emphasis has to shift to developing products, systems and services that are not only dependable but sustainable as well.
    The aim of this concept paper is to propose a procedure for designing products, systems and services based on an overall index of performability, so that it can be used as a criterion for developing future designs.


    Received on April 03, 2012, and revised on January 16, 2013
    References: 14
    Optimal Unlimited Free-Replacement Warranty Strategy using Reconditioned Products
    NAVIN CHARI, CLAVER DIALLO, and UDAY VENKATADRI
    2013, 9(2): 191-200.  doi:10.23940/ijpe.13.2.p191.mag
    Abstract    PDF (282KB)   

    The long-term sustainability of our resources depends on reducing the consumption of virgin resources, and one method of achieving this goal is product remanufacturing. It is well established that producing a remanufactured product costs less than creating a new one; however, the effects of this choice on warranty costs must also be considered. Due to consumer perceptions of the quality of remanufactured products, they cannot be sold for the same price as new products, and the warranty costs the manufacturer incurs will also be higher. This paper proposes a mathematical model for the optimal one-dimensional unlimited free-replacement warranty policy in which replacements are carried out with reconditioned products. Numerical optimization is used to compute the optimal warranty and production parameters which maximize the total profit.
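
    One building block of such warranty models is the expected number of free replacements within the warranty period. The following Monte Carlo sketch estimates this quantity when the item sold is new and every in-warranty replacement is a reconditioned item; the Weibull lifetime parameters are hypothetical, and the paper's full cost and pricing model is not reproduced.

        import math
        import random

        def weibull_sample(scale, shape):
            # Draw a Weibull lifetime by inverse-transform sampling.
            return scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)

        def expected_replacements(w, new, recond, n_sims=50_000):
            # Monte Carlo estimate of the expected number of free replacements within
            # a warranty period w when failed items are replaced by reconditioned ones.
            total = 0
            for _ in range(n_sims):
                t = weibull_sample(*new)            # first failure of the new product
                while t <= w:
                    total += 1                      # free replacement with a reconditioned item
                    t += weibull_sample(*recond)
            return total / n_sims

        # Hypothetical (scale in years, shape) pairs for new and reconditioned items.
        new, recond = (3.0, 2.0), (2.0, 2.0)
        for w in (0.5, 1.0, 2.0):
            print(f"warranty {w:.1f} yr: expected replacements ~ {expected_replacements(w, new, recond):.3f}")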


    Received on June 20, 2012, revised on January 14, 2013
    References: 21
    A Novel Approach for Analyzing the Behavior of Industrial Systems Using Uncertain Data
    MONICA RANI, S.P. SHARMA, and HARISH GARG
    2013, 9(2): 201-210.  doi:10.23940/ijpe.13.2.p201.mag
    Abstract    PDF (213KB)   

    In some practical cases, it is not easy to analyze the behavior of a complex repairable industrial system up to the desired degree of accuracy because the data collected from various resources (historical/present records) are vague, imprecise and uncertain; even when such an analysis can be done, the results carry a high degree of uncertainty. In order to reduce this uncertainty and to help experts/decision makers reach sounder decisions from the available information, this paper presents a hybridized technique, namely artificial bee colony based Lambda-Tau (ABCBLT), for analyzing the stochastic behavior of complex repairable systems. To strengthen the analysis, various reliability indices, namely the system's failure rate, repair time, mean time between failures, reliability, availability and maintainability, are obtained for time-varying rather than constant failure rates using the Lambda-Tau methodology, and artificial bee colony (ABC) optimization is used to construct their membership functions using ordinary arithmetic rather than fuzzy arithmetic operations. A case study of the bleaching unit of a paper mill situated in the northern part of India, producing approximately 200 tons of paper per day, is considered to demonstrate the proposed approach. The behavior analysis results computed by the ABCBLT technique have a reduced region of prediction in comparison with the existing technique, i.e., the uncertainties involved in the analysis are reduced.
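
    The crisp gate expressions underlying the Lambda-Tau methodology can be sketched as follows; the component data are hypothetical, and the fuzzification and ABC-based construction of membership functions described in the paper are not reproduced.

        def or_gate(l1, t1, l2, t2):
            # Lambda-Tau expressions for an OR gate (series behaviour).
            lam = l1 + l2
            tau = (l1 * t1 + l2 * t2) / lam
            return lam, tau

        def and_gate(l1, t1, l2, t2):
            # Lambda-Tau expressions for an AND gate (redundant pair).
            lam = l1 * l2 * (t1 + t2)
            tau = (t1 * t2) / (t1 + t2)
            return lam, tau

        # Hypothetical component data: failure rates (per hour) and repair times (hours).
        lam_a, tau_a = 2e-3, 4.0
        lam_b, tau_b = 3e-3, 2.0
        lam_c, tau_c = 1e-3, 6.0

        lam_p, tau_p = and_gate(lam_a, tau_a, lam_b, tau_b)  # redundant pair A || B
        lam_s, tau_s = or_gate(lam_p, tau_p, lam_c, tau_c)   # in series with C

        availability = 1.0 / (1.0 + lam_s * tau_s)           # steady-state approximation
        print(f"system lambda = {lam_s:.6f}/h, tau = {tau_s:.2f} h, availability ~ {availability:.5f}")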


    Received on February 06, 2012 and revised on September 10, 2012
    References: 25
    Availability Estimation of a Cooling Tower Using GSPN
    G. THANGAMANI
    2013, 9(2): 211-220.  doi:10.23940/ijpe.13.2.p211.mag
    Abstract    PDF (178KB)   

    This paper deals with the availability analysis of a cooling tower used in an air conditioning system. The system is modeled as a Generalized Stochastic Petri Net (GSPN) and analyzed using the Monte Carlo simulation method, and the superiority of this approach over others is demonstrated. The proposed GSPN approach is a promising tool that can conveniently be used to model and analyze complex systems and to estimate the reliability measures of process plants.
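
    As a simple illustration of the Monte Carlo part of the analysis, the sketch below estimates the interval availability of a single repairable unit with exponential failure and repair times; it is a stand-in for, not a reproduction of, the GSPN model of the cooling tower.

        import random

        def simulate_availability(failure_rate, repair_rate, horizon, n_runs=2000):
            # Monte Carlo estimate of the interval availability of one repairable unit
            # alternating between up (exponential failures) and down (exponential repairs).
            total_up_fraction = 0.0
            for _ in range(n_runs):
                t, up_time, is_up = 0.0, 0.0, True
                while t < horizon:
                    rate = failure_rate if is_up else repair_rate
                    dwell = min(random.expovariate(rate), horizon - t)
                    if is_up:
                        up_time += dwell
                    t += dwell
                    is_up = not is_up
                total_up_fraction += up_time / horizon
            return total_up_fraction / n_runs

        # Hypothetical rates (per hour) and mission time (hours).
        print(f"estimated availability ~ {simulate_availability(1e-3, 0.1, horizon=5000.0):.4f}")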


    Received on March 12, 2012, revised on January 10, 2013
    References: 13
    Fault Diagnosis of Helical Gear Box using Decision Tree through Vibration Signals
    V. SUGUMARAN, DEEPAK JAIN, M. AMARNATH, and HEMANTHA KUMAR
    2013, 9(2): 221-234.  doi:10.23940/ijpe.13.2.p221.mag
    Abstract    PDF (240KB)   

    This paper uses vibration signals acquired from gears in good and simulated faulty conditions for the purpose of fault diagnosis through a machine learning approach. Descriptive statistical features were extracted from the vibration signals, and the important ones were selected using a decision tree (dimensionality reduction). The selected features were then used for classification with the J48 decision tree algorithm. The paper also discusses the effect of various parameters on classification accuracy.
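
    A minimal sketch of the classification step is given below. The paper uses Weka's J48 (C4.5) decision tree on statistical features of measured vibration signals; here a scikit-learn decision tree and synthetic "signals" are used instead, purely for illustration.

        import numpy as np
        from scipy import stats
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)

        def statistical_features(signal):
            # Descriptive statistics of one vibration-signal segment.
            return [signal.mean(), signal.std(), stats.skew(signal), stats.kurtosis(signal),
                    np.abs(signal).max(), np.sqrt(np.mean(signal ** 2))]

        # Synthetic "good" vs "faulty" gear signals (faulty: higher variance plus impulses).
        X, y = [], []
        for label, scale in (("good", 1.0), ("faulty", 1.8)):
            for _ in range(100):
                sig = rng.normal(0.0, scale, size=1024)
                if label == "faulty":
                    sig[rng.integers(0, 1024, size=8)] += 6.0  # simulated gear-tooth impacts
                X.append(statistical_features(sig))
                y.append(label)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
        print("test accuracy:", clf.score(X_te, y_te))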


    Received on June 05, 2012 and revised on January 14, 2013
    References: 25
    An Algorithm for Obtaining the Prior Information of Bounded Sampling
    SHUANG-WEI XU and XIAO-YUE WU
    2013, 9(2): 235-240.  doi:10.23940/ijpe.13.2.p235.mag
    Abstract   

    Analytical methods face many constraints in evaluating a system's reliability, while crude simulation is inefficient for evaluating the reliability of highly dependable systems. The bounded sampling method is an efficient reliability simulation method, and prior information is the key factor that influences its efficiency. In this paper, an algorithm is proposed to obtain non-intersecting partial minimal cut sets and minimal path sets, which can greatly improve the performance of bounded sampling. The steps and pseudo-code of the algorithm are given, and a numerical example is used to demonstrate the efficiency of bounded sampling with the proposed algorithm.
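
    The sketch below illustrates why non-intersecting minimal path sets and cut sets are useful prior information: when the listed sets are component-disjoint, they yield simple lower and upper bounds on system reliability. The bridge network and component reliabilities are hypothetical, and the bounded sampling algorithm itself is not reproduced.

        from math import prod

        def path_lower_bound(paths, p):
            # Lower bound on system reliability from component-disjoint minimal path sets.
            return 1.0 - prod(1.0 - prod(p[i] for i in path) for path in paths)

        def cut_upper_bound(cuts, p):
            # Upper bound on system reliability from component-disjoint minimal cut sets.
            return prod(1.0 - prod(1.0 - p[i] for i in cut) for cut in cuts)

        # Hypothetical bridge network, components 1..5, each with reliability 0.9.
        p = {i: 0.9 for i in range(1, 6)}
        disjoint_paths = [{1, 4}, {2, 5}]  # non-intersecting minimal path sets
        disjoint_cuts = [{1, 2}, {4, 5}]   # non-intersecting minimal cut sets
        print("reliability lower bound:", path_lower_bound(disjoint_paths, p))
        print("reliability upper bound:", cut_upper_bound(disjoint_cuts, p))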


    Received on February 27, 2012, revised on July 03, 2012
    References: 12
Online ISSN 2993-8341
Print ISSN 0973-1318