Vol. 8, No. 1
  • Editorial
    Editorial January 2012
    KRISHNA B. MISRA
    2012, 8(1).
    This is the first issue of the eighth year of publication of the International Journal of Performability Engineering. As usual, we bring our readers new papers and ideas in this issue. These papers represent the frontier areas of performability engineering and are widely spread in theme and range of applications: the problem of attack and defence of systems, new models in reliability engineering, a fuzzy similarity approach for fault diagnosis and failure mode identification, an intuitionistic fuzzy methodology for software reliability optimization, the use of Bayesian Networks for maintenance strategies applied to the safety of metro rails, replacement policies combining additive and independent damages, and lastly the application of Dempster-Shafer theory to dissolved gas analysis (DGA) of power transformers. All this is packed into an issue of 110 pages. We believe this should generate considerable interest in the International Journal of Performability Engineering, and we will continue this year to bring our readers the best work in the area.

    The first two papers in this issue relate to an important area, the defence and attack of systems (our concern here being engineering systems in particular), which has grown rapidly after the 9/11 event. Papers that have appeared so far can be broadly categorized based on system structure, defence measures, and attack tactics and circumstances. In fact, given the importance of the subject and the exponential growth of the literature, we intend to publish in this journal a state-of-the-art paper outlining the current status of research in this area for the benefit of our readers.

    The authors of these two papers have contributed considerably to this subject. In the first paper, Optimizing Structure of Parallel Homogeneous Systems under Attack, the two authors suggest an algorithm to determine the optimal system structure under uncertain contest intensity. It is assumed that the defender determines the system structure by choosing the type and the number of elements in the system and distributes its limited resource between purchasing the elements and protecting them from outside attacks, while the attacker chooses the number of elements to attack and distributes its limited resource evenly among all the attacked elements.

    In the second paper, K-Round Duel with Uneven Resource Distribution, the two authors consider the problem of optimal resource distribution between offense and defense and among different rounds in a K-round duel. In each round of the duel, two actors exchange attacks. Each actor allocates resources to attacking the counterpart and to defending itself against the counterpart's attack, with the basic assumption that the offense resources are expendable (e.g., missiles), whereas the defense resources are not (e.g., bunkers). The game ends when at least one target is destroyed or after K rounds.

    In the third paper, The Evolution and History of Reliability Engineering: Rise of Mechanistic Reliability Modeling, the authors trace the transformations that the reliability engineering discipline has undergone since World War II, discuss the emergence of mechanistic-based reliability modeling approaches in recent years, and emphasize that today's reliability approaches are becoming more and more realistic. The authors chronologically present the developments that have taken place in reliability engineering during the past several decades, providing a good review of reliability modelling techniques. They claim that physics-based (or mechanistic-based) reliability models have proven to be the most useful and appropriate reliability models of components.

    The fourth paper, Fault Diagnosis and Failure Mode Estimation by a Data-Driven Fuzzy Similarity Approach, presents a data-driven, fuzzy similarity approach for available Recovery Time (RT) estimation, Fault Diagnosis (FD) and Failure Mode (FM) identification, intended as a computerized support tool to be embedded in an operator support system for emergency accident management. The approach is illustrated through a number of fault scenarios in the analysis of the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS).

    In the fifth paper, An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization, a new technique that is gaining popularity in reliability is described. We have already published a paper in this area in the January 2011 issue of IJPE. The present paper presents a software reliability estimation methodology based on the user profile, along with optimal decisions related to cost-reliability models in classical and intuitionistic fuzzy environments.

    The sixth paper, Optimal Metro-Rail Maintenance Strategy using Multi-Nets Modeling, describes VirMaLab, a generic decision support tool developed by the authors to evaluate maintenance strategies for complex systems. The paper introduces an original maintenance strategy model dedicated to the prevention and detection of broken rails, in the context of renewal of the signaling and train control systems for Paris steel-wheel metro lines. Recently, Bayesian Networks (BN) have proved their usefulness in representing complex systems and performing reliability studies, and this paper deals with a multi-nets extension of VirMaLab, as applied to the maintenance of metro rails. According to the authors, to achieve high levels of safety and availability (the latter being especially critical at peak hours), the operator needs to estimate, hour by hour, its ability to detect broken rails.

    In the seventh paper of this issue, Replacement Models for Combining Additive Independent Damages, the authors present replacement policies combining additive and independent damages. Systems often degrade with time, accumulating damage from shocks, stress, or environmental change. A unit subjected to shocks always suffers some damage from them. Since the total damage due to shocks is additive, the unit fails when this total exceeds a failure level; this is known as cumulative damage. However, a unit may also fail if the damage from any single shock exceeds a failure level; this is called independent damage. The authors consider age replacement policies combining additive with independent damages, in which the unit is replaced at a planned time or when the total damage exceeds a failure level, whichever occurs first, and undergoes minimal repair when independent damage occurs. The expected cost rates are obtained by using the techniques of cumulative processes in reliability theory, and optimal policies are derived analytically and computed numerically. Some areas of application of this policy are indicated in the paper.

    The eighth paper, Diagnosis Decision-Making using Threshold Interpretation Rule and Expected Monetary Value, uses Dempster-Shafer evidence theory to account for the various pieces of evidence in Dissolved Gas Analysis (DGA) of power transformers. In this setting there is a lack of information about the exact condition of a power transformer, i.e., whether it is operating normally or is in an early stage of fault, and an inability to differentiate between an incipient fault condition and a major fault condition that leads to the transformer's failure. The authors also claim that current recognition methodologies cannot identify a precise fault type for some ranges of DGA values. They therefore argue that the Dempster-Shafer (DS) theoretic approach is well suited to the fusion of DGA evidence, since it can represent various types of ignorance in the knowledge sources.

    In addition to the above papers, we also present reviews of three important books that are considered relevant for the benefit of our readers. It is hoped that this issue will generate considerable interest among our readership in the themes discussed.

Original articles
Optimizing Structure of Parallel Homogeneous Systems under Attack
KJELL HAUSKEN GREGORY LEVITIN
2012, 8(1): 5-17.  doi:10.23940/ijpe.12.1.p5.mag

A system of identical parallel elements has to be purchased and deployed. The cumulative performance of the elements must meet a demand. There are different types of elements characterized by their performance and cost in the market. We consider convex, linear, and concave relationships between performance and cost. The defender determines the system structure by choosing the type and the number of elements in the system. The defender distributes its limited resource between purchasing the elements and protecting them from outside attacks. The attacker chooses the number of elements to attack and distributes its limited resource evenly among all the attacked elements. The vulnerability of each element is determined by a contest success function between the attacker and the defender. The damage caused by the attack is associated with the cost of destroyed elements and the reduction of the cumulative system performance below the demand. The defender tries to minimize the damage anticipating the best attacker's strategy for any system structure. An algorithm for determining the optimal system structure is suggested. Illustrative numerical examples are presented.
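The element vulnerability described above is commonly computed with a ratio-form (Tullock) contest success function. The sketch below illustrates that form; the function name, parameters, and example values are illustrative assumptions, not taken from the paper itself.

```python
def element_vulnerability(attack_effort: float, defense_effort: float,
                          contest_intensity: float) -> float:
    """Probability that an attacked element is destroyed.

    Uses the common ratio (Tullock) form v = a^m / (a^m + d^m), where m
    is the contest intensity: m near 0 makes the outcome close to a coin
    flip regardless of effort, while large m makes the side investing
    more effort win almost surely.
    """
    if attack_effort == 0 and defense_effort == 0:
        return 0.5  # conventional tie-breaking choice when neither side invests
    a = attack_effort ** contest_intensity
    d = defense_effort ** contest_intensity
    return a / (a + d)

# Equal efforts give vulnerability 0.5 for any contest intensity m.
print(element_vulnerability(1.0, 1.0, contest_intensity=2.0))  # 0.5
```

In the paper's setting, the defender's per-element protection effort falls as it buys more elements, while the attacker's per-element effort falls as it attacks more of them, which is what makes the structural optimization non-trivial.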

References: 29
Received on November 04, 2010, and revised on July 13, 2011
K-Round Duel with Uneven Resource Distribution
GREGORY LEVITIN KJELL HAUSKEN
2012, 8(1): 19-34.  doi:10.23940/ijpe.12.1.p19.mag

The paper considers optimal resource distribution between offense and defense and among different rounds in a K-round duel. In each round of the duel, two actors exchange attacks. Each actor allocates resources into attacking the counterpart and into defending itself against the counterpart's attack. The offense resources are expendable (e.g., missiles), whereas the defense resources are not expendable (e.g., bunkers). Offense distribution across rounds can increase or decrease as determined by a geometric series. The outcomes of each round are determined by contest success functions which depend on the offensive vs. defensive resources ratio. The game ends when at least one target is destroyed or after K rounds. It is shown that when each actor maximizes its own survivability, then both actors allocate all their resources defensively. Conversely, when each actor minimizes the survivability of the other actor, then both actors allocate all their resources offensively. We then consider two cases of battle for a single target in which one of the actors minimizes the survivability of its counterpart whereas the counterpart maximizes its own survivability. It is shown that in these two cases the minmax survivabilities of the two actors are the same, and the sum of their resource fractions allocated to offense is equal to 1. However, their resource distributions are different. When both actors can choose their offense resource distribution freely, they distribute all offense to the first round. When one actor is constrained to distribute offense resources across multiple rounds, it is not necessarily optimal for the other actor to allocate all offense to the first round. We illustrate how the resources, contest intensities and number of rounds in the duels impact the survivabilities and resource distributions.

References: 19
Received on August 14, 2009, and revised on May 27, 2010
The Evolution and History of Reliability Engineering: Rise of Mechanistic Reliability Modeling
M. AZARKHAIL M. MODARRES
2012, 8(1): 35-47.  doi:10.23940/ijpe.12.1.p35.mag

To address the risk and reliability challenges in both private and regulatory sectors, the reliability engineering discipline has gone through a number of transformations during the past few decades. This article traces the evolution of these transformations and discusses the rise of mechanistic-based reliability modeling approaches in reliability engineering applications in recent years. In this paper we discuss the ways reliability models have progressively become more practical by incorporating evidence from the real causes of failure. Replacing constant hazard rate life models (i.e., exponential distribution) with other distributions such as Weibull and lognormal was the first step toward addressing wear-out and aging in the reliability models. This trend was followed by accelerated life testing, through which the aggregate effect of operational and environmental conditions was introduced to the life model by means of accounting for stress agents. The applications of mechanistic reliability models were the logical culmination of this trend. The physics-based (or mechanistic-based) reliability models have proven to be the most comprehensive representation, capable of bringing many influential factors into the life and reliability models of the components. The system-level reliability assessment methods currently available, however, seem to have limited capabilities when it comes to the quantity and quality of the knowledge that can be integrated from their constituent components. In this article, past and present trends as well as anticipated future trends in applications of mechanistic models in reliability assessment of structures, systems, components and products are discussed.

References: 26
Received on October 31, 2010, and revised on June 30, 2011
Fault Diagnosis and Failure Mode Estimation by a Data-Driven Fuzzy Similarity Approach
ENRICO ZIO and FRANCESCO DI MAIO
2012, 8(1): 49-65.  doi:10.23940/ijpe.12.1.p49.mag

In the present work, a data-driven fuzzy similarity approach is proposed to assist the operators in fault diagnosis tasks. The approach allows: i) prediction of the Recovery Time (RT), i.e., the time remaining until the system can no longer perform its function in an irreversible manner, ii) Fault Diagnosis (FD), i.e., the identification of the component faults and iii) estimation of the system Failure Mode (FM), i.e., the system-level outcome of the failure scenario. The approach is illustrated by way of the analysis of failure scenarios in the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS).
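The core of a fuzzy similarity classifier of this kind is a membership function that scores how close an observed signal is to a library of reference fault trajectories. The sketch below is a minimal illustration of the idea, not the authors' actual algorithm; the membership shape, labels, and data are assumptions.

```python
import math

def fuzzy_similarity(x, y, alpha=1.0):
    """Mean pointwise fuzzy similarity in [0, 1]: 1 for identical signals,
    decaying with squared distance (a common bell-shaped membership)."""
    sims = [math.exp(-alpha * (a - b) ** 2) for a, b in zip(x, y)]
    return sum(sims) / len(sims)

def diagnose(observed, reference_library):
    """Return the fault label whose reference trajectory is most similar
    to the observed signal."""
    return max(reference_library,
               key=lambda label: fuzzy_similarity(observed, reference_library[label]))

# Toy reference library of trajectories, one per fault scenario.
library = {"nominal": [1.0, 1.0, 1.0], "fault A": [5.0, 5.0, 5.0]}
print(diagnose([1.1, 0.9, 1.0], library))  # nominal
```

In the paper, similar scores computed against pre-classified failure scenarios drive the FD, FM and RT estimates simultaneously.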

References: 31

Received on December 10, 2010, and revised on September 06, 2011
An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization
HENRIK MADSEN, GRIGORE ALBEANU, and FLORIN POPENTIU-VLADICESCU
2012, 8(1): 67-76.  doi:10.23940/ijpe.12.1.p67.mag

Component-based software development is the current methodology facilitating agility in project management and software reuse in design and implementation, promoting quality and productivity, and increasing reliability and performability. This paper illustrates the usage of an intuitionistic fuzzy degree approach to modelling the quality of entities in imprecise software reliability computing in order to optimize management results. Intuitionistic fuzzy optimization algorithms are proposed for the reliability optimization of complex software systems under various constraints.


Received on December 18, 2010, revised on August 18, 2011
References: 18
Optimal Metro-Rail Maintenance Strategy using Multi-Nets Modeling
LAURENT BOUILLAUT, OLIVIER FRANCOIS, and STEPHANE DUBOIS
2012, 8(1): 77-90.  doi:10.23940/ijpe.12.1.p77.mag

Reliability analysis has become an integral part of system design and operation. This is especially true for systems performing critical tasks such as mass transportation systems, which explains the numerous advances in the field of reliability modeling. More recently, studies involving the use of Bayesian Networks (BN) have proved relevant for representing complex systems and performing reliability studies. In previous works, the generic decision support tool VirMaLab, developed to evaluate complex systems maintenance strategies, was introduced. This approach is based on a specific Dynamic BN, designed to model stochastic degradation processes and allowing any kind of state sojourn distributions along with an accurate context description: the Graphical Duration Models. This paper deals with a multi-nets extension of VirMaLab, dedicated to the maintenance of metro rails. Indeed, to fulfill high performance levels of safety and availability (the latter being especially critical at peak hours), the operator needs to estimate, hour by hour, its ability to detect broken rails.


Received on October 10, 2010, revised on August 24, 2011
References: 10
Replacement Models for Combining Additive Independent Damages
XUFENG ZHAO, HAISHAN ZHANG, CUNHUA QIAN, TOSHIO NAKAGAWA, and SYOUJI NAKAMURA
2012, 8(1): 91-100.  doi:10.23940/ijpe.12.1.p91.mag

In many practical situations, systems degrade with time and eventually fail from both additive and independent damages. From this viewpoint, this paper considers replacement models combining the two kinds of damage: the unit is replaced at a planned time or when the total additive damage exceeds a failure level, whichever occurs first, and undergoes minimal repair when independent damage occurs. First, a standard cumulative damage model, in which the unit suffers damage due to shocks and the total damage is additive, is considered. Second, the total damage is measured at periodic times and increases approximately linearly with time. Using the techniques of cumulative processes in reliability theory, expected cost rates are obtained and optimal policies which minimize them are derived analytically. Finally, optimal policies are computed and compared numerically, and the results are discussed.
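The policy's expected cost rate comes from renewal-reward reasoning: long-run cost rate = E[cost per replacement cycle] / E[cycle length]. The paper derives this analytically; the sketch below only estimates it by Monte Carlo for a toy version of the model (Poisson shocks, exponential damage, a fixed per-shock probability of independent damage), with all parameter names and values being illustrative assumptions.

```python
import random

def simulate_cost_rate(T, K, lam, mean_damage, p_indep,
                       c_planned, c_failure, c_repair,
                       n_cycles=5000, seed=0):
    """Monte Carlo estimate of the long-run expected cost rate
    (renewal-reward: total cost over many cycles / total elapsed time).

    A cycle ends either by planned replacement at age T (cost c_planned)
    or by cumulative-damage failure when total damage exceeds K (cost
    c_failure). Each shock may also cause independent damage with
    probability p_indep, handled by minimal repair (cost c_repair).
    """
    rng = random.Random(seed)
    total_cost = total_time = 0.0
    for _ in range(n_cycles):
        t = damage = cost = 0.0
        while True:
            t += rng.expovariate(lam)            # next shock: Poisson arrivals
            if t >= T:                            # planned replacement at age T
                cost += c_planned
                t = T
                break
            if rng.random() < p_indep:            # independent damage -> minimal repair
                cost += c_repair
            damage += rng.expovariate(1.0 / mean_damage)  # additive shock damage
            if damage > K:                        # cumulative-damage failure
                cost += c_failure
                break
        total_cost += cost
        total_time += t
    return total_cost / total_time
```

Sweeping T (or K) over a grid of such estimates shows the trade-off the paper optimizes: a small T replaces too often, a large T risks the more expensive failure replacement.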


Received on October 21, 2010, revised on August 03, 2011
References: 15
Diagnosis Decision-Making using Threshold Interpretation Rule and Expected Monetary Value
MOHD RADZIAN ABDUL RAHMAN, M. ITOH, and T. INAGAKI
2012, 8(1): 101-110.  doi:10.23940/ijpe.12.1.p101.mag

The lack of information in dissolved gas analysis (DGA) evidence necessitates a Dempster-Shafer theoretic approach for combining the pieces of evidence. The threshold ground probability assignment (THG) that firmly judges a major fault condition is determined from the DGA dataset prior to year 2009, and a threshold interpretation rule is proposed. Four distinct scenarios result from the application of the interpretation rule, including one in which the system operator is uncertain about the condition of a power transformer. The DGA dataset of all power transformers that experienced electrical and thermal failures in year 2009 is collected to validate the threshold interpretation rule. Six decision policies are introduced to map power transformer condition propositions to decision spaces for decision-making under uncertainty. Expected monetary value is used to assess each decision policy and to select the optimal one.
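The fusion step described above rests on Dempster's rule of combination, which merges two basic probability assignments while renormalizing away conflicting mass. The sketch below implements the standard rule; the frame of discernment and the numeric masses are illustrative assumptions, not values from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozenset propositions to basic probability
    assignments. Products landing on empty intersections are conflict,
    discarded, and the remaining mass is renormalized.
    """
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical DGA evidence sources over {normal, thermal, electrical}.
frame = frozenset({"normal", "thermal", "electrical"})
m_gas1 = {frozenset({"thermal"}): 0.6, frame: 0.4}
m_gas2 = {frozenset({"thermal", "electrical"}): 0.7, frame: 0.3}
fused = dempster_combine(m_gas1, m_gas2)
```

Note how mass assigned to the whole frame represents ignorance; this is the feature the authors cite as making DS theory attractive for imprecise DGA evidence.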


Received on September 28, 2010, revised on August 02, 2011
References: 16
ISSN 0973-1318