Vol. 3, No. 4, 2007
  • Original articles
    Fuzzy Fault Tree Analysis of Crane Wire Rope
    2007, 3(4): 403-410.  doi:10.23940/ijpe.07.4.p403.mag

    Wire rope is a very useful and long-lasting structural element when properly used and maintained. For a failure such as that of crane wire rope, it is often very difficult to estimate precise failure rates or failure probabilities of individual components or failure events. Conventional fault tree analysis (FTA) is based on the probability assumption. When the failure probability of a system is extremely small or the necessary statistical data are scarce, it is very difficult or impossible to evaluate its reliability and safety with conventional FTA techniques. To overcome this disadvantage, fuzzy set theory is introduced. The reliability of basic events is represented by fuzzy numbers with weighted exponents, where a weighted exponent expresses the decision-maker's degree of confidence in the membership functions of the basic events. From the logical relationships between the events in the fault tree and the fuzzy operators AND and OR, the fuzzy probability of the top event is obtained. Finally, a fuzzy fault tree analysis of crane wire rope is presented to illustrate the proposed method.
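    The gate operations described in the abstract can be sketched with triangular fuzzy numbers (a, m, b), using the common componentwise approximation of the fuzzy AND (product) and OR (complement of products) operators; the basic events and numbers below are purely hypothetical illustrations, not values from the paper.

```python
# Minimal sketch: fuzzy AND/OR gates over triangular fuzzy failure
# probabilities (a, m, b) = (lower, modal, upper).  Componentwise
# arithmetic approximates the exact alpha-cut computation.

def fuzzy_and(*events):
    """AND gate: fuzzy product of the input failure probabilities."""
    a = m = b = 1.0
    for ea, em, eb in events:
        a, m, b = a * ea, m * em, b * eb
    return (a, m, b)

def fuzzy_or(*events):
    """OR gate: 1 - product of (1 - p), componentwise."""
    a = m = b = 1.0
    for ea, em, eb in events:
        a, m, b = a * (1 - ea), m * (1 - em), b * (1 - eb)
    return (1 - a, 1 - m, 1 - b)

# Hypothetical basic events for a wire-rope fault tree.
corrosion = (0.01, 0.02, 0.04)
broken_wires = (0.005, 0.01, 0.02)
overload = (0.001, 0.002, 0.005)

# Top event: (corrosion AND broken wires) OR overload.
top = fuzzy_or(fuzzy_and(corrosion, broken_wires), overload)
print(top)
```

    The resulting triple is itself a triangular fuzzy failure probability of the top event, from which a crisp value can be recovered by any standard defuzzification rule.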
    Received on November 23, 2006
    References: 18

    Reliability and Availability Analysis of Three-state Device Redundant Systems with Human Errors and Common-cause Failures
    2007, 3(4): 411-418.  doi:10.23940/ijpe.07.4.p411.mag

    This paper presents stochastic models representing redundant three-state device systems with critical human errors and common-cause failures. The systems are analyzed under two situations: without repair and with repair. All the system transition rates, i.e., the open-mode and short-mode failure rates, the critical human error rate, the common-cause failure rate, and the repair rates, are assumed constant. The Markov method is used to develop generalized expressions for system state probabilities, system reliability, and system mean time to failure. The systems analyzed incorporate commonly used redundant configurations such as parallel, k-out-of-n, and standby. A new kind of standby system, the k-out-of-n cold standby system, is introduced and analyzed. A comparison of performance indices such as system reliability, system availability, and system mean time to failure shows that cold standby has a significant effect on the performance of three-state device systems.
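    As a sketch of the Markov method applied to three-state devices, the following computes the mean time to failure of a hypothetical two-unit parallel configuration in which any short-mode failure, a common-cause failure, or loss of both units is catastrophic. The model structure and all rates are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical constant rates (per hour): open-mode, short-mode,
# and common-cause failure of a three-state device.
lam_o, lam_s, lam_c = 1e-4, 5e-5, 1e-5

# Transient states of the parallel pair:
#   S0 = both units good, S1 = one unit failed in open mode.
# Any short-mode failure shorts the parallel pair, and a common-cause
# event or a second open failure also absorbs into the failed state.
Q = np.array([
    [-(2*lam_o + 2*lam_s + lam_c), 2*lam_o],
    [0.0, -(lam_o + lam_s + lam_c)],
])

# Mean time to absorption (system failure) starting from S0 is the
# first entry of (-Q)^-1 @ 1 (fundamental-matrix result).
mttf = np.linalg.solve(-Q, np.ones(2))[0]
print(f"MTTF = {mttf:.1f} h")
```

    The same recipe extends to k-out-of-n and standby configurations by enumerating their transient states and enlarging the generator matrix accordingly.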
    Received on January 22, 2006
    References: 04

    Time-Dependent Reliability Models of Systems with Common Cause Failure
    2007, 3(4): 419-430.  doi:10.23940/ijpe.07.4.p419.mag

    Reliability models of systems with common cause failure are developed through system-level load-strength interference analysis. Based on the statistical meaning of random load action and the failure mechanism of systems under repeated random loads, the effect of multiple applications of a random load on reliability is studied. The cumulative distribution function and the probability density function of the equivalent load are derived using order statistics, and reliability models of systems under repeated random loads are obtained. Further, with the loading process described by a Poisson stochastic process, time-dependent reliability models of a series system, a parallel system, and a k-out-of-n system are developed, and the relationships of reliability and of the failure rate with time are discussed. Finally, a Monte Carlo simulation experiment is carried out to verify the proposed reliability models.
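    The single-component version of this construction can be sketched as follows: if loads arrive as a Poisson process with rate lam, a component of strength s survives to time t with probability exp(-lam*t*P(L > s)), and unconditional reliability is the expectation over the strength distribution. The distributions and rates below are hypothetical, and a direct Monte Carlo simulation of the loading process is used for verification, as in the paper's experiment.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

# Hypothetical distributions: strength S ~ N(100, 10), each load
# L ~ N(60, 15); loads arrive as a Poisson process with rate lam.
mu_s, sig_s, mu_l, sig_l = 100.0, 10.0, 60.0, 15.0
lam, t = 2.0, 5.0                     # load rate and mission time

def load_exceed(s):                   # P(L > s) for L ~ N(mu_l, sig_l)
    return 0.5 * (1.0 - erf((s - mu_l) / (sig_l * 2**0.5)))

# Closed form: R(t) = E_S[ exp(-lam*t * P(L > S)) ], i.e., survival
# means no arriving load ever exceeds the (fixed) strength.
s = rng.normal(mu_s, sig_s, 100_000)
r_closed = float(np.mean(np.exp(-lam * t * np.vectorize(load_exceed)(s))))

# Direct Monte Carlo of the loading process for verification.
trials, surv = 20_000, 0
for _ in range(trials):
    s_i = rng.normal(mu_s, sig_s)
    n_i = rng.poisson(lam * t)
    surv += n_i == 0 or rng.normal(mu_l, sig_l, n_i).max() < s_i
r_mc = surv / trials

print(round(r_closed, 4), round(r_mc, 4))
```

    For a series system sharing the same load process, the same expectation is taken over the joint strength distribution, which is how the common cause dependence enters.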
    Received on January 11, 2007
    References: 28

    Comparing a Failure Probability to an Acceptability Criterion: Decision Theory Rationale and a Space Shuttle Application
    2007, 3(4): 433-440.  doi:10.23940/ijpe.07.4.p433.mag

    This paper presents basic decision theory rationale for selecting a particular failure probability value from its uncertainty distribution to compare to a defined acceptability criterion. Since the success probability, or reliability, is one minus the failure probability, the rationale also applies to selecting a particular reliability value to compare to a reliability criterion. The uncertainty in the failure probability estimate is described by a probability distribution which is termed the uncertainty distribution. This is consistent with the Bayesian statistical approach that is commonly used in probabilistic modeling and in quantitative risk assessments. The uncertainty distribution completely characterizes the uncertainty in the estimate, giving all the percentiles, or Bayesian confidence bounds, for the estimate.
    Based on decision theory principles, selection of the failure probability value should be separate from determination of the acceptable failure probability criterion. Selection of the failure probability value considers the losses, or impacts, from underestimating or overestimating the actual value of the failure probability. Selection of the acceptable failure probability criterion considers the consequences of the occurrence of the event. An example application is given for selection of the failure probability value and definition of an acceptability criterion for Composite Overwrap Pressure Vessels (COPVs) on the Space Shuttle. This paper is useful in showing how basic decision theory paradigms can be applied in a practical risk management framework.
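    The selection step can be illustrated with a standard decision-theory result: under asymmetric piecewise-linear loss, the optimal point value to take from the uncertainty distribution is the quantile determined by the ratio of the two loss slopes. The lognormal uncertainty distribution, the loss ratio, and the criterion below are hypothetical illustrations, not the COPV numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical uncertainty distribution for the failure probability:
# lognormal with median 1e-3 and an error factor of about 3
# (95th percentile = 3x median, hence sigma = ln(3)/1.645).
samples = np.exp(rng.normal(np.log(1e-3), np.log(3) / 1.645, 200_000))

# Assume underestimating the failure probability is 4x as costly as
# overestimating it.  Under linear loss, the Bayes-optimal point
# value is the k_under / (k_under + k_over) quantile.
k_under, k_over = 4.0, 1.0
q = k_under / (k_under + k_over)            # the 0.8 quantile
p_selected = np.quantile(samples, q)

criterion = 5e-3                            # hypothetical acceptability limit
print(p_selected, "acceptable" if p_selected <= criterion else "unacceptable")
```

    Note that the quantile choice encodes only the estimation losses; the criterion itself is set separately from the consequences of the event, exactly the separation the paper argues for.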
    Received on August 1, 2006
    References: 12

    Reliability Analysis of Fault Tolerant Systems with Multi-Fault Coverage
    2007, 3(4): 441-451.  doi:10.23940/ijpe.07.4.p441.mag

    Fault tolerance has been an essential architectural attribute for achieving high reliability in many critical applications of digital systems. Automatic fault and error handling mechanisms play a crucial role in implementing fault tolerance because an uncovered (undetected) fault may lead to a system or subsystem failure even when adequate redundancy exists. Examples of this effect can be found in computing systems, electrical power distribution networks, pipelines carrying dangerous materials, etc. Because an uncovered fault may lead to overall system failure, an excessive level of redundancy may even reduce the system reliability. Therefore, an accurate analysis must account not only for the system structure, but also for the system's fault and error handling behavior, often called its coverage behavior. The appropriate coverage modeling approach depends on the type of fault tolerant techniques used. The recent research literature emphasizes the importance of multi-fault coverage models, in which the effectiveness of the recovery mechanisms depends on the coexistence of multiple faults in a group of elements, called a fault level coverage (FLC) group, that collectively participate in detecting and recovering the faults in that group. However, methods for solving multi-fault coverage models are limited, primarily because of the complex dependency introduced by the reconfiguration mechanisms. This paper suggests a modification of the generalized reliability block diagram (RBD) method for evaluating reliability indices of systems with multi-fault coverage. The suggested method, based on a universal generating function technique, computes the reliability indices of complex systems with multi-fault coverage using a straightforward recursive procedure. The proposed algorithm can also be applied to hierarchically structured FLC groups. Illustrative examples are presented.
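    The effect that excessive redundancy can reduce reliability is easy to reproduce with a single FLC group. The sketch below is a simplified one-group calculation, not the paper's RBD/UGF procedure, and the coverage model (k coexisting faults are all handled with probability c**k, any uncovered fault brings the group down) is an assumption for illustration.

```python
from math import comb

def group_reliability(n, r, q, c):
    """Reliability of an r-out-of-n FLC group: elements fail
    independently with probability q; k coexisting faults are all
    covered with probability c**k (assumed multi-fault coverage
    model); any uncovered fault fails the whole group."""
    rel = 0.0
    for k in range(n - r + 1):          # tolerable numbers of faults
        pk = comb(n, k) * q**k * (1 - q)**(n - k)
        rel += pk * c**k                # all k faults must be covered
    return rel

# With imperfect coverage, adding redundancy eventually hurts:
q, c = 0.05, 0.9
for n in (2, 3, 5, 7):
    print(n, round(group_reliability(n, 1, q, c), 5))
```

    For these (hypothetical) numbers the 1-out-of-n reliability decreases monotonically with n, because each extra element adds more exposure to uncovered faults than it adds useful redundancy.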
    Received on November 26, 2006
    References: 24

    Dynamic Risk Evaluation of Systems with Multiple Protective Systems
    2007, 3(4): 453-466.  doi:10.23940/ijpe.07.4.p453.mag

    To prevent system accidents, several types of protective systems are installed, based on the concept of "defence in depth", in systems such as nuclear and chemical plants. In the risk evaluation of a system with multiple independent protective systems, the accident occurrence probability is obtained as the occurrence probability of an abnormal event multiplied by the failure probabilities of the related protective systems. Since each failure probability is conventionally evaluated independently, as a time-averaged unavailability over the operating period, its variation during operation is not well captured. This paper proposes a dynamic evaluation method for the accident probability that takes inspections and maintenance into account. By decomposing a protective system into detection, diagnosis, and execution parts, the on-demand failure can be analyzed easily, even for protective systems composed of both hardware and operators. An illustrative example of a simple reactor system with several protective systems, including operator recovery actions, shows the details and merits of the proposed method.
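    The difference between time-averaged and instantaneous evaluation can be sketched for periodically tested standby layers: a layer failing in standby at rate lam_i and restored at tests with interval T_i has instantaneous on-demand unavailability U_i(t) = 1 - exp(-lam_i * (t mod T_i)), roughly the sawtooth lam_i * (t mod T_i). All rates and test intervals below are hypothetical.

```python
import numpy as np

# Instantaneous accident frequency = abnormal-event rate times the
# product of the instantaneous unavailabilities of the layers.
lam_event = 0.1                       # abnormal events per year
layers = [(1e-2, 1.0), (5e-2, 0.5)]   # (standby failure rate, test interval)

t = np.linspace(0.0, 2.0, 20_001)     # years
u = np.ones_like(t)
for lam_i, T_i in layers:
    u *= 1.0 - np.exp(-lam_i * np.mod(t, T_i))
freq = lam_event * u                  # instantaneous accident frequency

print(f"peak = {freq.max():.3e}, time-average = {freq.mean():.3e}")
```

    The peak (just before both layers are tested simultaneously) exceeds the time average by more than a factor of three here, which is exactly the variation a static, time-averaged evaluation hides.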
    Received on March 23, 2006
    References: 10

    System Availability Analysis Considering Failure Severities
    2007, 3(4): 467-480.  doi:10.23940/ijpe.07.4.p467.mag

    Model-based analysis is commonly used to assess the influence of different factors on system availability. Most of the availability models reported in the literature consider the impact of redundancy, fault tolerance, and system structure. However, these models treat all system failures as equivalent, i.e., at the same level of severity. In practice, it is well known that failures are classified into multiple severity levels according to their impact on the system's ability to deliver its services. System availability is thus influenced by only some, rather than all, failures. To obtain an accurate availability estimate it is therefore necessary to incorporate failure severities into the analysis. In this paper we present a system availability model which considers failure severities of the hardware and software components of the system in an integrated manner. Based on the model we obtain closed-form expressions which relate system availability to the failure and repair parameters of the hardware and software components comprising the system. For a given choice of failure parameters, we discuss how the closed-form expressions can be used to select repair parameters that achieve a specified target system availability and to establish bounds on system availability. We illustrate the potential of the model by applying it to failure data collected during the acceptance testing of a satellite system.
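    The core point, that only outage-causing severities should enter the availability expression, can be sketched with a simple alternating-renewal model: steady-state availability is 1 / (1 + sum of lam_i/mu_i over the severities that interrupt service). The severity classes and rates below are hypothetical, and this is not the paper's integrated hardware/software model.

```python
# Hypothetical severity classes: (failure rate /h, repair rate /h,
# whether a failure of this severity takes the system down).
severities = {
    "critical": (2e-4, 0.5, True),
    "major":    (1e-3, 2.0, True),
    "minor":    (1e-2, 10.0, False),   # degrades service, no outage
}

# Severity-aware availability: only outage-causing classes count.
down_ratio = sum(lam / mu for lam, mu, outage in severities.values() if outage)
availability = 1.0 / (1.0 + down_ratio)

# Naive estimate that treats every failure as an outage.
naive = 1.0 / (1.0 + sum(lam / mu for lam, mu, _ in severities.values()))
print(round(availability, 6), round(naive, 6))
```

    Ignoring severities here understates availability by roughly a factor of two in the unavailability, illustrating why the classification matters for an accurate estimate.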
    Received on June 12, 2006
    References: 18

    The Weighted Risk Analysis Applied for Bos & Lommer
    2007, 3(4): 481-497.  doi:10.23940/ijpe.07.4.p481.mag

    Safety and risk assessment are characterized by both subjective and objective aspects. In this paper, relations between safety and risk are described. When a risk analysis is performed, it is important to realize that decision making about risks is very complex: not only technical aspects but also economic, environmental, comfort-related, political, psychological, and societal-acceptance aspects play an important role. In order to balance safety measures against such aspects as environment, quality, and economy, a weighted risk analysis methodology is proposed in this paper. The paper also provides a theoretical background on the scope of safety assessment in relation to decision making in complex urban development projects adjacent to or above transport routes of hazardous materials; in Western Europe, such projects are realized due to a shortage of space. The weighted risk analysis is an interesting tool for comparing different risks, such as investments, economic losses, and the loss of human lives, in one dimension (e.g., money), since both investments and risks can be expressed in monetary terms. Finally, the weighted risk analysis approach is applied in a case study of Bos and Lommer, Amsterdam.
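    Expressing all risks in one monetary dimension can be sketched as follows; every figure here, including the value assigned to a statistical life and the two design alternatives, is a hypothetical illustration and has nothing to do with the Bos and Lommer case study.

```python
# Weighted risk in money: total cost = investment (annualized)
# plus, for each accident scenario, probability x monetized loss,
# where loss of life is monetized via an assumed value per
# statistical life (VSL) -- a strong, explicit assumption.
VSL = 3e6                 # euros per statistical life (assumed)

def weighted_risk(investment, scenarios):
    """scenarios: list of (annual probability, economic loss, fatalities)."""
    return investment + sum(p * (loss + n * VSL) for p, loss, n in scenarios)

# Two hypothetical alternatives for building above a transport route.
base = weighted_risk(0.0, [(1e-4, 5e7, 10)])            # no extra measures
protected = weighted_risk(5e3, [(1e-5, 5e7, 10)])       # safety investment
print(round(base, 1), round(protected, 1))
```

    Because both sides of the comparison are in euros per year, the safety investment can be judged directly against the expected-loss reduction it buys, which is the methodological point of the weighted risk analysis.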
    Received on June 26, 2006
    References: 19

    Short Communications
    A Game Theoretical View of Byzantine Fault Tolerance Design
    Wenbing Zhao
    2007, 3(4): 498-500.  doi:10.23940/ijpe.07.4.p498.mag

    In this paper, we investigate optimal Byzantine fault tolerance (BFT) design strategies from a game theoretical point of view. The problem of BFT is formulated as a constant-sum game played by the BFT system (the defender) and its adversary (the attacker). The defender resorts to replication to ensure high reliability and availability, while the attacker injects faults into the defender to reduce the system's reliability and/or availability. We examine current BFT solutions and propose a number of improvements based on our game theoretical study.
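    A constant-sum formulation of this kind can be sketched as a toy 2x2 zero-sum game, solved with the standard closed-form mixed-strategy solution (valid when the matrix has no saddle point). The strategies and payoffs below are hypothetical, not the paper's model.

```python
# Toy constant-sum game: rows = defender's replica-placement
# strategies, columns = attacker's targeting strategies, entries =
# the availability the defender secures (attacker minimizes it).
A = [[0.90, 0.60],
     [0.50, 0.80]]

# Closed-form mixed-strategy solution of a 2x2 zero-sum game
# without a saddle point (here maximin 0.6 < minimax 0.8).
(a, b), (c, d) = A
denom = a - b - c + d
p = (d - c) / denom            # prob. defender plays row 0
q = (d - b) / denom            # prob. attacker plays column 0
value = (a * d - b * c) / denom

print(f"defender mix = ({p:.2f}, {1 - p:.2f}), game value = {value:.3f}")
```

    The game value (0.7 here) exceeds what either pure defender strategy guarantees, which is the basic argument for randomizing the defense.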
    Received on June 22, 2007
    References: 07

    On the Use of Gaussian Approximation for Reliable Performance Evaluation in Optical DPSK Systems
    Qun Zhang, Han-Way Huang
    2007, 3(4): 501-503.  doi:10.23940/ijpe.07.4.p501.mag

    In this paper, we propose to extend the Gaussian approximation (GA) method for reliable system performance evaluation from traditional optical on-off keying (OOK) systems to the emerging optical differential phase shift keying (DPSK) systems. The proposed method can be used to guide efficient numerical estimation, as well as experimental measurement, of noise-loaded back-to-back DPSK system performance where the inter-symbol interference (ISI) is not significant.
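    The baseline GA calculation for OOK, which the paper extends to DPSK, estimates the bit error rate from the sampled means and standard deviations of the two decision rails via the Q-factor. The measured statistics below are hypothetical; the paper's DPSK-specific extension is not reproduced here.

```python
from math import erfc, sqrt

def ga_ber(mu1, mu0, sig1, sig0):
    """Gaussian-approximation BER from decision-variable statistics:
    Q = (mu1 - mu0) / (sig1 + sig0), BER = 0.5 * erfc(Q / sqrt(2))."""
    q_factor = (mu1 - mu0) / (sig1 + sig0)
    return 0.5 * erfc(q_factor / sqrt(2)), q_factor

# Hypothetical statistics from a noise-loaded back-to-back link.
ber, q_factor = ga_ber(mu1=1.0, mu0=0.1, sig1=0.12, sig0=0.03)
print(f"Q = {q_factor:.1f}, BER = {ber:.2e}")
```

    A Q of 6 corresponds to a BER near 1e-9, the usual benchmark point; the attraction of the GA is that such low error rates can be estimated from a modest number of samples rather than by counting errors directly.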
    Received on August 07, 2007
    References: 06

ISSN 0973-1318