Short Communications
Enhancing Reliability in Backbone Assisted Wireless Sensor Networks
Int J Performability Eng    2015, 11 (5): 503-509.   doi: 10.23940/ijpe.15.5.p503.mag

The initial route discovery, or the final node-to-node association, is an important metric for determining the performance of any routing protocol. Without commenting on the efficiency of existing routing protocols, we develop a method to construct an initial backbone structure that can be used for communications. In a wireless sensor network, the quality-of-service parameters vary with the application domain. Our approach is based on a backbone structure that ensures the robustness of the followed routes by employing a hybrid 'Quasi-MST' algorithm. It also guarantees communication reliability by maintaining an alternate parent list for use when nodes fail due to energy depletion. We analyze the effect of varying transmission ranges and sink positions on the reliability of the network when it is subject to node failures, and we put forward a more robust mechanism to counter route failures.

Received on June 18, 2014; revised on March 15, 2015
References: 8
A Simple Analytic Approximation for Entropy of Student-t Distribution and its Relation with Normal Distribution
Int J Performability Eng    2015, 11 (5): 509-512.   doi: 10.23940/ijpe.15.5.p509.mag

The expression for the Shannon entropy of the Student-t distribution involves the gamma and digamma functions. We propose a simple analytical approximation of this entropy, valid for all degrees of freedom, which preserves the continuity between the normal and Cauchy distributions. A possible application is to define a normal distribution "equivalent" of the Student-t, usable for any number of degrees of freedom (integer or fractional) larger than 7.
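The closed form behind that expression is standard, and a short numerical check (the paper's own approximation is not reproduced here) illustrates the continuity the abstract mentions: the exact Student-t entropy runs from the Cauchy value ln 4π at ν = 1 down toward the standard normal value ½ ln 2πe as the degrees of freedom grow. A minimal sketch, using a finite-difference digamma to stay dependency-free:

```python
import math

def digamma(x, h=1e-5):
    # Central-difference derivative of lgamma; accurate enough for this check.
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def t_entropy(nu):
    """Exact differential entropy (nats) of the standard Student-t with nu d.o.f."""
    log_beta = math.lgamma(nu / 2) + math.lgamma(0.5) - math.lgamma((nu + 1) / 2)
    return ((nu + 1) / 2) * (digamma((nu + 1) / 2) - digamma(nu / 2)) \
        + 0.5 * math.log(nu) + log_beta

print(t_entropy(1))                           # Cauchy limit: ln(4*pi) ~ 2.5310
print(t_entropy(200))                         # already close to the normal value
print(0.5 * math.log(2 * math.pi * math.e))   # N(0,1) entropy ~ 1.4189
```

The paper's proposed approximation would replace `t_entropy` with a gamma/digamma-free expression; the code above only provides the exact reference curve to approximate.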

Received on June 24, 2014, revised on December 15, 2014
References: 10
An Inspection Optimization Model Based on a Three-stage Failure Process
Int J Performability Eng    2014, 10 (7): 775-770.   doi: 10.23940/ijpe.14.7.p775.mag

Inspections are common activities in most preventive maintenance (PM) programs. Models for optimizing the inspection interval using the two-stage delay time have been presented by many researchers. However, the three-stage failure process introduced by Wang is closer to the reality of actual industrial applications. In that work, when the minor defective stage is identified at an inspection, the inspection interval is halved, but whether this measure is optimal is not explained. To address this question, an inspection optimization model is proposed to minimize the expected cost per unit time, with the inspection interval and the shortening proportion of the interval after identifying the minor defective stage as the decision variables. A numerical example is presented to illustrate the applicability of the proposed model.

Received on April 22, 2014, revised on August 7, 2014 and September 7, 2014
References: 7
Bearing Remaining Useful Life Prediction Based on an Improved Back Propagation Neural Network
Int J Performability Eng    2014, 10 (6): 653-657.   doi: 10.23940/ijpe.14.6.p653.mag

Bearings are key components in most rotating machinery, and their failures can lead to catastrophic disasters. The accuracy of remaining useful life (RUL) prediction has a great influence on preventive maintenance activities. RUL prediction based on the standard back propagation neural network (BPNN) already exists. However, training a standard BPNN takes more time and may converge to local optima, which degrades accuracy. Existing BPNN improvements use a dynamic learning rate and a momentum term, or employ genetic algorithms and other random search algorithms to optimize the adjustment of the connection weights in the network. In this paper, an improved BPNN based on the Levenberg-Marquardt algorithm and a momentum term is proposed; it predicts a bearing's RUL with good performance. Finally, simulated bearing life data sets are used to validate the proposed method. The results show that the prediction accuracy of the proposed method is superior to that of other existing BPNNs.
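The Levenberg-Marquardt update the abstract refers to can be illustrated on a toy curve fit; the paper's network, data, and momentum term are not reproduced here, and the model y = a·exp(bx) below is purely illustrative. The step solved each iteration is Δw = (JᵀJ + μI)⁻¹Jᵀr, with μ held fixed for brevity (real LM adapts it per step):

```python
import math

def lm_fit(xs, ys, a, b, mu=1e-3, iters=200):
    """Fit y = a*exp(b*x) by Levenberg-Marquardt: solve (J'J + mu*I) dw = J'r."""
    for _ in range(iters):
        r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]         # residuals
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]  # rows of df/d(a,b)
        A11 = sum(ja * ja for ja, _ in J) + mu   # damped normal matrix J'J + mu*I
        A12 = sum(ja * jb for ja, jb in J)
        A22 = sum(jb * jb for _, jb in J) + mu
        g1 = sum(ja * ri for (ja, _), ri in zip(J, r))                # gradient J'r
        g2 = sum(jb * ri for (_, jb), ri in zip(J, r))
        det = A11 * A22 - A12 * A12
        a += (A22 * g1 - A12 * g2) / det         # 2x2 solve by Cramer's rule
        b += (A11 * g2 - A12 * g1) / det
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]       # noiseless data from a=2, b=0.5
a_fit, b_fit = lm_fit(xs, ys, a=1.5, b=0.4)      # recovers a~2, b~0.5
```

In the paper's setting the same damped normal-equation step is applied to a network's connection weights rather than to two curve parameters.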

Received on March 29, 2014; revised on June 7, 2014
References: 10
Control Charts with Runs Rules for Poisson Process Data
Int J Performability Eng    2014, 10 (6): 659-661.   doi: 10.23940/ijpe.14.6.p659.mag

We use Markov chains to compare run lengths of Poisson process individuals control charts with and without runs rules. The evidence quantifies the advantage of runs rules for various cost structures z = Ca / Cb, where Ca is the cost of a Type I error and Cb is the cost of a Type II error, and for different shifts from the in-control parameter λ1 to the out-of-control parameter λ2.
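The Markov-chain run-length computation can be sketched for one common supplemental rule — signal on a single point above the action limit, or on two successive points in the warning zone. The limits, rule, and parameter values below are illustrative assumptions, not the paper's:

```python
import math

def pois_cdf(k, lam):
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def arl_with_runs(lam, ucl, wcl):
    """ARL of an upper one-sided chart signaling on one point above ucl
    or two successive points in the warning zone (wcl, ucl]."""
    p_act = 1.0 - pois_cdf(ucl, lam)                  # immediate signal
    p_warn = pois_cdf(ucl, lam) - pois_cdf(wcl, lam)  # warning zone
    p_ok = 1.0 - p_act - p_warn
    # Two transient Markov states (last point ok / last point in warning):
    #   t0 = 1 + p_ok*t0 + p_warn*tW,   tW = 1 + p_ok*t0
    return (1.0 + p_warn) / (1.0 - p_ok * (1.0 + p_warn))

lam0 = 4.0
ucl = math.ceil(lam0 + 3 * math.sqrt(lam0))  # action limit, here 10
wcl = math.ceil(lam0 + 2 * math.sqrt(lam0))  # warning limit, here 8
plain = 1.0 / (1.0 - pois_cdf(ucl, lam0))    # in-control ARL without the rule
runs = arl_with_runs(lam0, ucl, wcl)         # shorter: the rule adds false alarms
```

Repeating the calculation at a shifted out-of-control rate shows the runs rule also detects shifts sooner; weighing the shorter in-control ARL against the faster detection is where the cost ratio z enters.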

Received on April 7, 2014; revised on June 20, 2014
References: 7
Use of Minimax Probability Machine Regression for Modelling of Settlement of Shallow Foundations on Cohesionless Soil
Int J Performability Eng    2014, 10 (3): 325-328.   doi: 10.23940/ijpe.14.3.p325.mag

This article examines the performance of Minimax Probability Machine Regression (MPMR) for predicting the settlement of shallow foundations on cohesionless soil. MPMR maximizes the minimum probability that future predicted outputs of the regression model will be within some bound of the true regression function. Width of footing (B), net applied pressure (q), average Standard Penetration Test (SPT) blow count (N), length (L), and embedment depth (Df) are adopted as inputs of the MPMR. A sensitivity analysis has been carried out to determine the effect of each input. The results of MPMR are compared with those of an Artificial Neural Network (ANN).

Received on August 8, 2013; revised on November 8, 2013
References: 19
Multi-Performance Optimization for MAS Based Grid Computing
Int J Performability Eng    2014, 10 (2): 226-229.   doi: 10.23940/ijpe.14.2.p226.mag

The challenge of multi-performance optimization has been extensively addressed in the literature on the basis of deterministic parameters. In Grid Computing platforms, since resources are geographically separated and heterogeneous, it is difficult to apply a uniform distribution algorithm to achieve various optimization goals. This paper proposes a multi-agent system (MAS) based approach for optimal network resource distribution that satisfies the requirements of both users and service providers. Agent communication is also discussed, and a simulation is described.

Received on August 22, 2013; revised on September 13, 2013
References: 3
Reliability of Circular Consecutively Connected Systems
Int J Performability Eng    2013, 9 (4): 462-464.   doi: 10.23940/ijpe.13.4.p462.mag

This article considers a Circular Consecutively Connected System (CCCS) consisting of N ordered nodes connected in a circle, which fails if any two nodes are disconnected. Previous studies on the reliability of CCCS have mainly assumed that the connection between any pair of nodes is unidirectional. In this article, a Universal Generating Function (UGF) method is proposed to evaluate the reliability of CCCS where the connection between any pair of nodes is bidirectional. An example is presented to illustrate the application of the method.
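The UGF machinery itself is generic and easy to sketch: a u-function maps performance levels to probabilities, and elements are combined pairwise through a structure operator. The two-element, binary-state example below is illustrative and much simpler than the paper's bidirectional CCCS construction:

```python
def ugf_combine(u1, u2, op):
    """Compose two u-functions {performance: probability} under structure operator op."""
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = op(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
    return out

# Two connection elements, each failed (0) or working (1), independent:
u1 = {0: 0.1, 1: 0.9}
u2 = {0: 0.2, 1: 0.8}
series = ugf_combine(u1, u2, min)  # a two-hop path works only if both hops do
reliability = series[1]            # 0.9 * 0.8 = 0.72
```

Multi-state elements simply carry more (performance, probability) pairs, and different operators (min, max, sum) encode different structures; the paper's contribution lies in the operators needed for bidirectional circular connectivity.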

Received on March 15, 2013; revised on May 1, 2013
References: 6
Performance Evaluation of Node Eviction Schemes in Inter-Vehicle Communication
Int J Performability Eng    2013, 9 (3): 345-351.   doi: 10.23940/ijpe.13.3.p345.mag

This paper assesses the performance of node eviction schemes in vehicular networking. To secure inter-vehicle communication, a misbehaving node's certificate must be revoked to stop it from injecting messages into the network. The evaluation metrics trade off speed (the time taken to remove the node) against accuracy (the separation of bad nodes from good ones). Among the various factors affecting a scheme's performance, the model focuses on the percentage of attacker-controlled nodes. The model abstracts the process of node eviction in order to evaluate a variety of node eviction schemes in vehicular ad-hoc networks (VANETs) for safety-critical services. The novel approach of specifying two subnets, without labeling them Bad or Good, increases the flexibility of the modeling. The study reveals the potential of exploring a new class of node eviction schemes.

Received on October 24, 2012; revised on November 14, 2012
References: 14
Standardization of the Logistic Distribution based on Entropy
Int J Performability Eng    2013, 9 (3): 352-354.   doi: 10.23940/ijpe.13.3.p352.mag

A common way to define an acceptable equivalence between a normal and a logistic distribution is to identify their first two statistical moments. We propose an alternative method based on the equality of their differential entropies, which demonstrates the validity of the usual standardization method.
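The comparison is easy to reproduce: the logistic(0, s) entropy is ln s + 2 and the normal entropy is ½ ln(2πeσ²), so entropy matching gives σ = s·e²/√(2πe) ≈ 1.788 s, close to the moment-matching value σ = sπ/√3 ≈ 1.814 s. A one-screen check:

```python
import math

s = 1.0  # logistic scale parameter
# Differential entropies: logistic(0, s) = ln(s) + 2,  N(0, sigma^2) = 0.5*ln(2*pi*e*sigma^2).
sigma_entropy = s * math.exp(2) / math.sqrt(2 * math.pi * math.e)  # entropy matching
sigma_moments = s * math.pi / math.sqrt(3)                         # variance matching
print(sigma_entropy, sigma_moments)  # ~1.7879 vs ~1.8138: about 1.4% apart
```

The two standardizations agree to within about 1.4%, which is the sense in which the entropy criterion validates the usual moment-based one.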

Received on January 4, 2013, revised on February 10, 2013
References: 4
Reliability of 1-out-of-(n+1) Warm Standby Systems Subject to Fault Level Coverage
Int J Performability Eng    2013, 9 (1): 117-120.   doi: 10.23940/ijpe.13.1.p117.mag

Warm standby sparing (WSP) is a commonly used fault tolerance technique that trades off system energy consumption against recovery time. Imperfect fault coverage is an important factor that can restrict the reliability of a fault-tolerant system. In this paper, a generalized binary decision diagram (BDD) based approach is presented to evaluate the reliability of a 1-out-of-(n+1) warm standby system subject to fault level coverage. Examples are presented to illustrate the application of the proposed method.

Received on October 31, 2012, and revised on November 7, 2012
References: 6
Optimal Distribution of Software Testing Time Considering Multiple Releases
Int J Performability Eng    2012, 8 (6): 705-707.   doi: 10.23940/ijpe.12.6.p705.mag

This paper considers a software development scenario where a software development team develops, tests and releases software version by version. A modeling framework is proposed to study the expected number of remaining faults in each version. The optimal development time and testing time for each version are also studied.

Received on June 13, 2012, revised on July 19, 2012
References: 6
Optimal Design for Accelerated Life Testing with Simple Step-Stress Plans
Int J Performability Eng    2012, 8 (5): 573-577.   doi: 10.23940/ijpe.12.5.p573.mag

This paper presents the optimal design for accelerated life testing (ALT) experiments when step-stress plans with Type I censoring are performed. We adopt a generalized Khamis-Higgins model for the effect of changing stress levels. It is assumed that the lifetime of a test unit follows a Weibull distribution, and both its shape and scale parameters are functions of the stress level. The optimal plan chooses the stress changing time to minimize the asymptotic variance (AVAR) of the Maximum Likelihood Estimator (MLE) of reliability at the use stress level and at a pre-specified time.

Received on March 14, 2012, revised on April 22, 2012
References: 9
Quality and Replication of Microarray Studies
Int J Performability Eng    2012, 8 (5): 578-582.   doi: 10.23940/ijpe.12.5.p578.mag

Quality assessment of DNA microarrays uses different spot parameters that contain complete information to describe each microarray and detect corrupted spots. Images obtained through replication should result in improved quality as measured according to parameters. We propose methods to determine the number of replicates required to achieve a certain level of quality, and present an application to the parameter known as Background.

Received on May 11, 2010, revised on July 26 and August 20, 2010, and June 30, 2012
References: 28
A Novel Importance Measure for External Factors Based on System Performance
Int J Performability Eng    2012, 8 (4): 447-450.   doi: 10.23940/ijpe.12.4.p447.mag

Importance measures and analyses have been used to identify weak components in order to prioritize system upgrading activities, maintenance activities, etc. Traditionally, importance measures do not consider possible effects of the external environment and phenomena, which, however, can cause system failures and should therefore be taken into consideration. This paper proposes a novel importance measure for multi-state systems that takes external factors into consideration, and the proposed importance analysis can effectively quantify the effect of the state of an external factor on component and system performance.

Received on September 30, 2011; revised on January 4, 2012 and January 20, 2012
References: 6
Reliability Model of Tracking, Telemetry, Command and Communication System using Markov Approach
Int J Performability Eng    2012, 8 (4): 451-456.   doi: 10.23940/ijpe.12.4.p451.mag

For the reliability analysis of tracking, telemetry and command (TT&C) and communication systems, most existing modeling methods can only deal with general TT&C and communication tasks. In this paper, a formal description of TT&C and communication task is given to facilitate the reliability modeling of such systems. A continuous-time Markov chain (CTMC) model is built for an idle task arc. A model for TT&C and communication tasks in consecutive flight cycles is proposed, in which the tasks are combined to a new complicated one. Examples with numerical results show the effectiveness of the proposed approach.

Received on January 11, 2012; revised on May 1, 2012
References: 12
A Theoretically Appropriate Poisson Process Monitor
Int J Performability Eng    2012, 8 (4): 457-461.   doi: 10.23940/ijpe.12.4.p457.mag

Because the probability of Type I error is not evenly distributed beyond the upper and lower three-sigma limits, the c chart is theoretically inappropriate as a monitor of Poisson distributed phenomena. Furthermore, the normal approximation to the Poisson is of little use when c is small. These practical and theoretical concerns should motivate the computation of the true error rates associated with individuals control, assuming the Poisson distribution.
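The asymmetry is easy to exhibit numerically. With c̄ = 4 (an illustrative value), the lower three-sigma limit falls below zero, so the entire Type I risk sits in the upper tail rather than splitting 0.00135/0.00135 as the normal theory assumes:

```python
import math

def pois_cdf(k, lam):
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

c_bar = 4.0
lcl = c_bar - 3 * math.sqrt(c_bar)  # -2: the lower limit is below zero
ucl = c_bar + 3 * math.sqrt(c_bar)  # 10
lower = 0.0 if lcl < 0 else pois_cdf(math.ceil(lcl) - 1, c_bar)
upper = 1.0 - pois_cdf(math.floor(ucl), c_bar)
print(lower, upper)  # 0.0 and ~0.0028: all Type I risk sits in the upper tail
```

Computing these exact tail probabilities instead of relying on symmetric three-sigma limits is the kind of "true error rate" calculation the abstract calls for.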

Reliability of Wireless Sensor Networks with Tree Topology
Int J Performability Eng    2012, 8 (2): 213-216.   doi: 10.23940/ijpe.12.2.p213.mag

This paper models and analyzes the infrastructure communication reliability of wireless sensor networks (WSN) with tree topology. Reliability metrics are developed for WSN under five different data delivery models, including sink unicast, anycast, multicast, manycast, and broadcast. An example of WSN with tree topology is analyzed to illustrate the application of the proposed reliability metrics. Reliability results for the five data delivery models are compared and discussed.

Received on September 28, 2011, revised on November 30, 2011
References: 6
Definition of Multi-state Weighted k-out-of-n: F Systems
Int J Performability Eng    2012, 8 (2): 217-219.   doi: 10.23940/ijpe.12.2.p217.mag

The Multi-state Weighted k-out-of-n System model is a generalization of the Multi-state k-out-of-n System model, which finds wide application in industry. However, only Multi-state Weighted k-out-of-n: G System models have been defined and studied in recent research works. The mirror image of the Multi-state Weighted k-out-of-n: G System, the Multi-state Weighted k-out-of-n: F System, has not been clearly defined and discussed. In this short communication, a basic definition of the Multi-state Weighted k-out-of-n: F System model is proposed. The relationship between the two models is also analyzed.

Received on September 29, 2011, revised on December 5, 2011
References: 7
A Diversity Monitor with Known Errors for Process Variability Observed in Categorical Data
Int J Performability Eng    2011, 7 (4): 397-399.   doi: 10.23940/ijpe.11.4.p397.mag

In this article we extend quality control research on nominal and ordinal data from simple monitors of location to monitors of variability. Given ordinal data, traditional process control relies on demerit systems that explicitly monitor central location but not the spread of the distribution. Here, spread is quantified in terms borrowed from ecology: an established index of diversity and its standard error are the basis for a new quality control chart, which we have also assessed with respect to error rates.
Received on November 19, 2010; revised on March 15 and April 6, 2011
References: 9
Recovering Lagging Replicas in a Fault Tolerant System
Int J Performability Eng    2011, 7 (2): 195-197.   doi: 10.23940/ijpe.11.2.p195.mag

In this paper, we discuss an often-ignored but very important issue: how to recover slow replicas quickly in a fault tolerant system. Even though replicas are deployed on identically equipped computing nodes, under heavy load some replicas lag behind for various reasons. Quickly recovering slow replicas is important because failing to do so can result in reduced throughput, high jitter in end-to-end latency, and a reduced replication degree.
Received on July 14, 2010, revised on November 10, 2010
References: 5

Invariants in Hierarchical-System Optimization for Reliability and Maintainability
Int J Performability Eng    2011, 7 (2): 198-200.   doi: 10.23940/ijpe.11.2.p198.mag

The “invariants” in a process are the non-changing parts. In this paper, invariants in determining the redundancy allocation to optimize system reliability and maintainability are exploited. This article demonstrates how recognizing the computational invariants can lead to efficient system assessments.
Received on August 9, 2010, revised on October 11, 2010
References: 5

Reed–Solomon Code based Green & Survivable Communications Using Selective Encryption
Int J Performability Eng    2010, 6 (3): 297-299.   doi: 10.23940/ijpe.10.3.p297.mag

Reliability and security are two major criteria for survivable communications in error-prone wireless environments. To ensure reliable communications, Forward Error Correcting (FEC) codes such as Reed-Solomon (RS) codes are employed for error detection and correction by adding redundancies into the original data to form code words. Secure data communications based on FEC are achieved in many traditional approaches by encrypting the whole code words, which is not computationally or energy efficient. In this paper, we propose a new selective encryption approach based on FEC code words to effectively sustain both green and survivable communications in wireless networking systems.
Received on November 10, 2009; revised March 09, 2010
References: 6
Reliability Analysis for Multiple Dependent Failure Processes: An MEMS Application
Int J Performability Eng    2010, 6 (1): 100-102.   doi: 10.23940/ijpe.10.1.p100.mag

Widespread acceptance of micro-electro-mechanical systems (MEMS) depends highly on their reliability, both for large-volume commercialization and for critical applications. The problem of multiple dependent failure processes is of particular interest to MEMS researchers. For MEMS devices subjected to both wear degradation and random shocks that are dependent and competing, we propose a new reliability model based on the combination of random-shock and degradation modeling. The models developed in this research can be applied directly or customized for most current and evolving MEMS designs with multiple dependent failure processes.

Received on March 23, 2009, revised on May 26, 2009
References: 10

Evaluating Network Robustness based on Failure Event Possibility
Int J Performability Eng    2009, 5 (4): 387-392.   doi: 10.23940/ijpe.09.4.p387.mag

The robustness of a network is its ability to maintain a satisfactory performance level in the presence of endogenous random failures as well as possible failures caused by external attacks. A new approach for determining network robustness is presented, based on the difference between the possibilistic and probabilistic network dependability estimates. Both estimates are derived here using a simple approximation method proposed by von Collani [9], but with different operations for the possibility estimate in some system structures. The proposed robustness estimation method is demonstrated on a sample of network architectures.
Received on July 04, 2008, revised March 1, 2009
References: 11

Optimizing Sensor Count in Layered Wireless Sensor Networks
Int J Performability Eng    2009, 5 (3): 296-298.   doi: 10.23940/ijpe.09.3.p296.mag

Due to severely constrained resources, sensor nodes are subject to frequent failures. Therefore, wireless sensor networks (WSN) are typically designed with a large number of redundancies to achieve fault tolerance and to maintain the desired network lifetime and coverage. This work proposes an equation to determine the optimal number of redundant sensor nodes required in each layer of a WSN with the layered structure. Matlab simulations are used to verify the proposed equation.
Received on May 16, 2008, revised on December 30, 2008
References: 1
Parametric Uncertainty Analysis of Complex System Reliability
Peng Wang and Tongdan Jin
Int J Performability Eng    2009, 5 (2): 197-199.   doi: 10.23940/ijpe.09.2.p197.mag

This paper studies the uncertainties of component reliability parameters and their impact on system lifetime distribution. Monte Carlo simulation was applied to investigate the correlation between the system complexity and its Weibull shape parameter when component reliability parameters are estimated with uncertainties. Results show the system lifetime approaches the exponential distribution when the number of components becomes large.
Received on August 1, 2008, revised on October 21, 2008
References: 6
A Simple Discrete Reliability Growth Model and its Application in Project Selection
Rong Pan
Int J Performability Eng    2008, 4 (3): 293-295.   doi: 10.23940/ijpe.08.3.p293.mag

Most existing reliability growth models ignore the reliability test/improvement process, in which engineers identify distinct failure modes through tests and redesign the product or system to remove them. In this paper, we present a discrete reliability growth model similar to Crow's projection and extended reliability growth models, but without the doubtful assumptions implied by those models. We demonstrate the use of our model in the selection of reliability improvement projects during system redesign.
Received on March 13, 2008
References: 2
Performance Comparison for APRZ on Strongly and Weakly Managed Dispersion Maps for 40 Gb/s WDM Transmission
Abhijeet Shirgurkar, Qun Zhang, and M. I. Hayee
Int J Performability Eng    2008, 4 (2): 193-195.   doi: 10.23940/ijpe.08.2.p193.mag

In this short communication, we explore and compare the performance of the Alternate Phase Return-to-Zero (APRZ) modulation format on both strongly and weakly managed dispersion maps with varying path average dispersion values. Our findings show that, as opposed to 0 or 180 deg APRZ, 90 deg APRZ is more efficient on both strongly and weakly managed dispersion maps in minimizing Intra-Channel Four-Wave Mixing (IFWM) for reliable 40 Gb/s transmission.
Received on October 09, 2007
References: 3
Reliability Assessment Using a Likelihood Ratio Test
Huairui Guo and Adam Mettas
Int J Performability Eng    2008, 4 (2): 196-198.   doi: 10.23940/ijpe.08.2.p196.mag

One-way ANOVA (analysis of variance) is widely used in quality engineering to compare quality characteristics. The basic assumption in applying ANOVA is that the response is normally distributed. However, in life tests the times to failure usually do not satisfy this assumption. In this paper, a method similar to regular one-way ANOVA is proposed for reliability assessment. A generalized linear model together with a likelihood ratio test is developed. The proposed method can be used to compare the reliability of different designs; it can also be applied to study whether a factor has an effect on product life.
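The paper's generalized linear model is not reproduced here, but the shape of such a test can be sketched with exponential lifetimes: compare the pooled-mean fit against separate means via twice the log-likelihood difference, referred to a chi-square distribution with one degree of freedom. Data and sample sizes below are illustrative:

```python
import math

def lrt_exponential(t1, t2):
    """2*log-likelihood ratio for H0 'common exponential mean' vs separate means.
    Refer to chi-square with 1 d.o.f.; large values favor different reliabilities."""
    n1, n2 = len(t1), len(t2)
    m1, m2 = sum(t1) / n1, sum(t2) / n2
    m0 = (sum(t1) + sum(t2)) / (n1 + n2)
    return 2.0 * ((n1 + n2) * math.log(m0) - n1 * math.log(m1) - n2 * math.log(m2))

design_a = [10, 12, 9, 11] * 3           # mean life 10.5
design_b = [11, 10, 12, 9] * 3           # also mean 10.5
design_c = [31, 30, 29, 34] * 3          # mean 31
stat_same = lrt_exponential(design_a, design_b)  # ~0: no evidence of a difference
stat_diff = lrt_exponential(design_a, design_c)  # ~6.7, above the 5% cutoff 3.84
```

The paper's GLM formulation plays the same role for more general lifetime distributions and factor structures than this two-sample exponential case.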
Received on November 05, 2007
References: 4
A Game Theoretical View of Byzantine Fault Tolerance Design
Wenbing Zhao
Int J Performability Eng    2007, 3 (4): 498-500.   doi: 10.23940/ijpe.07.4.p498.mag

In this paper, we investigate optimal Byzantine fault tolerance (BFT) design strategies from a game theoretical point of view. The problem of BFT is formulated as a constant-sum game played by the BFT system (defender) and its adversary (attacker). The defender resorts to replication to ensure high reliability and availability, while the attacker injects faults into the defender with the purpose of reducing the system's reliability and/or availability. We examine current BFT solutions and propose a number of improvements based on our game theoretical study.
Received on June 22, 2007
References: 7
On the Use of Gaussian Approximation for Reliable Performance Evaluation in Optical DPSK Systems
Qun Zhang and Han-Way Huang
Int J Performability Eng    2007, 3 (4): 501-503.   doi: 10.23940/ijpe.07.4.p501.mag

In this paper, we propose to extend the Gaussian approximation (GA) method for reliable system performance evaluation from traditional optical on-off keying (OOK) systems to the emerging optical differential phase shift keying (DPSK) systems. The proposed method can be used to guide efficient numerical estimation as well as experimental measurement of noise-loaded back-to-back DPSK system performance when inter-symbol interference (ISI) is not significant.
Received on August 07, 2007
References: 6
Updating Time for Dependable Secure Computing Systems
Li Bai, Saroj Biswas, and Musoke Sendaula
Int J Performability Eng    2007, 3 (3): 379-381.   doi: 10.23940/ijpe.07.3.p379.mag

In this paper, we investigate an important and interesting problem in dependable secure computing: determining the optimal time at which the secret shares should be updated in a (k, n) threshold-based secret sharing system with proactive secret sharing (PSS) capability. In an earlier survivability study for a reconfigurable system, we developed a new definition for survivability assessment. We extend this definition to the survivability of the dependable secure computing system. From the survivability assessment perspective, we can then easily determine an appropriate updating time for safeguarding secret information on the system.
Received on April 12, 2007
References: 4
Approximation of Mean Time Between Failures with Maintenance
Wendai Wang, Michael Dell'Anno, and Carl Zeh
Int J Performability Eng    2007, 3 (3): 382-384.   doi: 10.23940/ijpe.07.3.p382.mag

Mean Time Between Failures (MTBF) is a commonly used metric for the reliability of a repairable item. For items with an increasing failure rate (wear-out failures), periodic maintenance is often performed to improve operational reliability, i.e., to increase the operational MTBF. This paper develops a very simple but highly accurate approximation of the MTBF of items subjected to periodic maintenance, with which engineers can easily do a quick calculation and perform design-for-reliability analyses.
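The authors' approximation itself is not given in the abstract, but the quantity being approximated has a standard renewal expression: under perfect periodic maintenance every T hours, MTBF = ∫₀ᵀ R(t) dt / F(T). A numerical sketch for an illustrative Weibull wear-out item:

```python
import math

def weibull_R(t, beta, eta):
    return math.exp(-((t / eta) ** beta))

def mtbf_with_pm(T, beta, eta, steps=10000):
    """Operational MTBF under perfect maintenance every T hours:
    MTBF = integral_0^T R(t) dt / F(T), here by midpoint integration."""
    dt = T / steps
    integral = sum(weibull_R((i + 0.5) * dt, beta, eta) * dt for i in range(steps))
    return integral / (1.0 - weibull_R(T, beta, eta))

beta, eta = 2.0, 1000.0                        # wear-out (increasing failure rate)
raw_mtbf = eta * math.gamma(1.0 + 1.0 / beta)  # ~886 h with no maintenance
pm_mtbf = mtbf_with_pm(500.0, beta, eta)       # ~2085 h with PM every 500 h
```

A closed-form approximation such as the paper's would replace the numerical integral, giving engineers the quick hand calculation the abstract describes.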
Received on April 17, 2007
References: 4
A Hierarchical Availability Analysis of Multi-tiered Web Applications
Jijun Lu and Swapna S. Gokhale
Int J Performability Eng    2007, 3 (3): 385-387.   doi: 10.23940/ijpe.07.3.p385.mag

We propose a hierarchical availability analysis methodology for multi-tiered Web applications. The methodology partitions the analysis into three levels, namely, server, request and session, and considers only the relevant factors at each level. The levels are connected using a hierarchical approach; the results obtained from one level are propagated for use in the analysis at the next one. The methodology thus decouples the different factors that influence availability and yet provides an integrated framework to consider them simultaneously.
Received on May 8, 2007
References: 7
Steady-State Availability and MTBF of Systems Subjected to Suspended Animation
Int J Performability Eng    2007, 3 (2): 282-284.   doi: 10.23940/ijpe.07.2.p282.mag

In most practical cases, during a system failure or downtime, all non-failed components are kept idle. This phenomenon is known as suspended animation (SA). In this paper, we provide a simple and efficient method to compute the availability indices of repairable systems subjected to suspended animation. An important aspect of the proposed method is that it is not restricted to exponential failure and repair distributions. Further, the proposed method can be applied to any system configuration with embedded hierarchical k-out-of-n subsystems subjected to suspended animation.
Received on December 27, 2006
References: 4
Component Replacement Analysis for Electricity Transmission and Distribution Systems with Heterogeneous Assets Subject to Annual Budget Constraints
Jose F. Espiritu and David W. Coit
Int J Performability Eng    2007, 3 (2): 288-290.   doi: 10.23940/ijpe.07.2.p288.mag

A component replacement methodology for electricity transmission and distribution systems was developed to solve equipment replacement problems for systems composed of sets of heterogeneous assets subject to annual budgetary constraints over a finite planning horizon. The proposed methodology is based on an integrated dynamic and integer programming approach. First, a dynamic programming algorithm is solved for each individual component in the system. Then, two different integer programming models are applied: the first checks whether a feasible system-level solution can be obtained and identifies infeasibilities in the original problem; the second finds the system-level replacement schedule with the minimum cost. The method can potentially be applied to any replacement problem involving sets of heterogeneous assets subject to system-level constraints. In this work, however, it is demonstrated on the replacement analysis of a common electricity distribution system configuration. The objective is to obtain the minimum-cost policy, such that the net present value of maintenance, purchase, and opportunity costs is minimized.
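The per-component dynamic programming stage of such a methodology can be sketched for a single asset. The cost figures, state definition, and horizon below are illustrative inventions, not the paper's data:

```python
def replace_dp(horizon, age0, operating_cost, purchase, salvage, max_age):
    """Keep-vs-replace dynamic program for one asset over a finite horizon.
    State: asset age at the start of a year. Returns the minimum total cost."""
    memo = {}

    def V(year, age):
        if year == horizon:
            return -salvage(age)                  # sell whatever we hold at the end
        if (year, age) in memo:
            return memo[(year, age)]
        # Replace: sell the old unit, buy new, run the new unit for one year.
        replace = purchase - salvage(age) + operating_cost(0) + V(year + 1, 1)
        if age < max_age:
            keep = operating_cost(age) + V(year + 1, age + 1)
            v = min(keep, replace)
        else:
            v = replace                           # forced replacement at max age
        memo[(year, age)] = v
        return v

    return V(0, age0)

best_cost = replace_dp(
    horizon=5, age0=2,
    operating_cost=lambda a: 10 + 8 * a,          # maintenance grows with age
    purchase=60,
    salvage=lambda a: max(30 - 10 * a, 0),
    max_age=4,
)
```

The integer programming stages would then coordinate many such single-asset schedules under the shared annual budget, which is where the feasibility check and the minimum-cost coordination come in.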
Received on January 29, 2007
References: 12
