Initial route discovery, or the final node-to-node association, is an important metric for determining the performance of any routing protocol. Without commenting on the efficiency of existing routing protocols, we develop a method to construct an initial backbone structure that can be used for communication. Quality-of-service requirements vary with the application domain of a wireless sensor network. Our approach is based on a backbone structure that ensures the robustness of the routes followed by employing a hybrid algorithm, ‘Quasi-MST’. It also guarantees communication reliability by maintaining an alternate parent list to fall back on when nodes fail due to energy depletion. We analyze the effect of varying transmission ranges and sink positions on the reliability of the network when it is subject to node failures, and we put forward a more robust mechanism to counter route failures.
The gamma and digamma functions appear in the expression for the Shannon entropy of the Student-t distribution. We propose a simple analytical approximation of this entropy for all degrees of freedom, one that preserves the continuity between the normal and Cauchy limits. A possible application is to define a normal-distribution “equivalent” of the Student-t, usable for any number of degrees of freedom (integer or fractional) larger than 7.
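For reference, the entropy being approximated is the standard expression (quoted here for context, not reproduced from the paper):

\[
h(\nu) \;=\; \frac{\nu+1}{2}\left[\psi\!\left(\frac{\nu+1}{2}\right)-\psi\!\left(\frac{\nu}{2}\right)\right] \;+\; \ln\!\left[\sqrt{\nu}\,B\!\left(\frac{\nu}{2},\frac{1}{2}\right)\right],
\]

where ψ is the digamma function and B the beta function; at ν = 1 this reduces to ln(4π), the Cauchy entropy, and as ν → ∞ it tends to ½ ln(2πe), the entropy of the standard normal.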
Inspections are common activities in most preventive maintenance (PM) programs. Models that optimize the inspection interval using the two-stage delay-time concept have been presented by many researchers. However, the three-stage failure process introduced by Wang is closer to the reality of actual industrial applications. When the minor defective stage is identified at an inspection, the inspection interval is halved, but whether this measure is optimal has not been examined. To address this question, an inspection optimization model is proposed to minimize the expected cost per unit time, with the inspection interval and the shortening proportion applied to it after the minor defective stage is identified as the decision variables. A numerical example is presented to illustrate the applicability of the proposed model.
Bearings are key components in most rotating machinery, and their failures can lead to catastrophic disasters. The accuracy of remaining useful life (RUL) prediction has a great influence on preventive maintenance activities. RUL prediction based on the standard back-propagation neural network (BPNN) already exists; however, training a standard BPNN takes more time and may converge to local optima, which can adversely affect accuracy. Existing work on improving BPNN has used dynamic learning rates and momentum terms, or has employed genetic algorithms and other random search algorithms, to optimize the adjustment of the connection weights in the network. In this paper, an improved BPNN based on the Levenberg-Marquardt algorithm and a momentum term is proposed; it predicts bearing RUL with good performance. Finally, simulated bearing life data sets are used to validate the proposed method. The results show that the prediction accuracy of the proposed method is superior to that of other existing BPNNs.
We use Markov chains to compare the run lengths of individuals control charts for Poisson processes with and without runs rules. The evidence quantifies the advantage of runs rules for certain cost structures z = Ca / Cb, where Ca is the cost of a Type I error and Cb is the cost of a Type II error, and for different shifts from the in-control parameter λ1 to the out-of-control parameter λ2.
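As a brief illustration of the underlying machinery (the chart states and runs rules themselves are those of the paper and are not reproduced here), the average run length of a chart modeled as an absorbing Markov chain with transient-state transition matrix Q is

\[
\mathrm{ARL} \;=\; \mathbf{e}_0^{\mathsf T}\,(I-Q)^{-1}\,\mathbf{1},
\]

where e0 selects the chart's starting state and 1 is a vector of ones; comparing ARLs evaluated at λ1 and λ2, weighted by Ca and Cb, quantifies the benefit of adding runs rules.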
This article examines the performance of Minimax Probability Machine Regression (MPMR) for predicting the settlement of shallow foundations on cohesionless soil. MPMR maximizes the minimum probability that future predicted outputs of the regression model will be within some bound of the true regression function. Width of footing (B), net applied pressure (q), average Standard Penetration Test (SPT) blow count (N), length (L), and embedment depth (Df) have been adopted as inputs to the MPMR model. A sensitivity analysis has been carried out to determine the effect of each input. The results of MPMR have been compared with those of an Artificial Neural Network (ANN).
The challenge of multi-performance optimization has been extensively addressed in the literature on the basis of deterministic parameters. In Grid Computing platforms, where resources are geographically separated and heterogeneous, it is difficult to apply a uniform distribution algorithm to achieve the various optimization goals. This paper proposes a multi-agent system (MAS) based approach for optimal network resource distribution that satisfies the requirements of both users and service providers. Agent communication is also discussed, and a simulation study is described.
This article considers a Circular Consecutively Connected System (CCCS) consisting of N ordered nodes connected in a circle, which fails if any two nodes are disconnected. Previous studies on the reliability of CCCS have mainly assumed that the connection between any pair of nodes is unidirectional. In this article, a Universal Generating Function (UGF) method is proposed to evaluate the reliability of CCCS where the connection between any pair of nodes is bidirectional. An example is presented to illustrate the application of the method.
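For readers unfamiliar with the technique, a UGF represents each element j by a polynomial in an auxiliary variable z (generic definition; the paper's specific operators for the bidirectional circular structure are not shown):

\[
u_j(z) \;=\; \sum_{i=1}^{k_j} p_{j,i}\, z^{g_{j,i}},
\]

where p_{j,i} is the probability that element j is in state i and g_{j,i} is the corresponding performance (here, connection) level; the system UGF is obtained by combining element UGFs with composition operators that encode the connection logic.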
This paper assesses the performance of node eviction schemes in vehicular networking. To secure inter-vehicle communication, a misbehaving node's certificate must be revoked to stop it from injecting messages into the network. The evaluation metrics trade off speed (the time taken to remove the node) against accuracy (the separation of bad nodes from good ones). Among the various factors affecting a scheme's performance, the model focuses on the percentage of attacker-controlled nodes. The model abstracts the process of node eviction in order to evaluate a variety of node eviction schemes in vehicular ad-hoc networks (VANETs) for safety-critical services. The novel approach of specifying two subnets, without labeling them Bad or Good, increases the flexibility of the modeling. The study reveals the potential of a new class of node eviction schemes.
To define an acceptable equivalence between a normal and a logistic distribution, the common standardization is to match their first two statistical moments. We propose an alternative method based on equating their differential entropies, which demonstrates the validity of the usual standardization method.
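A brief sketch of the comparison using the textbook differential entropies (the detailed treatment is in the paper): a normal with standard deviation σ has h = ½ ln(2πeσ²), and a logistic with scale s has h = ln s + 2. Equating the two gives

\[
\tfrac{1}{2}\ln\!\big(2\pi e\,\sigma^{2}\big) = \ln s + 2
\;\;\Longrightarrow\;\;
\sigma = \frac{e^{3/2}}{\sqrt{2\pi}}\, s \approx 1.788\, s,
\]

which is close to the moment-matching value σ = πs/√3 ≈ 1.814 s, consistent with the claim that entropy matching supports the usual standardization.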
Warm standby sparing (WSP) is a commonly used fault-tolerance technique that trades off system energy consumption against recovery time. Imperfect fault coverage is an important factor that can restrict the reliability of a fault-tolerant system. In this paper, a generalized binary decision diagram (BDD) based approach is presented for evaluating the reliability of a 1-out-of-(n+1) warm standby system subject to fault-level coverage. Examples are presented to illustrate the application of the proposed method.
This paper considers a software development scenario where a software development team develops, tests and releases software version by version. A modeling framework is proposed to study the expected number of remaining faults in each version. The optimal development time and testing time for each version are also studied.
This paper presents the optimal design for accelerated life testing (ALT) experiments when step-stress plans with Type I censoring are performed. We adopt a generalized Khamis-Higgins model for the effect of changing stress levels. It is assumed that the lifetime of a test unit follows a Weibull distribution, and both its shape and scale parameters are functions of the stress level. The optimal plan chooses the stress changing time to minimize the asymptotic variance (AVAR) of the Maximum Likelihood Estimator (MLE) of reliability at the use stress level and at a pre-specified time.
Quality assessment of DNA microarrays uses various spot parameters that contain the information needed to describe each microarray and to detect corrupted spots. Images obtained through replication should show improved quality as measured by these parameters. We propose methods to determine the number of replicates required to achieve a given level of quality, and present an application to the parameter known as Background.
Importance measures and importance analysis have been used to identify weak components in order to prioritize system upgrading activities, maintenance activities, and so on. Traditionally, importance measures do not consider possible effects of the external environment and external phenomena, which can nevertheless cause system failures and should therefore be taken into consideration. This paper proposes a novel importance measure for multi-state systems that accounts for external factors; the proposed importance analysis can effectively quantify the effect of the state of an external factor on component and system performance.
For the reliability analysis of tracking, telemetry and command (TT&C) and communication systems, most existing modeling methods can only deal with general TT&C and communication tasks. In this paper, a formal description of TT&C and communication tasks is given to facilitate the reliability modeling of such systems. A continuous-time Markov chain (CTMC) model is built for an idle task arc. A model for TT&C and communication tasks in consecutive flight cycles is proposed, in which the tasks are combined into a single composite task. Examples with numerical results show the effectiveness of the proposed approach.
Because the probability of a Type I error is not evenly distributed beyond the upper and lower three-sigma limits, the c chart is theoretically inappropriate as a monitor of Poisson-distributed phenomena. Furthermore, the normal approximation to the Poisson is of little use when c is small. These practical and theoretical concerns should motivate the computation of the true error rates associated with individuals control charts under the Poisson distribution.
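A minimal sketch of such a computation (illustrative only: the in-control mean and the signaling convention are assumed example choices, not the article's):

    # Exact tail probabilities of a c chart with 3-sigma limits under a Poisson model.
    import math
    from scipy.stats import poisson

    c0 = 4.0                                    # assumed in-control mean count
    ucl = c0 + 3 * math.sqrt(c0)                # conventional 3-sigma control limits
    lcl = max(c0 - 3 * math.sqrt(c0), 0.0)

    # Type I error rates: probability of a signal when the mean is still c0.
    alpha_upper = poisson.sf(math.floor(ucl), c0)                        # P(X > UCL)
    alpha_lower = poisson.cdf(math.ceil(lcl) - 1, c0) if lcl > 0 else 0  # P(X < LCL)
    print(f"alpha_upper = {alpha_upper:.5f}, alpha_lower = {alpha_lower:.5f}")

For small c0 the lower limit is negative, so the false-alarm risk falls entirely on the upper side, illustrating the asymmetry noted above.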
This paper models and analyzes the infrastructure communication reliability of wireless sensor networks (WSN) with tree topology. Reliability metrics are developed for WSN under five different data delivery models, including sink unicast, anycast, multicast, manycast, and broadcast. An example of WSN with tree topology is analyzed to illustrate the application of the proposed reliability metrics. Reliability results for the five data delivery models are compared and discussed.
The Multi-state Weighted k-out-of-n System model is a generalization of the Multi-state k-out-of-n System model and finds wide application in industry. However, only Multi-state Weighted k-out-of-n: G System models have been defined and studied in recent research. Its mirror image, the Multi-state Weighted k-out-of-n: F System, has not been clearly defined or discussed. In this short communication, a basic definition of the Multi-state Weighted k-out-of-n: F System model is proposed. The relationship between the Multi-state Weighted k-out-of-n: G System and the Multi-state Weighted k-out-of-n: F System is also analyzed.
In this article we extend quality control research on nominal and ordinal data from simple monitors of location to monitors of variability. For ordinal data, more traditional process control relies on demerit systems that explicitly monitor central location but not the spread of the distribution. Spread is quantified here in terms borrowed from ecology: an established index of diversity and its standard error are the basis for a new quality control chart, which we have also assessed with respect to error rates.
In this paper, we discuss an often-ignored but very important issue: how to recover slow replicas quickly in a fault-tolerant system. Even when the replicas are deployed on identically equipped computing nodes, some replicas may lag behind under heavy load for various reasons. Quickly recovering slow replicas is important because failing to do so can result in reduced throughput, high jitter in end-to-end latency, and a reduced replication degree.
The “invariants” in a process are its non-changing parts. In this paper, invariants in determining the redundancy allocation that optimizes system reliability and maintainability are exploited. The article demonstrates how recognizing these computational invariants can lead to efficient system assessments.
Reliability and security are two major criteria for survivable communications in error-prone wireless environments. To ensure reliable communications, Forward Error Correcting (FEC) codes such as Reed-Solomon (RS) codes are employed for error detection and correction by adding redundancy to the original data to form code words. In many traditional approaches, secure data communication over FEC is achieved by encrypting the whole code words, which is neither computationally nor energy efficient. In this paper, we propose a new selective encryption approach based on FEC code words to effectively sustain both green and survivable communications in wireless networking systems.
Widespread acceptance of micro-electro-mechanical systems (MEMS) depends highly on their reliability, both for large-volume commercialization and for critical applications. The problem of multiple dependent failure processes is of particular interest to MEMS researchers. For MEMS devices subjected to both wear degradation and random shocks that are dependent and competing, we propose a new reliability model based on the combination of random-shock and degradation modeling. The models developed in this research can be applied directly or customized for most current and evolving MEMS designs with multiple dependent failure processes.
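One common formulation of such dependent competing failure processes (a generic sketch in our own notation, not necessarily the exact model developed in the paper) conditions on the number of shocks N(t) arriving by time t:

\[
R(t) \;=\; \sum_{k=0}^{\infty} P\big(N(t)=k\big)\; P\big(W_{1}<D,\dots,W_{k}<D\big)\; P\!\Big(X(t)+\sum_{j=1}^{k} Y_{j} < H\Big),
\]

where X(t) is the continuous wear, Y_j the incremental damage contributed by shock j, W_j the shock magnitude, D the hard-failure threshold, and H the soft-failure (wear) threshold; the two failure modes are dependent because the same shock process enters both.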
The robustness of a network is its ability to maintain a satisfactory performance level in the presence of endogenous random failures as well as possible failures caused by external attacks. A new approach for determining network robustness is presented, based on the difference between the possibilistic and probabilistic estimates of network dependability. Both estimates are derived here using a simple approximation method proposed by von Collani [9], but with different operations for the possibility estimate in some system structures. The proposed robustness estimation method is demonstrated on a sample of network architectures.
Due to severely constrained resources, sensor nodes are subject to frequent failures. Therefore, wireless sensor networks (WSN) are typically designed with a large number of redundant nodes to achieve fault tolerance and to maintain the desired network lifetime and coverage. This work proposes an equation to determine the optimal number of redundant sensor nodes required in each layer of a WSN with a layered structure. MATLAB simulations are used to verify the proposed equation.
This paper studies the uncertainties in component reliability parameters and their impact on the system lifetime distribution. Monte Carlo simulation is applied to investigate the correlation between system complexity and the system's Weibull shape parameter when component reliability parameters are estimated with uncertainty. Results show that the system lifetime approaches the exponential distribution as the number of components becomes large.
Most existing reliability growth models ignore the reliability test/improvement process, in which engineers identify distinct failure modes through tests and redesign the product/system to remove these modes. In this paper, we present a discrete reliability growth model that is similar to Crow's projection and extended reliability growth models but avoids the questionable assumptions those models imply. We demonstrate the use of our model in selecting reliability improvement projects during system redesign.
In this short communication, we explore and compare the performance of the Alternate Phase Return-to-Zero (APRZ) modulation format on both strongly and weakly managed dispersion maps with varying path-average dispersion values. Our findings show that, as opposed to 0 or 180 degree APRZ, 90 degree APRZ is more effective on both strongly and weakly managed dispersion maps in minimizing Intra-Channel Four-Wave Mixing (IFWM) for reliable 40 Gb/s transmission.
One-way ANOVA (analysis of variance) is widely used in quality engineering to compare quality characteristics. The basic assumption in applying ANOVA is that the response is normally distributed; however, in life tests, the times to failure usually do not satisfy this assumption. In this paper, a method analogous to regular one-way ANOVA is proposed for reliability assessment. A generalized linear model together with a likelihood ratio test is developed. The proposed method can be used to compare the reliability of different designs, and it can also be applied to study whether a factor has an effect on product life.
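A hedged sketch of the likelihood-ratio idea on Weibull-distributed lifetimes (an illustration of the general approach only; the parameter values are invented and the paper's generalized linear model formulation is not reproduced here):

    # Likelihood-ratio comparison of Weibull lifetimes across design groups.
    import numpy as np
    from scipy.stats import weibull_min, chi2

    def loglik(data):
        # Fit a two-parameter Weibull (location fixed at 0) and return its log-likelihood.
        c, _, scale = weibull_min.fit(data, floc=0)
        return weibull_min.logpdf(data, c, 0, scale).sum()

    rng = np.random.default_rng(1)
    groups = [weibull_min.rvs(1.5, scale=s, size=30, random_state=rng)
              for s in (100, 120, 95)]                 # assumed example designs

    ll_full = sum(loglik(g) for g in groups)           # separate parameters per group
    ll_pooled = loglik(np.concatenate(groups))         # common parameters for all groups
    lr = 2 * (ll_full - ll_pooled)
    df = 2 * (len(groups) - 1)                         # two extra parameters per extra group
    print("LR =", round(lr, 2), " p-value =", round(chi2.sf(lr, df), 4))

A small p-value indicates that the groups do not share common Weibull parameters, i.e., that the designs differ in reliability.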
In this paper, we investigate optimal Byzantine fault tolerance (BFT) design strategies from a game-theoretic point of view. The problem of BFT is formulated as a constant-sum game played by the BFT system (defender) and its adversary (attacker). The defender resorts to replication to ensure high reliability and availability, while the attacker injects faults into the defender with the purpose of reducing the system's reliability and/or availability. We examine current BFT solutions and propose a number of improvements based on our game-theoretic study.
In this paper, we propose to extend the Gaussian approximation (GA) method for reliable system performance evaluation from traditional optical on-off keying (OOK) systems to the emerging optical differential phase shift keying (DPSK) systems. The proposed method can be used to guide efficient numerical estimation, as well as experimental measurement, of noise-loaded back-to-back DPSK system performance where inter-symbol interference (ISI) is not significant.
In this paper, we investigate an important and interesting problem in dependable secure computing systems: determining the optimal time at which the secret shares should be updated in a (k, n) threshold-based secret sharing system with proactive secret sharing (PSS) capability. In an earlier survivability study for a reconfigurable system, we developed a new definition of survivability assessment. We extend this definition to the survivability of the dependable secure computing system. From the survivability assessment perspective, we can then readily determine an appropriate updating time for safeguarding secret information on the dependable secure computing system.
Mean Time Between Failures (MTBF) is a commonly used metric for the reliability of a repairable item. For items with an increasing failure rate (wear-out failures), periodic maintenance is often performed to improve their operational reliability, that is, to increase the operational MTBF. This paper develops a very simple but highly accurate approximation of the MTBF for items subjected to periodic maintenance, with which engineers can easily perform quick calculations and design-for-reliability analyses.
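For context, under the common idealization of perfect periodic maintenance at interval T (restoration to as-new whenever no failure has occurred), the exact operational MTBF is the renewal-type expression

\[
\mathrm{MTBF}(T) \;=\; \frac{\int_{0}^{T} R(t)\,dt}{1 - R(T)},
\]

where R(t) is the item's reliability function; the paper's contribution is a simple, accurate approximation for quantities of this kind, which is not reproduced here.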
We propose a hierarchical availability analysis methodology for multi-tiered Web applications. The methodology partitions the analysis into three levels, namely server, request, and session, and considers only the relevant factors at each level. The levels are connected using a hierarchical approach; the results obtained from one level are propagated for use in the analysis at the next one. The methodology thus decouples the different factors that influence availability and yet provides an integrated framework to consider them simultaneously.
In most practical cases, during a system failure or downtime, all non-failed components are kept idle. This phenomenon is known as suspended animation (SA). In this paper, we provide a simple and efficient method to compute the availability indices of repairable systems subjected to suspended animation. An important aspect of the proposed method is that it is not restricted to exponential failure and repair distributions. Further, the proposed method can be applied to any system configuration with embedded hierarchical k-out-of-n subsystems subjected to suspended animation.
A component replacement methodology for electricity transmission and distribution systems was developed to solve equipment replacement problems for systems composed of sets of heterogeneous assets, subject to annual budgetary constraints over a finite planning horizon. The proposed methodology is based on an integrated dynamic and integer programming approach. First, a dynamic programming problem is solved for each individual component in the system. Then, two integer programming models are applied: the first checks whether a feasible system-level solution can be obtained and identifies infeasibilities in the original problem, and the second finds the system-level replacement schedule with the minimum cost. The method can potentially be applied to any replacement problem involving sets of heterogeneous assets subject to system-level constraints. In this work, however, the method is demonstrated on the replacement analysis of a common electricity distribution system configuration. The objective is to obtain the policy that minimizes the net present value of maintenance, purchase, and opportunity costs.