International Journal of Performability Engineering, 2006, Vol. 2, No. 2
  • Original articles
    Performance Evaluation of Infrastructure Networks with Multistate Reliability Analysis
    SARINTIP SATITSATIAN and KAILASH C. KAPUR
    2006, 2(2): 103-121.  doi:10.23940/ijpe.06.2.p103.mag

    The purpose of this paper is to present a multistate reliability approach for evaluating the performance of infrastructure networks. Many infrastructure networks can be modeled by the classical two-terminal network reliability problem. We present a relatively efficient algorithm to find a subset of lower boundary points and then use them to develop lower bounds on the reliability of the network. A supply chain network example is given, where the performance of the network is related to the lead time to meet customer demands. We formulate this problem as a two-terminal network reliability problem with multiple states, where higher state values mean shorter lead times. The methodology can be used to evaluate the design of supply chain networks. An example evaluates two supply chain networks and compares their performances using the proposed multistate reliability measures.
    Received on October 12, 2005
    References: 23
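    To make the bounding idea above concrete, here is a minimal sketch (not the authors' algorithm): assuming independent multistate components with known state distributions, each known lower boundary point y for demand level d makes P(X >= y) a valid lower bound on the probability that the system meets level d, and the best single-point bound is their maximum. The boundary points and probabilities below are hypothetical.

    ```python
    # Hedged sketch: a simple lower bound on multistate two-terminal
    # reliability from known lower boundary points (hypothetical data).
    # Each lower boundary point y satisfies: {X >= y componentwise}
    # implies the system meets demand level d, so P(X >= y) is a valid
    # lower bound; we take the best (largest) single-point bound.

    # comp_tail[i][s] = P(X_i >= s); hypothetical three-state components.
    comp_tail = [
        {0: 1.0, 1: 0.90, 2: 0.60},
        {0: 1.0, 1: 0.80, 2: 0.50},
        {0: 1.0, 1: 0.95, 2: 0.70},
    ]

    # Hypothetical lower boundary points for demand level d.
    lower_boundary_points = [(1, 2, 0), (2, 0, 1)]

    def point_bound(y):
        """P(X >= y) for independent components."""
        p = 1.0
        for i, s in enumerate(y):
            p *= comp_tail[i][s]
        return p

    # Best single-point lower bound on P(system state >= d).
    lb = max(point_bound(y) for y in lower_boundary_points)
    print(f"Lower bound on R_d: {lb:.3f}")
    ```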

    Computing Failure Frequency of Noncoherent Systems
    SUPRASAD V. AMARI
    2006, 2(2): 123-133.  doi:10.23940/ijpe.06.2.p123.mag

    Noncoherent systems can model a wide range of practical systems. The non-monotonic property of the structure function makes their analysis difficult. Therefore, despite the general nature of noncoherent systems, there are far fewer publications on them than on coherent systems. Methods for computing the failure frequency of noncoherent systems are particularly limited. In this paper, we present a simple and efficient rule-based method to compute the failure frequency of noncoherent systems. Several examples are provided to demonstrate the simplicity and efficiency of the proposed method. Using the failure frequency of noncoherent systems, we provide a method to compute the reliability of complex temporal failure logic systems, in which the order of component failures plays an important role.
    Received on May 6, 2005
    References: 21
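    For intuition, here is a brute-force check of what failure frequency means for a noncoherent system (the paper's rule-based method is more efficient than this enumeration): because the structure function is non-monotone, a component repair can also fail the system, so up-to-down transitions of both kinds must be counted. The XOR structure function and the rates below are hypothetical.

    ```python
    from itertools import product

    # Hypothetical noncoherent structure function on 2 components:
    # system is up iff exactly one component is up (XOR), a classic
    # non-monotone example.
    def phi(x):
        return x[0] ^ x[1]

    # Hypothetical per-component failure (up->down) and repair
    # (down->up) rates.
    lam = [0.01, 0.02]
    mu = [0.10, 0.20]

    # Steady-state availability of each independent component.
    avail = [mu[i] / (lam[i] + mu[i]) for i in range(2)]

    def state_prob(x):
        p = 1.0
        for i, xi in enumerate(x):
            p *= avail[i] if xi else (1 - avail[i])
        return p

    # Failure frequency: expected rate of transitions from a system-up
    # state to a system-down state. For noncoherent systems, a repair
    # (a component going down->up) can also fail the system, so both
    # transition directions are counted.
    freq = 0.0
    for x in product([0, 1], repeat=2):
        if not phi(x):
            continue  # only transitions out of system-up states
        for i in range(2):
            y = list(x)
            y[i] ^= 1
            if not phi(tuple(y)):
                rate = lam[i] if x[i] == 1 else mu[i]
                freq += state_prob(x) * rate

    print(f"System failure frequency: {freq:.5f} per unit time")
    ```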

    Weibull and Exponential Renewal Models in Spare Parts Estimation: A Comparison
    BEHZAD GHODRATI
    2006, 2(2): 135-147.  doi:10.23940/ijpe.06.2.p135.mag

    Providing the required spare parts is an important aspect of product support and of improving system/machine utilization. Spare parts requirements can be estimated through different approaches; one realistic and well-founded method is based on the system's reliability characteristics, taking the system's operating environment into consideration. In this paper we study and compare two renewal models, namely the exponential and Weibull models (constant versus non-constant failure rate assumptions), used to estimate spare parts for non-repairable components. We also estimate the differences between the two models and calculate the percentage of error. Furthermore, a case study is conducted on the hydraulic system of LHD machines in the Kiruna Mine in Sweden to find out which factors have a significant impact on the estimated number of required spare parts.
    Received on April 24, 2005
    References: 15
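    A minimal sketch of the two estimates being compared, under standard renewal-theory assumptions (not the paper's case-study model): the exponential model gives E[N(t)] = t/MTTF, while the Weibull model can use the classical asymptotic approximation of the renewal function. Parameter values are hypothetical.

    ```python
    from math import gamma

    # Hypothetical Weibull parameters for a non-repairable component.
    beta, eta = 2.0, 1000.0   # shape, scale (hours)
    t = 5000.0                # planning horizon (hours)

    # Weibull mean and variance of the time between renewals.
    mean = eta * gamma(1 + 1 / beta)
    var = eta**2 * (gamma(1 + 2 / beta) - gamma(1 + 1 / beta) ** 2)

    # Exponential model with the same MTTF: E[N(t)] = t / MTTF.
    n_exp = t / mean

    # Weibull model: asymptotic renewal-function approximation
    # E[N(t)] ~ t/mu + sigma^2/(2 mu^2) - 1/2  (valid for large t).
    n_wbl = t / mean + var / (2 * mean**2) - 0.5

    print(f"Exponential estimate: {n_exp:.2f} spares")
    print(f"Weibull estimate:     {n_wbl:.2f} spares")
    print(f"Difference:           {100 * (n_exp - n_wbl) / n_wbl:.1f}%")
    ```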

    Methods for Binning and Density Estimation of Load Parameters for Prognostic Health Monitoring
    NIKHIL M. VICHARE, PETER RODGERS, and MICHAEL G. PECHT
    2006, 2(2): 149-161.  doi:10.23940/ijpe.06.2.p149.mag

    Environmental and usage loads experienced by a product in the field can be monitored remotely and in-situ using autonomous sensor systems. These data are useful for assessing product degradation, predicting remaining useful life, developing load histories for future product designs, and hence minimizing life cycle cost. One of the major challenges in such load monitoring is reducing the power and memory consumption of the sensor system to enable long, uninterrupted monitoring. This necessitates reducing the monitored data and storing it in a condensed form (in-situ) without sacrificing the load information required for subsequent damage and life assessments.
    This paper assesses non-parametric density estimation methods, such as histograms and kernel estimators, for use in in-situ load monitoring. An experiment was conducted wherein an electronic printed circuit board (PCB) was exposed to field temperature conditions. The temperatures on the PCB were measured in-situ using a sensor module with an embedded processor and limited memory. The raw sensor data was pre-processed to extract cyclic mean temperatures. During monitoring, the extracted cyclic mean temperature values were stored in bins with pre-calculated optimal bin widths, based on estimates of the standard deviation and sample size. The density estimation techniques were assessed by comparing the pdf obtained from the binned data with that from the complete data set. The sensitivity of the derived load parameter pdf to variations in the estimated and actual standard deviation and sample size was studied, and a new method to account for these variations was demonstrated. Compared to using the complete data set, kernel functions resulted in more than 78% data reduction per day with an accurate estimate of the density of the monitored parameter. The histogram provided more than 85% data reduction but a less accurate density estimate.
    Received on September 23, 2005
    References: 29
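    The "optimal bin width from estimated standard deviation and sample size" step is consistent with Scott's rule for histograms; the sketch below illustrates that step together with a kernel density estimate, using synthetic data (the paper's embedded implementation may differ).

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    # Synthetic stand-in for extracted cyclic mean temperatures (deg C).
    temps = rng.normal(loc=45.0, scale=8.0, size=2000)

    # Scott's rule: optimal histogram bin width from the estimated
    # standard deviation and the sample size, h = 3.49 * s * n**(-1/3).
    n = temps.size
    h = 3.49 * temps.std(ddof=1) * n ** (-1 / 3)
    bins = np.arange(temps.min(), temps.max() + h, h)

    # In-situ binning: only bin counts need to be stored, not raw samples.
    counts, edges = np.histogram(temps, bins=bins)
    print(f"bin width = {h:.2f} C, {counts.size} bins "
          f"instead of {n} raw readings")

    # Density estimates from binned vs. complete data for comparison.
    centers = 0.5 * (edges[:-1] + edges[1:])
    hist_pdf = counts / (counts.sum() * h)   # histogram density
    kde = gaussian_kde(temps)                # kernel estimate (full data)
    print(f"max |histogram - KDE| at bin centers: "
          f"{np.abs(hist_pdf - kde(centers)).max():.4f}")
    ```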

    A Study on Tool Wear Process Control and Tool Quality Life
    MIN ZHANG, ZHEN HE, and YAN-FENG DON
    2006, 2(2): 163-173.  doi:10.23940/ijpe.06.2.p163.mag

    The factors that govern tool wear vary with time rather than being constant. In this paper, a model for the residual tool wear process is developed, using the Glejser test to detect heteroskedasticity in the process. On the basis of this test, a control chart for tool wear is developed and process capability indices are calculated. The tool quality life is determined once the minimum process capability indices are known. A method is also developed to determine the tool quality life of automatic machine tools by integrating the tool cutting life with the desired quality life.
    Received on July 3, 2005
    References: 15
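    For reference, a generic sketch of the Glejser test itself (not the paper's tool wear model): fit the mean model, then regress the absolute residuals on the explanatory variable; a significantly nonzero slope indicates heteroskedasticity. The data below are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Synthetic wear-vs-time data whose noise grows with time
    # (heteroskedastic by construction).
    t = np.linspace(1, 100, 200)
    wear = 0.05 * t + rng.normal(0, 0.002 * t)

    # Step 1: fit the mean wear model and take residuals.
    slope, intercept, *_ = stats.linregress(t, wear)
    resid = wear - (intercept + slope * t)

    # Step 2 (Glejser): regress |residuals| on t; a significant slope
    # means the residual spread changes with t, i.e. heteroskedasticity.
    g = stats.linregress(t, np.abs(resid))
    print(f"Glejser slope = {g.slope:.5f}, p-value = {g.pvalue:.3g}")
    if g.pvalue < 0.05:
        print("Heteroskedastic residuals: variance changes over time.")
    ```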

    A Condition-based Preventive Maintenance Policy for Markov Deteriorating Systems
    P. NAGA SRINIVASA RAO and V.N. ACHUTHA NAIKAN
    2006, 2(2): 175-189.  doi:10.23940/ijpe.06.2.p175.mag

    A condition-based preventive maintenance (CBPM) policy is proposed for a continuously operating device whose condition deteriorates with time in service. The model incorporates both deterioration and random failures. Deterioration is modeled as a discrete-state process. The system undergoes random inspections to assess its condition, with the times between inspections exponentially distributed. If the observed condition at an inspection exceeds the threshold deterioration level, preventive maintenance (PM) is performed; otherwise no action is taken and the system continues to run. Each PM makes the system t stages younger. The proposed model considers an accumulated-deterioration-based increasing intensity for the random failures. A continuously increasing failure rate (for example, Weibull) is converted into a stepwise increasing failure rate using a stair-step approximation. An exact recursive algorithm computes the steady-state probabilities of the system. A cost function per unit of operating time (hour, day, week, etc.) is defined using different cost rates for the different types of outages. The optimal solution of the model is derived based on maximum availability or minimum cost.
    Received on July 26, 2005
    References: 16
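    A small sketch of the stair-step approximation mentioned above (illustrative; the paper's construction may differ): the Weibull hazard is replaced by one constant rate per deterioration stage, here chosen so that the cumulative hazard over each stage is matched exactly. Parameters and stage boundaries are hypothetical.

    ```python
    import numpy as np

    # Hypothetical Weibull hazard h(t) = (beta/eta) * (t/eta)**(beta-1).
    beta, eta = 2.5, 100.0

    # Deterioration stage boundaries (hypothetical, operating hours).
    edges = np.array([0.0, 25.0, 50.0, 75.0, 100.0])

    def cum_hazard(t):
        """Weibull cumulative hazard H(t) = (t/eta)**beta."""
        return (t / eta) ** beta

    # Stair-step approximation: the constant rate for each stage equals
    # the Weibull cumulative hazard accumulated over that stage divided
    # by the stage length, so H(t) is matched at every stage boundary.
    step_rates = np.diff(cum_hazard(edges)) / np.diff(edges)

    for (a, b), r in zip(zip(edges[:-1], edges[1:]), step_rates):
        print(f"stage [{a:5.1f}, {b:5.1f}) h: rate = {r:.5f} failures/h")
    ```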

    Performance Studies of Some Similarity-Based Fuzzy Clustering Algorithms
    S. CHATTOPADHYAY, D.K. PRATIHAR, and S.C. DE SARKAR
    2006, 2(2): 192-200.  doi:10.23940/ijpe.06.2.p192.mag

    Performance testing of an algorithm is necessary to ascertain its applicability to real data and, in turn, to develop software. Clustering of a data set can be either fuzzy (having vague boundaries among the clusters) or crisp (having well-defined, fixed boundaries) in nature. The present work focuses on measuring the performance of some similarity-based fuzzy clustering algorithms, for which three methods, each with three different approaches, are developed. In the first method, cluster centers are decided based on the minimum of the entropy (probability) values of the data points [10]. In the second method, cluster centers are selected based on the maximum of the total similarity values of the data points, and in the third method, a ratio of dissimilarity to similarity is considered to determine the cluster centers. The performances of these methods and approaches are compared on three standard data sets: IRIS, WINES, and OLITOS. Experimental results show that the entropy-based method is able to generate better quality clusters, but at the cost of slightly more computation. Finally, the best sets of clusters are mapped to 2-D using a self-organizing map (SOM) for visualization.
    Received on October 3, 2005
    References: 14
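    A rough sketch of the entropy-based center selection step (following the general similarity-entropy idea; not necessarily the authors' exact formulation): each point's entropy is computed from its similarities to all other points, and the minimum-entropy point, whose neighbors are either clearly similar or clearly dissimilar, becomes the first center candidate. The data and the choice of alpha below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic 2-D data with two loose groups.
    X = np.vstack([rng.normal(0, 0.3, (20, 2)),
                   rng.normal(3, 0.3, (20, 2))])

    # Pairwise similarities S_ij = exp(-alpha * d_ij), with alpha set
    # so that similarity is 0.5 at the mean pairwise distance.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    alpha = np.log(2) / d[d > 0].mean()
    S = np.exp(-alpha * d)

    # Entropy of point i with respect to every other point j:
    # E_i = -sum_j [S_ij log2 S_ij + (1 - S_ij) log2 (1 - S_ij)].
    # Very similar or very dissimilar pairs contribute little entropy;
    # ambiguous mid-range similarities contribute the most.
    Sc = np.clip(S, 1e-12, 1 - 1e-12)
    np.fill_diagonal(Sc, 1 - 1e-12)  # self-similarity adds ~no entropy
    E = -np.sum(Sc * np.log2(Sc) + (1 - Sc) * np.log2(1 - Sc), axis=1)

    # The minimum-entropy point is the first cluster-center candidate.
    center = int(np.argmin(E))
    print(f"first center candidate: point {center} at {X[center].round(2)}")
    ```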

Online ISSN 2993-8341
Print ISSN 0973-1318