Volume 17, No 7, July 2021


  
  • Optimization of PM Intervals of an Oil Pump using a Generalized Proportional Intensity Model (GPIM)
    Sidali Bacha, Ahmed Bellaouar, Jean-Paul Dron, and Houssam Lala
    2021, 17(7): 569-578.  doi:10.23940/ijpe.21.07.p1.569578
    Reliability models are effective tools for improving the operational safety of complex repairable systems (CRS) by scheduling optimal preventive maintenance (PM) intervals that help avoid unforeseen and dangerous system failures. In this study, we applied the generalized proportional intensity model (GPIM) to the actual maintenance history of an oil pump that has operated for nearly eight years. This realistic model allows the predictive reliability of the system to be modeled by incorporating several predictive variables, such as the effects of preventive and corrective maintenance (CM), the time since the last maintenance action (TSLM), and failure criticality (FC). Based on the maximum likelihood approach, the best-fitting model was used to plan PM intervals over fixed and variable time horizons. The best PM interval, simulated in MATLAB, was chosen according to an economic criterion reflecting the average costs that may result from following that interval.
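    The paper's cost criterion is not reproduced here; as a rough, hypothetical illustration only, the sketch below searches a grid of candidate PM intervals and keeps the one minimizing the expected cost per operating hour under a simple power-law failure intensity. All parameter values (beta, eta, c_pm, c_cm) are assumptions, and the GPIM covariates (PM/CM effects, TSLM, FC) are not modeled.

        import numpy as np

        # Hypothetical power-law (Weibull) intensity parameters and costs; the
        # paper's GPIM includes covariates not modeled in this toy sketch.
        beta, eta = 2.2, 1500.0        # shape and scale, in operating hours (assumed)
        c_pm, c_cm = 800.0, 5000.0     # cost of one PM action vs. one corrective repair (assumed)

        def expected_failures(t):
            """Expected number of failures in (0, t] for a power-law process."""
            return (t / eta) ** beta

        def cost_rate(tau):
            """Average cost per hour if PM renews the system every tau hours."""
            return (c_pm + c_cm * expected_failures(tau)) / tau

        candidates = np.linspace(100, 3000, 300)
        best = min(candidates, key=cost_rate)
        print(f"best PM interval ~ {best:.0f} h, cost rate ~ {cost_rate(best):.3f} per h")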
  • Kubernetes Virtual Warehouse Placement based on Reinforcement Learning
    Haoran Li, Dongcheng Li, W. Eric Wong, Deze Zeng, and Man Zhao
    2021, 17(7): 579-588.  doi:10.23940/ijpe.21.07.p2.579588
    As a method for building and running applications, cloud native makes it possible to apply frequent and predictable major changes to a system; it is closely tied to fast iteration and automated deployment, and it suits a network era in which large amounts of data change at high speed. Nevertheless, cloud native is still maturing, and many problems remain to be solved. This paper uses Kubernetes, the cornerstone of the cloud-native ecosystem and the orchestration system that manages containers, together with Docker to deploy a Virtual Warehouse for managing image resources. With the rapid development of artificial intelligence (AI), reinforcement learning (RL) has been widely applied in AI by virtue of two features, trial-and-error learning and the valuation of long-term rewards, and it can be applied to many problem scenarios. The object studied in this paper meets the requirements of RL and has constantly updated environmental conditions. Given that Kubernetes automates container operations, we propose to learn in the existing environment through RL as demand changes and the environment updates, until the Virtual Warehouse converges to the optimal location. A simulation model of the cloud-native process was built, and the model data were trained in this environment with the RL algorithm to obtain the optimal warehouse placement. The resulting warehouse location parameters were substituted back into the simulation environment, and abstract task classes were pulled according to the extended image to obtain the delay times of different tasks, verifying the superiority of the RL algorithm for Kubernetes warehouse placement.
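    The paper's state and action spaces, simulation environment, and reward definition are not given here; the sketch below is only a toy tabular Q-learning loop that chooses one of several candidate nodes for the Virtual Warehouse, with hypothetical delay-based rewards (n_nodes, base_delay, alpha, and eps are all assumed values).

        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes = 5                                  # candidate nodes for the Virtual Warehouse (assumed)
        base_delay = rng.uniform(10, 50, n_nodes)    # hypothetical mean task delay per node

        q = np.zeros(n_nodes)                        # one-state problem: Q-value per placement action
        alpha, eps = 0.1, 0.2                        # learning rate and exploration rate (assumed)

        for episode in range(2000):
            # Epsilon-greedy choice of a placement, then update toward the observed reward.
            a = rng.integers(n_nodes) if rng.random() < eps else int(np.argmax(q))
            delay = base_delay[a] + rng.normal(0, 2)  # simulated task delay on that node
            reward = -delay                           # lower delay -> higher reward
            q[a] += alpha * (reward - q[a])

        print("estimated best placement:", int(np.argmax(q)))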
  • Applying Slicing-based Testability Transformation to Improve Test Data Generation with Symbolic Execution
    Hsin-Yu Chien, Chin-Yu Huang, and Chih-Chiang Fang
    2021, 17(7): 589-599.  doi:10.23940/ijpe.21.07.p3.589599
    Symbolic execution techniques are widely adopted in diverse fields, including software quality analysis and software defect detection. Symbolic execution engines help people apply the technique conveniently to program exploration and test data generation. However, symbolic execution is hard to scale to large and complicated programs with massive numbers of paths and conditional statements because of path explosion. Moreover, within the development cycle of software systems, the need to generate test data frequently and rapidly makes it difficult to apply symbolic execution to software testing. In this paper, we propose three modes of slicing-based testability transformation that change the semantics of programs while maintaining or improving the code coverage of test data generated by the symbolic execution tool KLEE. The idea behind these transformations is to reduce KLEE's execution load by shortening the execution paths in a program. Our proposed testability transformation Mode 1 (TTM1) slices programs with an automated program slicing tool, taking functions in the program as the slicing criterion. Our proposed Modes 2 and 3 (TTM2, TTM3) slice programs manually and limit the maximum path depth to a specific value as the slicing criterion. Experimental results show that, on average, TTM1 reduces solver queries by 4.5% while increasing code coverage by 6.6%, TTM2 reduces solver queries by 7.7% while increasing coverage by 5.4%, and TTM3 reduces solver queries by 17.6% while increasing coverage by 3.6%. Although the semantics of programs change due to slicing, the coverage of the test data generated by KLEE remains the same or even increases. Our findings also indicate that there is room to design slicing-based testability transformations that improve the quality of test generation with symbolic execution.
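    The transformation tooling itself is not reproduced here; as a back-of-the-envelope illustration of why bounding the maximum path depth (as in TTM2 and TTM3) shrinks the search space, the sketch below counts the worst-case number of paths for a chain of independent two-way branches, with and without a depth cap. The figures are illustrative only and do not come from the paper.

        def paths(n_branches, max_depth=None):
            """Worst-case path count for a chain of independent two-way branches."""
            d = n_branches if max_depth is None else min(n_branches, max_depth)
            return 2 ** d

        for n in (10, 20, 30):
            # Uncapped exploration grows exponentially; a depth cap of 8 bounds it at 256 paths.
            print(n, paths(n), paths(n, max_depth=8))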
  • Using Deep Neural Networks to Evaluate the System Reliability of Manufacturing Networks
    Yi-Fan Chen, Yi-Kuei Lin, and Cheng-Fu Huang
    2021, 17(7): 600-608.  doi:10.23940/ijpe.21.07.p4.600608
    This paper focuses on system reliability evaluation for a stochastic-flow manufacturing network using a deep learning approach. Knowing the capability of the manufacturing system in real time is a critical issue because the manufacturing industry conducts mass production through automated machines. With existing algorithms, system reliability cannot be calculated in a short time when the network model is complex. Hence, an efficient algorithm based on a deep neural network is developed to predict system reliability instantly. According to the experimental results, the proposed algorithm can predict system reliability with a root-mean-square error of 0.002. Compared with existing algorithms, the proposed algorithm can evaluate system reliability in only one-tenth of the time.
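    The architecture and training data of the paper's network are not reproduced here; the sketch below is a minimal stand-in using scikit-learn's MLPRegressor on synthetic data, where the features play the role of component state/capacity vectors and the target plays the role of a system reliability value computed exactly offline.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        # Synthetic stand-in: 8 component-capacity features -> a reliability value in [0, 1].
        X = rng.uniform(0, 1, size=(5000, 8))
        y = np.clip(X.mean(axis=1) + rng.normal(0, 0.01, 5000), 0, 1)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        model.fit(X_tr, y_tr)

        rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
        print(f"RMSE on held-out samples: {rmse:.4f}")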
  • Multi-Objective Power Grid Interdiction Model Considering Network Synchronizability
    Claudio M. Rocco, Kash Barker, and Jose E. Ramirez-Marquez
    2021, 17(7): 609-618.  doi:10.23940/ijpe.21.07.p5.609618
    Interdiction problems deal with identifying the links and/or nodes of a network that, if disturbed (i.e., destroyed, failed, or removed), can reduce network performance. Generally, interdiction is assumed to be immediate, and the damage is evaluated without considering any dynamical effects. For electric power system interdiction in particular, previous studies assumed that network performance was related to the load to be served and evaluated it under a steady-state condition of the system. Thus, the effects of sudden actions, such as a malevolent attack, are not properly modeled. Modeling an electric power system from a network-science perspective allows the assessment of interesting properties such as synchronizability: the ability, or ease, of a network in synchronizing its individual dynamical units, especially when a disturbance occurs. To this aim, the main goal of this paper is to propose an interdiction model as the solution of a novel tri-objective model that (i) minimizes network synchronizability, (ii) minimizes the cost of interdiction, and (iii) minimizes a network performance function. An NSGA-II procedure is used as a convenient tool for approximating the Pareto-optimal frontier of solutions for these competing objectives. Finally, a Kuramoto-based model is used to verify the synchronizability of selected Pareto-optimal solutions. The proposed tri-objective model, along with the dynamic simulation, is illustrated using the topology of three power systems (the Venezuelan high-voltage power system and two IEEE networks). The results suggest that our proposed model offers a substantial contribution: a simple model produces dramatic results.
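    The abstract does not state which synchronizability measure is used; in network science it is often summarized by the Laplacian eigenratio λ_N/λ_2 (smaller means easier to synchronize). The sketch below only illustrates how such a proxy could be recomputed after interdicting a few edges, on a stand-in topology rather than the Venezuelan or IEEE systems.

        import numpy as np
        import networkx as nx

        def eigenratio(g):
            """Laplacian eigenratio lambda_N / lambda_2, a common synchronizability proxy."""
            lam = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(g).toarray().astype(float)))
            return lam[-1] / lam[1]

        g = nx.random_regular_graph(4, 30, seed=1)     # stand-in topology, not a real grid
        print("before interdiction:", round(eigenratio(g), 3))

        g.remove_edges_from(list(g.edges())[:3])       # hypothetical interdiction of three lines
        if nx.is_connected(g):
            print("after interdiction: ", round(eigenratio(g), 3))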
  • Estimation of Wire Rope Reliability by Two Analytical Approaches
    Youssef Bassir, Achraf Wahid, Abdelkarim Kartouni, and Mohamed ELghorba
    2021, 17(7): 619-626.  doi:10.23940/ijpe.21.07.p6.619626
    The purpose of our work is to develop an analytical model capable of estimating the degradation state of a 19×7 non-rotating wire rope. The model is based on experimental data obtained from static tensile tests on artificially damaged specimens (wire and strand). The performance of the wire rope is linked to a statistical parameter: reliability. For this purpose, two reliability models are proposed. The first is a majority-logic parallel-series model (block system) that takes into consideration the wire damage that influences the strand, which in turn affects the behaviour of the whole rope. The second is a majority-logic series model in which only the layer arrangement and the deterioration of the constituent strands are considered. The results of the two analytical models show good agreement with the experimental results.
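    As a small numeric illustration of the majority-logic parallel-series reasoning described above (not the authors' fitted model), the sketch below assumes a hypothetical single-wire reliability, treats a strand as a k-out-of-n system of its 7 wires, and treats the 19×7 rope as a series system of its 19 strands; the threshold k and the value of p_wire are assumptions.

        from math import comb

        def k_out_of_n(k, n, p):
            """Probability that at least k of n identical components (reliability p) survive."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        p_wire = 0.98                            # hypothetical single-wire reliability at a given load
        r_strand = k_out_of_n(5, 7, p_wire)      # majority-logic strand: at least 5 of 7 wires intact (assumed)
        r_rope = r_strand ** 19                  # series assumption over the 19 strands
        print(f"strand reliability ~ {r_strand:.4f}, rope reliability ~ {r_rope:.4f}")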
  • Optimized Deep Learning Framework for Detecting Pitting Corrosion based on Image Segmentation
    Sanjay Kumar Ahuja, Manoj Kumar Shukla, and Kiran Kumar Ravulakollu
    2021, 17(7): 627-637.  doi:10.23940/ijpe.21.07.p7.627637
    Pitting corrosion detection is attracting considerable attention owing to its ability to enable an effective diagnostic mechanism. Although existing techniques try to resolve the issues in this area, their results are not yet considered effective. Therefore, this work proposes an enhanced pitting corrosion diagnosis scheme that resolves the problems of existing diagnostic systems through a novel approach. A deep learning strategy is adopted for effective prediction: a Residual U-Net, whose encoder and decoder perform the segmentation, is combined with a bidirectional Conv-LSTM to provide better classification results across various images. Moreover, the size of the pitting corrosion is determined based on its byte values. The proposed work is implemented in MATLAB. Performance metrics such as accuracy, precision, specificity, sensitivity, and F-measure show that the obtained results are better than those of existing techniques. Therefore, the proposed technique can be considered an effective platform for corrosion detection with enhanced modeling.
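    The Residual U-Net / bidirectional Conv-LSTM pipeline is not reproduced here; the sketch below only illustrates the final sizing step, under the assumption that segmentation yields a binary mask and that the physical area per pixel is known (both the mask and pixel_area_mm2 are hypothetical).

        import numpy as np
        from scipy import ndimage

        # Hypothetical binary segmentation mask (1 = pitting pixel), standing in for the model output.
        mask = np.zeros((64, 64), dtype=np.uint8)
        mask[10:14, 20:25] = 1
        mask[40:43, 5:9] = 1

        labels, n_pits = ndimage.label(mask)       # connected components = individual pits
        pixel_area_mm2 = 0.01                      # assumed physical area per pixel
        for pit_id in range(1, n_pits + 1):
            area = (labels == pit_id).sum() * pixel_area_mm2
            print(f"pit {pit_id}: {area:.2f} mm^2")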
  • Multilevel Image Threshold Estimation using Teaching Learning-based Optimization
    S. Anbazhagan and S. Karthikumar
    2021, 17(7): 638-646.  doi:10.23940/ijpe.21.07.p8.638646
    With the rapid expansion of image segmentation over the past decades, mathematical optimization in the form of image thresholding has grown enormously within segmentation. A need to organize image thresholding has arisen in order to help medical imaging, detection, and recognition make informed decisions about images. Image thresholding based on soft computing approaches is used online to cluster medical images into positive or negative diagnoses. The proposed teaching-learning-based optimization (TLBO) relies on maximizing the between-class variance. Unlike previous optimization techniques, TLBO is used as the prime optimization method; its execution is straightforward and involves less computational effort. The technique has been tested on standard benchmark test images and sample images with an increasing number of thresholds. Numerical results show that this method is a promising option for the multilevel image thresholding problem.
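    The criterion TLBO maximizes, the between-class variance (Otsu's criterion), can be written down directly; the sketch below implements that objective for a given set of thresholds and, for brevity, replaces TLBO with a coarse brute-force search over two thresholds on a synthetic histogram.

        import numpy as np
        from itertools import combinations

        def between_class_variance(hist, thresholds):
            """Otsu's between-class variance for a grey-level histogram and sorted thresholds."""
            p = hist / hist.sum()
            levels = np.arange(len(hist))
            mu_total = (p * levels).sum()
            edges = [0, *thresholds, len(hist)]
            var = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                    var += w * (mu - mu_total) ** 2
            return var

        # Synthetic grey-level image stands in for a benchmark test image.
        img = np.clip(np.random.default_rng(0).normal(120, 40, 10_000), 0, 255).astype(int)
        hist = np.bincount(img, minlength=256)

        # Coarse brute force over two thresholds stands in for TLBO here.
        grid = range(8, 256, 8)
        best = max(combinations(grid, 2), key=lambda t: between_class_variance(hist, t))
        print("best thresholds:", best)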
  • Reliability Analysis of Layered Soil Slope Stability using ANFIS and MARS Soft Computing Techniques
    Rahul Ray, Shiva Shankar Choudhary, and Lal Bahadur Roy
    2021, 17(7): 647-656.  doi:10.23940/ijpe.21.07.p9.647656
    Soil slope stability is an important concern when constructing any structure on a soil slope. Because soil is heterogeneous in nature owing to its formation process, slope stability analysis cannot be done without considering the variability in soil properties. To account for this variability, research has shifted towards a probabilistic approach. In this paper, two soft computing techniques, the Adaptive Network-based Fuzzy Inference System (ANFIS) and Multivariate Adaptive Regression Splines (MARS), are used for the reliability analysis of layered soil slope stability. As slope stability mainly depends on three soil properties, unit weight (γ), cohesion (c), and angle of shearing resistance (ϕ), these are taken as the input variables for the models, and the factor of safety of the slope is taken as the output. The stability analysis is carried out using the Morgenstern-Price method. The models were assessed using statistical parameters such as NS, RPD, RMSE, R2, PI, and GPI. The results show that although both models performed well, MARS outperformed ANFIS. Therefore, MARS can be used as a reliable soft computing technique for analyzing soil slope stability.
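    Reliability here refers to the probability that the factor of safety (FoS) stays above 1 given variable soil properties; the sketch below is a minimal Monte Carlo illustration in which a hypothetical linear surrogate stands in for the trained ANFIS/MARS prediction, and all distribution parameters and coefficients are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Hypothetical distributions for the three inputs (unit weight, cohesion, friction angle).
        gamma = rng.normal(18.0, 1.0, n)     # kN/m^3
        c = rng.normal(25.0, 5.0, n)         # kPa
        phi = rng.normal(30.0, 3.0, n)       # degrees

        def fos_surrogate(gamma, c, phi):
            """Placeholder for the trained ANFIS/MARS prediction of the factor of safety."""
            return 0.5 + 0.012 * c + 0.02 * phi - 0.01 * gamma

        fos = fos_surrogate(gamma, c, phi)
        p_failure = (fos < 1.0).mean()
        print(f"probability of failure ~ {p_failure:.4f}, reliability ~ {1 - p_failure:.4f}")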
Online ISSN 2993-8341
Print ISSN 0973-1318