Volume 17, No 9

Cover Page (PDF 3.17 MB) | Table of Contents, September 2021 (PDF 34 KB)

  
  • Effect of Class Imbalance on the Performance of Machine Learning-based Network Intrusion Detection
    Ngan Tran, Haihua Chen, Janet Jiang, Jay Bhuyan, Junhua Ding
    2021, 17(9): 741-755.  doi:10.23940/ijpe.21.09.p1.741755
    Class imbalance is a common issue in real-world machine learning datasets. The problem is especially pronounced in intrusion detection, where many attack types have very few samples. Ignoring the imbalance or training the classifier on only a subset of classes biases the model's performance. Motivated by a recent study addressing real-world class imbalance in dermatology, we explore the effectiveness of different techniques for handling class imbalance in a Network-based Intrusion Detection System (NIDS). Experiments on the NSL-KDD dataset show that downsampling + upsampling + SMOTE (DUS) is the most effective re-sampling technique for imbalanced data. In addition, compared with other machine learning classifiers, the Ensemble model with DUS achieves the highest performance. We also design experiments to validate how the number of classes affects NIDS model performance, finding that more imbalanced classes negatively impact it. Our experiments demonstrate that the very high performance reported for many existing machine learning-based NIDS may be misleading. The results provide insights into the effect of class imbalance on machine learning performance in NIDS and guide researchers in improving NIDS performance on real-world imbalanced data.
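    For illustration, a minimal sketch of what a DUS-style pipeline could look like, assuming the imbalanced-learn and scikit-learn APIs; the class names follow the NSL-KDD categories, but every count below is a hypothetical placeholder, and the random forest merely stands in for the paper's (unspecified here) ensemble model.

    ```python
    # Hedged sketch of downsampling + upsampling + SMOTE (DUS) with an
    # ensemble classifier; sampling counts below are illustrative only.
    from imblearn.pipeline import Pipeline
    from imblearn.under_sampling import RandomUnderSampler
    from imblearn.over_sampling import RandomOverSampler, SMOTE
    from sklearn.ensemble import RandomForestClassifier

    dus_pipeline = Pipeline(steps=[
        ("down", RandomUnderSampler(sampling_strategy={"normal": 60000, "DoS": 40000},
                                    random_state=0)),   # cap majority classes (hypothetical caps)
        ("up", RandomOverSampler(sampling_strategy={"U2R": 500, "R2L": 3000},
                                 random_state=0)),      # lift the rarest classes (hypothetical floors)
        ("smote", SMOTE(random_state=0)),               # synthesize the remaining minority gap
        ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ])
    # dus_pipeline.fit(X_train, y_train)   # X_train, y_train: encoded NSL-KDD records
    ```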
  • Parallel Planning of Marine Observation Tasks based on Threading Building Blocks
    Zhi Zhang, Dongcheng Li, Man Zhao, Yao Yao, Shou-Yu Lee
    2021, 17(9): 756-765.  doi:10.23940/ijpe.21.09.p2.756765
    Marine monitoring involves diverse targets. How to observe marine targets as completely as possible, by reasonably and efficiently allocating the observation platform nodes deployed at sea and assigning observation tasks to platform devices, has become a focus of current research. To address the long planning times of algorithms that handle large numbers of observation meta-tasks, this study improves the differential evolution (DE) algorithm by combining it with a parallel master/slave architecture, using Intel Threading Building Blocks (TBB) for the parallel realization. For comparison, the algorithm was also parallelized using traditional Windows multiprogramming. Data tests across diverse problem sizes verified both the validity and the efficiency of the proposed TBB-based master/slave parallel strategy.
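    The paper realizes the master/slave pattern with Intel TBB in C++; the Python sketch below shows only the pattern itself: the master owns the population and the evolution loop, while worker processes evaluate candidate plans in parallel. The fitness function and all parameters are placeholders, not the authors' implementation.

    ```python
    # Master/slave parallel differential evolution, sketched with multiprocessing.
    import numpy as np
    from multiprocessing import Pool

    def fitness(plan):
        # Placeholder objective: a real model would score how well this
        # allocation of observation meta-tasks covers the marine targets.
        return float(np.sum(plan))

    def de_generation(pop, fits, pool, rng, f=0.5, cr=0.9):
        n, d = pop.shape
        r = rng.integers(0, n, size=(n, 3))             # donor indices (distinctness skipped for brevity)
        mutants = pop[r[:, 0]] + f * (pop[r[:, 1]] - pop[r[:, 2]])
        trials = np.where(rng.random((n, d)) < cr, mutants, pop)
        trial_fits = pool.map(fitness, list(trials))    # slave side: parallel evaluation
        keep = np.array(trial_fits) > fits
        return np.where(keep[:, None], trials, pop), np.where(keep, trial_fits, fits)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pop = rng.random((64, 20))                      # 64 candidate plans over 20 meta-tasks
        with Pool() as pool:                            # master side: owns the loop
            fits = np.array(pool.map(fitness, list(pop)))
            for _ in range(50):
                pop, fits = de_generation(pop, fits, pool, rng)
        print("best coverage score:", fits.max())
    ```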
  • Preventive Maintenance Optimization Regarding Large-Scale Systems based on the Life-Cycle Cost
    Ruiqi Wang, Guangyu Chen, Na Liang, Zheng Huang
    2021, 17(9): 766-778.  doi:10.23940/ijpe.21.09.p3.766778
    Unit degradation complicates the comprehensive optimization of reliability design and preventive maintenance (PM) policies for large-scale systems over the life-cycle. Assuming unit failure rates obey the Weibull distribution, we propose a cost optimization model for large-scale systems under reliability constraints from the life-cycle perspective. We consider a simple multi-unit preventive joint maintenance policy in which units are assessed and repaired only during planned inspections. The nonlinear optimization becomes increasingly difficult, however, because the number of unit combinations grows exponentially with the number of units. To overcome this challenge, a genetic algorithm (GA) program is adopted to obtain the global optimal solution covering unit reliability in the design and manufacturing stages and the system PM period in the operation stage. Real-world example analysis verifies the correctness and effectiveness of the proposed model and algorithm, and the relationships among decision variables such as the maintenance improvement factor, unit reliability, and PM period are examined. The results simplify the reliability design process for system engineers and enrich reliability theory and its applications.
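    As one concrete reading of the setup: a Weibull unit with shape β and scale η has failure rate λ(t) = (β/η)(t/η)^(β−1) and cumulative hazard H(T) = (T/η)^β, and a GA chromosome could encode a unit reliability target together with the PM period, with fitness equal to life-cycle cost plus a penalty for violating the reliability constraint. The sketch below is illustrative: the cost coefficients, decoding rule, and penalty form are assumptions, not the paper's model.

    ```python
    # Hypothetical GA fitness: life-cycle cost under a Weibull failure rate.
    import numpy as np

    def weibull_failure_rate(t, beta, eta):
        """lambda(t) = (beta/eta) * (t/eta)**(beta - 1)."""
        return (beta / eta) * (t / eta) ** (beta - 1)

    def life_cycle_cost(chromosome, beta=2.0, horizon=10_000.0,
                        c_design=50.0, c_pm=5.0, c_fail=200.0, r_min=0.90):
        r_target, t_pm = chromosome              # assumed decoding: [reliability target, PM period]
        eta = 1_000.0 / (1.0 - r_target)         # assumed design rule: higher target, larger scale
        cycles = horizon / t_pm
        # Expected failures per PM cycle = cumulative hazard H(T) = (T/eta)**beta
        # for a unit renewed at each planned inspection.
        expected_failures = cycles * (t_pm / eta) ** beta
        cost = c_design * r_target + c_pm * cycles + c_fail * expected_failures
        penalty = 1e6 * max(0.0, r_min - np.exp(-(t_pm / eta) ** beta))
        return cost + penalty                    # the GA minimizes this value
    ```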
  • Reliability Assessment of the Planning and Perception Software Competencies of Self-Driving Cars
    Surbhi Gupta, H.D. Arora, Anjali Naithani, Anil Chandra
    2021, 17(9): 779-786.  doi:10.23940/ijpe.21.09.p4.779786
    Self-driving cars, presently in the on-road testing stage, contain software that drives the vehicle without the intervention of a human driver. The two major competencies of this software are perception and planning. When a discrepancy is detected in either competency, it causes a “failure in operation” or “disengagement”, i.e., control is handed over to the human driver present as a back-up. In this paper, on-road testing data for self-driving cars in California (USA) is considered for eight manufacturers whose vehicles have been tested for at least 10,000 miles. The reported disengagements or failures are attributed to errors in the two software competencies: perception or planning. The number of miles driven is taken as the dependent parameter, while cumulative failures due to perception or planning are taken as independent variables. Seven NHPP software reliability failure-count models are compared to identify the best-fit model for miles driven against failures due to perception discrepancies, and for miles driven against failures due to planning discrepancies, for the eight manufacturers. The type of debugging is identified based on the best-fit model, and future values of the respective failures are predicted.
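    As an illustration of the fitting step, one widely used NHPP failure-count model is Goel-Okumoto, with mean value function m(t) = a(1 − e^(−bt)). The sketch below fits it by least squares, assuming SciPy; the mileage and failure numbers are made up and merely stand in for the California disengagement data.

    ```python
    # Hedged sketch: fit the Goel-Okumoto NHPP model to cumulative
    # disengagement counts versus miles driven (hypothetical data).
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        return a * (1.0 - np.exp(-b * t))        # m(t): expected cumulative failures

    miles = np.array([1e3, 5e3, 1e4, 2e4, 5e4])  # hypothetical miles driven
    fails = np.array([2, 7, 11, 15, 19])         # hypothetical cumulative disengagements

    (a_hat, b_hat), _ = curve_fit(goel_okumoto, miles, fails, p0=(20.0, 1e-4))
    print(f"predicted failures at 1e5 miles: {goel_okumoto(1e5, a_hat, b_hat):.1f}")
    ```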
  • Performance of Genetic Programming-based Software Defect Prediction Models
    Mahesha Pandit, Deepali Gupta
    2021, 17(9): 787-795.  doi:10.23940/ijpe.21.09.p5.787795
    The performance of software defect prediction (SDP) suffers from dataset imbalance and noisy attributes. Genetic programming (GP) based techniques can boost the performance of SDP models by performing a global search of the complete solution space to locate an optimal solution. With the help of a novel diagram, this paper explains the operations of a typical GP process. Examining the literature, it presents a summary of 26 GP-based SDP techniques along with the datasets they have worked on, the features they have examined, the performance measures used, and the results reported. The review finds that most GP-based SDP techniques report performance above a mean score of 71%. The paper also finds the empirical description of GP-based SDP techniques in the literature inadequate: many GP techniques are not well described in an individual empirical study together with the technique's theoretical foundation. The paper contributes a novel graphical summary of the GP algorithm and a comprehensive listing of pure GP techniques.
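    For readers who want to experiment, a GP-based defect predictor can be sketched with the gplearn library's SymbolicClassifier, which evolves expression trees over code metrics; this library choice and every parameter below are illustrative assumptions, and none of the 26 surveyed techniques is implied to use them.

    ```python
    # Hedged sketch of a GP-based software defect predictor using gplearn.
    from gplearn.genetic import SymbolicClassifier

    gp_model = SymbolicClassifier(
        population_size=500,     # candidate expression trees per generation
        generations=20,          # global search over the solution space
        tournament_size=20,      # selection pressure for crossover/mutation
        random_state=0,
    )
    # gp_model.fit(X_metrics, y_defective)   # e.g., static code metrics and defect labels
    # y_pred = gp_model.predict(X_test)
    ```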
  • Randomly Selected Heterogenic Bagging with Cognitive Entity Metrics for Prediction of Heterogeneous Defects
    F. Leo John, Jose Prabhu Joseph John
    2021, 17(9): 796-803.  doi:10.23940/ijpe.21.09.p6.796803
    Software defect prediction is a supervised learning approach that plays an essential part in deciding how many software testing resources to allocate. Data unavailability and class imbalance create further problems in this process. This study offers a heterogeneous defect prediction model based on transfer learning and heterogeneous bagging. Data unavailability is handled through transfer learning, and data balancing is performed by an integrated sampling module. The suggested approach uses a replicated bagging design and the selection of cognitive metrics to increase prediction efficiency. Experiments demonstrate prediction levels that indicate improved performance compared with the current literature: the results show a 19 percent increase in overall predictions and a 25 percent decrease in the false prediction ratio, demonstrating effective prediction.
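    A minimal sketch of the heterogeneous bagging idea, assuming scikit-learn base learners: each bootstrap bag trains a learner drawn at random from a heterogeneous pool, and predictions are combined by majority vote. The transfer-learning and cognitive-metric components of the paper are omitted, and the learner pool itself is an assumption.

    ```python
    # Hedged sketch: randomly selected heterogeneous bagging with majority vote.
    import numpy as np
    from sklearn.base import clone
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.linear_model import LogisticRegression

    POOL = [DecisionTreeClassifier(), GaussianNB(), LogisticRegression(max_iter=1000)]

    def fit_heterogeneous_bagging(X, y, n_bags=15, rng=np.random.default_rng(0)):
        # X, y: NumPy arrays of module metrics and binary defect labels.
        models = []
        for _ in range(n_bags):
            rows = rng.integers(0, len(X), size=len(X))      # bootstrap sample
            model = clone(POOL[rng.integers(0, len(POOL))])  # random base learner
            models.append(model.fit(X[rows], y[rows]))
        return models

    def predict_majority(models, X):
        votes = np.stack([m.predict(X) for m in models])
        return (votes.mean(axis=0) >= 0.5).astype(int)       # majority vote over bags
    ```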
  • Improvement of Overall Performance by Implementation of Different Lean Tools - A Case Study
    D. Sobya, S. Nallusamy, Partha Sarathi Chakraborty
    2021, 17(9): 804-814.  doi:10.23940/ijpe.21.09.p7.804814
    Manufacturing enhancement is a core strategy for achieving manufacturing excellence and is necessary for good financial and operational performance in small and medium scale manufacturing industries. Lean engineering tools help reduce unwanted waste and cycle times and help identify a better alternative layout for the plant. The objective of this research is to study the time taken and identify the bottleneck stations in manufacturing the product. The bottleneck process was addressed using line balancing, including workload leveling; distributing the workload raised effective worker utilization to 97%. 5S auditing and implementation were carried out for effective housekeeping, with scores of 2.54 before and 3.54 after implementation. Analysis of the plant layout found excessive unwanted transportation, poor communication between the relevant departments, and a long product cycle time. The layout was optimized and the best alternative selected. The simulation study inferred that the total cycle time was reduced by about 155 minutes and the value added time was reduced from 534 to 378 minutes.
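    Figures like the reported 97% are conventionally computed as line-balance efficiency: the sum of station times divided by (number of stations × bottleneck station time). The sketch below shows this standard formula with hypothetical station times, not the case study's data.

    ```python
    # Standard line-balance efficiency with illustrative station times.
    def line_balance_efficiency(station_times):
        """Sum of station times / (n stations * bottleneck time)."""
        cycle_time = max(station_times)               # bottleneck station sets the pace
        return sum(station_times) / (len(station_times) * cycle_time)

    # Hypothetical station times (minutes) before and after workload leveling:
    print(line_balance_efficiency([9, 14, 8, 11]))          # unbalanced line: 0.75
    print(line_balance_efficiency([10.7, 11.0, 10.9, 10.1]))  # leveled line: ~0.97
    ```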
  • A Comprehensive Review on Performance Improvement of Diesel and Biodiesel fueled CI Engines using Additives
    Kiran Chaudhari, Nilesh P. Salunke, Vijay R. Diware
    2021, 17(9): 815-824.  doi:10.23940/ijpe.21.09.p8.815824
    Due to the rising demand for and prices of fossil fuels and their adverse impacts on health and the environment, research is now directed at alternative fuels. Biodiesel is an alternative fuel derived from various edible and non-edible feedstocks. It is utilized either as pure biodiesel or as part of a diesel blend. However, it has a few disadvantages, such as inferior cold-flow properties leading to poor cold starting, lower combustion quality, and higher nitrogen oxide emissions. These drawbacks can be overcome by using fuel additives, which may be metallic, antioxidant, oxygenated, carbon-based, organic, or a combination of these. Recent research demonstrates that additives improve the thermal and physical properties of the fuel, enhance combustion characteristics (flame temperature, heat transfer rate, and ignition delay), and improve emission performance. The available research is vast, uncategorized, and to some extent inconsistent. This article summarizes recent research on nano-sized particles used as additives for diesel and biodiesel fuels in CI engines. The future scope underlines the need for an environmentally friendly and economically feasible nanoparticle additive for CI engines.
  • State-of-Health Estimation and End of Life Prediction for the Lithium-Ion Battery by Correlatable Feature-based Machine Learning Approach
    Himadri Sekhar Bhattacharyya, Sindhu Seethamraju, Amalendu Bikash Choudhury, Chandan Kumar Chanda
    2021, 17(9): 825-836.  doi:10.23940/ijpe.21.09.p9.825836
    A robust and straightforward prognostic framework is proposed to estimate the state-of-health (SOH) and accurately predict the end of life (EOL) of lithium-ion batteries. Two commonly used machine learning (ML) models, a feed-forward neural network (FNN) and long short-term memory (LSTM), are used to estimate the SOH. First, features that are easy to calculate on every discharge cycle are observed, and their correlation with SOH is computed. Second, two scenarios, with two inputs and three inputs respectively, are created to provide the inputs to these models, with SOH as the output. Third, each model's optimal structure is derived based on the testing mean absolute percentage error (MAPE). Finally, SOH estimation is performed by the model with the highest accuracy. Both models under both scenarios are used for EOL prediction, and the one giving the earlier forecast is chosen. Compared with other ML-based methods, the framework is easier to implement because the input features are based entirely on the initial and final status of the discharge cycle. Applied to the NASA battery dataset, the methodology shows an average MAPE of 1.86% for SOH estimation and an early prediction of EOL for most of the batteries.
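    A hedged sketch of the LSTM branch of such an estimator, assuming the Keras API; the layer sizes, the two- or three-feature input, and the one-cycle-per-step windowing are illustrative, not the paper's exact configuration. The MAPE helper mirrors the metric the paper uses for model selection.

    ```python
    # Illustrative LSTM-based SOH estimator and MAPE metric.
    import numpy as np
    import tensorflow as tf

    def build_lstm(n_features):                          # 2 or 3 per-cycle features
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(1, n_features)),       # one discharge cycle per time step
            tf.keras.layers.LSTM(32),
            tf.keras.layers.Dense(1),                    # SOH estimate
        ])
        model.compile(optimizer="adam", loss="mae")
        return model

    def mape(y_true, y_pred):
        """Mean absolute percentage error, as used for model selection."""
        return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    ```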
Online ISSN 2993-8341
Print ISSN 0973-1318