Volume 19, No. 11, November 2023

■ Cover page (PDF 3224 KB)   ■ Table of Contents, November 2023 (PDF 33 KB)

  
  • A Local Outlier Factor-Based Automated Anomaly Event Detection of Vessels for Maritime Surveillance
    R. Hari Kumar, Saikat Bank, R. Bharath, S. Sumati, and C. P. Ramanarayanan
    2023, 19(11): 711-718.  doi:10.23940/ijpe.23.11.p1.711718
    Detecting anomalous events in maritime traffic is key to maritime situational awareness. Automatic Identification System (AIS) data, initially envisioned for collision avoidance, can also be used to detect anomalous vessel patterns because of its rich information content. Officers at the Vessel Traffic Service (VTS) monitor vessel traffic and vessel behavior based on the AIS data received from the vessels. The VTS monitoring system receives large volumes of AIS data at its base stations, and manual detection of anomalous patterns is nearly infeasible. To address this issue, we propose a novel approach based on the Local Outlier Factor (LOF) algorithm, trained on customized features extracted from the AIS data, to automatically detect anomalous events during a vessel's voyage. The proposed algorithm lets officers focus only on the anomalous AIS instances, which are far fewer than the normal AIS instances. The difference in the statistical behavior of the extracted features between normal and anomalous samples confirms their relevance to anomaly detection. Experiments on a real dataset of 1,042,002 AIS messages demonstrate the validity of the proposed method.
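    As a rough illustration of the kind of pipeline this abstract describes, the sketch below applies scikit-learn's LocalOutlierFactor to a feature table derived from AIS messages. The feature column names and the contamination value are assumptions for illustration only, not the paper's actual feature set or settings.

        # Minimal sketch (not the authors' implementation): flag anomalous AIS
        # instances with the Local Outlier Factor. The feature columns below are
        # illustrative stand-ins for the paper's customized AIS-derived features.
        import pandas as pd
        from sklearn.neighbors import LocalOutlierFactor
        from sklearn.preprocessing import StandardScaler

        def flag_ais_anomalies(ais: pd.DataFrame, n_neighbors: int = 20,
                               contamination: float = 0.01) -> pd.DataFrame:
            # assumed feature columns extracted per AIS message / track segment
            features = ais[["speed_over_ground", "course_change",
                            "turn_rate", "dist_from_route"]]
            X = StandardScaler().fit_transform(features)
            lof = LocalOutlierFactor(n_neighbors=n_neighbors,
                                     contamination=contamination)
            labels = lof.fit_predict(X)              # -1 = anomaly, 1 = normal
            out = ais.copy()
            out["anomaly"] = labels == -1
            out["lof_score"] = -lof.negative_outlier_factor_   # higher = more anomalous
            return out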
  • Hybrid Ensemble Stacking Model for Gauging English Transcript Readability
    Namrata Sukhija, Rashmi Priya, Vaishali Arya, Neha Kohli, and Ashima Arya
    2023, 19(11): 719-727.  doi:10.23940/ijpe.23.11.p2.719727
    Readability has been a hotly debated topic of study for years. The recent explosion in data-driven learning algorithms has opened a realistic route toward readability categorization and rating. In today's data-rich environment, the investigation of textual readability is a well-established topic that has grown in importance. This paper addresses the challenge of assessing readability for the English language. The goal is to predict a sentence's readability from the given phrases, matching the intended audience's expected comprehension ability. This readability factor is essential to both the writing and comprehension phases of learning English. Current studies aim to improve classifier accuracy by integrating ensemble learning with a range of machine learning (ML) models. This work introduces a stacked ensemble for assessing English readability, employing four base classifiers, namely k-nearest neighbor, support vector machine, stochastic gradient descent, and logistic regression, with linear discriminant analysis (LDA) as the meta-classifier. In the present investigation, we conducted tests on twenty-five thousand English phrases, labeled by Flesch-Kincaid score into 7 distinct comprehension categories. The model's performance was examined using several evaluation metrics, comprising precision, accuracy, recall, F1 score, and area under the curve. The findings showed that the stacked model outperformed traditional ML models, achieving a 98.66% accuracy rate.
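    The base/meta split described in this abstract maps naturally onto scikit-learn's StackingClassifier. The sketch below is one plausible assembly of the four named base learners with an LDA meta-classifier; the hyperparameters are illustrative placeholders, not the paper's tuned values.

        # Sketch of the described stack: KNN, SVM, SGD, and logistic regression as
        # base classifiers, with linear discriminant analysis as the meta-classifier.
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.ensemble import StackingClassifier
        from sklearn.linear_model import LogisticRegression, SGDClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        base_learners = [
            ("knn", KNeighborsClassifier(n_neighbors=5)),
            ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
            ("sgd", make_pipeline(StandardScaler(),
                                  SGDClassifier(loss="log_loss", max_iter=1000))),
            ("logreg", make_pipeline(StandardScaler(),
                                     LogisticRegression(max_iter=1000))),
        ]

        stack = StackingClassifier(
            estimators=base_learners,
            final_estimator=LinearDiscriminantAnalysis(),
            cv=5,   # out-of-fold base predictions feed the meta-classifier
        )

        # Typical usage on a labeled readability dataset (X = sentence features,
        # y = comprehension category):
        # stack.fit(X_train, y_train); print(stack.score(X_test, y_test))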
  • Strategies for Data Backup and Recovery in the Cloud
    Amanpreet Singh and Jyoti Battra
    2023, 19(11): 728-735.  doi:10.23940/ijpe.23.11.p3.728735
    This study examines modern data backup and recovery techniques in cloud computing environments. An in-depth literature analysis highlights the changing landscape by examining the effects of various techniques. Employing a rigorous methodology that involves collecting data from diverse cloud service providers and enterprises, the study offers empirical insights into strategy choices and their difficulties. Results show that the choice of cloud provider affects how a plan is implemented, alongside perennial concerns about data security, compliance, and cost management. Emerging technologies, such as blockchain-based data integrity and AI-driven anomaly detection, are also discussed.
  • Deep Learning-Powered Corneal Endothelium Image Segmentation with Attention U-Net
    Kamireddy Vijay Chandra, Kala Praveen Bagadi, Kalapala Vidya Sagar, R. Manjula Sri, and K. Sudha Rani
    2023, 19(11): 736-743.  doi:10.23940/ijpe.23.11.p4.736743
    In the realm of medical image analysis, accurate and effective segmentation of corneal features is of paramount importance. Our latest research presents an innovative approach to corneal image segmentation based on the Attention U-Net design, supported by compelling statistical evidence. Our method harnesses the capabilities of the U-Net architecture, known for its adept feature extraction and contextual information retention. What sets our approach apart is the integration of attention mechanisms into this architecture, enhancing the model's ability to focus on crucial regions within corneal images. This fusion results in a robust segmentation model that captures intricate corneal layer details. To validate our method, we conducted experiments using a substantial dataset of corneal confocal images encompassing diverse anatomical and clinical variations. Our quantitative assessments reveal significant superiority over traditional segmentation techniques, with the Dice coefficient and other key metrics demonstrating substantial performance gains. Qualitatively, our model accurately delineates complex corneal structures even in challenging surroundings. Our approach begins with a comprehensive U-shaped encoder-decoder structure that extracts information from input corneal images. The subsequent integration of attention mechanisms empowers the model to allocate varying levels of importance to different visual regions. This adaptability significantly enhances segmentation accuracy, particularly in regions with anomalies or fine structures. Our use of the Attention U-Net design for corneal image segmentation yields remarkable results and equips medical professionals and researchers with a powerful tool for precise and efficient corneal structure assessment. Furthermore, integrating attention mechanisms within the U-Net architecture is a promising route to better medical image segmentation methodologies, ultimately improving patient treatment and advancing ophthalmological research. The dataset comprises 1050 corneal confocal images covering various anatomical and clinical variations. The Attention U-Net achieves a Dice coefficient of 0.92, surpassing traditional segmentation techniques by an average margin of 12%. Comparative assessments demonstrate a 15% increase in segmentation accuracy over typical U-Net models, and qualitative evaluations highlight a 93% success rate in accurately identifying complex corneal regions, even in challenging cases.
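    For readers unfamiliar with the attention mechanism referenced in this abstract, the sketch below shows one common form of the additive attention gate used in Attention U-Net architectures, written in PyTorch. It is a generic illustration of re-weighting encoder skip features, not the authors' exact network, and it assumes the gating and skip tensors have already been brought to the same spatial size.

        # Generic additive attention gate of the kind used in Attention U-Net
        # models. Channel counts and placement are assumptions; the gating signal
        # g and skip features x are assumed to share spatial dimensions.
        import torch
        import torch.nn as nn

        class AttentionGate(nn.Module):
            def __init__(self, gate_channels: int, skip_channels: int, inter_channels: int):
                super().__init__()
                self.w_g = nn.Sequential(nn.Conv2d(gate_channels, inter_channels, 1),
                                         nn.BatchNorm2d(inter_channels))
                self.w_x = nn.Sequential(nn.Conv2d(skip_channels, inter_channels, 1),
                                         nn.BatchNorm2d(inter_channels))
                self.psi = nn.Sequential(nn.Conv2d(inter_channels, 1, 1),
                                         nn.BatchNorm2d(1),
                                         nn.Sigmoid())
                self.relu = nn.ReLU(inplace=True)

            def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
                # g: decoder (gating) features, x: encoder skip features, same H x W
                attn = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # coefficients in [0, 1]
                return x * attn                                        # suppress irrelevant regions

        # Example: re-weight a 64-channel skip map with a 128-channel gating signal.
        gate = AttentionGate(gate_channels=128, skip_channels=64, inter_channels=32)
        g = torch.randn(1, 128, 56, 56)
        x = torch.randn(1, 64, 56, 56)
        print(gate(g, x).shape)   # torch.Size([1, 64, 56, 56])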
  • An Improved Firefly-Based Feature Selection Method for Software Fault Identification and Classification
    Ashima Arya and Sanjay Kumar Malik
    2023, 19(11): 744-752.  doi:10.23940/ijpe.23.11.p5.744752
    With the increase in the number of software components, it becomes difficult to identify faults manually. Automated fault detection has gained much attention in the last couple of years. This paper presents an approach for detecting and classifying software faults that incorporates an improved firefly algorithm for feature selection. The tuned algorithm achieves a better classification rate owing to its novel fitness function, whose objective is to minimize the losses and maximize the classification accuracy. The proposed algorithm is also compared with other state-of-the-art algorithms and shows significant improvement on the evaluated quantitative parameters. The proposed work has been evaluated in terms of precision, recall, and F-measure and is significantly better than the compared methods.
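    The abstract does not spell out the improved fitness function, so the sketch below is only a generic binary firefly feature-selection loop with a placeholder fitness that trades classification accuracy against the number of selected features; the weights, parameters, and the evaluate callback are assumptions, not the paper's tuned variant.

        # Generic binary firefly feature selection (illustrative only). `evaluate`
        # is a user-supplied callback, e.g. cross-validated accuracy of a
        # classifier on the selected feature subset.
        import numpy as np

        def fitness(mask, X, y, evaluate, w_acc=0.95):
            if mask.sum() == 0:
                return 0.0
            acc = evaluate(X[:, mask.astype(bool)], y)
            # favor accuracy, lightly reward smaller feature subsets (assumed weights)
            return w_acc * acc + (1 - w_acc) * (1 - mask.mean())

        def firefly_select(X, y, evaluate, n_fireflies=20, n_iter=50,
                           beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
            rng = np.random.default_rng(seed)
            d = X.shape[1]
            pos = rng.random((n_fireflies, d))            # continuous positions in [0, 1]
            masks = (pos > 0.5).astype(int)
            light = np.array([fitness(m, X, y, evaluate) for m in masks])
            for _ in range(n_iter):
                for i in range(n_fireflies):
                    for j in range(n_fireflies):
                        if light[j] > light[i]:           # move i toward the brighter j
                            r2 = float(np.sum((pos[i] - pos[j]) ** 2))
                            beta = beta0 * np.exp(-gamma * r2)
                            pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(d) - 0.5)
                pos = np.clip(pos, 0.0, 1.0)
                masks = (pos > 0.5).astype(int)
                light = np.array([fitness(m, X, y, evaluate) for m in masks])
            return masks[int(np.argmax(light))].astype(bool)   # boolean mask of kept features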
  • Result Analysis of the Improved Contiguous Memory Allocation (ICMA) Approach in the Linux Kernel Research
    Anmol Suryavanshi and Sanjeev Kumar Sharma
    2023, 19(11): 753-761.  doi:10.23940/ijpe.23.11.p6.753761
    The ICMA (Improved Contiguous Memory Allocation) approach represents a transformative rethinking of the traditional Contiguous Memory Allocation (CMA) method, aiming to optimize memory utilization, diminish allocation errors, reduce system overhead, and alleviate latency issues. It introduces a novel virtual memory remapping strategy that challenges conventional assumptions. ICMA adopts a deferred mapping approach, reserving mapping until explicitly required, necessitating a dedicated device driver for its operation. This research critically assesses ICMA's practical implications within the Linux Kernel, primarily focusing on memory allocation, overall system performance, and efficiency enhancements. The technical implementation details presented in this study underscore its effectiveness in addressing memory allocation failures and latency bottlenecks. Empirical testing on a Raspberry Pi 3 highlights the real-world applicability of ICMA. These findings contribute valuable insights into the potential benefits and challenges associated with ICMA, providing essential guidance for devising memory management strategies to significantly enhance system efficiency and performance.
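    The kernel-level details of ICMA's dedicated device driver are beyond a short snippet, but the deferred-mapping idea mentioned above can be illustrated with a small toy model: reserve a contiguous range up front and commit individual pages only when they are first touched. The Python sketch below is purely conceptual and is not the ICMA implementation.

        # Toy illustration of deferred (lazy) mapping versus eager mapping.
        PAGE_SIZE = 4096

        class DeferredRegion:
            def __init__(self, num_pages):
                self.num_pages = num_pages   # contiguous range reserved, nothing mapped yet
                self.pages = {}              # page index -> backing buffer, filled lazily

            def _page(self, index):
                if index not in self.pages:  # "map" (allocate) only on first access
                    self.pages[index] = bytearray(PAGE_SIZE)
                return self.pages[index]

            def write(self, offset, data):
                # assumes the write fits within a single page, for simplicity
                page, start = divmod(offset, PAGE_SIZE)
                self._page(page)[start:start + len(data)] = data

            def mapped_bytes(self):
                return len(self.pages) * PAGE_SIZE   # memory actually committed so far

        region = DeferredRegion(num_pages=1024)   # reserve 4 MiB worth of pages
        region.write(0, b"hello")                 # only the first page gets mapped
        print(region.mapped_bytes())              # 4096 bytes committed, not 4 MiB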
  • Optimization of Preventive Maintenance to Maximize the Availability of Aircraft under Resource Constraints
    Om Prakash Bohrey and A. S. Chatpalliwar
    2023, 19(11): 762-770.  doi:10.23940/ijpe.23.11.p7.762770
    The objective of this research work was to evolve a methodology for maximizing the availability of aircraft. An analysis of repairable systems was formulated, including the selection of critical systems, goodness-of-fit tests, and the optimization of preventive maintenance. The rotables that undergo periodic preventive maintenance and preventive replacement, and that contribute to major aircraft downtime, were investigated for scope of improvement to maximize availability. The applicability and suitability of the data were tested and validated, and parameter estimation was carried out using Crow's Maximum Likelihood Estimator. The optimized preventive maintenance model was implemented on one of the aircraft fleets. The proposed methodology of optimizing preventive maintenance, mean time between failures, and the forecast of critical rotables was found to be effective and resulted in a significant improvement in the availability of the aircraft.
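    For reference, Crow's maximum-likelihood estimator mentioned above has a closed form for the power-law (Crow-AMSAA) process. The sketch below implements the standard time-truncated estimates; the failure times and observation window are made-up example values, not data from the paper.

        # Crow (AMSAA) maximum-likelihood estimates for the power-law NHPP,
        # time-truncated at T, with intensity u(t) = lambda * beta * t**(beta - 1).
        import numpy as np

        def crow_amsaa_mle(failure_times, T):
            t = np.asarray(failure_times, dtype=float)
            n = t.size
            beta_hat = n / np.sum(np.log(T / t))
            lam_hat = n / T ** beta_hat
            return lam_hat, beta_hat

        # Toy usage with made-up failure times (hours) over a 1000-hour window.
        lam, beta = crow_amsaa_mle([120.0, 340.0, 505.0, 760.0, 940.0], T=1000.0)
        mtbf_inst = 1.0 / (lam * beta * 1000.0 ** (beta - 1.0))   # instantaneous MTBF at T
        print(lam, beta, mtbf_inst)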
Online ISSN 2993-8341
Print ISSN 0973-1318