Vol. 21, No. 5

■ Cover page (PDF 3224 KB) ■ Table of Contents, May 2025 (PDF 122 KB)

  
  • Fusion Mutation-Based Test Generation and XGBoost-Driven Prioritization for Image Classification DNNs
    Qian Zhang and Dongcheng Li
    2025, 21(5): 235-248.  doi:10.23940/ijpe.25.05.p1.235248
    Abstract    PDF (1139KB)   
    With deep learning increasingly employed in safety-critical domains, ensuring the reliability of deep neural networks (DNNs) has become paramount. Although traditional software testing can detect model errors, the substantial cost of assembling large, manually annotated test sets remains a key challenge. To address this, we propose (1) a fusion mutation-based test case generation technique and (2) a test case prioritization algorithm based on feature analysis. The fusion mutation method enriches test diversity through both data mutation and model mutation. By designing a hyperparameter optimization space for image distortion and employing an improved Bayesian optimization algorithm, our approach rapidly identifies optimal mutation parameters and adaptively generates test sets from minimal data. These mutated images simulate various distortion scenarios, forming the basis for priority ranking. The prioritization algorithm leverages differential, rule, and effectiveness features, combined with an XGBoost-based strategy that ranks the most error-prone test cases first and restricts ineffective mutations. This ensures expedited identification of potential DNN defects, improving testing efficiency. Experiments using popular image classification networks on multiple datasets demonstrate that our method outperforms other state-of-the-art approaches in 50% of tested scenarios, achieving a 2%-9.2% performance gain. These findings validate our method’s effectiveness in uncovering diverse error types in DNNs and generating high-quality test sets while maintaining a balance between test data efficiency and diversity.
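    As an illustration of the XGBoost-driven prioritization step described above, here is a minimal sketch: a gradient-boosted classifier is trained on per-test-case features (synthetic stand-ins for the paper's differential, rule, and effectiveness features) and the test cases are then ranked by predicted fault-revealing probability. The data, feature columns, and hyperparameters are illustrative assumptions, not the authors' setup.

    import numpy as np
    from xgboost import XGBClassifier  # assumes the xgboost package is installed

    rng = np.random.default_rng(0)
    # One row per mutated test case; the three columns stand in for the paper's
    # differential, rule, and effectiveness feature groups (hypothetical data).
    X = rng.random((500, 3))
    # Synthetic labels: 1 if the test case exposed a misclassification.
    y = (X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(500) > 0.9).astype(int)

    # Train a classifier that estimates how likely each test case reveals a fault.
    model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
    model.fit(X, y)

    # Prioritize: run the most error-prone test cases first.
    scores = model.predict_proba(X)[:, 1]
    priority_order = np.argsort(-scores)
    print("first 10 test cases to run:", priority_order[:10].tolist())
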
  • Real-Time AI in Surgery: A Review of Precision, Innovation, and Future Directions in Surgical Assistance
    Syed Mohsin Bukhari and Ishan Kumar
    2025, 21(5): 249-258.  doi:10.23940/ijpe.25.05.p2.249258
    Abstract    PDF (633KB)   
    The convergence of real-time AI technologies and surgery is transforming the profession by improving precision, reducing human error, and improving patient outcomes. These technologies leverage advancements in machine learning, computer vision, and predictive analytics to assist surgeons during key procedures. In this study, a detailed analysis of the current state of AI in surgery is presented, with an emphasis on preoperative planning and counselling, intraoperative assistance, and postoperative follow-up. The review highlights the most promising AI-driven approaches, such as robotic-assisted surgery, image-enhanced navigation, and real-time predictive modeling, while noting that they pose significant challenges regarding data privacy, interoperability, algorithmic bias, and ethics, with implications for the widespread use of artificial intelligence in medical environments. In addition, this research identifies emerging trends, including the integration of haptic feedback and autonomous decision models, and considers the potential of AI to revolutionize surgical procedures, resulting in enhanced patient safety and shorter recovery times.
  • Multi Object Image Captioning via CNNs and Transformer Model
    Latika Pinjarkar, Devanshu Sawarkar, Pratham Agrawal, Devansh Motghare, and Nidhi Bansal
    2025, 21(5): 259-268.  doi:10.23940/ijpe.25.05.p3.259268
    Abstract    PDF (554KB)   
    Image captioning, the task of automatically generating textual descriptions for given images, has shown significant advancements through deep learning techniques such as CNNs and Transformers. CNNs specialize in extracting the most salient visual information from images, while Transformers capture long-range dependencies and enable parallel processing for effective sentence and sequence modeling. This paper offers a systematic view of current developments in image captioning that harness the power of CNNs and Transformers simultaneously. In examining the strengths and weaknesses of this combined approach, we review architectural advancements such as attention mechanisms, vision-language pre-training, and multimodal reasoning enhancements. Open research problems and opportunities are also discussed, including enhancing model explainability and user control, providing domain-specific adaptation, introducing commonsense reasoning, handling challenging images, and ensuring fairness, bias mitigation, and privacy safeguarding. We further consider applications to new challenges, such as multimodal captioning from videos and other data sources. By reviewing past techniques and the current trajectory of advancement, this paper aims to guide future image captioning systems toward greater sophistication, openness, and reliability across a wide variety of outputs.
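    To make the CNN-encoder / Transformer-decoder pattern the abstract describes concrete, the sketch below pairs a small convolutional encoder with a Transformer decoder in PyTorch. The TinyCaptioner name, layer sizes, and vocabulary are hypothetical; real systems use a pretrained CNN backbone and a learned tokenizer.

    import torch
    import torch.nn as nn

    class TinyCaptioner(nn.Module):
        def __init__(self, vocab_size=1000, d_model=128):
            super().__init__()
            # CNN encoder: extracts a grid of visual features from the image.
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((7, 7)),
            )
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
            self.decoder = nn.TransformerDecoder(layer, num_layers=2)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, images, tokens):
            # Flatten the feature map into 49 region features that the decoder
            # attends over via cross-attention while generating each word.
            feats = self.cnn(images).flatten(2).transpose(1, 2)  # (B, 49, d_model)
            tgt = self.embed(tokens)                             # (B, T, d_model)
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            hidden = self.decoder(tgt, feats, tgt_mask=mask)
            return self.out(hidden)                              # (B, T, vocab)

    model = TinyCaptioner()
    logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2, 12)))
    print(logits.shape)  # torch.Size([2, 12, 1000])
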
  • Causality Extraction and Reasoning from Text
    Isha, Anjali, Karuna Sharma, Kirti, and Vibha Pratap
    2025, 21(5): 269-277.  doi:10.23940/ijpe.25.05.p4.269277
    Abstract    PDF (422KB)   
    Automatic extraction of causal linkages from textual data has become important for knowledge-based reasoning and decision-support system applications. Identifying cause-and-effect relationships in unstructured text enhances decision-making and the interpretability of textual knowledge. The goal of this research is to combine rule-based frameworks with deep learning methods to provide a novel approach to causality extraction and reasoning from text. Specifically, it improves causal relation identification using the RoBERTa transformer model. Our approach entails extending RoBERTa to identify advanced causal relations, situational dependencies, and cause-effect pairs from text data. Moreover, to improve the cognitive capacity and performance of text-derived knowledge, we have created a reasoning method that reveals implicit causal knowledge. Our approach improves on current causality extraction methods in accuracy and robustness, as shown by thorough evaluation on standard datasets. The model effectively identifies complex causal relationships, outperforming conventional approaches. This research advances the field of textual reasoning by providing a scalable and efficient framework for causality extraction and automated reasoning. The proposed method not only enhances accuracy but also opens new possibilities for real-world applications in decision support and knowledge-based systems.
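    As a rough illustration of the RoBERTa-based identification step (not the authors' full pipeline), the sketch below scores a sentence for causality with a sequence-classification head via the Hugging Face transformers library. The two-label scheme is a hypothetical simplification, and the classification head is untrained here; in practice it would be fine-tuned on a labeled causality dataset.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=2)  # 0 = non-causal, 1 = causal (hypothetical)

    sentence = "The dam failed because heavy rainfall raised the water level."
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

    with torch.no_grad():
        logits = model(**inputs).logits
    print("P(causal) =", logits.softmax(-1)[0, 1].item())
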
  • Enhancing Cloud Load Balancing with Multi-Objective Optimization in Task Scheduling
    Suman Lata, Dheerendra Singh, and Gaurav Raj
    2025, 21(5): 278-287.  doi:10.23940/ijpe.25.05.p5.278287
    Abstract    PDF (604KB)   
    In cloud computing, efficient workload management is essential for improving resource utilization, service availability, and reliability. While extensive research exists on cloud task scheduling, a gap remains in addressing multi-objective optimization and load balancing together. This study introduces a hybrid optimization method for cloud scheduling aimed at optimizing resource use and improving user service. Specifically, a combined Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO) approach is employed for load-balancing optimization. To further enhance the performance of this metaheuristic framework, the Shortest Job First (SJF) heuristic is used to generate the initial population. Simulations are conducted using the CloudSim simulator. The primary goals are minimizing makespan and cost while maximizing cloud resource utilization. Performance is evaluated by comparing the proposed work with existing techniques such as ABC, PSO, Genetic Algorithm (GA), and Ant Colony Optimization (ACO), using turnaround time, waiting time, makespan, throughput, cost, and resource utilization to demonstrate its effectiveness.
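    A minimal sketch of the SJF-seeding idea follows: an SJF schedule seeds one particle of a plain PSO that minimizes makespan over task-to-VM assignments. Task counts, VM speeds, and PSO coefficients are illustrative assumptions; the paper's actual method hybridizes ABC with PSO and is evaluated in CloudSim.

    import numpy as np

    rng = np.random.default_rng(1)
    n_tasks, n_vms, n_particles = 30, 5, 20
    lengths = rng.integers(10, 100, n_tasks)   # task lengths (hypothetical MI)
    speeds = rng.integers(5, 15, n_vms)        # VM speeds (hypothetical MIPS)

    def makespan(assign):
        # Completion time of the busiest VM under a task-to-VM assignment.
        load = np.zeros(n_vms)
        for t, v in enumerate(assign):
            load[v] += lengths[t] / speeds[v]
        return load.max()

    def sjf_seed():
        # Shortest Job First: place tasks, shortest first, on the least-loaded VM.
        assign = np.zeros(n_tasks, dtype=int)
        load = np.zeros(n_vms)
        for t in np.argsort(lengths):
            v = int(np.argmin(load))
            assign[t] = v
            load[v] += lengths[t] / speeds[v]
        return assign

    # PSO over continuous positions in [0, n_vms); truncation yields VM indices.
    pos = rng.random((n_particles, n_tasks)) * n_vms
    pos[0] = sjf_seed() + 0.5                  # the SJF schedule seeds particle 0
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([makespan(p.astype(int)) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(100):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, n_vms - 1e-9)
        f = np.array([makespan(p.astype(int)) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print("best makespan found:", round(float(makespan(gbest.astype(int))), 2))
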
  • Geographical Energy-Aware Data Aggregation using Mobile Sinks (GEADAMS) Algorithm in Wireless Sensor Networks to Minimize Latency
    S. Divya Bharathi and S. Veni
    2025, 21(5): 288-297.  doi:10.23940/ijpe.25.05.p6.288297
    Abstract    PDF (895KB)   
    Wireless Sensor Networks (WSNs) are vital for real-time data acquisition and transmission. Unfortunately, they often suffer from high latency because of inefficient data routing and aggregation approaches. To resolve these issues, we propose the Geographical Energy-Aware Data Aggregation using Mobile Sinks (GEADAMS) algorithm for maximal data aggregation with minimal delay. GEADAMS uses geographic information and energy-aware routing to dynamically choose rendezvous points, reducing long-distance transmissions and spreading energy consumption among sensor nodes. The mobile sinks introduced by this approach facilitate efficient collection of data from multiple sources, preventing congestion and ensuring seamless data delivery. The protocol has been compared with existing protocols, viz., LEACH, GEAR, and RPM, on performance parameters such as throughput, energy consumption, Packet Delivery Ratio (PDR), and latency. Simulation results confirm that GEADAMS outperforms the other approaches under similar testing scenarios, achieving higher throughput, energy savings, and lower latency without compromising reliability. The method also extends overall network lifetime by reducing redundant node selection and transmission flooding. This research paves the way for more efficient, high-speed WSN routing in dynamic environments such as environmental monitoring, disaster rescue, and smart cities. Future work will investigate machine learning approaches to further improve adaptive routing design in dynamic WSNs.
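    The core rendezvous-selection idea can be sketched as follows: score candidate nodes by combining residual energy with proximity to the mobile sink's current position, then pick the top-scoring nodes as aggregation points. The weights, field geometry, and node counts are illustrative assumptions, not the protocol's actual parameters.

    import numpy as np

    rng = np.random.default_rng(2)
    n_nodes = 100
    positions = rng.random((n_nodes, 2)) * 100   # node coordinates, 100 m x 100 m field
    energy = rng.random(n_nodes)                 # normalized residual energy in [0, 1]
    sink = np.array([50.0, 50.0])                # mobile sink's current position

    # Score each candidate: reward residual energy, penalize distance to the sink.
    dist = np.linalg.norm(positions - sink, axis=1)
    score = 0.6 * energy - 0.4 * dist / dist.max()

    # The top-k nodes become rendezvous points that aggregate nearby traffic
    # before the mobile sink collects it, avoiding long-distance transmissions.
    k = 5
    rendezvous = np.argsort(-score)[:k]
    print("rendezvous nodes:", rendezvous.tolist())
    print("mean distance to sink:", round(float(dist[rendezvous].mean()), 1))
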
Online ISSN 2993-8341
Print ISSN 0973-1318