International Journal of Performability Engineering, Vol. 19, No. 10

■ Cover page (PDF 3223 KB) ■ Table of Contents, October 2023 (PDF 33 KB)

  • Alternative Ranking Distance Metrics for Fault-Focused Clustering in Parallel Fault Localization
    Yihao Li, Pan Liu, W. Eric Wong, Nicholas Chau, and Chih-Wei Hsu
    2023, 19(10): 633-643.  doi:10.23940/ijpe.23.10.p1.633643
    Abstract    PDF (719KB)   
    Generating fault-focused clusters is a common practice in parallel fault localization, where rankings that are likely to lead to the same first identified faulty statement are grouped together. With respect to the performance of fault-focused clustering and fault localization cost, one critical factor is the metric used to measure the distance between two rankings. Current work prefers the Kendall tau distance for its fitness in computing ranking disagreement. In this paper, we apply two other well-established ranking distance metrics, Spearman’s Footrule and a set intersection-based measure, to fault-focused clustering and compare the parallel fault localization performance of the three distance metrics.
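    The three distance metrics the abstract names can be sketched in a few lines of plain Python. This is an illustrative implementation of the standard definitions, not the paper's code; the prefix-averaged set-intersection form is one common variant and is an assumption here.

```python
from itertools import combinations

def kendall_tau_distance(r1, r2):
    """Count item pairs that the two rankings order oppositely."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
    )

def spearman_footrule(r1, r2):
    """Sum of absolute positional displacement of each item."""
    pos2 = {item: i for i, item in enumerate(r2)}
    return sum(abs(i - pos2[item]) for i, item in enumerate(r1))

def set_intersection_distance(r1, r2):
    """Average, over prefix depths k, of the non-overlap of the top-k sets."""
    n = len(r1)
    total = 0.0
    for k in range(1, n + 1):
        overlap = len(set(r1[:k]) & set(r2[:k]))
        total += 1.0 - overlap / k
    return total / n
```

    For two rankings of suspicious statements that differ only by swapping the top two items, Kendall tau counts one discordant pair, the footrule counts a displacement of two, and the set-intersection measure penalizes only the depth-1 prefix.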
  • The Future of Employment Verification: Verifiable Credentials for a Seamless Verification Process
    Surekha Thota and Shantala Devi Patil
    2023, 19(10): 644-653.  doi:10.23940/ijpe.23.10.p2.644653
    Abstract    PDF (390KB)   
    Blockchain-based verifiable credentials are poised to revolutionize the employment verification process. They enable employers to issue and verify the credentials of their employees in a secure and seamless way. In this paper, the benefits of verifiable credentials for employers, employees, and verifiers are discussed. The proposed solution employs decentralized identifiers (DIDs) and blockchain for employment verification. A proof of concept has been implemented using Trinsic CLI, which defines the credential schema and issues, stores, and verifies employees' credentials. The benefit of our implementation is that credentials are verified directly by the verifier, without trusted third parties and with minimal manual intervention. The solution ensures privacy by not disclosing the actual credential details to the verifier; rather, it returns a Boolean result stating whether the credential is valid. The proposed employment verification solution thus offers a secure, privacy-preserving, tamper-proof, and efficient approach, eliminating the reliance on trusted third parties and time-consuming paperwork.
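    The issue-then-verify flow with a Boolean outcome can be illustrated with a minimal sketch. This stand-in uses an HMAC over the credential payload instead of the DID key pairs and blockchain anchoring the paper describes, and the key and field names are hypothetical; it shows only the shape of the protocol, where the verifier learns validity but need not surface the raw details.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"employer-issuing-key"  # hypothetical; a real system uses DID key pairs

def issue_credential(employee_id, role, issued):
    """Issuer signs a canonical payload; payload plus signature form the credential."""
    payload = json.dumps(
        {"employee_id": employee_id, "role": role, "issued": issued},
        sort_keys=True,
    )
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_credential(credential):
    """Verifier recomputes the signature and returns only a Boolean result."""
    expected = hmac.new(
        ISSUER_KEY, credential["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])
```

    Any tampering with the payload (say, changing the role) invalidates the signature, so verification fails without the verifier ever needing a trusted third party to attest to the record.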
  • Classification of Web Services for Efficient Performability
    Jitender Tanwar, Sanjay Kumar Sharma, Mandeep Mittal, and Ashok Kumar Yadav
    2023, 19(10): 654-662.  doi:10.23940/ijpe.23.10.p3.654662
    Abstract    PDF (560KB)   
    The use of the Internet for business is increasing day by day, as is the use of web services, which are the main components of business over the Internet. Several service providers compete for their web services to be used, and the enormous growth in the number of online services has made web service classification difficult. Manually classifying and choosing web services for an application is a highly challenging job that often leads to ineffective, error-prone outcomes; automatic and precise categorization is therefore necessary. The primary objective of this research is to classify online services using various machine learning models, compare them, and determine which model is the most effective. To better understand the application of these classifiers to various web services, the outcomes of several machine learning classifiers are compared across various characteristics. The results, summarized in a table, suggest that in terms of Accuracy, F1 Score, and MCC, the SVM and NN classifiers perform equally best. In terms of execution time, however, NB is best at 0.036 seconds and NN is worst at 3.41 seconds.
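    The three headline metrics are standard functions of the binary confusion-matrix counts. As a minimal reference sketch (standard definitions, not taken from the paper):

```python
import math

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient: +1 perfect, 0 random, -1 inverse."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

    MCC is often preferred alongside Accuracy and F1 because it stays informative on imbalanced classes, which matters when some web service categories are far rarer than others.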
  • Customer Churn Analysis using Spark and Hadoop
    Priyanshu Verma, Ishan Sharma, Sonia Deshmukh, and Rohit Vashisht
    2023, 19(10): 663-675.  doi:10.23940/ijpe.23.10.p4.663675
    Abstract    PDF (939KB)   
    Predicting customer churn is one of the telecommunication industry's biggest challenges: why did customers quit using a product, site, service, or subscription? Machine learning with Spark and Hadoop has considerably increased the ability to predict customer behaviour. Popular predictive models, such as logistic regression evaluated with Spark's binary and multiclass classification evaluators, have been used in the prediction process. Boosting and ensemble approaches are applied to the training dataset to examine their impact on model effectiveness. Additionally, a K-fold cross-validation method is used during training to optimize the hyperparameters and produce the models. Finally, the test data were examined using the AUC-ROC curve and the confusion matrix. In this research, the Spark and Hadoop frameworks are adapted to predict customer churn: the data is pre-processed, feature analyses are performed, and feature selection is carried out using the VectorAssembler algorithm. This study aims to analyse customer behaviour using this dataset.
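    Spark's CrossValidator performs the K-fold splitting internally; as a minimal sketch of the underlying idea in plain Python (not the paper's Spark code), each fold serves once as the held-out test set while the remainder trains the model:

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) for each of the k folds."""
    indices = list(range(n_samples))
    # Distribute any remainder so fold sizes differ by at most one
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size
```

    Hyperparameter tuning then fits each candidate configuration on every training split, averages the evaluator's score over the k test splits, and keeps the best configuration.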
  • Ensemble Techniques for Classification of Brain Tumor Images Based on Weighting Average of Various Deep Learning-Based Components Models
    Sachin Jain and Vishal Jain
    2023, 19(10): 676-686.  doi:10.23940/ijpe.23.10.p5.676686
    Abstract    PDF (1616KB)   
    Brain tumors are aberrant cell growths. Medical imaging helps diagnose and classify brain tumors, and MRI-based brain tumor categorization is a promising medical imaging research area. Patients' tumor sizes and features vary across brain images, so radiologists struggle to classify tumors from large numbers of images. This research proposes an efficient Deep Learning (DL)-based tumor classification system built on three CNN models: VGG16, VGG19, and SqueezeNet. Image compression reduces storage space while still allowing detailed analysis of brain images, which must be stored for long periods for research and medical purposes; this study uses JPEG2000 for medical brain image storage. Classification is performed with and without compression to determine how compression affects classification performance. Without compression, VGG16 achieves 98.5% accuracy, VGG19 98.80%, and SqueezeNet 98.7%. The weighted average ensemble outperforms all base models, reaching 98.8% at a K-fold value of 20.
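    A weighted-average ensemble combines the class-probability vectors of the base CNNs with per-model weights. The sketch below shows the combination step only; the probabilities and weights are hypothetical inputs (in practice the weights might track each base model's validation accuracy), not outputs of the paper's models.

```python
def weighted_average_ensemble(prob_lists, weights):
    """Combine per-model class-probability vectors with a weighted average.

    prob_lists: one probability vector per base model (same class order).
    weights:    one non-negative weight per base model.
    Returns (predicted_class_index, combined_probability_vector).
    """
    assert len(prob_lists) == len(weights)
    total_w = sum(weights)
    n_classes = len(prob_lists[0])
    combined = [
        sum(w * probs[c] for probs, w in zip(prob_lists, weights)) / total_w
        for c in range(n_classes)
    ]
    return combined.index(max(combined)), combined
```

    Because the weights are normalized, the combined vector remains a valid probability distribution, and a model with higher weight pulls the ensemble's decision toward its own prediction.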
  • An Interval-Probability Hybrid Structural Reliability Calculation Method Based on CSSA-BR-BP
    Yonghua Li, Shujian Liu, Xiaoning Bai, and Yufeng Wang
    2023, 19(10): 687-699.  doi:10.23940/ijpe.23.10.p6.687699
    Abstract    PDF (1623KB)   
    For the problem of reliability analysis of hybrid structures containing both interval variables and probabilistic variables, a hybrid structural reliability calculation method is proposed, based on a BP neural network optimized with the chaotic sparrow search algorithm (CSSA) and Bayesian regularization (BR). First, a reliability analysis model is constructed from the mixed uncertainty variables: evidence theory is introduced to characterize the uncertainty of the interval variables, and the interval variables are transformed into probabilistic variables using a uniform probability processing method, which decouples the two-layer nested reliability problem into a single-layer one. Next, the weights and thresholds of the BP neural network are optimized using the CSSA and BR techniques, yielding the CSSA-BR-BP neural network surrogate model. Finally, the HL-RF method from the Advanced First Order Second Moment (AFOSM) family is employed to efficiently compute the structural reliability index. The results demonstrate that the proposed method exhibits excellent fitting accuracy and enhances the efficiency of reliability analysis for hybrid structures, confirming the feasibility of the approach.
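    The HL-RF step the abstract invokes iteratively projects the current point onto the linearized limit state in standard normal space; the reliability index is the distance of the converged design point from the origin. This is a textbook sketch of the classic iteration (it assumes the variables are already transformed to standard normal space), not the paper's CSSA-BR-BP surrogate pipeline.

```python
import math

def hl_rf(g, grad_g, u0, tol=1e-8, max_iter=100):
    """HL-RF iteration: returns the reliability index beta = ||u*||.

    g:       limit-state function of u (failure when g(u) <= 0).
    grad_g:  gradient of g at u.
    u0:      starting point in standard normal space.
    """
    u = list(u0)
    for _ in range(max_iter):
        gu = g(u)
        grad = grad_g(u)
        norm2 = sum(gi * gi for gi in grad)
        # Project onto the linearization g(u) + grad . (u_new - u) = 0
        factor = (sum(gi * ui for gi, ui in zip(grad, u)) - gu) / norm2
        u_new = [factor * gi for gi in grad]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    return math.sqrt(sum(ui * ui for ui in u))
```

    For a linear limit state such as g(u) = 3 + u1 + u2 the iteration converges in one step to beta = 3/sqrt(2); nonlinear limit states require several iterations, which is where a cheap surrogate like the CSSA-BR-BP network pays off.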
  • Reliability Analysis and Optimization of Forage Crushers Based on Bayesian Network
    Jinxin Wang, Zhiping Zhai, Yuezheng Lan, Xiaoyi Zhai, and Lixiang Zhao
    2023, 19(10): 700-709.  doi:10.23940/ijpe.23.10.p7.700709
    Abstract    PDF (581KB)   
    Forage crushers process forage into soft filaments to improve the feeding intake rate of the forage and livestock digestibility. The high failure rate and low reliability of the whole machine during forage-crushing operations are a concern. To improve reliability, a Bayesian network model was applied to evaluate the reliability of forage crushers and clarify the root causes of their failures, and a Multi-Island genetic algorithm was used for multi-objective optimization targeting the main causes of crusher failure. The study revealed that the main causes of forage crusher failure were resonance caused by uneven wear of the hammers, insufficient fatigue strength of the throwing blades or hammers, and wear failure of the hammers. After optimization, the reliability of the forage crusher increased from 0.739 to 0.912, which satisfied the reliability requirements of forage crushers. This study can serve as a reference for the fault maintenance and reliability-optimized design of forage crushers.
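    The Bayesian network idea can be illustrated with a toy fault-tree-style model: the crusher fails if any of its leading causes occurs, and the top-event probability follows from enumerating the cause states. The three cause names mirror the abstract, but their probabilities below are hypothetical, and the paper's actual network has richer conditional dependencies than this simple OR gate.

```python
from itertools import product

# Hypothetical prior failure probabilities for the three leading causes
priors = {
    "hammer_resonance": 0.12,
    "fatigue_fracture": 0.09,
    "hammer_wear": 0.15,
}

def failure_probability(priors):
    """Enumerate all cause states; the machine fails if any cause occurs (OR gate)."""
    causes = list(priors)
    p_fail = 0.0
    for states in product([False, True], repeat=len(causes)):
        p = 1.0
        for cause, occurred in zip(causes, states):
            p *= priors[cause] if occurred else 1.0 - priors[cause]
        if any(states):  # deterministic OR gate at the top node
            p_fail += p
    return p_fail
```

    With independent causes this reduces to 1 minus the product of the survival probabilities; lowering the dominant cause probabilities, which is what the multi-objective optimization targets, raises whole-machine reliability directly.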
ISSN 0973-1318