Volume 15, No 10

■ Table of Contents, Oct 2019

  • Original Article
    A 3D Risk Assessment Model based on GIS Layers Technique
    A.F. Awwad, M. H. Gobran, R. M. Kamal, and Mohammed A. Boraey
    2019, 15(10): 2563-2569.  doi:10.23940/ijpe.19.10.p1.25632569

    This paper proposes a new three-dimensional risk assessment model that extends the traditional two-dimensional risk matrix by adding a new degree of freedom representing the number of overlapping or interacting risk layers. The proposed model uses a geographical information system to manage health and safety across stages such as prevention through design, safe working, and response to undesired situations by providing the required information about spatially based risks. The new model helps the assessor make realistic hypotheses, avoid uncertainty, and improve the accuracy of determining likelihood and consequences. In addition to the new risk assessment model, a new electronic geographic and safety information communication system is recommended in order to make the new risk assessment model more effective and improve the management of occupational health and safety. The electronic system integrates geographically based data and safety-related information to produce risk maps of different places inside and around the workplace. Such a system will facilitate access to data anywhere at all times, enhancing dynamic risk assessment. Finally, the new risk assessment model and the electronic information system are expected to contribute to improving the management of health and safety, assessing risks, and preventing losses.

    Secure Electronic Voting Machine using Multi-Modal Biometric Authentication System, Data Encryption, and Firewall
    Jasdev Bhatti, Satvik Chachra, Ansh Walia, and Abhishek Vishal
    2019, 15(10): 2570-2577.  doi:10.23940/ijpe.19.10.p2.25702577

    Electronic voting machines have replaced the paper ballot systems used in early Indian elections. However, with the advancement of technology, a series of security issues has been raised regarding the present voting system, such as EVM tampering to register fraudulent votes. The proposed system attempts to solve the problem of bogus voting by introducing a multi-modal biometric authentication system. It makes the voting system more secure by using data encryption and firewalls to protect the voter database. It increases accessibility by allowing voters to cast their vote in the elections of their respective constituency from any polling booth across the country. It also increases transparency in the election process by notifying voters upon successful casting of their vote. This paper proposes a Biometric Voting Machine with a robust system architecture that is able to withstand malicious attacks and fraudulent behaviours.

    Proposed Intelligent Software System for Early Fault Detection
    Manu Banga, Abhay Bansal, and Archana Singh
    2019, 15(10): 2578-2588.  doi:10.23940/ijpe.19.10.p3.25782588
    The major challenge in designing an intelligent information software system for fault detection is to detect faults at an early stage, before they become failures. This can be achieved by applying feature selection and effective classification to failure datasets. Support vector machines (SVM) are used for efficient and accurate feature selection by finding unknown model parameters through local and global kernel parameter optimization. Previous research made many attempts at software fault prediction using classical classifiers such as decision trees, naïve Bayes, and k-NN, but class imbalance problems in software fault datasets were not addressed. In this paper, we propose an intelligent hybrid algorithm based on hybrid-kernel-function SVM feature selection and entropy-based bagging for efficient classification that reduces the class imbalance problem. The proposed model is compared with traditional approaches. The improved hybrid algorithm, based on entropy-based bagging and a mixed-kernel SVM, can effectively improve the classification accuracy on faulty datasets from the NASA Metrics Data Program (MDP). An empirical study shows that the proposed approach enhances classification accuracy compared with existing methods.

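
    As a rough sketch of the bagging-over-SVM idea (illustrative only: a synthetic imbalanced dataset stands in for the NASA MDP data, and plain RBF-kernel SVMs stand in for the paper's mixed-kernel, entropy-based variant):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical stand-in for the NASA MDP data: an imbalanced binary dataset.
X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Bagging over RBF-kernel SVMs approximates the "mixed-kernel SVM + bagging" idea.
model = BaggingClassifier(SVC(kernel="rbf", C=1.0, gamma="scale"),
                          n_estimators=10, random_state=0)
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```
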
    Improving the Performance of Multi-Mode SM4 Block Cipher
    Guangyong Hu and Rui Chen
    2019, 15(10): 2589-2596.  doi:10.23940/ijpe.19.10.p4.25892596

    In this paper, a low-cost multi-mode hardware architecture is proposed for data confidentiality of resource-constrained IoT (Internet of things) devices. The proposed architecture adopts resource sharing technologies to reduce the number of s-boxes, eliminate memory requirement of expanded keys, and realize flexible reconfigurability of various operation modes. The FPGA implementation results show that only 1,326 LUTs and 1,000 registers are needed at 50MHz. Compared with related works, resource costs are reduced significantly.

    Optimization of Mine Down-Hole Equipment Maintenance Strategy based on Fault Data
    Zhenzhen Jin, Qinghe Yuan, Yingqian Sun, Shun Jia, and Zhaojun Li
    2019, 15(10): 2597-2607.  doi:10.23940/ijpe.19.10.p5.25972607

    Aiming at the problems of poor operating conditions, low operating efficiency, and improper maintenance of mine down-hole equipment, a new periodic preventive maintenance strategy model is proposed. The model comprehensively considers the effect of the down-hole environment on the equipment failure rate and the effects of preventive maintenance and minor repairs on cost. Based on historical fault data, the parameters of the equipment failure rate function are estimated by the least squares method and the correlation coefficient method. Considering the down-hole environment, fuzzy theory is used to correct the failure rate function, and a periodic preventive maintenance strategy model aimed at optimizing the equipment maintenance cost is established. Finally, an example is given. The research shows that the maintenance period is shortened to 23.64 h after optimization, and the total maintenance cost is reduced from 574,621 yuan to 378,942.44 yuan, saving 195,678.56 yuan. The maintenance strategy is in line with the actual situation of down-hole equipment in mining enterprises and can provide effective maintenance solutions for such equipment.
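
    The estimation and optimization steps can be sketched as follows (illustrative only; the Weibull data, cost figures, and power-law minimal-repair model are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated historical failure data (hours); true Weibull shape 2.0, scale 30.
times = np.sort(30.0 * rng.weibull(2.0, 200))

# Least-squares (median-rank regression) estimate of the Weibull parameters:
# ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)
F = (np.arange(1, len(times) + 1) - 0.5) / len(times)
beta_hat, intercept = np.polyfit(np.log(times), np.log(-np.log(1.0 - F)), 1)
eta_hat = np.exp(-intercept / beta_hat)

# Optimal preventive-maintenance period: minimize the cost rate
# C(T) = (c_pm + c_rep * H(T)) / T, with expected minimal repairs H(T) = (T/eta)^beta.
c_pm, c_rep = 5000.0, 800.0            # assumed costs (yuan), for illustration
T = np.linspace(1.0, 200.0, 2000)
cost_rate = (c_pm + c_rep * (T / eta_hat) ** beta_hat) / T
T_opt = float(T[np.argmin(cost_rate)])
```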

    Performance Analysis on Features of Headlines and Media Organizations about Terrorist Attacks: based on a Combinatory Algorithm
    Junchao Feng, Ping Liu, Jianjun Miao, Ruilun Liu, and Dongbo Wang
    2019, 15(10): 2608-2617.  doi:10.23940/ijpe.19.10.p6.26082617

    This paper studies the relationship between the geographical distribution of terrorist attacks and the features of headlines and media organizations reporting terrorist attacks. Data are derived from the Global Terrorism Database (GTD) and compiled into a corpus of terrorist attack news along One Belt One Road. This paper adopts a combinatory algorithm, combining the edit distance algorithm and the longest common subsequence algorithm, to calculate the similarity of headlines from media organizations across the world. It further explores the relationship between the geographical distribution of terrorist attacks and that of global media organizations with co-occurrence analysis and social network analysis. The results show that mainstream media organizations often imitate or copy news reports of regional media organizations based where terrorist attacks frequently happen. Regarding geographical distribution, the results show a positive correlation between the distribution frequencies of terrorist attacks and those of media organizations in areas prone to terrorist attacks. The proposed combinatory algorithm for analyzing features of media coverage of terrorist attacks along One Belt One Road can provide a significant performance increase in terrorist event studies. The findings could help relevant institutions provide early warning of and guard against the hazards of terrorist attacks.
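
    The combinatory similarity can be sketched as a blend of the two classic dynamic-programming measures (the equal weighting below is an assumption, not the paper's tuned combination):

```python
def edit_distance(a, b):
    # Levenshtein distance via a single-row dynamic-programming table.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

def lcs_length(a, b):
    # Longest common subsequence length via dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def headline_similarity(a, b):
    # Equal-weight blend of normalized edit-distance and LCS similarities.
    m = max(len(a), len(b)) or 1
    sim_edit = 1.0 - edit_distance(a, b) / m
    sim_lcs = lcs_length(a, b) / m
    return 0.5 * sim_edit + 0.5 * sim_lcs
```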

    Multi-Classification Method for Determining Coastal Water Quality based on SVM with Grid Search and KNN
    Guoqiang Xie, Yi Zhao, Shiyi Xie, Miaofen Huang, and Ying Zhang
    2019, 15(10): 2618-2627.  doi:10.23940/ijpe.19.10.p7.26182627

    To address the problem of multi-classification of coastal water quality, this work established a multi-classification model of coastal water quality based on an improved support vector machine. Inorganic nitrogen, active phosphate, chemical oxygen demand, pH, and dissolved oxygen were the input parameters of the model. The parameters of the support vector machine (SVM) model were optimized by cross-validation and grid search, yielding the optimal parameters of the classifier. Subsequently, the KNN method was combined with the optimized model to classify the water quality. The experimental results showed that, compared with the SVM before optimization, the accuracy of the optimized model improved by up to 10% while requiring a smaller sample size.
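
    The grid-search step can be sketched with a generic dataset (the coastal water-quality data and the paper's exact parameter grid are not reproduced here):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in multi-class data; the paper uses five water-quality indicators instead.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cross-validated grid search over the SVM hyper-parameters C and gamma.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X_tr, y_tr)
svm_acc = grid.score(X_te, y_te)

# A KNN classifier on the same split, as a companion classifier.
knn_acc = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
```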

    Software Trustworthiness Evaluation using Structural Equation Modeling
    Rumei Deng, Yixiang Chen, Hengyang Wu, and Hongwei Tao
    2019, 15(10): 2628-2635.  doi:10.23940/ijpe.19.10.p8.26282635

    Software trustworthiness evaluation results can provide guidance for software development, so it is of great significance to study software trustworthiness evaluation. The weights of trustworthy attributes and the relationships among them have a large effect on the results of software trustworthiness evaluation. However, few studies have considered weights and the quantitative relationships among trustworthy attributes at the same time. Structural equation modeling (SEM) provides a more scientific and reasonable method to obtain weights, where the objectification of subjective weights is guaranteed. In this paper, we propose an SEM-based method to evaluate software trustworthiness. Firstly, we establish a trustworthiness evaluation indicator system for software. Based on survey data, we construct the SEM for software trustworthiness evaluation to obtain the weights of trustworthy attributes and the relationships among them. Lastly, we apply the trustworthiness measurement model to calculate the trustworthiness value of the surveyed software. These trustworthiness values demonstrate the reasonableness of our method.

    An Adaptive Cooperative Dual Particle Swarm Optimization Algorithm with Chaotic Mutation and Quantum Behavior
    Tianfei Chen, Lijun Sun, Xiaodong Song, and Haixu Niu
    2019, 15(10): 2636-2644.  doi:10.23940/ijpe.19.10.p9.26362644

    An adaptive cooperative dual particle swarm optimization algorithm with chaotic mutation and quantum behavior is proposed to resolve the contradiction between global search and local refinement search in basic particle swarm optimization algorithms. A strategy of adaptive cooperative evolution for two subgroups is used for parallel search: the subgroup with the chaotic mutation operator modifies the historical optimal position of particles and the subgroup optimal position using the principle of chaotic random ergodicity, and the chaotic mutation radius increases with iterative evolution to enhance the global search ability. Additionally, in order to improve the local refinement search ability, the subgroup with quantum behavior, which casts off the search orbit, updates the average optimal position of the subgroup and the subgroup optimal position during evolution. Finally, numerical simulation results demonstrate that the proposed algorithm not only has fast convergence speed and high convergence accuracy, but also has significant advantages as the dimension expands.
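
    A single-swarm sketch of the chaotic-mutation ingredient is shown below (the dual-subgroup cooperation and the quantum-behaved update are not reproduced; the logistic map, coefficients, and radius schedule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Benchmark objective; the global minimum is 0 at the origin.
    return float(np.sum(x * x))

dim, n_particles, iters = 5, 30, 300
pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

chaos = 0.7  # logistic-map state driving the chaotic mutation
for t in range(iters):
    w = 0.9 - 0.5 * t / iters                  # linearly decreasing inertia
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = np.clip(w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos),
                  -2.0, 2.0)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    if pbest_val.min() < sphere(gbest):
        gbest = pbest[pbest_val.argmin()].copy()
    # Chaotic mutation of the global best; the radius grows with the iterations.
    chaos = 4.0 * chaos * (1.0 - chaos)
    cand = gbest + (0.1 * (t + 1) / iters) * (2.0 * chaos - 1.0)
    if sphere(cand) < sphere(gbest):
        gbest = cand

best_value = sphere(gbest)
```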

    An Improved Focused Web Crawler based on Hybrid Similarity
    Songtao Shang, Huaiguang Wu, and Jiangtao Ma
    2019, 15(10): 2645-2656.  doi:10.23940/ijpe.19.10.p10.26452656

    A web crawler is an efficient tool for downloading data automatically from the Internet. A focused web crawler is a special kind of web crawler responsible for retrieving certain information from webpages and making it available to users. The most important problem for a focused web crawler is to confirm the similarity between the target webpages and the topics. Therefore, this paper proposes an improved focused web crawler algorithm whose similarity calculation draws on three aspects of the webpages: anchor text, content, and structure. This improved algorithm is called hybrid similarity. If the anchor text similarity exceeds the threshold, the target webpages are downloaded directly; otherwise, the target webpages' similarity is analyzed using the TF-Gini feature weighting algorithm and the improved cosine similarity algorithm. The experimental results show that the hybrid similarity algorithm is more effective than the traditional algorithm, with precision increasing by nearly 10%.
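
    The content-similarity ingredient can be illustrated with plain term-frequency weighting and cosine similarity (the paper's TF-Gini weighting is replaced here by raw TF for brevity):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    # Term-frequency vectors over the vocabulary of each document.
    tf_a, tf_b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(tf_a[w] * tf_b[w] for w in tf_a)
    norm_a = math.sqrt(sum(v * v for v in tf_a.values()))
    norm_b = math.sqrt(sum(v * v for v in tf_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```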

    Local and Global SR for Bearing Sensor-based Vibration Signal Classification
    Shaohui Zhang, Man Wang, Canyi Du, and Edgar Estupinan
    2019, 15(10): 2657-2666.  doi:10.23940/ijpe.19.10.p11.26572666

    Spectral regression (SR) is a feature extraction method that realizes dimension reduction by the least squares method and can avoid eigen-decomposition of dense matrices. However, it only considers the affinity graph and misses global information. In this paper, a novel feature extraction algorithm, called local and global spectral regression (LGSR), is proposed and applied to extract fault features from frequency-domain and time-domain features of vibration signals of bearing sensors. LGSR, a development of SR, is able to discover both local and global information of the data manifold. Experiments on bearing defect classification validate that, compared with similar approaches (such as NPE, PCA, and SR), LGSR shows a better ability to extract identity information for machine defect classification.

    Automatic Software Testing Target Path Selection using K-Means Clustering Algorithm
    Yan Zhang, Li Qiao, Xingya Wang, Jingying Cai, and Xuefei Liu
    2019, 15(10): 2667-2674.  doi:10.23940/ijpe.19.10.p12.26672674

    Path testing is an effective method of software testing, but it is not realistic to cover all paths when testing complex software. Selecting appropriate paths as target paths is therefore a key problem. A method of selecting target paths based on the K-means algorithm is presented in this study. First, we divide paths into groups using the K-means algorithm so that paths with high similarity fall into the same group. Then, we choose the cluster centers as targets, ensuring that the selected target paths have greater differentiation, which guarantees the adequacy of later testing. The experimental results demonstrate the effectiveness of the proposed method.
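
    The selection step can be sketched as follows (a minimal illustration assuming each path is already encoded as a numeric feature vector; the encoding itself is not shown):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Hypothetical path feature vectors; three groups of similar paths.
paths, _ = make_blobs(n_samples=60, centers=3, n_features=4, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(paths)

# For each cluster, pick the real path closest to the cluster center as a target.
target_indices = [
    int(np.argmin(np.linalg.norm(paths - center, axis=1)))
    for center in kmeans.cluster_centers_
]
```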

    FDFuzz: Applying Feature Detection to Fuzz Deep Learning Systems
    Jie Wang, Kefan Cao, Chunrong Fang, and Jinxin Chen
    2019, 15(10): 2675-2682.  doi:10.23940/ijpe.19.10.p13.26752682

    In recent years, many resources have been allocated to research on deep learning networks for better classification and recognition. These models have higher accuracy and wider application contexts, but their susceptibility to adversarial examples has raised concern. It is widely acknowledged that the reliability of safety-critical systems must be confirmed, yet not all systems have sufficient robustness, which makes it necessary to test these models before they go into service. In this work, we introduce FDFuzz, an automated fuzzing technique that exposes incorrect behaviors of neural networks. Under the guidance of the neuron coverage metric, the fuzzing process mutates correctly classified inputs to find examples that cause the network to make mistakes. FDFuzz employs a feature detection technique to analyze input images and improve the efficiency of mutation using keypoint features. Compared with TensorFuzz, the state-of-the-art open source library for neural network testing, FDFuzz demonstrates higher efficiency in generating adversarial examples and makes better use of elements in the corpus. Although our mutation function consumes more time to generate new elements, it can generate 250% more adversarial examples and save testing time.

    Code Similarity Detection using AST and Textual Information
    Wu Wen, Xiaobo Xue, Ya Li, Peng Gu, and Jianfeng Xu
    2019, 15(10): 2683-2691.  doi:10.23940/ijpe.19.10.p14.26832691

    In the teaching of computer language courses, a large amount of programming experimental content needs to be completed. Students sometimes copy code from each other, which seriously reduces the teaching quality of computer language courses and makes it difficult to improve students' programming abilities. To solve this problem, this paper proposes a novel code similarity detection algorithm based on code text and the AST (abstract syntax tree). Comments, blank characters, and other noise are removed from the code text in a "cleaning" step to obtain the normalized code text. Then, word segmentation, word frequency statistics, weight calculation, and other operations are carried out, and a code fingerprint is obtained using the Simhash algorithm. According to the specification of the computer language's grammar, lexical and syntax analyses are conducted to extract the AST, and redundant information is eliminated. The edit distance between ASTs is then calculated with the Zhang-Shasha algorithm to compare the ASTs. Finally, the overall similarity is computed by combining the text similarity and the AST similarity. To verify the effectiveness of this method, taking Python code as an example, code from an open source programming platform and LeetCode is used to build a test data set covering common code plagiarism methods. Experimental results show that this method is capable of detecting several common means of plagiarism, while unrelated and non-plagiarized codes yield low similarity. Therefore, we believe that this algorithm can effectively be used for code similarity detection of experimental code in computer language courses.
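
    The fingerprinting step can be sketched with a basic Simhash (illustrative; the paper's token weights are omitted, so every token here carries weight 1):

```python
import hashlib

def simhash(tokens, bits=64):
    # Accumulate per-bit votes from each token's hash, then threshold at zero.
    votes = [0] * bits
    for token in tokens:
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16) & ((1 << bits) - 1)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming_distance(a, b):
    # Small distance between fingerprints suggests similar code texts.
    return bin(a ^ b).count("1")
```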

    Using Non-Subjective Approximation Algorithm of D-S Evidence Theory for Improving Data Fusion
    Ning Zhang, Peng Chen, Kai He, Zhao Li, and Xiaosheng Yu
    2019, 15(10): 2692-2700.  doi:10.23940/ijpe.19.10.p15.26922700

    This paper addresses the "focal element explosion" that arises when many focal elements are fused according to D-S evidence theory. The effectiveness of subjective approximation algorithms is low since they heavily involve human participation, whereas non-subjective approximation algorithms yield more accurate results. In this paper, a non-subjective approximation algorithm based on evidence levels is proposed to address this problem. First, the evidence level is mainly determined by the cumulative mass value of the main focal elements, and the number of focal elements is reduced by approximation according to the initial standard determined by the evidence levels. Second, to further increase the accuracy of the results, the evidence levels are used to determine the order of fusion and the discounts of evidence. As a result, even if erroneous or uncertain evidence is present among the fused evidence, it will not significantly affect the results. The experimental results show that the algorithm outperforms others in terms of adaptability and accuracy.
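
    For reference, the underlying fusion step, Dempster's rule of combination for two bodies of evidence, can be sketched as follows (the paper's approximation and leveling strategies are not reproduced):

```python
def dempster_combine(m1, m2):
    # Combine two mass functions whose focal elements are frozensets.
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # normalization; assumes the evidence is not totally conflicting
    return {s: v / k for s, v in combined.items()}

# Illustrative mass functions over the frame {A, B}.
m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.4}
m2 = {frozenset("A"): 0.5, frozenset("B"): 0.3, frozenset("AB"): 0.2}
fused = dempster_combine(m1, m2)
```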

    Active Learning using Uncertainty Sampling and Query-by-Committee for Software Defect Prediction
    Yubin Qu, Xiang Chen, Ruijie Chen, Xiaolin Ju, and Jiangfeng Guo
    2019, 15(10): 2701-2708.  doi:10.23940/ijpe.19.10.p16.27012708

    In the construction of software defect prediction datasets, labeling costs are high. Active learning with uncertainty sampling can reduce labeling costs: the most uncertain samples are labeled, but the most certain samples are always discarded. According to cognitive theory, however, easy samples can promote the performance of the model. Therefore, a hybrid active learning query strategy is proposed: for the samples with the lowest information entropy, query-by-committee analyzes them again using vote entropy. Empirical studies show that the proposed HIVE approach outperforms several state-of-the-art active learning approaches.
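
    The two query criteria can be sketched directly (a minimal illustration of uncertainty sampling's prediction entropy and query-by-committee's vote entropy, not the full HIVE procedure):

```python
import math
from collections import Counter

def prediction_entropy(probs):
    # Uncertainty sampling: higher entropy means a more uncertain prediction.
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def vote_entropy(votes):
    # Query-by-committee: entropy of the committee members' label votes.
    n = len(votes)
    return prediction_entropy([c / n for c in Counter(votes).values()])
```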

    Comparing Minimal Failure-Causing Schema and Probabilistic Failure-Causing Schema on Boolean Specifications
    Ziyuan Wang, Xueyao Li, Yang Li, and Yuqing Dai
    2019, 15(10): 2709-2717.  doi:10.23940/ijpe.19.10.p17.27092717

    Both the minimal failure-causing schema (MFS) model and the probabilistic failure-causing schema (PFS) model were proposed to describe characteristics of failing test cases in input-domain testing. To improve the efficiency of software debugging, the input variables related to failure-causing schemas should be close to the real fault-relevant input variables. To examine which model (MFS or PFS) helps software engineers localize fault-relevant input variables more precisely, we conduct an experiment on general-form Boolean specifications extracted from the well-known TCAS system. For each mutant of a general-form Boolean expression, the set of input variables localized by the MFSs, the set localized by the PFSs, and the set of actual input variables involved in the fault are compared. Experimental results suggest that the MFS model usually has an advantage in recall, while the PFS model usually has an advantage in precision. Overall, the latter has a slight advantage in f-measure.

    A Context Model for Code and API Recommendation Systems based on Programming Onsite Data
    Zhiyi Zhang, Chuanqi Tao, Wenhua Yang, Yuqian Zhou, and Zhiqiu Huang
    2019, 15(10): 2718-2725.  doi:10.23940/ijpe.19.10.p18.27182725

    Code and application programming interface (API) recommendation systems are important guarantees of efficient and accurate code reuse, improving the efficiency of software development. Context data plays a key role in code and API recommendation. A large amount of programming onsite data has been generated, but existing code and API recommendation systems rarely consider context based on programming onsite data, which leads to low efficiency and poor accuracy of code and API recommendation. In this paper, we propose a context model for code and API recommendation systems. Our context model is based on programming onsite data collected during programming and covers four aspects: developer, project, time, and environment. Developer data is labeled data abstracted from developers' programming habits and abilities, project data is information about the project, time data is information about the temporal aspects of developers interacting with the project, and environment data comprises all environment elements used by developers during programming. We collect programming onsite data in three ways: explicit collection, implicit collection, and reasoning. Lastly, we build the context model using a coarse-grained abstract model for recommendation. Our context model retains the key information in the code while eliminating redundant information that may affect the accuracy of the recommendation task, and it can theoretically improve the efficiency and accuracy of recommendation.

    Usability Evaluation and Improvement of Mission Planner UAV Ground Control System's Interface
    Huibin Jin, Yawei Liu, Xiaomeng Mu, Mingxia Ma, and Jing Zhang
    2019, 15(10): 2726-2734.  doi:10.23940/ijpe.19.10.p19.27262734

    This paper aims to improve the usability of the Mission Planner UAV ground control system's interface through interface improvement. A questionnaire survey and user interviews were used to evaluate the usability of the UAV ground control system's interface, and the C programming language was used to fix the usability problems. Then, an eye-tracking experiment was designed to compare the usability of the improved system interface with the original interface and verify that the improvement was effective. The results showed that the usability problems mainly existed in the flexibility, minimal user action, minimal memory burden, and user guidance dimensions, and the analysis of multiple indexes such as task completion time, mouse clicks, and fixation points indicated that the usability of the improved system interface was better than that of the original system.

    Optimal Control of Spraying and Drying Temperature in Production-Line based on Active Disturbance Rejection Control Technique
    Shengyong Lei
    2019, 15(10): 2735-2743.  doi:10.23940/ijpe.19.10.p20.27352743

    The large time delay, high inertia, and parameter uncertainty of the spraying and drying temperature control system in an automatic production line can easily lead to instability and deteriorated anti-interference ability. Therefore, a control method based on active disturbance rejection control (ADRC) is presented to improve the temperature control performance of the spraying and drying process in the production line. In this paper, a nonlinear ADRC temperature controller is designed, consisting of a tracking differentiator (TD), an extended state observer (ESO), and nonlinear state error feedback (NLSEF). The ADRC controller does not rely on an accurate model of the spraying and drying process, and it can handle the various internal and external disturbances that affect the spraying and drying temperature of the controlled system, improving the temperature control performance. Simulation results show that, compared with the traditional PID controller, the designed ADRC controller achieves better dynamic performance, robustness, and disturbance rejection.

    Intelligent Regulation Algorithm of Automobile Rear View Mirror based on Eye Location
    Lihua Wu, Xu Bai, Dianshuang Zheng, and Jianxin Gai
    2019, 15(10): 2744-2752.  doi:10.23940/ijpe.19.10.p21.27442752

    To address the defects of existing automobile rear view mirror regulation methods, an intelligent regulation algorithm for automobile rear view mirrors is studied in this paper. The algorithm uses in-vehicle driver images gathered by a single camera to determine the actual spatial positions of the centers of the driver's eyes. Then, through the regulation relationship between eye location and the rear view mirror, intelligent regulation of the optimal view of the automobile rear view mirrors is achieved. The experimental results indicate that the algorithm has the advantages of fast regulation, convenience, and accuracy.

    Pilot Decontamination in Massive MIMO Systems based on Pilot Design and Allocation
    Jingwei Dong, Zhiyu Han, and Chuang Han
    2019, 15(10): 2753-2761.  doi:10.23940/ijpe.19.10.p22.27532761

    Pilot contamination has become an important factor affecting the development of massive MIMO. To mitigate pilot contamination, a new scheme jointly using polar coordinate pilot assignment and pseudo-random code pilot design is proposed in this paper. The scheme allocates the pilot according to the size of the user's polar angle, which is marked in polar coordinates, and uses pseudo-random codes with different time delays to add disturbance between users that share the same pilot sequence and users of neighboring cells, so as to improve the accuracy of channel estimation. Theoretical derivation and simulation analysis of the system's minimum mean square error (MMSE) channel estimation show that the scheme can significantly reduce the mean square error (MSE) of channel estimation and improve the signal-to-noise ratio and average transmission rate of uplink transmission. Thus, the adverse effect of pilot contamination on channel estimation is reduced, and the channel throughput is improved.

    Robustness Analysis of Urban Rail Transit Network
    Hui Xu and Yang Li
    2019, 15(10): 2762-2771.  doi:10.23940/ijpe.19.10.p23.27622771

    The urban rail transit system is threatened by various kinds of risk events, so it is necessary to explore system robustness and guarantee safe operation. Considering the overall topological structures of urban rail transit network (URTN), the complex network theory is adopted. After calculating the topological parameters and robustness analysis parameters, the attack strategies for URTN are established. The proposed methodology is applied to the Singapore Mass Rapid Transit System. The results demonstrate that malicious attacks have more serious impacts on system robustness than random attacks. In addition, the nodes that link to different sections of the network, connect a cluster of nodes to the main network, or connect to more than one rail transit line are identified as important nodes of the network. The obtained results could supply valuable references for the safety management of URTN. The proposed methodology could also be used for other URTN robustness analyses.
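
    The attack simulation can be sketched with a generic scale-free graph standing in for the Singapore network, which is not reproduced here:

```python
import random
import networkx as nx

def giant_component_fraction(G, n_original):
    # Fraction of the original nodes still in the largest connected component.
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / n_original

G = nx.barabasi_albert_graph(200, 2, seed=0)   # stand-in for the metro topology
n = G.number_of_nodes()
k = 20                                         # remove 10% of the nodes

# Malicious attack: remove the highest-degree nodes first.
G_mal = G.copy()
hubs = sorted(G.degree, key=lambda d: d[1], reverse=True)[:k]
G_mal.remove_nodes_from(node for node, _ in hubs)

# Random attack: remove the same number of uniformly chosen nodes.
random.seed(0)
G_rand = G.copy()
G_rand.remove_nodes_from(random.sample(list(G.nodes), k))

frac_malicious = giant_component_fraction(G_mal, n)
frac_random = giant_component_fraction(G_rand, n)
```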

    Improved Grid Task Scheduling Model Algorithm
    Feng Liu
    2019, 15(10): 2772-2782.  doi:10.23940/ijpe.19.10.p24.27722782

    On the basis of analyzing the current status and key technologies of grid workflow scheduling, this paper conducts in-depth research on grid workflow scheduling under time QoS and trust QoS constraints. A grid workflow task scheduling algorithm (GWTS) based on critical tasks under trust constraints is designed. Firstly, the backward depth of tasks is calculated in GWTS, and critical tasks are determined according to their execution time on candidate resources. Secondly, the trust of grid resources is computed by synthesizing direct experience and recommendation experience. Finally, tasks are scheduled by decreasing backward depth, and the resources that best meet the integrated function of execution time and trust are allocated to critical tasks with priority. Experiments show that the workflow completion time is reduced, the success rate of task execution is increased by 6-15%, and the GWTS algorithm can effectively optimize grid scheduling resources and improve scheduling efficiency.

    Analysis of Asymmetric Polling Control System in Wireless Sensor Networks
    Zhijun Yang, Zheng Liu, Yangyang Sun, and Hongwei Ding
    2019, 15(10): 2783-2793.  doi:10.23940/ijpe.19.10.p25.27832793
    Abstract    PDF (745KB)   
    References | Related Articles

    To address the prioritization of network traffic and improve the applicability of multi-access communication systems, a two-level asymmetric polling control system is proposed, in which the central node adopts an exhaustive access policy and the normal nodes adopt an asymmetric 1-limited (K = 1) access policy. This model can distinguish network traffic priorities while guaranteeing fairness. Using a probability generating function and an embedded Markov chain, the system model was established. Through exact mathematical analysis of the model, analytical formulas for important system performance parameters, such as the mean queue length of nodes and the mean delay of information packets, were obtained. The model provides high-quality service for high-priority nodes while maintaining the quality of service at low-priority nodes. Finally, the polling access control strategy was implemented in TinyOS.
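
    The two service disciplines described above can be mimicked in a slot-based toy simulation (the arrival probabilities, node count, and unit service times are illustrative assumptions; the paper derives exact formulas analytically rather than by simulation):

```python
import random
random.seed(1)

# One polling cycle per slot: exhaustive service at the central node,
# 1-limited (K = 1) service at each normal node.
central = []
normals = [[] for _ in range(3)]
p_central, p_normal = 0.3, 0.1  # per-slot Bernoulli arrival probabilities
served = {"central": 0, "normal": 0}
for t in range(10000):
    # Arrivals: at most one packet per queue per slot
    if random.random() < p_central:
        central.append(t)
    for q in normals:
        if random.random() < p_normal:
            q.append(t)
    # Exhaustive policy: empty the whole central queue each cycle
    while central:
        central.pop(0)
        served["central"] += 1
    # 1-limited policy: serve at most one packet per normal node per cycle
    for q in normals:
        if q:
            q.pop(0)
            served["normal"] += 1
print(served)
```

    The central (high-priority) queue is always emptied each cycle, while each normal queue drains one packet at a time, which is the fairness/priority trade-off the model formalizes.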

    Image Recognition and Classification based on Elastic Model and BOF Algorithm
    Mingzhu Liu, Xue Bao, and Lu Pang
    2019, 15(10): 2794-2804.  doi:10.23940/ijpe.19.10.p26.27942804
    Abstract    PDF (899KB)   
    References | Related Articles

    To solve the problems of the mosaic effect and block distortion in image enlargement or reduction, which decrease the accuracy of image recognition and classification, a new fusion algorithm that integrates an elastic model with the bag-of-features (BOF) algorithm is proposed in this paper. Firstly, the distortion of static images is studied in depth. Then, the basic principle of the classical BOF algorithm is examined; the method and steps for generating feature descriptors with the BOF algorithm are analyzed, and feature elements are clustered using the spatial pyramid method. To address the low classification accuracy of scaled images with block distortion, an elastic model method is proposed: it introduces elasticity parameters into scaled image processing and combines them with the new features obtained by the BOF algorithm to further compensate for block distortion, thereby improving image recognition accuracy.
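
    The BOF step summarized above (cluster local descriptors into a visual vocabulary, then histogram the assignments) can be sketched as follows; the synthetic 2-D "descriptors" and plain k-means stand in for real image features and are assumptions, not the paper's pipeline:

```python
import math
import random
random.seed(0)

def kmeans(points, k, iters=20):
    # Plain k-means: the resulting centers act as the "visual vocabulary"
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        centers = [tuple(sum(coord) / len(c) for coord in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def bof_histogram(descriptors, vocab):
    # Assign each descriptor to its nearest codeword and normalize the counts
    hist = [0] * len(vocab)
    for d in descriptors:
        hist[min(range(len(vocab)), key=lambda c: math.dist(d, vocab[c]))] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Synthetic 2-D "descriptors" drawn from two well-separated blobs
blob1 = [(random.gauss(0, 0.2), random.gauss(0, 0.2)) for _ in range(50)]
blob2 = [(random.gauss(5, 0.2), random.gauss(5, 0.2)) for _ in range(50)]
vocab = kmeans(blob1 + blob2, k=2)
hist = bof_histogram(blob1, vocab)
print(hist)
```

    The normalized histogram is the image's BOF representation; the paper additionally pools it with a spatial pyramid and corrects it with elasticity parameters.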

    A Distributed Frequent Itemset Mining Algorithm for Uncertain Data
    Jiaman Ding, Haibin Li, Yang Yang, Lianyin Jia, and Jinguo You
    2019, 15(10): 2805-2816.  doi:10.23940/ijpe.19.10.p27.28052816
    Abstract    PDF (923KB)   
    References | Related Articles

    With the rapid expansion of big data in all domains, improving the performance of mining frequent patterns in massive uncertain datasets has become a major research topic in recent years. Most conventional frequent pattern mining approaches take expected support, probability, or weight as a single factor of item support, and algorithms that consider both probability and weight struggle to maintain execution efficiency on big data. Therefore, we propose a distributed frequent itemset mining algorithm for uncertain data: Dfimud. Firstly, Dfimud calculates the maximum probability weight value of 1-items and prunes the items whose value is less than a given threshold. Secondly, to reduce the number of dataset scans, a distributed Dfimud-tree structure inspired by the FP-tree is designed to mine frequent patterns. Finally, experiments on publicly available UCI datasets demonstrate that Dfimud outperforms related approaches across various metrics. The empirical study also shows that Dfimud has good scalability.
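
    The first pruning step can be sketched under one possible interpretation of the "maximum probability weight value" of a 1-item (expected probabilistic support scaled by item weight — an assumption, since the abstract does not define the measure; the data below is likewise illustrative):

```python
# Uncertain transactions: each item carries an existence probability.
# Item weights reflect importance. Both are illustrative, not the paper's data.
transactions = [
    [("a", 0.9), ("b", 0.4)],
    [("a", 0.8), ("c", 0.7)],
    [("b", 0.5), ("c", 0.6)],
]
weights = {"a": 1.0, "b": 0.6, "c": 0.8}

def max_prob_weight(transactions, weights):
    # Expected (probability-weighted) support of each 1-item, scaled by weight
    support = {}
    for t in transactions:
        for item, p in t:
            support[item] = support.get(item, 0.0) + p
    return {i: support[i] * weights[i] for i in support}

def prune(mpw, threshold):
    # Keep only 1-items whose value reaches the given threshold
    return {i for i, v in mpw.items() if v >= threshold}

mpw = max_prob_weight(transactions, weights)
print(mpw)
print(prune(mpw, threshold=1.0))
```

    Items that survive this pass would then seed the distributed Dfimud-tree construction, so pruning early shrinks every later scan.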

    Social Impact Assessment of Storm Surge Disaster Through Dynamic Neural Network Model
    Cheng Cheng, Qingtian Zeng, Hua Zhao, Wenyan Guo, and Hua Duan
    2019, 15(10): 2817-2825.  doi:10.23940/ijpe.19.10.p28.28172825
    Abstract    PDF (563KB)   
    References | Related Articles

    Storm surges are among the most serious marine disasters in the world. Storm surge disasters bring not only sudden loss of life and property but also a series of invisible social impacts. The social impact of storm surge disasters varies with time and is difficult to assess. In this paper, we first analyze the social impact index system of storm surge disasters using big data. Secondly, we extract the content of social impact factors from storm surge disasters. Finally, we use a back propagation neural network (BPNN) model to assess the level of social impact. The inputs of the model are the factors after linear weighted fusion, and the output is the social impact assessment level of the storm surge disaster. Experimental results show that our approach can accurately assess the level of social impact at different times, and the assessment model can assist disaster managers in formulating dynamic countermeasures.
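
    The BPNN assessment step can be sketched with a minimal from-scratch back propagation network; the synthetic fused-factor data, layer sizes, and learning rate below are illustrative assumptions, not the paper's configuration:

```python
import math
import random
random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNN:
    # One-hidden-layer back propagation network: fused impact factors in,
    # a normalized impact-level score out
    def __init__(self, n_in, n_hid):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hid)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)))

    def train(self, x, target, lr=0.5):
        y = self.forward(x)
        d_out = (y - target) * y * (1 - y)  # squared-error gradient at the output
        for j, h in enumerate(self.h):
            d_hid = d_out * self.w2[j] * h * (1 - h)
            self.w2[j] -= lr * d_out * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * d_hid * xi
        return (y - target) ** 2

# Synthetic "fused impact factor" vectors mapped to a normalized impact level
data = [([0.1, 0.2, 0.1], 0.1), ([0.9, 0.8, 0.7], 0.9),
        ([0.2, 0.1, 0.3], 0.2), ([0.8, 0.9, 0.8], 0.8)]
net = BPNN(n_in=3, n_hid=4)
loss_before = sum((net.forward(x) - t) ** 2 for x, t in data)
for _ in range(2000):
    loss_after = sum(net.train(x, t) for x, t in data)
print(loss_before, "->", loss_after)
```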

    Network Learning Platform Usability Evaluation Modeling
    Yu Sun, Yamei Yao, and Yaowen Xia
    2019, 15(10): 2826-2834.  doi:10.23940/ijpe.19.10.p29.28262834
    Abstract    PDF (452KB)   
    References | Related Articles

    To improve the effectiveness of network learning platforms, this paper evaluates their usability from the perspective of user experience and usability theory. Supported by usability theory, three dimensions for evaluating network learning platforms are given, and the relations between these three evaluation dimensions and usability are explained. Then, a three-level usability evaluation index system for network learning platforms is established and discussed in detail. Finally, a theoretical model of the network learning platform usability evaluation system is proposed.

ISSN 0973-1318