Table of Contents, Vol. 14, No. 5, May 2018
  • Original articles
    Reliability Assessment of Non-Repairable k-Out-Of-n System using Belief Universal Generating Function
    Seema Negi, Namita Jaiswal, and S. B. Singh
    2018, 14(5): 831-840.  doi:10.23940/ijpe.18.05.p1.831840
    Abstract    PDF (450KB)   
    References | Related Articles

    Research has previously addressed aleatory and epistemic uncertainty in various engineering systems. In this paper, we establish two methods, mass distribution and fuzzy reliability theory, to handle these uncertainties in non-repairable k-out-of-n: G (F) systems, a combination not treated before. In the presented methodology, the failure rate (λ) is modeled as a trapezoidal fuzzy number. From this fuzzy number, the α-cut of the fuzzy failure rate of every component and the corresponding fuzzy reliability function are derived. Masses are then distributed to the components with the help of these fuzzy reliability functions, and from these masses the reliability and MTTF of the considered systems are computed. Finally, a numerical example demonstrates the proposed approach.
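    The α-cut machinery the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' method: the exponential reliability form R(t) = exp(-λt) is assumed for a constant-failure-rate component, and the trapezoidal parameters and all values are invented.

    ```python
    import math

    def alpha_cut(a, b, c, d, alpha):
        """Alpha-cut interval of a trapezoidal fuzzy number (a, b, c, d)."""
        return (a + alpha * (b - a), d - alpha * (d - c))

    def fuzzy_reliability(lam_tfn, t, alpha):
        """Interval for R(t) = exp(-lambda * t) when lambda is a trapezoidal
        fuzzy number. exp(-lambda * t) decreases in lambda, so endpoints swap."""
        lo, hi = alpha_cut(*lam_tfn, alpha)
        return (math.exp(-hi * t), math.exp(-lo * t))

    # Illustrative values: lambda ~ (0.01, 0.02, 0.03, 0.04) per hour, t = 100 h
    r_lo, r_hi = fuzzy_reliability((0.01, 0.02, 0.03, 0.04), 100.0, 0.5)
    ```

    Sweeping alpha from 0 to 1 would trace out the full fuzzy reliability function.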

    Submitted on February 13, 2018; Revised on March 22, 2018; Accepted on April 25, 2018
    References: 27
    Temporal Multiscale Consumption Strategies of Intermittent Energy based on Parallel Computing
    Huifen Chen, Yiming Zhang, Feng Yao, Zhice Yang, Fang Liu, Yi Liu, Zhiheng Li, and Jinggang Wang
    2018, 14(5): 841-848.  doi:10.23940/ijpe.18.05.p2.841848
    Abstract    PDF (767KB)   
    References | Related Articles

    Fossil energy is non-renewable, and generating power from it not only consumes energy but also produces emissions that pollute the environment. China is rich in wind, solar, and other renewable resources, and generating power from new energy sources supports energy security and sustainable development. However, because wind and solar power are regional, intermittent, random, and hard to predict, large-scale integration of wind and photovoltaic power into the national grid seriously challenges the overall dispatch of the power system. This paper first analyzes and improves an evaluation model of intermittent energy generation capacity by processing real data from a provincial power network. The model adds photovoltaic unit output constraints, pumped-storage unit constraints for the pumping and generating stages, and the storage-capacity constraint of the pumped-storage power station. Next, provincial power network data that satisfies the input conditions is fed into the proposed optimization model, and the results prove credible. In addition, the input data and output results are visualized as graphs, providing effective guidance for managing the provincial power network.

    Submitted on February 8, 2018; Revised on March 16, 2018; Accepted on April 23, 2018
    References: 23
    Decision Tree Incremental Learning Algorithm Oriented Intelligence Data
    Hongbin Wang, Ci Chu, Xiaodong Xie, Nianbin Wang, and Jing Sun
    2018, 14(5): 849-856.  doi:10.23940/ijpe.18.05.p3.849856
    Abstract    PDF (500KB)   
    References | Related Articles

    The decision tree is one of the most popular classification methods because it is easy to understand. However, decision trees built by existing methods are often too large and complicated, which limits their practicality in some applications. In this paper, an improved hybrid classifier algorithm, HCS, is proposed by combining NOLCDT with the IID5R algorithm. HCS consists of two phases: building an initial decision tree and incremental learning. The initial decision tree is constructed with NOLCDT, and incremental learning is then performed with IID5R. NOLCDT selects the candidate attribute with the largest information gain and divides each node into two branches, which avoids generating too many branches and keeps the tree from becoming too complex. NOLCDT also improves the selection of the next node to split: it computes a splitting measure for every candidate split and always chooses the candidate node with the largest information gain, so that each split yields the greatest gain. In addition, IID5R, an improvement on ID5R, is proposed to evaluate the quality of classification attributes and to estimate the minimum number of steps for which such a selection is guaranteed. HCS combines the advantages of decision trees and incremental learning: it is easy to understand and well suited to incremental learning. A comparative experiment between traditional decision tree algorithms and HCS on UCI data sets shows that HCS handles the incremental problem well: the resulting decision tree is simpler and therefore easier to understand, and the incremental phase consumes less time.
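    NOLCDT's choice of "the candidate attribute with the largest information gain" reduces to the standard entropy-based gain computation. A minimal sketch of that step only; the binary threshold form and the toy data are illustrative, not from the paper:

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a label list, in bits."""
        n = len(labels)
        if n == 0:
            return 0.0
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def info_gain(feature, labels, threshold):
        """Information gain of the binary split: feature <= threshold vs > threshold."""
        left = [y for x, y in zip(feature, labels) if x <= threshold]
        right = [y for x, y in zip(feature, labels) if x > threshold]
        n = len(labels)
        split_h = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
        return entropy(labels) - split_h

    # Toy data: a perfect split exists at threshold 3
    x = [1, 2, 3, 4, 5, 6]
    y = ['a', 'a', 'a', 'b', 'b', 'b']
    best = max((info_gain(x, y, t), t) for t in x[:-1])
    ```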

    Submitted on January 29, 2018; Revised on March 12, 2018; Accepted on April 23, 2018
    References: 11
    Optimization of Wear Properties in Aluminum Metal Matrix Composites using Hybrid Taguchi-GRA-PCA
    Narinder Kaushik and Sandeep Singhal
    2018, 14(5): 857-870.  doi:10.23940/ijpe.18.05.p4.857870
    Abstract    PDF (1005KB)   
    References | Related Articles

    The present work forms aluminum alloy AA6063/SiCp metal matrix composites by an enhanced liquid metallurgy stir casting route and optimizes their wear properties using Taguchi-based GRA integrated with a PCA approach. The AMCs (with 37 μm SiC particle size) are manufactured at three different weight percentages (3.5 wt%, 7 wt%, and 10.5 wt%) of SiC reinforcement particles. Experimental runs to examine wear performance are executed per an L9 Taguchi plan to acquire the wear data in a controlled way. Wear loss, in terms of height loss, is acquired using a pin-on-disc tribometer attached to an LVDT arrangement. The impact of three control factors, viz., load (N), sliding distance (m), and wt.% of SiC, on performance characteristics such as wear rate, frictional force, and specific wear rate under dry sliding conditions is inspected to obtain the optimum levels of the process parameters. ANOVA is likewise performed to assess the impact of the three control factors on wear rate, frictional force, and specific wear rate. Experimental analysis reveals that wear behavior improves under the optimum experimental settings. Optical microscopic examination of the worn samples is also conducted to describe the wear mechanism of the as-cast matrix composites.
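    The GRA step of such a hybrid approach reduces each response to a grey relational coefficient against an ideal sequence. A rough sketch under common GRA conventions (smaller-the-better normalization, distinguishing coefficient ζ = 0.5); the wear values are invented, not the paper's data:

    ```python
    def normalize_stb(col):
        """Smaller-the-better normalization of a response column to [0, 1]."""
        lo, hi = min(col), max(col)
        return [(hi - x) / (hi - lo) for x in col]

    def grey_relational_coeff(norm_col, zeta=0.5):
        """Grey relational coefficients against the ideal sequence of all ones."""
        dev = [abs(1.0 - x) for x in norm_col]
        dmin, dmax = min(dev), max(dev)
        return [(dmin + zeta * dmax) / (d + zeta * dmax) for d in dev]

    wear = [4.2, 3.1, 5.0, 2.8]   # illustrative wear-rate responses
    xi = grey_relational_coeff(normalize_stb(wear))
    ```

    Averaging the coefficients across responses (optionally with PCA-derived weights) gives the grey relational grade used to rank experimental runs.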

    Submitted on February 2, 2018; Revised on March 15, 2018; Accepted on April 19, 2018
    References: 30
    An Improved Convex Programming Model for the Inverse Problem in Intensity-Modulated Radiation Therapy
    Yihua Lan, Xingang Zhang, Jianyang Zhang, Yang Wang, and Chih-Cheng Hung
    2018, 14(5): 871-884.  doi:10.23940/ijpe.18.05.p5.871884
    Abstract    PDF (587KB)   
    References | Related Articles

    Intensity-modulated radiation therapy (IMRT) is one of the main approaches in cancer treatment because it can guarantee the killing of cancer cells while optimally protecting normal tissue from complications. Inverse planning, the core component of the entire IMRT system, rests mainly on accurate mathematical modeling and associated fast solution methods. Within inverse planning, fluence map optimization that accounts for multi-leaf collimator (MLC) modulation is the current research focus. The hitting constraint problem has been solved for unidirectional leaf-sweeping movement; our goal is to solve it for bidirectional leaf-sweeping movement. In this study, we propose a non-synchronized scheme to solve the hitting constraint problem for bidirectional leaf-sweeping in IMRT, with a new mathematical model formulated under the framework of convex programming. The advantage of the convex model is that it avoids the uncertainty and inaccuracy that arise when solving non-convex programs. Experimental results on two clinical test cases show that, for the same total number of monitor units, the proposed model produces better dose distributions than the total variance and quadratic models.

    Submitted on February 14, 2018; Revised on March 21, 2018; Accepted on April 17, 2018
    References: 17
    Edge Detection Algorithm based on Color Space Variables
    Chengxiang Shi and Jiayuan Luo
    2018, 14(5): 885-890.  doi:10.23940/ijpe.18.05.p6.885890
    Abstract    PDF (484KB)   
    References | Related Articles

    In view of the large number of environmental influence factors in complex and varied backgrounds, a color image feature extraction method based on color space variables is proposed. Using the method of maximum variance between classes, color space variable values classify the images, and filter operators denoise each image type. The foreground segmentation threshold of the preprocessed image is then recalculated, and the Canny operator, multiscale theory, and morphological operators are combined to extract edges. The results show that this method can effectively process color images with various backgrounds and provides a new idea and method for intelligent processing of color images.
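    The "method of maximum variance between classes" is commonly known as Otsu's method. As an illustration of that thresholding step only (the toy histogram is invented):

    ```python
    def otsu_threshold(hist):
        """Otsu's method: pick the gray level that maximizes between-class variance."""
        total = sum(hist)
        sum_all = sum(i * h for i, h in enumerate(hist))
        w0, sum0 = 0, 0.0
        best_t, best_var = 0, -1.0
        for t in range(len(hist)):
            w0 += hist[t]            # pixels in class 0 (levels <= t)
            if w0 == 0:
                continue
            w1 = total - w0          # pixels in class 1
            if w1 == 0:
                break
            sum0 += t * hist[t]
            m0 = sum0 / w0
            m1 = (sum_all - sum0) / w1
            var_between = w0 * w1 * (m0 - m1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    # Bimodal toy histogram over 8 gray levels
    hist = [10, 30, 10, 0, 0, 10, 30, 10]
    ```

    On a real image the histogram would come from the pixel counts of a grayscale or color-space-variable channel.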

    Submitted on January 29, 2018; Revised on March 12, 2018; Accepted on April 23, 2018
    References: 10
    Improved NSGA-II for the Job-Shop Multi-Objective Scheduling Problem
    Xiaoyun Jiang and Yi Li
    2018, 14(5): 891-898.  doi:10.23940/ijpe.18.05.p7.891898
    Abstract    PDF (369KB)   
    References | Related Articles

    Job-shop scheduling is essential to advanced manufacturing and modern management. In light of the difficulty of obtaining the optimal solution using simple genetic algorithms when solving multi-objective job-shop scheduling problems, and with maximum customer satisfaction and minimum makespan in mind, we construct a multi-objective job-shop scheduling model with factory capacity constraints and propose an improved NSGA-II algorithm. The algorithm not only uses an improved elitism strategy to dynamically update the elite solution set, but also enhances the Pareto sorting algorithm to make density computations more accurate, thereby ensuring population diversity. An example verifies that this algorithm can effectively enhance global search capabilities, save computing resources, and lead to a better optimal solution. Using this algorithm for job-shop scheduling optimization oriented towards multi-objective decision-making can provide corporate executives with a scientific quantitative basis for management and decision-making, thereby enhancing their companies' competitiveness.
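    The density computation that the enhanced Pareto sorting refines is, in standard NSGA-II, the crowding distance. A sketch of the baseline formula (not the paper's enhanced variant), with an invented three-point front:

    ```python
    def crowding_distance(front):
        """Standard NSGA-II crowding distance for a list of objective vectors."""
        n, m = len(front), len(front[0])
        dist = [0.0] * n
        for k in range(m):
            order = sorted(range(n), key=lambda i: front[i][k])
            fmin, fmax = front[order[0]][k], front[order[-1]][k]
            dist[order[0]] = dist[order[-1]] = float('inf')  # boundary points kept
            if fmax == fmin:
                continue
            for j in range(1, n - 1):
                gap = front[order[j + 1]][k] - front[order[j - 1]][k]
                dist[order[j]] += gap / (fmax - fmin)
        return dist

    front = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]   # illustrative objective vectors
    d = crowding_distance(front)
    ```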

    Submitted on February 1, 2018; Revised on March 16, 2018; Accepted on April 27, 2018
    References: 24
    Simulation on the Optical Field Characteristics of Grating Inscription in a Hybrid Microstructured Optical Fiber
    Kaiwei Jiang, Jinrong Liu, Guanjun Wang, Yutian Pan, and Mengxing Huang
    2018, 14(5): 899-906.  doi:10.23940/ijpe.18.05.p8.899906
    Abstract    PDF (847KB)   
    References | Related Articles

    A method that utilizes a fluid-filled elliptical-hole configuration to improve the efficiency and quality of grating inscription in a microstructured optical fiber is proposed and analyzed. The quantitative influence of the inscription beam, elliptical hole size, fiber parameters, and fluid index on grating inscription is analyzed. In addition, the feasibility of utilizing a symmetrical elliptical-hole configuration and a rotating inscription technique to modify the inscription energy distribution near the core region during inscription is also discussed. Simulation results show that the optimized hybrid microstructured optical fiber configuration can achieve roughly three times the inscription efficiency of a single-mode fiber.

    Submitted on January 15, 2018; Revised on February 8, 2018; Accepted on April 2, 2018
    References: 16
    Calculation Method of Short Term Flicker Severity Pst for Power System based on Atomic Decomposition and Real-Coded Quantum Evolutionary Algorithm
    Hui Gao, Qichao Song, Rui Zhang, and Jun Huang
    2018, 14(5): 907-916.  doi:10.23940/ijpe.18.05.p9.907916
    Abstract    PDF (1772KB)   
    References | Related Articles

    Short-term flicker severity Pst is an important index of power quality in the IEC standard, and accurate calculation of Pst is a precondition for improving power quality. To improve calculation accuracy, a novel method based on atomic decomposition and a real-coded quantum evolutionary algorithm is proposed for calculating Pst. First, on the basis of the complete Gabor atomic library, the real-coded quantum evolutionary algorithm optimizes the atomic parameters instead of a matching pursuit algorithm, improving search efficiency. Second, atomic decomposition based on the real-coded quantum evolutionary algorithm is used to analyze the harmonic components of the voltage fluctuation signal of power systems, improving analysis capability. Finally, the proposed method is used to calculate Pst with improved accuracy. Simulation experiments show that the Pst values calculated with this method have higher precision than the results of other methods, proving the validity and applicability of the proposed method.

    Submitted on February 5, 2018; Revised on March 15, 2018; Accepted on April 21, 2018
    References: 20
    A Novel Color Encoding Fringe Projection Profilometry based on Wavelet Ridge Technology and Phase-Crossing
    Yang Wang, Yankee Sun, Tianqi Zhang, Deyun Chen, and Xiaoyang Yu
    2018, 14(5): 917-926.  doi:10.23940/ijpe.18.05.p10.917926
    Abstract    PDF (821KB)   
    References | Related Articles

    Three-dimensional (3D) profilometry faces two challenges: real-time performance and accuracy. Color-encoding fringe projection profilometry (CEFPP) can address these challenges to some extent: by encoding in the red, green, and blue color channels, it can use three completely different fringe patterns at once. In this paper, a novel CEFPP method that uses only one color fringe image acquired by a 3CCD camera is presented. First, the phase of the wavelet transform coefficients at the ridge position under the Morlet wavelet is theoretically clarified, and a simple, quick method for acquiring the scaling coefficient is introduced. The wrapped phases in the three color channels of the color fringe image are obtained from the wavelet ridge position. The phase origin of the color fringe pattern is defined as a white color. Using an evolution function defined in this paper, the phase-crossing is located, and an absolute phase is acquired by a three-coding-pitch method in two coding directions. To verify the presented method, profilometry experiments are carried out on a 3D profilometry system built with one projector and one 3CCD camera. The experimental results show that the maximum standard deviation of the measurement error is 1.42 mm, and the reconstructed surface of a gypsum head portrait can be obtained.

    Submitted on February 16, 2018; Revised on March 21, 2018; Accepted on April 28, 2018
    References: 40
    Collaborative Filtering Recommendation Algorithm based on Cluster
    Zhiyong Li
    2018, 14(5): 927-936.  doi:10.23940/ijpe.18.05.p11.927936
    Abstract    PDF (764KB)   
    References | Related Articles

    The traditional collaborative filtering recommendation method suffers from sparse datasets, cold starts, and efficiency problems, and its recommendation accuracy decreases as the amount of data grows. We therefore improve the traditional method by weighting the ratings users have in common when calculating their similarity and by running it on a cluster. With these changes, the collaborative filtering method achieves better accuracy. Experiments show that the proposed method has higher accuracy and efficiency than traditional collaborative filtering recommendation methods.
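    One common way to weight shared ratings into a similarity score is significance weighting. This is a hedged sketch of that general idea, not the paper's formula: the cosine form, the damping threshold `gamma`, and the toy ratings are all assumptions.

    ```python
    import math

    def weighted_similarity(ru, rv, gamma=5):
        """Cosine similarity over co-rated items, damped when users share
        fewer than `gamma` ratings (significance weighting)."""
        common = set(ru) & set(rv)
        if not common:
            return 0.0
        num = sum(ru[i] * rv[i] for i in common)
        den = (math.sqrt(sum(ru[i] ** 2 for i in common))
               * math.sqrt(sum(rv[i] ** 2 for i in common)))
        sim = num / den
        return sim * min(len(common), gamma) / gamma   # shrink for few co-ratings

    u = {'m1': 5, 'm2': 3, 'm3': 4}
    v = {'m1': 4, 'm2': 2, 'm4': 5}
    s = weighted_similarity(u, v)   # only 2 co-rated items, so heavily damped
    ```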

    Submitted on February 13, 2018; Revised on March 22, 2018; Accepted on April 25, 2018
    References: 12
    A Correlative Study of the Influence of Higher Vocational Students’ Learning Behavior on English Effective Learning
    Lei Chen, Xia Liu, and Qinghui Zhu
    2018, 14(5): 937-944.  doi:10.23940/ijpe.18.05.p12.937944
    Abstract    PDF (470KB)   
    References | Related Articles

    This study explores the correlation between learning behavior and effective English learning. 1,758 responses to a questionnaire designed from the perspective of learning behavior are analyzed, and the influences on effective English learning are summarized as constructive learning and destructive learning. Using SPSS for data analysis and model construction, we study the correlation between learning behavior and the main influencing factors, including constructive learning, destructive learning, mutual influence, learning burnout, and employment pressure. The structural model results show that the alpha coefficients are above 0.6 and the corresponding variable load factor values are above 0.3, which indicates that the questionnaire is valid and reliable. Correlation analysis of the major variables indicates significant correlations among them. Regression analysis shows that constructive learning significantly predicts learning behavior, while destructive behavior significantly negatively predicts it; mutual influence does not significantly predict learning behavior. Structural equation model fitting shows that the influence of classmates significantly predicts learning behavior and that employment pressure significantly negatively predicts learning burnout. Furthermore, learning behavior plays an intermediary role among constructive learning, destructive learning, and learning burnout. This data analysis may serve as a reference for higher vocational English teaching reform.

    Submitted on January 18, 2018; Revised on March 5, 2018; Accepted on April 15, 2018
    References: 27
    A New Supervised Learning for Gene Regulatory Network Inference with Novel Filtering Method
    Bin Yang, Wei Zhang, and Jiaguo Lv
    2018, 14(5): 945-954.  doi:10.23940/ijpe.18.05.p13.945954
    Abstract    PDF (503KB)   
    References | Related Articles

    Gene regulatory network (GRN) inference from gene expression data plays an important role in helping researchers understand the intricacies of complex biological regulation. In this paper, a new hybrid supervised learning method (HSL) is proposed to infer gene regulatory networks. In HSL, according to the data imbalance ratio, one of three supervised learning methods is chosen for classification: direct classification, K-Nearest Neighbor (KNN), or a complex-valued version of the flexible neural tree (CVFNT) model. A novel filtering method based on integrating mutual information (MI) and the maximal information coefficient (MIC) is proposed to eliminate redundant regulations inferred by HSL. Benchmark data from DREAM5 are used to test the performance of our approach. The results show that our approach performs better than popular unsupervised and supervised learning methods.
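    The MI half of such an MI/MIC filter can be sketched with a plain histogram estimator over discretized expression profiles. The profiles below are toy data, and the paper's estimator may differ:

    ```python
    import math
    from collections import Counter

    def mutual_information(xs, ys):
        """Histogram-based mutual information between two discrete profiles (bits)."""
        n = len(xs)
        px, py = Counter(xs), Counter(ys)
        pxy = Counter(zip(xs, ys))
        mi = 0.0
        for (x, y), c in pxy.items():
            mi += (c / n) * math.log2((c * n) / (px[x] * py[y]))
        return mi

    a = [0, 0, 1, 1]
    b = [0, 0, 1, 1]   # identical profile: MI = H(a) = 1 bit
    c = [0, 1, 0, 1]   # profile carrying no information about a
    mi_ab = mutual_information(a, b)
    mi_ac = mutual_information(a, c)
    ```

    A filter would keep a candidate regulation only when such dependence scores exceed a threshold.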

    Submitted on January 29, 2018; Revised on March 2, 2018; Accepted on April 23, 2018
    References: 15
    Spark-based Ensemble Learning for Imbalanced Data Classification
    Jiaman Ding, Sichen Wang, Lianyin Jia, Jinguo You, and Ying Jiang
    2018, 14(5): 955-964.  doi:10.23940/ijpe.18.05.p14.955964
    Abstract    PDF (716KB)   
    References | Related Articles

    With the rapid expansion of big data across science and engineering domains, imbalanced data classification has become an acute problem in many real-world datasets, and it is difficult to build an efficient model by mechanically applying current data mining and machine learning algorithms. In this paper, we propose a Spark-based ensemble learning approach for imbalanced data classification (SELidc for short). The key idea of SELidc is to preprocess and balance the imbalanced datasets, and to improve performance and reduce overfitting on big, imbalanced data by building a distributed ensemble learning algorithm. SELidc first converts the original imbalanced dataset into resilient distributed datasets. Next, it samples by a comprehensive weight, obtained from the weight of each majority class and the number of minority class samples. It then trains several random forest classifiers in the Spark environment using correlation-based feature selection. Experiments on publicly available UCI datasets and other datasets demonstrate that SELidc achieves more prominent results than related approaches across various evaluation metrics and makes full use of the efficient computing power of the Spark distributed platform when training massive data.

    Submitted on February 4, 2018; Revised on March 5, 2018; Accepted on April 25, 2018
    References: 18
    Quality Assessment of Sport Videos
    Zhenqing Liu
    2018, 14(5): 965-974.  doi:10.23940/ijpe.18.05.p15.965974
    Abstract    PDF (590KB)   
    References | Related Articles

    Considering that adjacent frames in sport videos tend to be highly similar, this paper extracts and analyzes the video frames that matter most to the user's perceived quality as a test sequence and proposes a full-reference assessment method based on temporal and spatial features. Sports videos contain abundant detail and change sharply between pictures; accordingly, the method uses spatial perceptual information (SI) and temporal perceptual information (TI) to analyze every frame of ESPN sport videos. Through this analysis, frames with high temporal and high spatial perceptual information are extracted as a test sequence. Every frame in the sequence is then compared against its original counterpart to calculate the peak signal-to-noise ratio (PSNR), and the average PSNR serves as the video quality assessment standard. Taking rugby, basketball, and hockey as experimental subjects and analyzing the PSNR of videos at different quality levels (better, general, and poor), the paper determines PSNR ranges for each quality level that can be used in practice. The experimental results show that the proposed SI/TI-based analysis method can be used on ESPN sports video network platforms and similar platforms: it automatically analyzes and judges the quality of sports videos at different bit rates in real time and achieves a high Spearman rank-order correlation coefficient (SROCC) with subjective quality assessment.
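    The per-frame PSNR step is standard and can be sketched directly. The frame data below are invented, and `peak=255` assumes 8-bit samples:

    ```python
    import math

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio between two equal-size frames (flat lists)."""
        mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
        if mse == 0:
            return float('inf')   # identical frames
        return 10.0 * math.log10(peak ** 2 / mse)

    ref = [100, 120, 130, 140]    # tiny "reference frame"
    deg = [101, 119, 131, 139]    # tiny "degraded frame", MSE = 1
    value = psnr(ref, deg)
    ```

    Averaging such values over the selected high-SI/high-TI frames gives the sequence-level score the abstract describes.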

    Submitted on February 9, 2018; Revised on March 15, 2018; Accepted on April 15, 2018
    References: 9
    Sport Products and Services in Sport Demand Model based on Heckman Model
    Hongbo Zhang
    2018, 14(5): 975-984.  doi:10.23940/ijpe.18.05.p16.975984
    Abstract    PDF (438KB)   
    References | Related Articles

    On the basis of an in-depth analysis of the connotation and characteristics of sport demand, and drawing on Becker's household production function theory, this paper uses economic models to analyze how age, income, leisure time, sport skills, wage rates, prices, and other factors affect sport demand and consumption. Taking Beijing as an example, actual data on domestic sport demand and sport-related consumption are collected, and Heckman two-stage estimation is used to test the theoretical model econometrically. Countermeasures and suggestions for expanding sport demand and increasing sport consumption are then put forward.
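    In Heckman's second stage, the selection correction enters as the inverse Mills ratio computed from the first-stage probit. A stdlib-only sketch of that correction term alone (the rest of the two-stage procedure is omitted):

    ```python
    import math

    def norm_pdf(z):
        """Standard normal density phi(z)."""
        return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

    def norm_cdf(z):
        """Standard normal CDF Phi(z) via the error function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def inverse_mills(z):
        """Inverse Mills ratio lambda(z) = phi(z) / Phi(z); added as a regressor
        in the second-stage (consumption) equation to correct selection bias."""
        return norm_pdf(z) / norm_cdf(z)

    lam = inverse_mills(0.0)   # correction term for a household at z = 0
    ```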

    Submitted on February 9, 2018; Revised on March 15, 2018; Accepted on April 15, 2018
    References: 10
    Classification Decision based on a Hybrid Method of Weighted kNN and Hyper-Sphere SVM
    Peng Chen, Guoyou Shi, Shuang Liu, Yuanqiang Zhang, and Denis Špelič
    2018, 14(5): 985-994.  doi:10.23940/ijpe.18.05.p17.985994
    Abstract    PDF (1528KB)   
    References | Related Articles

    The hyper-sphere support vector machine (SVM) is very effective for multi-class classification problems. Because the data distribution strongly affects the convergence of the support vector solution, a weight factor is introduced into the original hyper-sphere SVM: after computing the data of each training class, the weight factor is determined by its center-distance ratio. During training, data with larger weights enter the processing thread first, followed by smaller ones. To save computation, a parallel genetic-algorithm-based SMO with multi-threading is adopted. For a test sample, the class decision depends on its position relative to each class's hyper-sphere. If all class-specific hyper-spheres are independent of each other, a new test sample can be classified directly. But if some hyper-spheres share common space, that is, one hyper-sphere intersects one or more others, deciding the class of the test sample is hard. Based on a detailed analysis of three decision rules for classifying data in intersection regions, this paper puts forward a decision rule that incorporates the kNN method; for simple inclusion cases, a simple decision rule is defined. Two real experiments, on navigation tracking and on classifying ship meeting situations, show that the proposed algorithm achieves higher classification accuracy and lower computation cost than other algorithms.
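    The combined decision rule, sphere membership first and kNN only in intersection regions, can be sketched as follows. The spheres, training points, and k are illustrative, not from the paper:

    ```python
    import math
    from collections import Counter

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def classify(x, spheres, train, k=3):
        """spheres: {label: (center, radius)}. If x lies inside exactly one
        hyper-sphere, take that label; otherwise fall back to kNN on train."""
        inside = [lbl for lbl, (c, r) in spheres.items() if dist(x, c) <= r]
        if len(inside) == 1:
            return inside[0]
        neighbors = sorted(train, key=lambda p: dist(x, p[0]))[:k]
        return Counter(lbl for _, lbl in neighbors).most_common(1)[0][0]

    spheres = {'A': ((0.0, 0.0), 1.5), 'B': ((2.0, 0.0), 1.5)}
    train = [((0.0, 0.2), 'A'), ((0.4, 0.0), 'A'),
             ((2.0, 0.1), 'B'), ((1.9, 0.0), 'B')]
    label = classify((1.0, 0.0), spheres, train)   # overlap region, kNN decides
    ```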

    Submitted on February 8, 2018; Revised on March 12, 2018; Accepted on April 23, 2018
    References: 11
    Mathematical Morphology and Deep Learning-based Approach for Bearing Fault Recognition
    Yang Ge and Xiaomei Jiang
    2018, 14(5): 995-1003.  doi:10.23940/ijpe.18.05.p18.9951003
    Abstract    PDF (716KB)   
    References | Related Articles

    A fault feature extraction method for rolling element bearings based on mathematical morphology is proposed in this paper. To obtain more useful features, the paper mixes mathematical fractal features with time-frequency domain features and wavelet packet energy features. Using the mixed features, a support vector machine and deep learning are applied to recognize the operating conditions of bearings. The mixed features are found to improve condition recognition accuracy. The comparison results show that deep learning outperforms the support vector machine and predicts bearing conditions with a mean accuracy of 99.19%. It is therefore concluded that the mixed features and the deep learning method are effective for recognizing bearing operating conditions.
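    The morphological signal operators behind such feature extraction can be sketched with a flat structuring element. The morphological gradient shown here is one common feature for highlighting impulsive fault transients, not necessarily the one the paper uses:

    ```python
    def dilate(signal, width):
        """Flat-structuring-element dilation: moving maximum over a window."""
        h, n = width // 2, len(signal)
        return [max(signal[max(0, i - h):min(n, i + h + 1)]) for i in range(n)]

    def erode(signal, width):
        """Flat-structuring-element erosion: moving minimum over a window."""
        h, n = width // 2, len(signal)
        return [min(signal[max(0, i - h):min(n, i + h + 1)]) for i in range(n)]

    def morph_gradient(signal, width=3):
        """Dilation minus erosion; spikes where the signal has sharp transients."""
        return [d - e for d, e in zip(dilate(signal, width), erode(signal, width))]

    sig = [0, 0, 1, 5, 1, 0, 0]   # toy vibration snippet with one impulse
    g = morph_gradient(sig)
    ```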

    Submitted on February 8, 2018; Revised on March 12, 2018; Accepted on April 23, 2018
    References: 27
    An Improved Algorithm based on Time Domain Network Evolution
    Guanghui Yan, Qingqing Ma, Yafei Wang, Yu Wu, and Dan Jin
    2018, 14(5): 1004-1013.  doi:10.23940/ijpe.18.05.p19.10041013
    Abstract    PDF (667KB)   
    References | Related Articles

    Community evolution is a highlight in the field of complex networks. Typical community tracking algorithms largely rely on traditional similarity measures to capture the similarity between communities at successive temporal snapshots. However, they do not account for the actions accumulated through events or for the influence of community members in evolving networks. Meanwhile, because traditional tracking methods apply a simple similarity function across different communities, many analogous communities cannot be effectively extracted from the network. To address these shortcomings, we propose a more powerful similarity function to capture and evaluate communities or groups across successive time frames. Building on previous research, we implement a community tracking method on top of this new function and improve accuracy on the network structure by taking into account the diversity associated with active nodes during network evolution. Finally, we observe an interesting phenomenon and give a new method for weighing the relationships involving active nodes within community evolution over time frames. The performance of our algorithm is measured on real datasets, testing its ability to track community structure and assessing results involving the active nodes extracted from the communities. The experimental results show that our algorithm effectively keeps track of community structure and outperforms other algorithms.
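    A similarity function that weights active nodes more heavily than a plain snapshot comparison can be sketched as a weighted Jaccard index. This illustrates the general idea only, not the paper's function; the communities and activity weights are invented:

    ```python
    def weighted_jaccard(c1, c2, weight):
        """Jaccard similarity between two node sets where each node contributes
        its activity weight (nodes absent from `weight` default to 1)."""
        inter, union = c1 & c2, c1 | c2
        if not union:
            return 0.0
        return (sum(weight.get(n, 1.0) for n in inter)
                / sum(weight.get(n, 1.0) for n in union))

    c_t  = {'a', 'b', 'c', 'd'}    # community at snapshot t
    c_t1 = {'b', 'c', 'd', 'e'}    # candidate successor at snapshot t+1
    w = {'b': 2.0, 'e': 0.5}       # 'b' is highly active, 'e' barely so
    s = weighted_jaccard(c_t, c_t1, w)   # higher than the plain Jaccard of 0.6
    ```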

    Submitted on January 18, 2018; Revised on March 9, 2018; Accepted on April 21, 2018
    References: 18
    A Method of Dynamically Associating Behavior Risks based on Time Thread in Smartphones
    Zhenliu Zhou, Xiaoming Zhou, Weichun Ge, Yueming Pan, and Yu Gu
    2018, 14(5): 1014-1022.  doi:10.23940/ijpe.18.05.p20.10141022
    Abstract    PDF (508KB)   
    References | Related Articles

    Behavior-associated risks are analyzed and detected from a series of software behaviors and user behaviors on a smartphone. A method of dynamically associating and analyzing behavior risks based on time threads is proposed. According to the time sequence in which behaviors occur, the behaviors are organized into a behavior-association graph with time threads, and risk association among behaviors is analyzed and detected by matching association rules. The advantage of this method is that it can not only perform dynamic analysis while behavior occurs, but also perform static post-mortem analysis on collected data sets. The time-thread-based dynamic association of behavior risks improves behavior risk analysis and enables real-time risk detection on smartphones. Formal definitions of behaviors and behavior-association graphs are presented, algorithms for associating behavior risks are described, and the results of the experimental analysis are given.

    Submitted on February 21, 2018; Revised on March 26, 2018; Accepted on April 29, 2018
    References: 17
    Preventing Override Trip Algorithm based on Quantum Entanglement in Coal Mine High-Voltage Grid
    Xinliang Wang, Zhigang Guo, Qianhui Yang, and Jianing Zou
    2018, 14(5): 1023-1029.  doi:10.23940/ijpe.18.05.p21.10231029
    Abstract    PDF (384KB)   
    References | Related Articles

    In traditional systems for preventing override trips, the problem is still not well resolved because of transmission delay. Based on the characteristics of quantum entanglement, a quantum preventing-override-trip algorithm is proposed that allows the power monitoring system to quickly obtain a switch's over-current fault information and effectively prevent override trips in a coal mine high-voltage grid. Simulation results show that, compared with traditional algorithms, the quantum algorithm needs less time to obtain the fault information of all switches and leaves more time to finish protection setting. The quantum preventing-override-trip algorithm thus better solves the transmission delay problem and effectively improves the reliability of the preventing-override-trip system.

    Submitted on January 29, 2018; Revised on March 3, 2018; Accepted on April 20, 2018
    References: 14
    Auto-Tuning for Solving Multi-Conditional MAD Model
    Feng Yao, Yi Liu, Huifen Chen, Chen Li, Zhonghua Lu, Jinggang Wang, Zhiheng Li, and Ningming Nie
    2018, 14(5): 1030-1039.  doi:10.23940/ijpe.18.05.p22.10301039
    Abstract    PDF (520KB)   
    References | Related Articles

    As an important branch of Integer Programming (IP), Mixed Integer Nonlinear Programming (MINLP) has been applied in many fields. Solving the multi-conditional MAD model, a typical MINLP model, is an NP-hard problem. To solve the model efficiently and rapidly, the branch-and-cut algorithm used to solve the multi-conditional MAD model is auto-tuned using the CPLEX solver deployed on the Era supercomputer. The experimental results show that the auto-tuned parallel branch-and-cut algorithm improves computation speed significantly while obtaining results comparable to those of the algorithm before auto-tuning, and its parallel efficiency exceeds 60% when the number of threads is 2 or 4.
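    The reported figure can be read against the standard definition of parallel efficiency, E_p = T_1 / (p · T_p), i.e. speedup divided by thread count. A minimal sketch follows; the timings are made-up placeholders for illustration, not the paper's measurements.

```python
def parallel_efficiency(t_serial, t_parallel, threads):
    """E_p = T_1 / (p * T_p): speedup divided by the number of threads."""
    speedup = t_serial / t_parallel
    return speedup / threads

# Hypothetical timings (seconds), chosen only to illustrate the formula
t1 = 100.0
print(parallel_efficiency(t1, 62.5, 2))  # 0.8  -> 80% efficiency
print(parallel_efficiency(t1, 40.0, 4))  # 0.625 -> 62.5% efficiency
```

    An efficiency above 0.6 at p = 2 or 4, as the abstract reports, means the tuned solver retains more than 60% of ideal linear speedup at those thread counts.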

    Submitted on February 2, 2018; Revised on March 8, 2018; Accepted on April 17, 2018
    References: 13
    Negative Correlation Incremental Integration Classification Method for Underwater Target Recognition
    Ming He, Nianbin Wang, Hongbin Wang, Ci Chu, and Songyan Zhong
    2018, 14(5): 1040-1049.  doi:10.23940/ijpe.18.05.p23.10401049
    Abstract    PDF (669KB)   
    References | Related Articles

    In this paper, an incremental learning algorithm based on negative correlation learning (NCL) is used as a classifier for underwater target recognition. In the training process of the Selective NCL (SNCL) algorithm, the number of hidden-layer nodes is difficult to determine, training time is long, and problems such as overfitting arise. Combining the algorithm with Bagging further increases the diversity among individual networks and ensures the generalization performance of the ensemble as a whole. On this basis, a selective integration method based on clustering, a newly proposed algorithm called SANCLBag, is combined with a convolutional neural network for underwater target recognition. The results show that the proposed integration approach further increases the diversity among individual networks during classification while preserving the generalization performance of the whole. The model achieves higher recognition accuracy and can effectively address the incremental learning problem.
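    For context, standard negative correlation learning trains each ensemble member with a diversity penalty p_i = (f_i − f̄) · Σ_{j≠i} (f_j − f̄), where f̄ is the ensemble mean. The sketch below shows that combined per-member loss in plain Python; it illustrates the NCL principle only and is not the paper's SANCLBag implementation (the sample values and λ are arbitrary).

```python
def ncl_loss(outputs, target, lam):
    """Per-member NCL loss: squared error plus a negative-correlation penalty.

    outputs: predictions f_1..f_M of the ensemble members for one sample.
    The penalty for member i is (f_i - mean) * sum_{j != i} (f_j - mean),
    which is negative when member i deviates from the mean, rewarding diversity.
    """
    mean = sum(outputs) / len(outputs)
    losses = []
    for i, f_i in enumerate(outputs):
        penalty = (f_i - mean) * sum(
            f_j - mean for j, f_j in enumerate(outputs) if j != i
        )
        losses.append(0.5 * (f_i - target) ** 2 + lam * penalty)
    return losses

# Arbitrary example: three members, target 0.5, penalty weight 0.5
print(ncl_loss([0.2, 0.5, 0.8], 0.5, 0.5))
```

    With λ = 0 the penalty vanishes and each member is trained independently; raising λ trades individual accuracy for ensemble diversity, which is the mechanism the abstract's Bagging combination builds on.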

    Submitted on January 29, 2018; Revised on March 12, 2018; Accepted on April 24, 2018
    References: 28
    BotCapturer: Detecting Botnets based on Two-Layered Analysis with Graph Anomaly Detection and Network Traffic Clustering
    Wei Wang, Yang Wang, Xinlu Tan, Ya Liu, and Shuangmao Yang
    2018, 14(5): 1050-1059.  doi:10.23940/ijpe.18.05.p24.10501059
    Abstract    PDF (743KB)   
    References | Related Articles

    Botnets have become one of the most serious threats on the Internet. On botnet platforms, attackers conduct a series of malicious activities such as distributed denial-of-service (DDoS) attacks or virtual currency mining. Network traffic has been widely used as the data source for botnet detection. However, there are two main issues in detecting botnets from network traffic. First, many traditional filtering methods such as whitelisting cannot process very large amounts of traffic data in real time due to their limited computational capability. Second, many existing detection methods based on network traffic clustering result in high false positive rates. In this work, we address these two issues by proposing a lightweight botnet detection system called BotCapturer, based on two-layered analysis with graph anomaly detection and network traffic clustering. First, we identify anomalous nodes that correspond to C&C (Command and Control) servers using anomaly scores in a graph abstracted from the network traffic. Second, we use clustering algorithms to check whether the nodes interacting with an anomalous node share a similar communication pattern. To minimize irrelevant traffic, we propose a traffic reduction method that removes more than 85% of the background traffic by filtering packets unrelated to hosts such as C&C servers. We collected a large dataset by simulating five different botnets and mixing the collected traffic with background traffic obtained from an ISP. Extensive experiments on this dataset show that BotCapturer reduces the input raw packet traces by more than 85% and achieves a high detection rate (100%) with a low false positive rate (0.01%), demonstrating that it is effective and efficient in detecting recent botnets.
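    The paper does not state its anomaly-scoring function here, so the sketch below uses a simple degree-based z-score as a toy stand-in for the first (graph) layer: a C&C server that many bots contact stands out as a high-degree node in the communication graph. The hosts and flows are fabricated for illustration.

```python
from collections import defaultdict
import statistics

def anomaly_scores(flows):
    """Score each host by how far its degree deviates from the graph mean.

    flows: (src, dst) pairs abstracted from network traffic.
    This degree z-score is a toy stand-in for BotCapturer's scoring.
    """
    neighbors = defaultdict(set)
    for src, dst in flows:
        neighbors[src].add(dst)
        neighbors[dst].add(src)
    degrees = {host: len(n) for host, n in neighbors.items()}
    mean = statistics.mean(degrees.values())
    sd = statistics.pstdev(degrees.values()) or 1.0
    return {host: (d - mean) / sd for host, d in degrees.items()}

# Hypothetical flows: "cc" talks to five bots; "a" and "b" talk only to each other
flows = [("bot%d" % i, "cc") for i in range(1, 6)] + [("a", "b")]
scores = anomaly_scores(flows)
print(max(scores, key=scores.get))  # cc
```

    In the full system, the nodes interacting with such a high-scoring node would then be passed to the second (clustering) layer to check whether they share a similar communication pattern.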

    Submitted on February 5, 2018; Revised on March 18, 2018; Accepted on April 26, 2018
    References: 21
    Smart Mine Construction based on Knowledge Engineering and Internet of Things
    Xiaosan Ge, Shuai Su, Haiyang Yu, Gang Chen, and Xiaoping Lu
    2018, 14(5): 1060-1068.  doi:10.23940/ijpe.18.05.p25.10601068
    Abstract    PDF (315KB)   
    References | Related Articles

    In comparison with the digital mine project, the proposal of the Smart Mine project represents a different tendency in mine informatization. From the viewpoints of intelligent mining production processes and fine-grained mining production management, this paper first presents the idea of the Smart Mine, based on the philosophy of fusing object wisdom and tool wisdom, and analyzes the connotation of the Smart Mine. Then, based on an in-depth study of how knowledge engineering and Internet of Things technology support Smart Mine construction, we propose an advanced Smart Mine architecture based on these two major technologies. We also give a comprehensive explanation of the balance among humans, the environment, and sustainable mineral resource exploitation in terms of material flow, the information-flow support network, and the synchronization and intelligence of the smart mining production process.

    Submitted on February 2, 2018; Revised on March 5, 2018; Accepted on April 13, 2018
    References: 33
    Extraction and Mining of Video Feature in Sport Videos
    Yang Han
    2018, 14(5): 1069-1077.  doi:10.23940/ijpe.18.05.p26.10691077
    Abstract    PDF (341KB)   
    References | Related Articles

    On the basis of analyzing the characteristics of sports video, the parameters of feature generation are adjusted. For the sports video library, three features, SD-VLAD (Soft Distribution-Vectors of Locally Aggregated Descriptors), BOC (Bag of Color), and shot type, were selected as the description information of the images; appropriate parameters were selected through experiments, and the best parameter configuration for the soccer video library is given. To examine the influence of the parameters of the SD-VLAD and BOC descriptors on recognition performance and to select appropriate parameters, experiments were carried out on part of a web search library, and the experimental results were analyzed.
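    A Bag of Color descriptor quantizes pixel colors into a fixed palette and histograms them; its main parameter is the palette (bin) size, which is the kind of parameter the experiments above tune. The sketch below is a toy version only: the bin count and the pixel values are illustrative, not the paper's configuration.

```python
def boc_histogram(pixels, bins_per_channel=4):
    """Quantize each RGB pixel into a coarse color bin and count occurrences.

    Returns a normalized histogram of length bins_per_channel ** 3,
    a toy version of the Bag of Color (BOC) descriptor.
    """
    step = 256 // bins_per_channel
    hist = [0.0] * bins_per_channel ** 3
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel + g // step) * bins_per_channel + b // step
        hist[idx] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Hypothetical pixels: mostly green (a soccer pitch) plus one white pixel
pixels = [(30, 180, 40)] * 9 + [(255, 255, 255)]
hist = boc_histogram(pixels)
print(max(hist))  # 0.9: the green bin dominates
```

    Choosing `bins_per_channel` trades descriptor compactness against color discrimination, which is why it is worth selecting experimentally per video library.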

    Submitted on February 1, 2018; Revised on March 19, 2018; Accepted on April 27, 2018
    References: 14
    A Mining Model of Network Log Data based on Hadoop
    Yun Wu, Xin Ma, Guangqian Kong, Bin Wang, and Xinwei Niu
    2018, 14(5): 1078-1087.  doi:10.23940/ijpe.18.05.p27.10781087
    Abstract    PDF (617KB)   
    References | Related Articles

    With the increasing amount of data in the information age, traditional Web log data mining methods are unable to handle large-scale text data. To address this problem, we design a highly reliable Web log data mining scheme and propose a text similarity detection model based on Hadoop. First, we design a data mining scheme for user behavior logs that considers the heterogeneity, diversity, and complexity of network log data. The platform is divided into three layers: a data storage layer, a business logic layer, and an application layer. In this part, we design the data cleaning algorithm and KPIs, and then use Hive to perform the mining. Second, we propose a Hadoop-based text log similarity mining model and design its algorithms, including a Shingling algorithm and a NewMinhash algorithm designed for MapReduce. Using the improved Shingling algorithm based on the MapReduce programming model, each document is converted to a set. The distributed NewMinhash algorithm is used to compute the signature matrix, and Jaccard coefficients are used to calculate similarity. We conducted an experimental analysis on the SogouCS data set. The experimental results show the effectiveness of the NewMinhash algorithm and demonstrate that the model can not only find text similarity accurately but also adapt well to the distributed platform, with good scalability.
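    The shingling-then-MinHash pipeline described above can be sketched in plain single-machine Python, leaving out the MapReduce distribution; the shingle length, number of hash functions, and seeded-MD5 hash family are illustrative choices, not the paper's NewMinhash design.

```python
import hashlib

def shingles(text, k=4):
    """Convert a document to its set of k-character shingles."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, num_hashes=64):
    """MinHash signature: the minimum of a seeded hash over the set, per seed."""
    def h(seed, s):
        return int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
    return [min(h(seed, s) for s in shingle_set) for seed in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of agreeing signature slots estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = shingles("the quick brown fox jumps over the lazy dog")
b = shingles("the quick brown fox jumped over the lazy dog")
true_jaccard = len(a & b) / len(a | b)
est = estimated_jaccard(minhash_signature(a), minhash_signature(b))
print(round(true_jaccard, 2), round(est, 2))  # the estimate tracks the true value
```

    In the distributed version, the shingling map step and the per-seed minimum reduce step each parallelize naturally, which is what makes the model fit MapReduce.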

    Submitted on January 25, 2018; Revised on March 13, 2018; Accepted on April 17, 2018
    References: 15
    GPU-Accelerated Support Vector Machines for Traffic Classification
    Guanglu Sun, Xuhang Li, Xiangyu Hou, and Fei Lang
    2018, 14(5): 1088-1098.  doi:10.23940/ijpe.18.05.p28.10881098
    Abstract    PDF (490KB)   
    References | Related Articles

    Machine learning models tackle traffic classification effectively, but they consume considerable computing resources and time, making it difficult to accommodate large-scale networks. In the presented study, GPU-accelerated Support Vector Machines (SVM) are proposed for traffic classification. The GPU is used to compute the kernel matrix in parallel and to process the grid traversal of the iterative tuning scheme, in order to accelerate SVM training and parameter optimization. Traffic classification is also parallelized through the single-instruction-multiple-data paradigm, multithreading, and the shared memory of the threads. The experimental results show that the presented method achieves accuracy similar to that of the existing CPU-based LibSVM while speeding up training by a factor of 1.53 and classification by a factor of 24, which makes it suitable for real-time classification on high-speed backbone networks.
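    The kernel matrix the GPU computes in parallel is simply the pairwise kernel evaluation over all training samples: every entry K[i][j] is independent of the others, which is what maps well onto a GPU thread grid. Below is a sequential CPU reference sketch of an RBF kernel matrix; the feature vectors and the gamma value are illustrative, not the paper's data or parameters.

```python
import math

def rbf_kernel_matrix(X, gamma=0.5):
    """K[i][j] = exp(-gamma * ||x_i - x_j||^2).

    Every entry is independent of the others, so on a GPU each thread
    can compute one entry; this double loop is the sequential reference.
    """
    n = len(X)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            sq = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            K[i][j] = math.exp(-gamma * sq)
    return K

# Toy feature vectors (e.g., flow statistics of three connections)
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
K = rbf_kernel_matrix(X)
print(K[0][0], round(K[0][1], 4))  # 1.0 on the diagonal, exp(-0.5) ≈ 0.6065
```

    The grid traversal mentioned in the abstract then evaluates this matrix construction (and the subsequent training) for each candidate (C, gamma) pair, which is why offloading both to the GPU speeds up parameter optimization as well as training.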

    Submitted on January 26, 2018; Revised on March 2, 2018; Accepted on April 26, 2018
    References: 21
ISSN 0973-1318