Vol. 15, No. 7, July 2019


Cover Page (PDF 282 KB) | Table of Contents, July 2019 (PDF 270 KB)

  
    Performance Modeling and Analysis of Refrigeration System of a Milk Processing Plant using Petri Nets
    Narendra Kumar, P. C. Tewari, and Anish Sachdeva
    2019, 15(7): 1751-1759.  doi:10.23940/ijpe.19.07.p1.17511759
    Abstract    PDF (761KB)   
    References | Related Articles
    This paper described the performance modeling of the refrigeration system of a milk processing plant using Petri nets and obtained a quantitative analysis of availability under varying operating parameters. For the modeling and simulation of the system, the Petri module of the GRIF software was used. In the current study, an effort was made to use reliability, availability, and maintainability (RAM) tools, which can be quantitative or qualitative methods and software, to reduce the uncertainties involved in random failures and the consequent shutdowns of the plant. Finally, an attempt was made to provide a specific direction for determining maintenance strategies that meet operational objectives economically, considering spare parts and repair facilities.
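The availability figures analyzed above ultimately reduce to failure and repair statistics. A minimal sketch of steady-state availability for repairable units in series (this is an illustrative stand-in, not the paper's GRIF Petri net model; the subsystem MTBF/MTTR figures are hypothetical):

```python
# Steady-state availability of a repairable unit: A = MTBF / (MTBF + MTTR).
# For subsystems in series (all must be up), availabilities multiply.

def availability(mtbf, mttr):
    """Steady-state availability of a single repairable unit."""
    return mtbf / (mtbf + mttr)

def series_availability(units):
    """Availability of units in series, given (MTBF, MTTR) pairs."""
    a = 1.0
    for mtbf, mttr in units:
        a *= availability(mtbf, mttr)
    return a

# Hypothetical plant subsystems as (MTBF, MTTR) in hours
plant = [(500.0, 10.0), (800.0, 20.0), (1200.0, 8.0)]
A = series_availability(plant)
```

Varying the repair facility (MTTR) or spare-part delay in such a model is one simple way to compare maintenance strategies quantitatively.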
    Question-based Methodology for Rating the Severity of Defects in Construction Through on-Site Inspection
    Bessa Rui, Costa Jorge, and Calejo Rui
    2019, 15(7): 1760-1771.  doi:10.23940/ijpe.19.07.p2.17601771
    Abstract    PDF (485KB)   
    References | Related Articles
    The impact of defects in construction is still not well defined, and it is challenging to quantify all related indirect variables numerically. The majority of studies in this field focus on the direct impact of defects on costs and planning, neglecting other indirect impacts that are more difficult to measure. Hence, it is vital to evaluate and classify defects based on their impact, so that priorities can be defined for action in both their correction and prevention. In this work, a generalized methodology to grade the severity of defects was developed based on five impacts: impact on costs, planning, health and safety, system performance, and subsequent tasks. A suitable variant of failure modes and effects analysis (FMEA) was selected in order to develop a qualitative analysis methodology to grade the impact factors, severity, and risk priority number (RPN) of defects, based on question forms. Moreover, the presented methodology was applied on-site and compared with traditional FMEA severity calculations applied in the same construction project by the same individuals evaluating the same defects. The results obtained from both methodologies were slightly different, and the authors believe that the identified tendency toward different severity levels between the two calculations could lead to different risk categories when applied to the whole project. Moreover, it is expected that the proposed methodology can also help separate the impacts for an individual evaluation of the defects.
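The grading above builds on the classic FMEA calculation, where the risk priority number is the product of severity, occurrence, and detection ratings. A minimal sketch (the defect names and ratings are illustrative, not the paper's data):

```python
# Classic FMEA risk priority number: RPN = severity * occurrence * detection,
# with each rating on a 1-10 scale. The paper derives severity from question
# forms; here the ratings are simply given as inputs.

def rpn(severity, occurrence, detection):
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("ratings must be on a 1-10 scale")
    return severity * occurrence * detection

# Hypothetical on-site defects with (severity, occurrence, detection) ratings
defects = {
    "cracked render": (7, 4, 3),
    "leaking joint":  (8, 3, 6),
}
# Rank defects by RPN, highest risk first
ranked = sorted(defects, key=lambda d: rpn(*defects[d]), reverse=True)
```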
    Short-Term Wind Power Forecasting using Wavelet-based Hybrid Recurrent Dynamic Neural Networks
    Pavan Kumar Singh, Nitin Singh, and Richa Negi
    2019, 15(7): 1772-1782.  doi:10.23940/ijpe.19.07.p3.17721782
    Abstract    PDF (918KB)   
    References | Related Articles
    In the recent past, the integration of wind energy generation into smart grids has gained a lot of momentum because of its availability. The major hurdle in integrating wind power into smart electric grids at present is the irregularity and unpredictability of wind power. To deal with these challenges, a superior forecasting tool plays an important role in the planning and execution of wind energy integration. In the expanding power system, because of increasing wind power penetration, a precise wind power forecasting technique is greatly needed to help system operators consider wind power production in economic scheduling, unit commitment, and reserve allocation. In this paper, two hybrid recurrent dynamic neural networks combined with the wavelet transform (WT) are employed for short-term prediction of wind power. The proposed approach consists of wavelet decomposition of the wind power and wind speed time series, with NAR and NARX recurrent dynamic neural networks employed to regress upon each decomposed sub-series. Thereafter, the individual outputs of the sub-series are aggregated to achieve the final prediction of wind power, with up to a 24-hour forecast horizon. The performance of the proposed method is reported in terms of MAE, MSE, and MAPE values and compared to the results of the persistence method. The forecast results reveal that the WT-NARX model is better in terms of the selected performance criteria than the WT-NAR and persistence models.
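The MAE, MSE, and MAPE criteria used to score the forecasts have standard definitions; a small sketch with made-up actual/forecast values:

```python
# Standard forecast-error metrics: mean absolute error, mean squared error,
# and mean absolute percentage error (in percent).

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mse(y, yhat):
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    # Assumes no actual value is zero.
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

# Illustrative wind power values (e.g., MW) and their forecasts
actual   = [100.0, 120.0, 80.0]
forecast = [110.0, 115.0, 85.0]
```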
    Remote Sensing Image Super-Resolution Reconstruction based on Generative Adversarial Network
    Aili Wang, Ying Wang, Xiaoying Song, and Yuji Iwahori
    2019, 15(7): 1783-1791.  doi:10.23940/ijpe.19.07.p4.17831791
    Abstract    PDF (865KB)   
    References | Related Articles
    The super-resolution reconstruction algorithm based on the generative adversarial network (GAN) can generate realistic texture in the super-resolution process of a single remote sensing image. In order to further improve the visual quality of the reconstructed image, this paper improves the generator network, discriminator network, and perceptual loss of the GAN. Firstly, the batch normalization layers are removed and dense connections are used in the residual blocks, which effectively improves the performance of the generator network. Then, we use a relativistic discriminator network to learn more detailed texture. Finally, the perceptual loss is computed on features before the activation function to maintain the consistency of brightness. In addition, transfer learning is used to solve the problem of insufficient remote sensing data. The experimental results show that the proposed algorithm is superior for the super-resolution reconstruction of remote sensing images and can obtain better subjective visual effects.
    Pedestrian Detection based on Faster R-CNN
    Shuang Liu, Xing Cui, Jiayi Li, Hui Yang, and Niko Lukač
    2019, 15(7): 1792-1801.  doi:10.23940/ijpe.19.07.p5.17921801
    Abstract    PDF (1173KB)   
    References | Related Articles
    Pedestrian detection has a wide range of applications, such as intelligent assisted driving, intelligent monitoring, pedestrian analysis, and intelligent robotics, and it has therefore been a focus of research on target detection. In this paper, the Faster R-CNN target detection model is combined with the convolutional neural networks VGG16 and ResNet101 respectively, and the deep convolutional neural network is used to extract image features. By adjusting the structure and parameters of Faster R-CNN's RPN, the multi-scale problem in pedestrian detection is solved to some extent. The experimental results compare the detection ability of the two schemes on the INRIA pedestrian dataset. The resulting model is then migrated and validated on the Pascal VOC2007 dataset.
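Detections on such benchmarks are typically matched to ground truth by intersection-over-union between bounding boxes; a minimal sketch (the boxes are illustrative):

```python
# Intersection-over-union between two axis-aligned boxes (x1, y1, x2, y2),
# the standard overlap criterion for scoring detections against ground truth.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Two 10x10 boxes overlapping on half their width
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```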
    Target Tracking Algorithm based on Context-Aware Deep Feature Compression
    Ying Wang, Aili Wang, Ronghui Wang, Haiyang Liu, and Yuji Iwahori
    2019, 15(7): 1802-1812.  doi:10.23940/ijpe.19.07.p6.18021812
    Abstract    PDF (988KB)   
    References | Related Articles
    The main focus of target tracking is robustness and efficiency. Because of challenges such as background clutter, occlusion, and rotation, high robustness and efficiency are hard to achieve simultaneously. A context-aware correlation filter tracking framework is improved to achieve high computational speed among real-time trackers. The main contribution to the high computing speed comes from improved deep feature compression, which is realized by combining context-aware features with multiple autoencoders. In the pre-training stage, an autoencoder is trained for each class separately. In order to obtain feature maps suitable for target tracking, an orthogonality loss function is added in both the pre-training stage and the autoencoder fine-tuning stage. Experiments show that the improved algorithm demonstrates great improvement in accuracy and speed.
    Bit Allocation Algorithm based on SSIM for 3D Video Coding
    Tao Yan, In-Ho Ra, Hui Wen, Hang Xu, and Linyun Huang
    2019, 15(7): 1813-1821.  doi:10.23940/ijpe.19.07.p7.18131821
    Abstract    PDF (512KB)   
    References | Related Articles
    The 3D video system has broad application prospects and has become a new research hotspot in the video field. However, there are still many problems in multi-view video coding rate control in three-dimensional video systems. Therefore, this paper proposes a rate allocation algorithm based on the structural similarity index measure (SSIM) for 3D video coding. We first analyze the correspondence between the inter-view bit-rate weights and the correlation between viewpoints, and then we establish a bit allocation calculation model for the main view and non-main views of multi-view video. Finally, bit allocation and rate control are performed at the view layer, frame layer, and macroblock layer respectively. The experimental results show that, compared with the existing fixed-ratio view-layer bit allocation, the proposed method can effectively control the bit rate of multi-view video coding while maintaining coding quality under limited bandwidth.
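SSIM itself combines luminance, contrast, and structure comparisons. A single-window sketch of the standard formula (a simplification of the sliding-window form used in practice; the inputs are illustrative flattened blocks):

```python
# Single-window SSIM between two equal-length sample blocks.
# C1 and C2 are the usual stabilizing constants (K1=0.01, K2=0.03),
# and L is the dynamic range of the samples (255 for 8-bit pixels).

def ssim(x, y, L=255.0):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)      # variance of x
    vy = sum((b - my) ** 2 for b in y) / (n - 1)      # variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A rate allocator can then weight each view or macroblock by how much its SSIM degrades per bit saved.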
    Parallel Topology Analysis Method of Coal Mine High Voltage Power Grids based on Genetic Algorithm
    Xinliang Wang, Boqi Zhang, Mengmeng Fu, Zhihuai Liu, and Wei Fang
    2019, 15(7): 1822-1828.  doi:10.23940/ijpe.19.07.p8.18221828
    Abstract    PDF (434KB)   
    References | Related Articles
    The existing topology analysis method based on the correlation matrix for coal mine high voltage power grids suffers from high time complexity and low computational efficiency. By introducing the first-come, first-served parallel scheduling algorithm into this topology analysis method, the computational efficiency can be improved to a certain extent. Based on this, this paper further proposes an adaptive topology analysis algorithm for coal mine high voltage power grids based on the successive comparison method and a genetic algorithm, which further improves the parallel scheduling efficiency of topology analysis. The simulation results show that the genetic-algorithm-based adaptive topology analysis algorithm improves computational efficiency and reduces time overhead compared with the other algorithms.
    Gaussian Perturbation Whale Optimization Algorithm based on Nonlinear Strategy
    Yu Li, Xiaoting Li, Jingsen Liu, and Xuechen Tu
    2019, 15(7): 1829-1838.  doi:10.23940/ijpe.19.07.p9.18291838
    Abstract    PDF (617KB)   
    References | Related Articles
    Whale Optimization Algorithm (WOA) is a recently developed swarm intelligence optimization algorithm with strong global search capability. In this work, considering the deficiency of WOA in its local search mechanism and convergence speed, a Gaussian Perturbation Whale Optimization Algorithm based on a Nonlinear Strategy (GWOAN) is introduced. By implementing a nonlinear change strategy on the parameters, the swarm is able to enter the local search process faster, which improves the local exploitation ability of the algorithm. In a later stage, Gaussian perturbation is performed on the current optimal individuals to enrich population diversity, avoid premature convergence, and improve the global exploration capability of the algorithm. Comparison experiments between the GWOAN, WOA, and PSO algorithms show that the accuracy of GWOAN on the ten selected function optimization problems is significantly higher than that of the comparison algorithms, and its optimization efficiency is also better. Among the ten benchmark functions, four converge to the theoretical optimal value.
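The two ideas named above, a nonlinear decay of the control parameter and a Gaussian perturbation of the best individual, can be sketched in a heavily simplified WOA-style loop on the sphere function (an illustrative stand-in, not the paper's GWOAN; the function name, constants, and perturbation scale are all assumptions):

```python
import random

# Simplified WOA-style search with (a) a nonlinear decay of the control
# parameter a, and (b) a Gaussian perturbation of the current best solution.

def sphere(x):
    return sum(v * v for v in x)

def gwoan_sketch(dim=5, pop=20, iters=100, seed=1):
    rng = random.Random(seed)
    swarm = [[rng.uniform(-10.0, 10.0) for _ in range(dim)] for _ in range(pop)]
    best = min(swarm, key=sphere)[:]
    for t in range(iters):
        a = 2.0 * (1.0 - (t / iters) ** 2)          # nonlinear (not linear) decay
        for w in swarm:
            for d in range(dim):
                r = rng.random()
                A = 2.0 * a * r - a
                # Encircling-prey update toward the current best
                w[d] = best[d] - A * abs(2.0 * r * best[d] - w[d])
        cand = min(swarm, key=sphere)
        if sphere(cand) < sphere(best):
            best = cand[:]
        # Gaussian perturbation of the best individual (accepted if better)
        gauss = [b + rng.gauss(0.0, 0.1) for b in best]
        if sphere(gauss) < sphere(best):
            best = gauss
    return best

best = gwoan_sketch()
```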
    Link Adaptive Optimization Method based on Minimizing Packet Loss Rate in AOS Communication System
    Qingli Liu, Yanjun Yang, and Zhiguo Liu
    2019, 15(7): 1839-1848.  doi:10.23940/ijpe.19.07.p10.18391848
    Abstract    PDF (844KB)   
    References | Related Articles
    To address the packet loss of finite-length queues caused by highly bursty data, which ultimately decreases throughput in the AOS space communication system, a link adaptive optimization method based on minimizing the system packet loss rate is proposed. It combines the finite-length queue, limited retransmission at the data link layer, and adaptive modulation and coding at the physical layer. The system packet loss rate is used as the objective function; by solving for the minimum system packet loss rate, the retransmission count and modulation and coding scheme are assigned reasonably, finally improving the average system throughput. Theoretical analysis and simulation results show that this method can reduce the system packet loss rate by 80% and increase the average system throughput by 4.2% compared with the AMCA method, and reduce the system packet loss rate by 90% and increase the average system throughput by 11.1% compared with the AMCAFQS method.
    Heuristic for Hot-Rolled Batch Scheduling of Seamless Steel Tubes with Machine Maintenance and Tardiness
    Yang Wang, Tieke Li, and Bailin Wang
    2019, 15(7): 1849-1859.  doi:10.23940/ijpe.19.07.p11.18491859
    Abstract    PDF (745KB)   
    References | Related Articles
    Machine maintenance is an indispensable management activity for companies to maintain stability and safety in the production process. In this paper, the batch scheduling of hot-rolled steel tubes with maintenance and tardiness is considered and abstracted into a single machine scheduling problem with maintenance and tardiness. Combined with the constraint of sequence-dependent setup times, a multi-objective integer programming model is established to minimize the total idle time, total setup time, and total tardiness, and a two-stage local reordering heuristic based on an optimization strategy is designed. Finally, comparative experiments are carried out on actual production data, and the results show that the model and algorithm help alleviate this kind of problem.
    Reliability Analysis of Ring Mold Granulator based on Minimum Maintenance Model
    Risu Na, Xin Li, Jie Liu, and Yuan Liu
    2019, 15(7): 1860-1867.  doi:10.23940/ijpe.19.07.p12.18601867
    Abstract    PDF (405KB)   
    References | Related Articles
    Granulation molding equipment is the most critical link in the granule feed production line. The ring mold granulator is one of the main pieces of feed machinery, but its short service life, frequent failures, and high maintenance costs seriously restrict the development of granulators in China. Based on the minimum maintenance model, this paper proposes a timed replacement strategy (CIRP) analysis, obtaining the Weibull probability plot of the time between failures as well as the failure distribution mode, failure frequency, and reliability of the sub-systems of the ring mold granulator. By analyzing the failure statistics of the ring mold granulator, it is concluded that the granulating system and transmission system are the main sources of failures. Improvement measures are proposed for the weak links of the ring mold granulator, providing an important reference for improving its life and reducing failures.
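The Weibull analysis mentioned above rests on the two-parameter reliability function; a minimal sketch (the shape and scale values are illustrative, not fitted from the granulator data):

```python
import math

# Two-parameter Weibull reliability: R(t) = exp(-(t/eta)**beta).
# MTBF = eta * Gamma(1 + 1/beta); beta = 1 reduces to the exponential case.

def weibull_reliability(t, beta, eta):
    return math.exp(-((t / eta) ** beta))

def weibull_mtbf(beta, eta):
    return eta * math.gamma(1.0 + 1.0 / beta)

# Illustrative parameters: wear-out behavior (beta > 1), scale of 400 hours
R = weibull_reliability(100.0, 1.5, 400.0)   # reliability at 100 hours
mtbf = weibull_mtbf(1.0, 400.0)              # exponential case: MTBF = eta
```

A timed replacement interval is then typically chosen so that R(t) stays above a target level over the interval.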
    Comparison of Fatigue Reliability Life of Telescopic Rod of an Eccentric Telescopic Rod Conveyor with and without Strength Degradation
    Yingsheng Mou, Zhiping Zhai, Xiaoyun Kang, Zhuwei Li, and Yuezheng Lan
    2019, 15(7): 1868-1877.  doi:10.23940/ijpe.19.07.p13.18681877
    Abstract    PDF (706KB)   
    References | Related Articles
    Eccentric telescopic rod conveyors are used to convey straw from chain conveyors to the feeding and compression mechanism of the 4FZ-2000A type self-propelled straw harvesting baler. As the main working component, the telescopic rod endures high dynamic loads while conveying the straw, which makes it prone to fatigue fracture. Therefore, it is necessary to find a feasible model that can accurately estimate the fatigue reliability life of the telescopic rod. In order to ensure the safe operation of the eccentric telescopic rod conveyor, a mechanical model of the main bearing parts of the telescopic rod is established, and virtual prototype technology is used to obtain the working load spectrum borne by the telescopic rod. Static analysis of the telescopic rod using the finite element method shows that the telescopic rod experiences multi-axial stress fatigue, and the critical section is determined. The S-N curve equation of the structure modified by surface quality and stress gradient, the critical plane approach, Miner's linear cumulative fatigue damage model, and a Gaussian normal distribution model of fatigue life are used to estimate the fatigue life of the telescopic rod. In order to predict the fatigue life of the telescopic rod accurately, fatigue reliability lives are calculated with and without consideration of strength degradation. The results show that: (1) the fatigue life prediction without considering strength degradation differs greatly from practical experience, while the results obtained after considering strength degradation are more conservative, more consistent with actual fatigue lives, and more accurate; (2) when strength degradation is considered, the modified Gerber correction method is more accurate than the Goodman correction method. The results of this study can provide a reference for the fatigue reliability analysis and optimization of eccentric telescopic rod conveyors.
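Miner's linear damage rule, one ingredient of the life estimate above, accumulates damage as the ratio of applied cycles to cycles-to-failure per load block; a sketch with a hypothetical load spectrum:

```python
# Miner's linear cumulative damage: D = sum(n_i / N_i) over load blocks;
# failure is predicted when D reaches 1.

def miner_damage(blocks):
    """blocks: list of (applied_cycles, cycles_to_failure) pairs."""
    return sum(n / N for n, N in blocks)

# Hypothetical load spectrum for one duty cycle:
# (cycles applied at this stress level, cycles to failure from the S-N curve)
spectrum = [(1.0e4, 2.0e6), (5.0e3, 5.0e5), (1.0e3, 8.0e4)]
D = miner_damage(spectrum)      # damage accumulated per duty cycle
life_cycles = 1.0 / D           # duty cycles until predicted failure
```

Strength degradation enters such an estimate by lowering the N_i values (the S-N curve) as damage accumulates, which is why the degraded prediction above comes out more conservative.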
    Spare Parts Forecast Analysis based on Important Calculation of Element Fault Tree
    Xiaoyan Wang, Hongkai Wang, Jinghui Zhang, and Chun Zhang
    2019, 15(7): 1878-1885.  doi:10.23940/ijpe.19.07.p14.18781885
    Abstract    PDF (508KB)   
    References | Related Articles
    Spare parts are an important material basis for the use and maintenance of machine tools, and they are an important factor affecting equipment life cycle costs. In this paper, the element movements and the element action failure modes of equipment operation are identified for each machine function unit. The cause of failure of the unit's base element is determined using the element action fault tree. The importance degree of each machine unit is then calculated and analyzed for the rational distribution of spare parts, thus reducing maintenance costs and ensuring the normal operation of the system. It is shown that the method based on the importance calculated from the element action fault tree plays a guiding role in spare parts analysis.
    Analysis of Meshing Performance and Fatigue Reliability of Main Reducer Transmission Device for Rail Conveyor
    Wenzhi Liu, Tianxiang Wang, and Jian Tao
    2019, 15(7): 1886-1894.  doi:10.23940/ijpe.19.07.p15.18861894
    Abstract    PDF (772KB)   
    References | Related Articles
    For the reduction gear transmission of a rail conveyor, the two-stage main gear transmission system is taken as an example to establish a contact collision dynamics model of the meshing gear teeth. The contact forces at each gear speed and on the tooth flanks of gears with different helix angles over one motion cycle are calculated and analyzed. Based on the dynamics results, a finite element model of frictional contact in the gear transmission is established. The Lagrangian multiplier method is used to calculate and analyze the meshing performance of the teeth under different helix angle contact conditions during one motion cycle. In order to avoid contact fatigue of the gear teeth, the critical position for tooth surface contact fatigue is obtained by finite element calculation. Based on Miner's linear cumulative damage theory, the contact fatigue damage degree of the tooth surface is obtained under different helix angle contact conditions.
    Fault Diagnosis of Wind Turbine Blades based on Wavelet Theory and Neural Network
    Junxi Bi, Chenglong Zheng, Hongzhong Huang, Xiaojuan Song, and Jinfeng Li
    2019, 15(7): 1895-1904.  doi:10.23940/ijpe.19.07.p16.18951904
    Abstract    PDF (456KB)   
    References | Related Articles
    With the development of the wind turbine industry, the reliability requirements of wind turbine blades are continuously increasing. In this paper, static load fatigue experiments are carried out on wind turbine blades, and the collected fault data of the blades are extracted using the wavelet transform method. Wavelet theory is applied to remove noise from the data and eliminate its interference with the fault diagnosis of wind turbine blades. Then, the wavelet decomposition method is used to separate high frequency and low frequency signals. The faulty low frequency signals are extracted and analyzed in the time domain, and a fault diagnosis method for wind turbine blades is established. Data at different vibration frequencies of the blades are collected by the acquisition system and imported into a neural network, which processes the data and identifies the states of the wind turbine blades. The results demonstrate that the wavelet transform method has reliable fault diagnosis ability in time domain analysis.
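The wavelet denoising step can be sketched with a one-level Haar decomposition and soft thresholding of the detail (high frequency) coefficients. This is a minimal stand-in for the decomposition actually used; the signal values and threshold are illustrative:

```python
# One-level Haar wavelet transform with soft-threshold denoising of the
# detail coefficients. Assumes an even-length signal.

def haar_level1(x):
    a = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return a, d  # approximation (low freq.) and detail (high freq.)

def inverse_haar_level1(a, d):
    out = []
    for ai, di in zip(a, d):
        out.append((ai + di) / 2 ** 0.5)
        out.append((ai - di) / 2 ** 0.5)
    return out

def soft_threshold(coeffs, t):
    # Shrink each coefficient toward zero by t (noise suppression).
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

# Illustrative noisy measurement with a step change
signal = [4.0, 4.2, 3.9, 4.1, 8.0, 8.1, 7.9, 8.2]
a, d = haar_level1(signal)
denoised = inverse_haar_level1(a, soft_threshold(d, 0.2))
```

With the threshold set to zero the transform reconstructs the signal exactly, which is a useful sanity check before tuning the threshold.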
    7A52 Aluminum Alloy MIG Welding Residual Stress Reliable Measurement based on Hole-Drilling Method
    Shiming Gan, Yongquan Han, and Xiaoyan Bao
    2019, 15(7): 1905-1911.  doi:10.23940/ijpe.19.07.p17.19051911
    Abstract    PDF (595KB)   
    References | Related Articles
    To analyze the welding residual stress distributions of aluminum alloy medium and thick plates after MIG welding, a residual stress testing system based on the hole-drilling method was designed using a virtual instrument and an NI data acquisition card. To improve the accuracy and reliability of the measurement results, the elasticity modulus error, strain gauge bonding error, and strain reading time error were analyzed in detail. The elasticity modulus error could be corrected by the curve fit to data measured in different MIG welding joint areas. The final measurement error caused by the strain gauge bonding error was reduced to 0 within 24 hours after the strain gauge was bonded. The final measurement error caused by the strain reading time error was reduced to 0 within 150 minutes after the residual stress measurement began. The MIG welding residual stress measurement experiment was carried out on 10 mm thick 7A52 aluminum alloy plates. The results showed that the distributions of residual stresses on the two sides of the weld seam were basically symmetrical about the weld center. The maximum tensile stress appeared in the fusion zone, and the maximum transverse and longitudinal residual stresses were 96 MPa and 185 MPa, respectively. The residual stresses from the fusion zone to the heat affected zone were all tensile and higher than those in the center of the welding seam. Smaller compressive stresses appeared in the base metal.
    Fault Diagnosis Technology of Plunger Pump based on EMMD-Teager
    Shijie Deng, Liwei Tang, Xujun Su, and Jinli Che
    2019, 15(7): 1912-1919.  doi:10.23940/ijpe.19.07.p18.19121919
    Abstract    PDF (758KB)   
    References | Related Articles
    Based on an analysis of common failure modes of plunger pumps, a fault diagnosis method based on EMMD decomposition and Teager energy operator demodulation is proposed to address the weak characteristic signals in the early failure of plunger pumps. Firstly, extremum field mean mode decomposition (EMMD) is used to obtain the finite IMF components and the residual C. Then, each IMF component is demodulated by the Teager energy operator, and characteristic peaks appear in the spectrum. The energy information at the characteristic frequency points is extracted, expressed as proportions, to form the feature vectors. The elements in the vectors are screened by classification sensitivity, and the effective feature vectors are finally obtained. The experimental results show that the EMMD-Teager method can filter the signal effectively and extract features conveniently from the frequency domain. The selected feature vectors can accurately classify the three states of a normal plunger pump, plunger hole wear, and slipper wear.
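The Teager energy operator used in the demodulation step is a simple three-sample formula; for a pure sinusoid it returns a constant proportional to the squared amplitude-frequency product. A minimal sketch on a synthetic signal:

```python
import math

# Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
# For x[n] = A*cos(w*n), psi is the constant A^2 * sin(w)^2.

def teager(x):
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# Synthetic sinusoid standing in for a demodulated IMF component
w = 0.3
sig = [math.cos(w * n) for n in range(200)]
energy = teager(sig)
```

Because the operator tracks instantaneous amplitude and frequency jointly, a sudden change in either (as in an early fault impact) shows up directly in the energy track.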
    Fastening Function Reliability Analysis of Aircraft Lock Mechanism based on Competitive Failure Method
    Yugang Zhang, Jingyi Liu, and Tianxiang Yu
    2019, 15(7): 1920-1928.  doi:10.23940/ijpe.19.07.p19.19201928
    Abstract    PDF (592KB)   
    References | Related Articles
    The functional principle and failure modes of a landing gear cabin door lock mechanism are researched in this paper. The fastening process is important for achieving the mission of the mechanism. There are two potential risks in the fastening process that may impact the stealth performance of an aircraft: accidental opening errors and lock hook position errors. These two risks compete with each other, so competing failure models are established for the fastening process of the lock mechanism. The extreme value model is used to describe accidental opening failures, while Brownian motion (BM) with non-linear drift and the Poisson process are adopted to model lock hook position error failures. The reliability of the lock mechanism is calculated at different working times. The results and conclusions provide helpful insight into the changes and degradation of the fastening process of the lock mechanism.
    Chicken Swarm Optimization in Task Scheduling in Cloud Computing
    Liru Han
    2019, 15(7): 1929-1938.  doi:10.23940/ijpe.19.07.p20.19291938
    Abstract    PDF (372KB)   
    References | Related Articles
    In order to solve the problem of low efficiency in resource scheduling in cloud computing, an improved chicken swarm optimization (CSO) is proposed for task scheduling. Firstly, the concept of opposition-based learning is introduced to initialize the chicken population and improve the global search ability. Secondly, the weight value and learning factor from particle swarm optimization (PSO) are introduced to improve and optimize the individual positions of the chickens. Thirdly, the overall individual positions of the CSO are optimized by a differential algorithm. Finally, boundary processing prevents individual positions from crossing the search bounds. In the simulation experiment, the optimized CSO is compared with the basic CSO, PSO, and ant colony optimization (ACO) in terms of completion time, cost, energy consumption, and load balancing, and good results are achieved.
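Opposition-based initialization, the first improvement listed above, evaluates each random candidate together with its opposite point and keeps the fitter of the two; a sketch (the fitness function, bounds, and sizes are illustrative, not the paper's scheduling model):

```python
import random

# Opposition-based learning (OBL) initialization: for each random candidate x
# drawn from [lb, ub], also evaluate its opposite lb + ub - x (per dimension)
# and keep whichever has the better (lower) fitness.

def obl_init(pop_size, dim, lb, ub, fitness, seed=0):
    rng = random.Random(seed)
    population = []
    for _ in range(pop_size):
        x = [rng.uniform(lb, ub) for _ in range(dim)]
        opposite = [lb + ub - v for v in x]
        population.append(min(x, opposite, key=fitness))
    return population

# Illustrative fitness: shifted sphere, minimized at (2, 2, 2)
shifted = lambda v: sum((c - 2.0) ** 2 for c in v)
pop = obl_init(10, 3, -5.0, 5.0, shifted)
```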
    Combining Stochastic Grammar and Semi-Supervised Learning Techniques to Extract RNA Structures with Pseudoknots
    Sixin Tang
    2019, 15(7): 1939-1946.  doi:10.23940/ijpe.19.07.p21.19391946
    Abstract    PDF (553KB)   
    References | Related Articles
    To predict RNA structures with pseudoknots, traditional stochastic grammar models must collect several related labeled RNA sequences, which limits the practical application of this method. In order to effectively use a large number of unlabeled RNA sequences for structure prediction, a combination of stochastic grammar and semi-supervised learning techniques is proposed. In these techniques, we used a small amount of labeled RNA sequences and a large number of unlabeled sequences as the training set of the prediction model. Designing a semi-supervised learning model based on the SCFG inside/outside algorithm and using a generative SCFG model as the classifier, we labeled the unlabeled RNA sequences through training and then gradually merged them into the labeled data set. This model can regulate the proportion of labeled and unlabeled sequences and finally outputs the structure tag sequence. Experimental results showed that this method can effectively utilize unlabeled sequence data, greatly reduce the number of related sequence samples required, and improve prediction accuracy. In addition, we measured how the model's prediction performance is influenced by different amounts of unlabeled sequences.
    Efficiently Retrieving Differences Between Remote Sets using Counting Bloom Filter
    Xiaomei Tian, Huihuang Zhao, Yaqi Sun, and Xiaoman Liang
    2019, 15(7): 1947-1954.  doi:10.23940/ijpe.19.07.p22.19471954
    Abstract    PDF (415KB)   
    References | Related Articles
    Retrieving the differences between remote sets is widely used in set reconciliation and data deduplication, which are in turn common in various network applications. The basic setting of the difference retrieving problem is that each member of a node pair holds an object set and seeks to find all differences between the two remote sets. There are many methods for retrieving difference sets, based on the standard Bloom filter (SBF), counting Bloom filter (CBF), or invertible Bloom filter (IBF): each node represents its objects using such a filter, which is then exchanged, and the receiving node retrieves the differing objects between the two sets from the received SBF, CBF, or IBF. We propose a new algorithm that finds the differences between remote sets using the counting Bloom filter's deletion operation. The theoretical analyses and experimental results show that the differences can be retrieved efficiently. Only a very small number of differences are missed in the retrieving process, and this false negative rate can be decreased to 0% by adjusting the counting Bloom filter's parameters.
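A counting Bloom filter extends the standard filter with per-cell counters so that deletion becomes possible. The sketch below shows the basic exchange: node B checks its items against the filter received from node A and reports the absent ones as candidate differences. This uses plain membership queries to illustrate the data structure; it is not the paper's exact deletion-based scheme, and the filter sizes and item names are illustrative:

```python
import hashlib

# Counting Bloom filter with insert/delete/query. Deletion (decrementing
# counters) is the operation the paper's scheme exploits.
class CountingBloomFilter:
    def __init__(self, m=256, k=4):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _indexes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def insert(self, item):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def delete(self, item):
        for idx in self._indexes(item):
            if self.counters[idx] > 0:
                self.counters[idx] -= 1

    def query(self, item):
        return all(self.counters[idx] > 0 for idx in self._indexes(item))

# Node A builds a CBF over its set and sends it to node B.
set_a = {"x1", "x2", "x3", "x4"}
set_b = {"x3", "x4", "x5", "x6"}
cbf = CountingBloomFilter()
for item in set_a:
    cbf.insert(item)
# B's items absent from A's filter are candidates for B \ A.
diff_b_minus_a = {item for item in set_b if not cbf.query(item)}
```

Bloom filters never produce false negatives on membership, so shared items are always recognized; false positives are what cause the small number of missed differences noted above.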
    Cloud Computing Resource Load Forecasting based on Bat Algorithm Optimized SVM
    Yuxia Li
    2019, 15(7): 1955-1964.  doi:10.23940/ijpe.19.07.p23.19551964
    Abstract    PDF (487KB)   
    References | Related Articles
    For the problem of resource load forecasting in cloud computing, an optimized bat algorithm is combined with an SVM for forecasting. Firstly, the bat algorithm adopts a reverse learning strategy for population initialization; secondly, the weighting factor from particle swarm optimization is used for individual optimization; finally, individuals are selected using the Gaussian mutation method. Two important parameters of the SVM are optimized using the improved algorithm. In the simulation experiment, the proposed method is compared with SVMs optimized by particle swarm optimization and by the genetic algorithm, and a better forecasting effect is obtained.
    Task Scheduling of an Improved Cuckoo Search Algorithm in Cloud Computing
    Wenli Liu, Cuiping Shi, Hongbo Yu, and Hanxiong Fang
    2019, 15(7): 1965-1975.  doi:10.23940/ijpe.19.07.p24.19651975
    Abstract    PDF (410KB)   
    References | Related Articles
    In view of the low efficiency of task scheduling in cloud computing, this paper introduces the cuckoo search algorithm to optimize task scheduling. Firstly, the cloud computing task scheduling model is established. Secondly, the particle swarm algorithm and a quantum algorithm are introduced to address the cuckoo algorithm's limited search ability and low optimization precision. The cuckoo is fixed as a "particle" with a search direction in three-dimensional space, so that it does not drift randomly. Through the binary algorithm, with step sizes randomly generated by Levy flights, the particle moves faster toward the direction of the optimal solution, which speeds up the convergence of the algorithm and avoids blindness in the search process. Simulations on four classical benchmark functions show that the improved algorithm has better performance and improves the efficiency of task scheduling in cloud computing.
    Data Analysis of Hybrid Principal Component for Rural Land Circulation Management based on Gray Relation Algorithmic Models
    Zhongbo Wang and Zhilin Suo
    2019, 15(7): 1976-1987.  doi:10.23940/ijpe.19.07.p25.19761987
    Abstract    PDF (602KB)   
    References | Related Articles
    Data analysis is a common and essential process for determining the main driving factors among the hybrid-information principal components of rural land circulation management. To address the hybrid information involved in rural land circulation in China, this paper identifies the main driving factors using gray relation algorithmic models. Five types of gray relation algorithmic models are adopted for the hybrid-information principal component analysis of rural land circulation: Deng's gray relation model, the gray absolute relation model, the T-type gray relation model, the improved gray relation model, and the gray slope relation model. On the collected data, a comparison of the analysis results illustrates that different gray relation algorithms may change the ranking of the driving factors by importance. The most critical driving factors are the rate of non-agricultural income, the ratio of signed contracts, and the ratio of peasants spontaneously taking part in rural land circulation, which are the three main driving factors in Chinese rural land circulation management.
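Deng's gray relational model, the first of the five listed, can be sketched as follows; the normalization choice and sample data are illustrative assumptions, not the paper's:

```python
def deng_grey_relation(reference, factors, rho=0.5):
    # Deng's grey relational grade of each factor series against a reference
    # series; rho is the distinguishing coefficient (commonly 0.5).
    def norm(s):  # initial-value normalization (divide by the first entry)
        return [v / s[0] for v in s]
    ref = norm(reference)
    diffs_all = [[abs(r - v) for r, v in zip(ref, norm(f))] for f in factors]
    dmin = min(min(d) for d in diffs_all)
    dmax = max(max(d) for d in diffs_all)
    grades = []
    for diffs in diffs_all:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in diffs]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Rank two candidate driving-factor series against a reference indicator;
# a higher grade means a closer relation to the reference.
grades = deng_grey_relation([1, 2, 3, 4], [[1, 2, 3, 4], [2, 3, 1, 8]])
```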
    Normalization of Notation NCF for Improving Fault Localization
    Zhao Li, Yi Song, Siwei Zhou, Dongcheng Li, and Peng Chen
    2019, 15(7): 1988-1997.  doi:10.23940/ijpe.19.07.p26.19881997
    Abstract    PDF (632KB)   
    References | Related Articles
    Given the importance and high cost of effective software fault localization, improving its effectiveness has become an important and persistent issue in software engineering. Featuring simple operation and wide adoption, spectrum-based fault localization obtains program spectrum information by executing test cases on the program and then calculates the suspiciousness of each statement, giving programmers a basis for debugging. This paper proposes CFNorm, a new fault localization parameter obtained by processing the column data of the spectrum information matrix. CFNorm emphasizes and amplifies the role of NCF (the number of times a statement is executed by failed test cases) to optimize traditional fault localization techniques. Three fault localization techniques were used in an experiment involving 111 versions of the Siemens Suite. The results show that the effectiveness of fault localization improved significantly as the weight of CFNorm increased over a certain range.
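For context, a minimal spectrum-based suspiciousness computation (here the standard Ochiai formula, not the paper's CFNorm variant) showing where NCF enters:

```python
import math

def ochiai_suspiciousness(coverage, outcomes):
    # Spectrum-based fault localization: coverage[t][s] is 1 when test t
    # executed statement s; outcomes[t] is True when test t failed.
    # ncf is the per-statement "executed by failed tests" count that the
    # paper's CFNorm parameter re-weights.
    total_failed = sum(outcomes)
    scores = []
    for s in range(len(coverage[0])):
        ncf = sum(1 for t, fail in enumerate(outcomes) if fail and coverage[t][s])
        ncs = sum(1 for t, fail in enumerate(outcomes) if not fail and coverage[t][s])
        denom = math.sqrt(total_failed * (ncf + ncs))
        scores.append(ncf / denom if denom else 0.0)
    return scores

coverage = [[1, 1, 0],   # passing test
            [0, 1, 1],   # failing test
            [0, 0, 1]]   # failing test
scores = ochiai_suspiciousness(coverage, [False, True, True])
```

Statements ranked by score are presented to the programmer; here the third statement, executed only by failing tests, ranks highest.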
    Test Set Augmentation Technique for Deep Learning Image Classifiers
    Qiang Chen, Zhanwei Hui, and Jialuo Liu
    2019, 15(7): 1998-2007.  doi:10.23940/ijpe.19.07.p27.19982007
    Abstract    PDF (1482KB)   
    References | Related Articles
    Widely applied in various fields, deep learning (DL) is becoming a key driving force in industry. Although it has achieved great success in artificial intelligence tasks, like traditional software it contains defects, whose failures can cause unpredictable accidents and losses. To ensure the quality of DL software, adequate testing needs to be carried out. In this paper, we propose a test set augmentation technique based on an adversarial example generation algorithm for image classification deep neural networks (DNNs). It can generate a large number of useful test cases, especially when existing test cases are insufficient. We briefly introduce the adversarial example generation algorithm and implement the framework of our method. We conduct experiments on classic DNN models and datasets and further evaluate the augmented test set using a coverage metric based on the internal states of the DNN.
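One widely used adversarial example generator is the Fast Gradient Sign Method (FGSM); the sketch below applies it to a logistic-regression stand-in rather than a real DNN, with illustrative data:

```python
import numpy as np

def fgsm_augment(x, y, w, b, eps=0.1):
    # Fast Gradient Sign Method on a logistic-regression stand-in for a
    # DNN: perturb the input by eps times the sign of the loss gradient.
    # For a real image classifier the gradient comes from backpropagation.
    z = float(np.dot(w, x) + b)
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # d(cross-entropy)/d(input)
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

x = np.array([0.2, 0.8, 0.5])      # a "pixel" vector in [0, 1]
w = np.array([1.0, -2.0, 0.5])
adv = fgsm_augment(x, y=1.0, w=w, b=0.0, eps=0.1)
```

Each original input thus yields a perturbed twin within an eps-ball, which is added to the test set when test cases are scarce.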
    Equivalent Version Sets Testing Method for Android Applications based on Code Analysis
    Song Huang, Sen Yang, Yongming Yao, and Lele Chen
    2019, 15(7): 2008-2018.  doi:10.23940/ijpe.19.07.p28.20082018
    Abstract    PDF (551KB)   
    References | Related Articles
    The Android system is an open source mobile operating system that has been released in numerous versions, and Android fragmentation is becoming more and more serious. This paper shows how different Android runtime environments affect test coverage results. To address this problem, we run apps on all Android versions to collect coverage rates and present an algorithm that generates an equivalent runtime-environment set for exercising mobile apps. Our approach systematically tests the targeted code of Android apps based on code analysis: it analyzes the decompiled code to identify code related to the Android SDK version and then generates the corresponding test cases. An empirical study of the practical usefulness of the technique is presented for six widely used industrial apps. The results show that the equivalent runtime-environment set requires fewer than half of all versions, dramatically reducing test resources, while the method coverage of these applications increased by an average of 49.3% on all versions and 46.8% on the equivalent runtime-environment set.
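A toy version of the version-equivalence idea: scan (decompiled) source for SDK_INT comparisons and split the API-level range at those thresholds, since levels between two consecutive checks exercise the same code paths. The regex and grouping are an illustrative simplification, not the paper's algorithm:

```python
import re

def version_equivalence_classes(source, all_levels):
    # Collect the API levels that the app's code actually branches on
    # (e.g. "Build.VERSION.SDK_INT >= 21"), then partition the level range
    # at those thresholds; one test run per class suffices in this model.
    thresholds = sorted({int(m) for m in
                         re.findall(r"SDK_INT\s*[<>=!]+\s*(\d+)", source)})
    classes, current = [], []
    for level in sorted(all_levels):
        if current and level in thresholds:
            classes.append(current)
            current = []
        current.append(level)
    if current:
        classes.append(current)
    return classes

src = ("if (Build.VERSION.SDK_INT >= 21) { } "
       "else if (Build.VERSION.SDK_INT >= 23) { }")
classes = version_equivalence_classes(src, range(19, 26))
```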
Online ISSN 2993-8341
Print ISSN 0973-1318