Volume 14, No. 6
 ■ Cover Page (PDF 3,197 KB) ■ Editorial Board (PDF 144 KB) ■ Table of Contents, June 2018 (PDF 196 KB)
  
  • Original articles
    Document Correlation Measurement based on Conceptual Dependent Forest
    Gang Liu, Hanwen Zhang, Hanmo Zhang
    2018, 14(6): 1099-1108.  doi:10.23940/ijpe.18.06.p1.10991108
    Abstract    PDF (422KB)   
    References | Related Articles

    The formal representation of natural language is the primary task underlying all natural language problems. In this paper, we propose the concept of a conceptual dependency tree based on conceptual dependency theory. Conceptual dependency differs from dependency analysis in that, beyond behavior at the grammatical and semantic levels, it is chiefly concerned with the conceptual hierarchy. Based on the conceptual dependency tree, a conceptual dependency forest model is defined, which provides a solution for the formal representation of natural language. On top of this model, definitions and calculation methods for conceptual dependency strength and potential similarity are further proposed. Experiments show that the conceptual dependency forest model proposed in this paper is reasonable and effective.


    Submitted on March 1, 2018; Revised on April 21, 2018; Accepted on May 15, 2018
    References: 7
    Restricted Boltzmann Machine Collaborative Filtering Recommendation Algorithm based on Project Tag Improvement
    Xiaodong Qian and Guoliang Liu
    2018, 14(6): 1109-1118.  doi:10.23940/ijpe.18.06.p2.11091118
    Abstract    PDF (449KB)   
    References | Related Articles

    The collaborative filtering algorithm based on the Restricted Boltzmann Machine (RBM) gives too much weight to "popular" projects in prediction and discriminates poorly among "unpopular" projects, which reduces the prediction accuracy of the model. In order to improve the personalization and accuracy of the model, this article integrates project tags into the RBM-based prediction process and uses them to describe the user's own interest preferences, thereby strengthening the user's individual needs. First, the projects the user has already rated are used to calculate the user's probability of rating the target tag. Second, this rating probability is used to predict the probability of different rating levels for the user's unrated projects. Then, RBM model training is used to predict the probability that the user will assign different ratings to unrated projects. Finally, the two rating probabilities are combined by weighting in the RBM prediction process to produce the final result. Experimental results on the MovieLens datasets show that the accuracy of the proposed method improves by 1.2% compared with the original algorithm.
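
    As an illustration of the final weighting step (not the authors' code), the sketch below blends a tag-derived rating distribution with an RBM-predicted one; the mixing weight alpha and all names are assumptions.

    ```python
    # Hypothetical sketch of the weighted combination of two rating-probability
    # distributions; alpha and the distributions are illustrative.
    import numpy as np

    def blend_rating_probabilities(p_tag, p_rbm, alpha=0.5):
        """p_tag, p_rbm: arrays over rating levels, each summing to 1."""
        p = alpha * np.asarray(p_tag) + (1.0 - alpha) * np.asarray(p_rbm)
        return p / p.sum()  # renormalize against rounding error

    # Predicted rating = expectation over the blended distribution (levels 1..5).
    levels = np.arange(1, 6)
    p = blend_rating_probabilities([0.1, 0.2, 0.4, 0.2, 0.1],
                                   [0.05, 0.15, 0.3, 0.3, 0.2])
    print(float(levels @ p))
    ```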


    Submitted on February 16, 2018; Revised on March 28, 2018; Accepted on April 30, 2018
    References: 13
    A Mixed Algorithm for Building Facade Extraction from Point Clouds
    Rui Zhang, Jiayi Wu, Guangyun Li, and Li Wang
    2018, 14(6): 1119-1129.  doi:10.23940/ijpe.18.06.p3.11191129
    Abstract    PDF (805KB)   
    References | Related Articles

    As a leading method for capturing 3D urban scene data, laser scanning technology has been increasingly used in feature extraction, object recognition, and modeling tasks. This study presents a new strategy for quickly and accurately extracting building facade features from point clouds captured by laser scanners. The data are first pre-processed by building a Kd-OcTree mixed index and calculating the normal vectors of the point cloud using principal component analysis (PCA). On this basis, initial clusters are obtained via fuzzy clustering, and the generalized Hough transformation (GHT) is applied within each cluster, according to the sampling interval, to detect local peaks and obtain preliminary planes. Next, similar planes are merged based on the normal vectors and distance thresholds of the candidate planes to produce better planarity. Finally, the extraction results are refined by an adjunctive judgment of neighborhood points, which classifies boundary points into the correct plane. The proposed approach has been tested on three different terrestrial laser scanner (TLS) datasets, and the results show that this mixed approach speeds up building facade extraction while also improving recall.
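
    A minimal sketch of the PCA normal-estimation step in the pre-processing stage, assuming a plain covariance eigendecomposition over each point's neighborhood; this is illustrative, not the authors' implementation.

    ```python
    # Estimate a point's surface normal as the least-variance direction of its
    # k-nearest-neighbor patch (smallest-eigenvalue eigenvector of the covariance).
    import numpy as np

    def pca_normal(neighborhood):
        """neighborhood: (k, 3) array of 3D points including the query point."""
        pts = np.asarray(neighborhood, dtype=float)
        centered = pts - pts.mean(axis=0)
        cov = centered.T @ centered / len(pts)
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        return eigvecs[:, 0]

    # A roughly planar patch in the z=0 plane should yield a normal near (0, 0, 1).
    patch = np.random.rand(20, 3) * [1.0, 1.0, 0.01]
    print(pca_normal(patch))
    ```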


    Submitted on March 7, 2018; Revised on April 19, 2018; Accepted on May 25, 2018
    References: 26
    What Is the Whole Development Process? Subevent Detection using Micro Index and Local Clustering
    Hua Zhao, Qingtian Zeng, Yuqiang Zhang, and Weiyi Meng
    2018, 14(6): 1130-1139.  doi:10.23940/ijpe.18.06.p4.11301139
    Abstract    PDF (437KB)   
    References | Related Articles

    Users can easily obtain a massive number of news stories related to an event, but the obtained stories are often fragmented and reflect only certain aspects of the event. Detecting subevents automatically is important for users to understand the whole development process of the event. Motivated by the co-evolution between an event and the opinions about it, we first propose to adopt the Micro Index and give a dynamic time window construction method based on recognizing the peaks of the Micro Index curve. Second, we propose a two-stage subevent detection method based on local clustering and classification. Finally, we evaluate the proposed methods on the news stories about the "Luo Yixiao Event" and the "Shandong Illegal Vaccines Event", two recent hot events, and find that the proposed methods are successful.
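
    A toy sketch of the dynamic-window idea: detect peaks in a series standing in for the Micro Index curve and cut windows at the valleys between consecutive peaks. The data and the prominence parameter are made up.

    ```python
    # Peak-based window construction on a stand-in "Micro Index" series.
    import numpy as np
    from scipy.signal import find_peaks

    micro_index = np.array([1, 2, 5, 9, 4, 2, 3, 8, 12, 6, 3, 2, 4, 7, 3], float)
    peaks, _ = find_peaks(micro_index, prominence=2)

    # Split at the minimum between each pair of adjacent peaks.
    boundaries = [int(np.argmin(micro_index[a:b]) + a)
                  for a, b in zip(peaks[:-1], peaks[1:])]
    windows = np.split(np.arange(len(micro_index)), boundaries)
    print(peaks, [w.tolist() for w in windows])
    ```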


    Submitted on March 10, 2018; Revised on April 26, 2018; Accepted on May 25, 2018
    References: 20
    A Novel Imbalanced Classification Method based on Decision Tree and Bagging
    Hongjiao Guan, Yingtao Zhang, Hengda Cheng, and Xianglong Tang
    2018, 14(6): 1140-1148.  doi:10.23940/ijpe.18.06.p5.11401148
    Abstract    PDF (394KB)   
    References | Related Articles

    Imbalanced classification is a challenging problem in big data research and applications. Complex data distributions, such as small disjuncts and overlapping classes, make it hard for traditional methods to recognize the minority class, leading to low sensitivity, while the misclassification cost of the minority class is usually higher than that of the majority class. To deal with imbalanced datasets, typical algorithmic-level methods either introduce cost information or simply rebalance the class distribution without considering the distribution of the minority class. In this paper, we propose an optimization embedded bagging (OEBag) approach that increases sensitivity by learning the complex distributions in the minority class more precisely. When learning its base classifiers, OEBag selectively learns the minority examples that are easily misclassified by referring to the out-of-bag examples. OEBag is implemented using two specialized under-sampling bagging methods. Nineteen real datasets with diverse levels of classification difficulty are used in this paper. Experimental results demonstrate that OEBag performs significantly better in sensitivity and has strong overall performance in terms of AUC (area under the ROC curve) and G-mean when compared with several state-of-the-art methods.
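
    For orientation, here is the plain under-sampling bagging baseline that OEBag builds on, sketched with scikit-learn; the selective re-learning of hard minority examples via out-of-bag reference is omitted.

    ```python
    # Under-bagging: each bag keeps all minority examples plus an equal-size
    # random sample of the majority class, then majority-votes the trees.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def underbag_fit(X, y, n_estimators=10, seed=0):
        rng = np.random.default_rng(seed)
        minority = np.flatnonzero(y == 1)
        majority = np.flatnonzero(y == 0)
        models = []
        for _ in range(n_estimators):
            maj = rng.choice(majority, size=len(minority), replace=False)
            idx = np.concatenate([minority, maj])
            models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        return models

    def underbag_predict(models, X):
        votes = np.mean([m.predict(X) for m in models], axis=0)
        return (votes >= 0.5).astype(int)

    # Demo on a toy imbalanced dataset (~10% minority).
    X = np.random.default_rng(1).random((100, 2))
    y = (X[:, 0] > 0.9).astype(int)
    print(underbag_predict(underbag_fit(X, y), X[:5]))
    ```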


    Submitted on March 6, 2018; Revised on April 16, 2018; Accepted on May 21, 2018
    References: 22
    Apple Image Segmentation Model Based on R Component with Swarm Intelligence Optimization Algorithm
    Liqun Liu and Jiuyuan Huo
    2018, 14(6): 1149-1160.  doi:10.23940/ijpe.18.06.p6.11491160
    Abstract    PDF (414KB)   
    References | Related Articles

    Because of the many interfering factors in apple images captured in natural scenes, such as complex backgrounds, it is difficult to achieve good image segmentation results. To solve this problem, color apple image segmentation under natural scenes is modeled, and an apple image segmentation model based on the R component with a swarm intelligence optimization algorithm (AISM-RSIOA) is constructed to perform initial and secondary segmentation of the images. Under six natural-scene conditions, direct sunlight with strong, medium, and weak illumination and backlighting with strong, medium, and weak illumination, segmentation experiments were conducted on a series of mature HuaNiu apple images. The initial segmentation results showed that the ISMR method has the best segmentation effect, with a segmentation success rate of 100.0%. In the secondary segmentation stage, the fruit can be fully separated from the background using the improved threshold segmentation method. The segmentation results demonstrate that the model can effectively improve the segmentation of such images.
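
    A minimal sketch of a first-stage R-component segmentation: threshold the red channel to separate fruit from background. The fixed threshold stands in for the value the swarm optimizer would select.

    ```python
    # R-channel thresholding on an RGB image; the threshold is illustrative.
    import numpy as np

    def segment_r_channel(rgb_image, threshold=150):
        """rgb_image: (H, W, 3) uint8 array. Returns a boolean fruit mask."""
        r = rgb_image[..., 0].astype(int)
        return r > threshold

    img = np.zeros((4, 4, 3), dtype=np.uint8)
    img[1:3, 1:3, 0] = 200          # a bright-red "apple" patch
    print(segment_r_channel(img).astype(int))
    ```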


    Submitted on March 11, 2018; Revised on April 21, 2018; Accepted on May 28, 2018
    References: 27
    Optimization of Particle Genetic Algorithm based on Time Load Balancing for Cloud Task Scheduling in Cloud Task Planning
    Yenzhen Zhang, Shouming Hou, and Li Chang
    2018, 14(6): 1161-1170.  doi:10.23940/ijpe.18.06.p7.11611170
    Abstract    PDF (680KB)   
    References | Related Articles

    To solve the problems of long execution time, imbalanced time load, and low resource utilization in cloud task scheduling for cloud task planning, we propose an optimized particle genetic algorithm strategy based on time load balancing. The strategy improves particle quality by optimizing the particle initialization operation. To ensure that better particles with a more balanced time load are selected, a time-load-balancing fitness model was established. To prevent particles from jumping out of the specified area during iterations, the element values of their position and velocity were normalized. Finally, genetic crossover and mutation operators were introduced to keep the algorithm from converging to local optima. This strategy effectively improves the convergence rate of the particle genetic algorithm and the quality of its solutions. The experimental results show that the algorithm searches more effectively for the global optimum, consumes less time, and reaches a more balanced time load. With this algorithm, better and more logical task scheduling sequences may be achieved. The idea also has a degree of practicality and generality in many fields.
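
    One plausible form of the time-load-balancing fitness, sketched under the assumption that it penalizes both the makespan and the spread of per-VM busy times; the exact formula is not taken from the paper.

    ```python
    # Illustrative fitness for a task-to-VM assignment: lower is better.
    import numpy as np

    def fitness(assignment, task_len, vm_speed):
        """assignment[i] = index of the VM that runs task i."""
        load = np.zeros(len(vm_speed))
        for task, vm in enumerate(assignment):
            load[vm] += task_len[task] / vm_speed[vm]
        makespan = load.max()
        imbalance = load.std()   # penalize uneven per-VM busy times
        return makespan + imbalance

    print(fitness([0, 1, 0, 2], np.array([4., 6., 2., 8.]), np.array([1., 2., 2.])))
    ```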


    Submitted on March 7, 2018; Revised on April 29, 2018; Accepted on May 24, 2018
    References: 25
    Monitoring and Warning Methods of Tailings Reservoir using BP Neural Network
    Tianyong Wu, Chunyuan Zhang, and Yunsheng Zhao
    2018, 14(6): 1171-1180.  doi:10.23940/ijpe.18.06.p8.11711180
    Abstract    PDF (821KB)   
    References | Related Articles

    The tailings reservoir is a major hazard source with high potential energy, which may cause artificial debris flow. The stability of a tailings reservoir is extremely important to the normal operation of mining enterprises and to the safety of people's lives and property. In order to reduce the risk of a tailings accident, a multivariate linear regression model, a BP neural network, and a regression analysis model optimized by a genetic algorithm are established in this article to investigate monitoring and warning methods for tailings reservoirs. Taking the safety monitoring data of the Huangmailing tailings reservoir as an example, the three forecasting models are compared in terms of fitness, their ability to simulate the initial data, and their ability to predict new data. The experimental results show that the BP neural network forecasting model predicts safety monitoring data better than the other two models, and that the regression analysis model optimized by the genetic algorithm predicts better than the multivariate linear regression model.
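
    A minimal BP-network forecasting setup of the kind described, sketched with scikit-learn's MLPRegressor on synthetic stand-in data; the paper's actual inputs and network layout are not specified here.

    ```python
    # Train a small backpropagation network on synthetic monitoring-style data
    # and score it on a held-out portion.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))   # stand-ins for monitored quantities
    y = X @ [0.5, 0.3, 0.2] + 0.01 * rng.standard_normal(200)

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:150], y[:150])
    print(model.score(X[150:], y[150:]))  # held-out R^2
    ```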


    Submitted on February 27, 2018; Revised on April 1, 2018; Accepted on May 21, 2018
    References: 15
    Software Reliability Test Case Generation using Temporal Motifs Recovery and Configuration
    Xuetao Tian, Feng Liu, and Honghui Li
    2018, 14(6): 1181-1189.  doi:10.23940/ijpe.18.06.p9.11811189
    Abstract    PDF (610KB)   
    References | Related Articles

    In the software updating process, reliability testing plays an important role, and test cases are the key to software testing. To match real usage habits, log analysis has become a popular way to generate reliability test cases. However, log analysis cannot cover new operations introduced by software updates. In this paper, a reliability test case generation method for updated software using temporal motif recovery and configuration is presented. We tentatively introduce the idea of temporal networks to abstract software usage logs, and test cases adapted to software updates are generated through temporal motif recovery and configuration. As a case study, the method is applied to an online application, and a coverage frequency comparison experiment is designed. The proposed method obtains results similar to those of log-based and Markov-model-based approaches, which validates its usability.


    Submitted on February 27, 2018; Revised on April 1, 2018; Accepted on May 21, 2018
    References: 14
    A Real Time Detection Method of Track Fasteners Missing of Railway based on Machine Vision
    Hongfeng Ma, Yongzhi Min, Chao Yin, Tiandong Cheng, Benyu Xiao, Biao Yue, and Xiaobin Li
    2018, 14(6): 1190-1200.  doi:10.23940/ijpe.18.06.p10.11901200
    Abstract    PDF (956KB)   
    References | Related Articles

    Detecting missing track fasteners is an important part of daily railway inspection. In line with modern railways' requirements for real-time, self-adaptive automatic detection technology, a real-time detection method based on machine vision is proposed. Following the basic principles of machine vision, an image acquisition device with an LED auxiliary light source hood is designed. Adaptive image enhancement of fastener edge features is performed using a switching median filter and an improved Canny edge detection method based on image gradient magnitude; combined with the stability of the fastener edge profile, real-time detection of missing fasteners is realized by template matching based on curve feature projection. In experiments, the average processing time per image is 245.61 ms and the recognition accuracy is 85.8%; the method adapts to inspection speeds of up to 3.85 m/s and meets the real-time detection requirements for missing fasteners in actual line operation. Machine-vision rail inspection is often affected by noise introduced during image acquisition, so an improved median filtering algorithm is also proposed. The algorithm designates an upper-triangular block of the rectangular filter window as the mark region and uses the gray values within that block to replace the gray value of the processed point. Simulation experiments comparing this algorithm with others show that it is effective and reduces running time.
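
    As a stand-in for the noise-filtering step, the sketch below applies a plain median filter via scipy.ndimage; the paper's upper-triangular-block variant is not reproduced.

    ```python
    # Median filtering of salt noise on a synthetic gray image.
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(0)
    img = np.full((8, 8), 100.0)
    img[rng.random((8, 8)) < 0.1] = 255.0   # inject salt noise
    print(median_filter(img, size=3))        # 3x3 window removes isolated spikes
    ```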


    Submitted on March 1, 2018; Revised on April 8, 2018; Accepted on May 20, 2018
    References: 20
    Reducing Energy Cost of Multi-Threaded Programs on NUMA Architectures
    Hao Fang, Liang Zhu, and Xiangyu Li
    2018, 14(6): 1201-1212.  doi:10.23940/ijpe.18.06.p11.12011212
    Abstract    PDF (595KB)   
    References | Related Articles

    Many recent data center servers are built with NUMA (Non-Uniform Memory Access) characteristics, where accessing remote memory generally takes longer than accessing local memory. Many research works discuss performance improvement on NUMA multi-core systems, but little work considers reducing their energy cost. This work studies reducing the energy cost of multi-threaded programs on NUMA architectures using a DVFS (Dynamic Voltage and Frequency Scaling) adjustment strategy. We consider three factors of multi-threaded programs that influence the energy saved by our DVFS adjustment strategy: (1) the memory access intensity of the parallel program; (2) the proportion of remote memory accesses; and (3) the ratio between remote and local memory access latency. In addition, we propose two DVFS adjustment strategies to reduce the energy cost of multi-threaded programs; their energy-saving effect is influenced by these three factors. The two strategies save up to 20% and 39.2% of total energy when considering one factor, and 33.3% and 48.1% when considering two factors, respectively.
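
    A toy decision rule in the spirit of the strategy above: lower the frequency when a thread is memory-bound and much of its traffic is remote. The thresholds and frequency table are assumptions, not the paper's.

    ```python
    # Pick a DVFS state from the three factors named in the abstract.
    FREQS_GHZ = [1.2, 1.8, 2.4]  # hypothetical available DVFS states

    def pick_frequency(mem_intensity, remote_ratio, latency_ratio):
        """mem_intensity: normalized memory-access intensity in [0, 1].
        remote_ratio: fraction of accesses hitting remote NUMA nodes.
        latency_ratio: remote/local access latency."""
        stall_score = mem_intensity * (1 + remote_ratio * (latency_ratio - 1))
        if stall_score > 0.8:      # heavily stalled: lowest frequency wastes least
            return FREQS_GHZ[0]
        if stall_score > 0.4:
            return FREQS_GHZ[1]
        return FREQS_GHZ[2]        # compute-bound: keep full speed

    print(pick_frequency(0.6, 0.7, 1.8))
    ```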


    Submitted on March 6, 2018; Revised on April 12, 2018; Accepted on May 26, 2018
    References: 28
    On-line Detector and Inversion Algorithm of Suspended Particles
    Deli Jia, Tong Guo, Zhenkun Zhu, Quanbin Wang, Yan Wang, and Yanping Wang
    2018, 14(6): 1213-1223.  doi:10.23940/ijpe.18.06.p12.12131223
    Abstract    PDF (562KB)   
    References | Related Articles

    Stratified waterflooding is the main technology for oilfield development, and continuously improving waterflooding control through fine waterflooding is a never-ending pursuit. Reinjecting the produced liquid into the formation after electrical dehydration is the main technical means of waterflooding development, and whether the suspended particles in the reinjection water meet the standards directly affects the development effect. This paper proposes and develops an on-line suspended particle detector and its inversion algorithm. To meet the downhole detection requirements for suspended particles in reinjection water, a laser on-line detector based on the light scattering method was developed, and an array-structured light-induced ring detector was designed to identify the scattered light. To meet the engineering requirements of on-line inspection, genetic and least-squares algorithms are used to optimize the granularity inversion calculation, and their detection errors and response times are compared and analyzed. With an error below the allowed value of 4% and a response time of 0.22 s, the least-squares algorithm has more engineering application value. To prevent negative values appearing during iteration of the least-squares algorithm, a non-negative least-squares granularity inversion algorithm was designed for practical engineering use. Based on actual engineering data and simulation results, the simulated values agree closely with the theoretical values, which shows that the laser on-line detector and its inversion algorithm are applicable to the on-line detection of suspended particles in the reinjection water of oilfields.
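
    A sketch of the non-negative least-squares inversion step: recover a particle-size distribution x ≥ 0 from ring-detector readings b = Ax. The response matrix here is random stand-in data, not a real scattering kernel.

    ```python
    # Non-negative least squares via scipy; nnls enforces x >= 0 by construction.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    A = rng.random((16, 8))          # stand-in ring-detector response per size class
    x_true = np.abs(rng.standard_normal(8))
    b = A @ x_true

    x_est, residual = nnls(A, b)
    print(np.round(x_est - x_true, 6), residual)
    ```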


    Submitted on March 5, 2018; Revised on April 16, 2018; Accepted on May 21, 2018
    References: 7
    A New Multi-Sensor Target Recognition Framework based on Dempster-Shafer Evidence Theory
    Kan Wang
    2018, 14(6): 1224-1233.  doi:10.23940/ijpe.18.06.p13.12241233
    Abstract    PDF (647KB)   
    References | Related Articles

    In order to meet the higher requirements of military technology for automation and intelligence, increasing importance has been attached to information fusion for multi-sensor systems. Dempster-Shafer evidence theory is a typical method for fusing uncertain information due to its flexibility in uncertainty modeling, but classical evidence theory is still insufficient for high-conflict problems. This paper studies a multi-sensor information fusion model based on evidence theory from the following aspects. First, it introduces the basic principles of evidence theory and focuses on how to use triangular fuzzy numbers to obtain basic probability assignments. Second, a method of weighting the evidence by sensor reliability is introduced, with reliability divided into two parts: static reliability and dynamic reliability. Moreover, the model points out the irrationality of Deng's entropy weight for the binary target recognition problem and improves the entropy weight in the dynamic sensor weights. Finally, on the basis of the above research, sensor data is applied to this model; simulation experiments prove the validity of the model and show that the target can be accurately identified.
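
    For reference, here is Dempster's rule of combination, which the model builds on, written out directly over power-set masses; the paper's reliability-weighted variant is not shown.

    ```python
    # Dempster's rule for two mass functions keyed by frozenset over one frame.
    from itertools import product

    def dempster_combine(m1, m2):
        combined, conflict = {}, 0.0
        for (a, w1), (b, w2) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
        k = 1.0 - conflict                      # normalization constant
        return {s: w / k for s, w in combined.items()}

    A, B = frozenset({"A"}), frozenset({"B"})
    m1 = {A: 0.8, A | B: 0.2}
    m2 = {B: 0.6, A | B: 0.4}
    print(dempster_combine(m1, m2))
    ```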


    Submitted on February 25, 2018; Revised on April 4, 2018; Accepted on May 3, 2018
    References: 23
    An Efficient FD-aided Visible Light Communication System
    Xu Wang, Yang Li, Min Feng, Jianye Zhang, and Wang Luo
    2018, 14(6): 1234-1240.  doi:10.23940/ijpe.18.06.p14.12341240
    Abstract    PDF (609KB)   
    References | Related Articles

    The sharply increasing deployment of light emitting diodes (LEDs) is boosting the rapid development of visible light communication (VLC). However, current VLC systems fail to achieve a high data rate and a large capacity region, limited by the dynamically changing VLC wireless channel and the statistics of the visible light. To break these fundamental limitations, in this paper we develop a novel VLC system for high-speed and reliable communication. Specifically, taking advantage of full-duplex (FD) communication, which can significantly enhance spectrum efficiency, we design a novel FD-aided VLC system. The developed system is also capable of transmitting network signals directly. Additionally, we test the proposed VLC system in practice, and experimental results demonstrate that it outperforms other strategies in terms of data rate and reliability.


    Submitted on March 7, 2018; Revised on April 21, 2018; Accepted on May 23, 2018
    References: 14
    Test Scenario Generation using Model Checking
    Zhixiong Yin, Min Zhang, Guoqiang Li, and Ling Fang
    2018, 14(6): 1241-1250.  doi:10.23940/ijpe.18.06.p15.12411250
    Abstract    PDF (808KB)   
    References | Related Articles

    Testing, including test design, execution, and bug analysis, is broadly adopted across industries. Test cases must be designed to confirm the entire behavior of the system under test. In practice, a test case normally includes a series of operations that drive the system state to fulfill the precondition of the targeted test. However, the applications that trigger the function calls of the software system are often composed manually in traditional test activities, which is difficult, especially for complex concurrent systems. Insufficient testing may leave hidden system defects, which increases the likelihood of economic loss or human injury. Model checking is a formal technique that can check the properties of a system automatically, with strictness, completeness, and traceability. In this paper, a novel formal approach is proposed to systematically and automatically generate test scenarios with traceability using model checking. The generated scenarios are reliable and precise, and debugging also becomes easier. Moreover, while reducing the need for an experienced test engineer to design the target test cases, coverage can be improved because the tedious and complicated process of test scenario generation is carried out by the model checker. The method has been applied in two practical case studies, and the results show the effectiveness of the proposed approach in terms of high coverage, automation, traceability, and reusability.


    Submitted on March 17, 2018; Revised on April 14, 2018; Accepted on May 21, 2018
    References: 18
    Decomposing Constraints for Better Coverage in Test Data Generation
    Ju Qian, Kun Liu, Hao Chen, Zhiyi Zhang, and Zhe Chen
    2018, 14(6): 1251-1262.  doi:10.23940/ijpe.18.06.p16.12511262
    Abstract    PDF (656KB)   
    References | Related Articles

    In black-box testing, a possible choice for test data generation is to derive test data from interface constraints using constraint solving techniques. However, directly solving the whole constraint formula may not fully leverage the information embodied in the constraint, making it difficult to obtain a high-coverage test set. For example, when solving a constraint (a > 0 or b < 0) as a whole, we cannot guarantee that data covering the sub-constraint b < 0 will appear in the test set. To address this problem, we first define a hierarchy of coverage criteria at the specification constraint level. Then, algorithms are designed to decompose constraints according to these coverage criteria and to generate test input sets. Experiments on a set of benchmark programs show that decomposing constraints according to constraint-level coverage criteria effectively leads to better coverage in test data generation.
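
    A toy illustration of the decomposition idea using the paper's own example (a > 0 or b < 0): solve each disjunct separately so both sub-constraints are guaranteed coverage. The random "solver" below is a stand-in for a real constraint solver.

    ```python
    # Decompose a disjunction and generate one test input per disjunct.
    import random

    def solve(pred, tries=10000, lo=-100, hi=100):
        """Naive random search standing in for a real constraint solver."""
        for _ in range(tries):
            a, b = random.randint(lo, hi), random.randint(lo, hi)
            if pred(a, b):
                return (a, b)
        return None

    disjuncts = [lambda a, b: a > 0,       # sub-constraint 1
                 lambda a, b: b < 0]       # sub-constraint 2
    tests = [solve(d) for d in disjuncts]  # one covering test per disjunct
    print(tests)
    ```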


    Submitted on March 6, 2018; Revised on April 21, 2018; Accepted on May 19, 2018
    References: 22
    An Information Flow-based Feature Selection Method for Cross-Project Defect Prediction
    Yaning Wu, Song Huang, and Haijin Ji
    2018, 14(6): 1263-1274.  doi:10.23940/ijpe.18.06.p17.12631274
    Abstract    PDF (783KB)   
    References | Related Articles

    Software defect prediction (SDP) plays a significant part in identifying the most defect-prone modules before software testing and in allocating limited testing resources. One of the most commonly used scenarios in SDP is classification. To guarantee prediction accuracy, the classification models must first be trained appropriately. The training data can be obtained from historical software repositories, which may affect classification performance to a large extent. In order to improve data quality, we propose a novel software feature selection method that innovatively uses information flows to perform causality analysis on the features of the training datasets. More specifically, we conduct causality analysis between each feature metric and the bug label; then, based on the resulting feature ranking list, we select the top-k features to control redundancy. Finally, we choose the most suitable feature subset based on the F-measure. To demonstrate the effectiveness and practicability of the feature selection method, we use the Nearest Neighbor approach to construct a homogeneous training dataset and employ three commonly used classification models in comparison experiments. The final experimental results verify the effectiveness and validity of the feature selection method.
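
    A sketch of the top-k selection step, with absolute correlation standing in for the information-flow causality score; the scoring function is an assumption, not the paper's measure.

    ```python
    # Rank features by a per-feature score against the bug label, keep the top k.
    import numpy as np

    def select_top_k(X, y, k):
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
        ranked = np.argsort(scores)[::-1]      # highest score first
        return ranked[:k]

    rng = np.random.default_rng(0)
    X = rng.random((100, 10))
    y = (X[:, 3] + 0.1 * rng.random(100) > 0.6).astype(float)
    print(select_top_k(X, y, 3))               # feature 3 should rank first
    ```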


    Submitted on March 12, 2018; Revised on April 17, 2018; Accepted on May 8, 2018
    References: 34
    Challenges of Testing Machine Learning Applications
    Song Huang, Er-Hu Liu, Zhan-Wei Hui, Shi-Qi Tang, and Suo-Juan Zhang
    2018, 14(6): 1275-1282.  doi:10.23940/ijpe.18.06.p18.12751282
    Abstract    PDF (523KB)   
    References | Related Articles

    Machine learning applications have achieved impressive results in many areas, providing effective solutions to problems such as image recognition, autonomous driving, and voice processing. As these applications are adopted in critical areas, their reliability and robustness become more and more important. Software testing is a typical way to ensure application quality, so approaches for testing machine learning applications are needed. This paper analyzes the characteristics of several machine learning algorithms and summarizes the main challenges of testing machine learning applications. Multiple preliminary techniques are then presented to address these challenges, and the paper demonstrates how they can be used to solve problems that arise when testing machine learning applications.


    Submitted on March 21, 2018; Revised on April 20, 2018; Accepted on May 16, 2018
    References: 7
    Combinatorial Test Case Prioritization based on Incremental Combinatorial Coverage Strength
    Ziyuan Wang and Feiyan She
    2018, 14(6): 1283-1290.  doi:10.23940/ijpe.18.06.p19.12831290
    Abstract    PDF (512KB)   
    References | Related Articles

    A combinatorial test case prioritization technique based on incremental combinatorial coverage strength is proposed in this paper. The technique prioritizes the test cases in an existing high-strength combinatorial test suite to form an ordered test case sequence. The prioritized suite covers combinations of parameter values, with strengths of 2 or more, more quickly than a non-prioritized suite. Theoretical analysis shows that the proposed technique costs less than the existing combinatorial test case prioritization technique for the same type of scenario. Experimental results also show that, compared with initial combinatorial test suites without prioritization, prioritized suites cover combinations of parameter values at a higher speed. Compared with suites generated by an existing incremental adaptive combinatorial testing strategy, prioritized combinatorial test suites 1) require fewer test cases to achieve the final combinatorial coverage, and 2) cover high-strength combinations of parameter values at a higher speed.
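
    A simplified greedy sketch of prioritization by incremental coverage: repeatedly pick the test case covering the most not-yet-covered 2-way value combinations. It illustrates the ordering idea, not the paper's exact coverage-strength computation.

    ```python
    # Greedy 2-way-coverage prioritization of a small combinatorial suite.
    from itertools import combinations

    def pairs(test):
        return {((i, test[i]), (j, test[j]))
                for i, j in combinations(range(len(test)), 2)}

    def prioritize(suite):
        remaining, covered, order = list(suite), set(), []
        while remaining:
            best = max(remaining, key=lambda t: len(pairs(t) - covered))
            covered |= pairs(best)
            order.append(best)
            remaining.remove(best)
        return order

    suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    print(prioritize(suite))
    ```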


    Submitted on March 16, 2018; Revised on April 26, 2018; Accepted on May 27, 2018
    References: 24
    Impact of Hyper Parameter Optimization for Cross-Project Software Defect Prediction
    Yubin Qu, Xiang Chen, Yingquan Zhao, and Xiaolin Ju
    2018, 14(6): 1291-1299.  doi:10.23940/ijpe.18.06.p20.12911299
    Abstract    PDF (549KB)   
    References | Related Articles

    Recently, most studies have used the default values of the hyper parameters of the classification methods underlying cross-project defect prediction (CPDP). However, in previous studies of within-project defect prediction (WPDP), researchers found that hyper parameter optimization helps to improve the performance of software defect prediction models. Moreover, the default values of some hyper parameters differ across machine learning libraries (such as Weka and Scikit-learn). To the best of our knowledge, we are the first to conduct an in-depth analysis of the influence of hyper parameter optimization on CPDP performance. Based on different classification methods, we consider 5 instance-selection-based CPDP methods in total. In our empirical studies, we choose 8 projects from the AEEEM and Relink datasets as evaluation subjects and use AUC as the model performance measure. The final results show that the influence of hyper parameter optimization is non-negligible for 4 of these methods. Among the 11 hyper parameters considered by the 5 classification methods, the influence of 8 is non-negligible; these hyper parameters are mainly found in the support vector machine and k-nearest-neighbor classifiers. Meanwhile, analysis of the actual computational cost shows that the time spent on hyper parameter optimization is within an acceptable range. These empirical results show that future CPDP research should consider hyper parameter optimization in its experimental design.
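
    A minimal sketch of the experimental ingredient described: hyper parameter search scored by AUC, here with scikit-learn's GridSearchCV on an SVM (one of the classifiers the study flags as sensitive). Data and grid are illustrative.

    ```python
    # Grid-search SVM hyper parameters with AUC as the selection criterion.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, random_state=0)
    grid = GridSearchCV(SVC(probability=True),
                        {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
                        scoring="roc_auc", cv=5)
    grid.fit(X, y)
    print(grid.best_params_, round(grid.best_score_, 3))
    ```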


    Submitted on March 2, 2018; Revised on April 16, 2018; Accepted on May 19, 2018
    References: 32
    Pinpoint Minimal Failure-Inducing Mode using Itemset Mining under Constraints
    Yong Wang, Liangfen Wei, Yuan Yao, Zhiqiu Huang, Yong Li, Bingwu Fang, and Weiwei Li
    2018, 14(6): 1300-1307.  doi:10.23940/ijpe.18.06.p21.13001307
    Abstract    PDF (428KB)   
    References | Related Articles

    A minimal failure-inducing mode (MFM), derived from a t-way combinatorial test set and its results, can help programmers identify the root causes of failures triggered by combination bugs. However, for systems with many parameters, MFM identification may be affected by masking effects that produce coincidental correctness in practice, which makes pinpointing MFMs more difficult. An approach for pinpointing MFMs and an iterative framework are proposed. The approach first collects combinatorial test cases and their results, then mines frequent itemsets (suspicious MFMs) in the failed test cases, and finally computes a suspiciousness score for each MFM belonging to a closed pattern by contrasting its frequency in failed and successful test cases. Through the iterative framework, MFMs are pinpointed until a stopping criterion is satisfied. Preliminary results of simulation experiments show that this approach is effective.
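
    A sketch of the suspiciousness computation: contrast how often a candidate mode (a set of parameter-value pairs) appears in failing versus passing test cases. The scoring formula is an illustrative assumption.

    ```python
    # Suspiciousness = failing-frequency minus passing-frequency of a mode.
    def suspiciousness(mode, failed, passed):
        in_f = sum(mode <= t for t in failed)   # mode is a subset of the test
        in_p = sum(mode <= t for t in passed)
        return in_f / len(failed) - in_p / len(passed)

    failed = [frozenset({("a", 1), ("b", 2), ("c", 0)}),
              frozenset({("a", 1), ("b", 2), ("c", 1)})]
    passed = [frozenset({("a", 0), ("b", 2), ("c", 0)}),
              frozenset({("a", 1), ("b", 0), ("c", 1)})]
    print(suspiciousness(frozenset({("a", 1), ("b", 2)}), failed, passed))  # 1.0
    ```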


    Submitted on March 21, 2018; Revised on April 27, 2018; Accepted on May 28, 2018
    References: 13
    Coding Standards and Human Nature
    Michael Dorin and Sergio Montenegro
    2018, 14(6): 1308-1313.  doi:10.23940/ijpe.18.06.p22.13081313
    Abstract    PDF (248KB)   
    References | Related Articles

    Intuition tells us that code that is difficult to review is likely complicated and faulty. Many organizations create a coding standard to encourage higher-quality software development. Coding standards are not always followed, and even when they are, complicated code continues to be written. Human nature shows that people do not put effort into activities they believe to be unproductive. People also have a limited capacity for remembering and following directions, so the extra requirements of a coding standard may even inhibit creativity. Because of this, the paper recommends that organizations maintain a coding standard in two layers: the first layer should consist of easy-to-remember items, and the second should be the long-established coding standard the organization wishes to comply with.


    Submitted on March 13, 2018; Revised on April 26, 2018; Accepted on May 12, 2018
    References: 20
    Fault Injection for Performance Testing of Composite Web Services
    Ju Qian, Han Wu, Hao Chen, Changjian Li, and Weiwei Li
    2018, 14(6): 1314-1323.  doi:10.23940/ijpe.18.06.p23.13141323
    Abstract    PDF (492KB)   
    References | Related Articles

    Fault injection has already been used to assess the dependability of web services. However, most existing work focuses on how to inject faults; where to inject faults and which faults to inject have not been systematically studied in the literature, especially for testing performance-related issues in composite web services. This paper presents an approach that defines coverage criteria to guide fault injection testing of performance-related issues in composite web services. We generate fault injection configurations that follow the defined test criteria for systematic fault injection. The configurations specify where to inject faults and which faults to inject, and the injected faults (e.g., message delays) are generated according to the characteristics of each individual sub-service to make them more realistic. With these configurations, the fault injection process can be conducted automatically and the performance of a composite service can be effectively evaluated.
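
    A sketch of what a fault-injection configuration of the kind described might look like, with a wrapper that injects a message delay into a sub-service call; all names are hypothetical.

    ```python
    # Hypothetical delay-injection configuration and wrapper for sub-service calls.
    import time

    config = [
        {"service": "payment",  "fault": "delay", "seconds": 2.0},
        {"service": "shipping", "fault": "delay", "seconds": 0.5},
    ]

    def inject(call, service, config):
        def wrapped(*args, **kwargs):
            for rule in config:
                if rule["service"] == service and rule["fault"] == "delay":
                    time.sleep(rule["seconds"])   # simulate a slow sub-service
            return call(*args, **kwargs)
        return wrapped

    slow_payment = inject(lambda amount: f"paid {amount}", "payment", config)
    print(slow_payment(10))
    ```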


    Submitted on March 8, 2018; Revised on April 16, 2018; Accepted on May 20, 2018
    References: 25
    Test Suite Augmentation via Integrating Black- and White-Box Testing Techniques
    Zhiyi Zhang and Ju Qian
    2018, 14(6): 1324-1329.  doi:10.23940/ijpe.18.06.p24.13241329
    Abstract    PDF (333KB)   
    References | Related Articles

    Test suite augmentation is an important technique for quality assurance of evolving software. In rapidly evolving software development, engineers usually combine black-box and white-box test suites to facilitate understanding and management. This paper presents a research proposal on test suite augmentation via integrating black- and white-box testing techniques. The augmentation proceeds in two directions: from black-box to white-box, augmenting functional test suites to satisfy structural coverage criteria; and from white-box to black-box, augmenting coverage test suites to satisfy functional requirements. A series of evaluation methods is proposed to verify the effectiveness of the augmentation approaches.


    Submitted on February 27, 2018; Revised on April 15, 2018; Accepted on May 16, 2018
    References: 7
    Input Domain Reduction of Search-based Structural Test Data Generation using Interval Arithmetic
    Xuewei Lv, Song Huang, and Haijin Ji
    2018, 14(6): 1330-1340.  doi:10.23940/ijpe.18.06.p25.13301340
    Abstract    PDF (928KB)   
    References | Related Articles

    The size of the search space affects the efficiency of test data generation by meta-heuristic algorithms. To improve this efficiency, a method that reduces the search space using interval arithmetic is proposed. First, all input variables of the program are represented as interval variables. Then, the interval of each variable is gradually narrowed using the constraint conditions along the target path. Finally, the meta-heuristic algorithm is run on the reduced search space to generate test data. Experimental results show that the proposed method has advantages in the number of generations, running time, and success rate, and can significantly improve the efficiency of meta-heuristic test data generation.
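
    A sketch of the interval-narrowing idea for simple one-variable comparisons: path constraints x > 10 and x <= 50 shrink [-1000, 1000] to [11, 50] before the meta-heuristic search starts. Real constraint handling would cover richer expressions.

    ```python
    # Narrow an integer interval with single-variable path constraints.
    def narrow(interval, op, c):
        lo, hi = interval
        if op == ">":  lo = max(lo, c + 1)
        if op == "<":  hi = min(hi, c - 1)
        if op == ">=": lo = max(lo, c)
        if op == "<=": hi = min(hi, c)
        return (lo, hi)

    # x > 10 and x <= 50: the search space drops from 2001 values to 40.
    iv = (-1000, 1000)
    for op, c in [(">", 10), ("<=", 50)]:
        iv = narrow(iv, op, c)
    print(iv)
    ```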


    Submitted on March 7, 2018; Revised on April 4, 2018; Accepted on May 8, 2018
    References: 14
    Model Test on Mechanical Properties of Glass Fiber Reinforced Plastic Mortar Pipes Culvert Under High Embankment
    Huawang Shi, Hang Yin, and Lianyu Wei
    2018, 14(6): 1352-1359.  doi:10.23940/ijpe.18.06.p27.13521359
    Abstract    PDF (791KB)   
    References | Related Articles

    Glass fiber reinforced plastic mortar (FRPM) pipe is a new type of composite material. In order to obtain design and construction parameters and to determine the cause of cracking in FRPM pipe culverts under high fills, a model test of the mechanical properties of an FRPM culvert was performed. The results show that the laws of circumferential tension and compression deformation are consistent, and that the stress in the FRPM and the vertical soil pressure grow nonlinearly as the filling height increases. Simulating the maximum filling height of 12 m: under axial loading, the circumferential stress and strain of the culvert are 6770 kPa and 2109×10⁻⁶, and the maximum soil pressure, 322 kPa, occurs at the top of the culvert; under eccentric loading, the maximum circumferential stress and strain are 6092 kPa and 1898×10⁻⁶, and the maximum soil pressure is 183 kPa.


    Submitted on February 26, 2018; First revised on March 28, 2018; Second revised on April 28, 2018; Accepted on May 10, 2018
    References: 24
    A Data Mining Algorithm based on Relevant Vector Machine of Cloud Simulation
    Wuqi Gao, Gang Li, and Hui Liu
    2018, 14(6): 1360-1364.  doi:10.23940/ijpe.18.06.p28.13601364
    Abstract    PDF (522KB)   
    References | Related Articles

    Data mining analysis of tactical communication network simulation data suffers from long running times and memory overflow. The relevance vector machine (RVM) is a data mining algorithm that works well on small samples but involves a large amount of computation. Building on Hadoop, an open-source distributed storage and computing platform, the author designs an RVM data mining algorithm based on cloud computing. By summing the distributions mined from small samples in sequence, the algorithm can, in some cases, reflect the laws found by large-sample data mining. The algorithm is then implemented and studied empirically, supporting the analysis of massive cloud simulation data.


    Submitted on March 29, 2018; Revised on April 12, 2018; Accepted on May 23, 2018
    References: 14
    An Advertising Spreading Model for Social Networks
    Jing Yi, Peiyu Liu, Wenfeng Liu, Jingang Ma, and Tianxia Song
    2018, 14(6): 1365-1373.  doi:10.23940/ijpe.18.06.p29.13651373
    Abstract    PDF (628KB)   
    References | Related Articles

    The SIR spreading model cannot fully reflect the regularities of information propagation in social networks. In this paper, based on an analysis of how the propagation mechanism and network parameters influence advertising spreading in social networks, an advertising spreading model for social networks is established and the corresponding dynamic evolution equations are given. Meanwhile, because there are currently no unified criteria for evaluating the validity of spreading models, this paper puts forward the AEI, an advertising effectiveness index used to evaluate and analyze the effectiveness of spreading models. The simulation results demonstrate that the proposed model correctly reflects the trend of advertising spreading in social networks and accurately describes the spreading process; the validity of the model is also verified.
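
    The paper's evolution equations are not reproduced here; as a stand-in, the sketch integrates the classical SIR system that the model extends, using scipy's solve_ivp with illustrative rates.

    ```python
    # Integrate the classical SIR dynamics: S' = -bSI, I' = bSI - gI, R' = gI.
    from scipy.integrate import solve_ivp

    def sir(t, y, beta=0.3, gamma=0.1):
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    sol = solve_ivp(sir, (0, 100), [0.99, 0.01, 0.0], max_step=1.0)
    print(sol.y[:, -1])  # final susceptible/infected/recovered fractions
    ```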


    Submitted on February 27, 2018; Revised on April 3, 2018; Accepted on May 12, 2018
    References: 21
    Analysis of Atmospheric Transmission Characteristics on the Bit Error Rate of Atmospheric Laser Communication System
    Jin Guo, Shao Li, and Qinkun Xiao
    2018, 14(6): 1374-1381.  doi:10.23940/ijpe.18.06.p30.13741381
    Abstract    PDF (586KB)   
    References | Related Articles

    Atmospheric transmission characteristics affect information transmission in atmospheric laser communication systems, increasing the bit error rate and reducing transmission reliability. Under atmospheric attenuation, a bit error rate model is established from the relation between atmospheric transmittance, transmission distance, and atmospheric visibility. Under atmospheric turbulence, a bit error rate model is built from the relationship between the probability density of the light intensity and the turbulence strength. When atmospheric attenuation and atmospheric turbulence carry different weights in the atmospheric channel, a new bit error rate model for the atmospheric laser communication system is constructed by analyzing how both effects contribute to the bit error rate. Calculation and analysis show that atmospheric turbulence is the main factor influencing the bit error rate, while the impact of atmospheric attenuation is also not negligible.
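
    A worked sketch of a BER calculation under pure attenuation, assuming Beer's-law transmittance and BER = ½·erfc(Q/√2) for on-off keying; all constants are illustrative, not the paper's calibrated model.

    ```python
    # BER of an OOK link versus distance under a fixed attenuation coefficient.
    import math

    def ber_ook(p_tx_mw, atten_db_per_km, dist_km, q_per_mw=1.0):
        loss_db = atten_db_per_km * dist_km
        p_rx = p_tx_mw * 10 ** (-loss_db / 10)   # received power, mW
        q = q_per_mw * p_rx                       # assume Q scales with power
        return 0.5 * math.erfc(q / math.sqrt(2))

    for d in (0.5, 1.0, 2.0):
        print(d, ber_ook(10.0, 3.0, d))           # BER grows with distance
    ```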


    Submitted on March 2, 2018; Revised on April 9, 2018; Accepted on May 20, 2018
    References: 12
Online ISSN 2993-8341
Print ISSN 0973-1318