Volume 14, No 9, September 2018
  Coding Method for HEVC System based on Homogeneity Region Classification Analysis
    Qiuwen Zhang and Kunqiang Huang
    2018, 14(9): 1937-1946.  doi:10.23940/ijpe.18.09.p1.19371946
    High Efficiency Video Coding (HEVC) employs a flexible coding unit (CU) partitioning pattern and advanced prediction modes, which contribute to video coding efficiency. It adopts a rate distortion optimization (RDO) method to select coding parameters. Although these advanced coding technologies improve HEVC encoder performance, they also increase computational complexity: in both the CU partitioning and the prediction mode decision process, HEVC must use the RDO method to calculate the rate-distortion (RD) cost of all candidates, so optimal candidate selection carries large computational complexity in the original HEVC. An early termination method is therefore important for omitting complex RD calculations. In this paper, a region classification method based on motion diversity is proposed to stop CU partitioning and the mode decision process early. In the proposed method, if a CU is classified into a smooth region, further depth-level partitioning is stopped and the intra modes in the candidate list are limited. The fast coding algorithm can thereby save substantial coding time and reduce computational complexity. Extensive experiments show that the proposed method saves 32% of coding time on average with a negligible 0.049 dB PSNR drop compared with the original HEVC encoder.
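    As a rough illustration of the kind of early-termination test described here (not the authors' actual classifier), a minimal Python sketch that flags a CU as a smooth region when its motion vectors are nearly uniform; the feature choice and threshold are hypothetical:

        import numpy as np

        def is_smooth_region(motion_vectors, thresh=1.0):
            # Hypothetical homogeneity test: low motion-vector variance marks a smooth CU.
            mv = np.asarray(motion_vectors, dtype=float)
            return mv.var(axis=0).sum() < thresh

        cu_mvs = [(0.1, 0.0), (0.2, 0.1), (0.1, 0.1), (0.2, 0.0)]  # nearly uniform motion
        if is_smooth_region(cu_mvs):
            print("terminate CU splitting early; restrict the intra-mode candidate list")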
    Formal Verification of Helicopter Automatic Landing Control Algorithm in Theorem Prover Coq
    Xi Chen and Gang Chen
    2018, 14(9): 1947-1957.  doi:10.23940/ijpe.18.09.p2.19471957
    The helicopter flight control system plays an important role in helicopter flight and is known as the “brain” of the helicopter. Only when the system is verified correct can the helicopter fly safely and steadily. This paper formalizes and verifies the major part of an automatic landing control algorithm in the higher-order theorem prover Coq. The Z transform is currently one of the most important tools for flight control system analysis. This paper formally describes the definition of the Z transform, verifies several of its properties (i.e., the homogeneity, uniformity, linearity, and complex shift properties) to extend the system analysis capabilities of theorem proving, and lays the foundation for further formalization of the helicopter flight control system.
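    For reference, the standard unilateral Z transform and two of the properties mentioned above, stated in conventional notation (the paper formalizes such statements in Coq):

        X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=0}^{\infty} x[n]\, z^{-n}

        \mathcal{Z}\{a\,x[n] + b\,y[n]\} = a\,X(z) + b\,Y(z) \quad \text{(linearity)}

        \mathcal{Z}\{x[n-k]\} = z^{-k}\,X(z) \quad \text{(shift of a causal sequence)}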
    Autonomic Cloud Resource Allocation Method based on LS-SVM and Virtual Allocation
    Chenyang Zhao and Junling Wang
    2018, 14(9): 1958-1967.  doi:10.23940/ijpe.18.09.p3.19581967
    Current cloud resource allocation cannot be performed autonomously. When a cloud server overloads, the task queue continues to grow, which leads to delayed or failed task execution. To solve this problem, an autonomic cloud resource allocation method is proposed in this paper. For each type of task, Least Squares Support Vector Machine (LS-SVM) regression is used to predict the number of upcoming tasks in the next period by analyzing a time series of historical task counts. Meanwhile, the queue lengths of the various task types are periodically monitored during each period. Then, according to the predicted task numbers and the real-time queue lengths, Virtual Allocation (VA) is used to autonomously adjust resource allocation for the various task types during execution. Experiments show that the LS-SVM prediction is accurate and VA is effective, improving the load of cloud servers and reducing task completion time.
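    A minimal sketch of LS-SVM regression for this kind of next-period prediction, assuming an RBF kernel and synthetic data; the dual system below is the standard LS-SVM formulation, and all parameters and the sliding-window setup are illustrative:

        import numpy as np

        def rbf_kernel(A, B, sigma=1.0):
            # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
            # Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
            return sol[0], sol[1:]  # bias b, dual coefficients alpha

        def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
            return rbf_kernel(X_new, X_train, sigma) @ alpha + b

        # Toy usage: predict the next value of a task-arrival series from a sliding window.
        series = np.sin(np.linspace(0, 8 * np.pi, 200)) * 50 + 100  # synthetic task counts
        w = 8
        X = np.array([series[i:i + w] for i in range(len(series) - w)])
        y = series[w:]
        b, alpha = lssvm_fit(X[:-1], y[:-1])
        print("predicted:", lssvm_predict(X[:-1], b, alpha, X[-1:])[0], "actual:", y[-1])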
    Fast 3D-HEVC Coding based on Support Vector Machine
    Hanqing Ding, Shuaichao Wei, Yan Zhang, and Qiuwen Zhang
    2018, 14(9): 1968-1974.  doi:10.23940/ijpe.18.09.p4.19681974
    3D high efficiency video coding (3D-HEVC) is the latest video compression standard for multi-view video systems. In this paper, a fast coding method is proposed that uses machine learning to alleviate the complexity of the 3D-HEVC system while maintaining RD performance. The core of our algorithm is to use a support vector machine (SVM) to analyze the motion properties of texture video, decide where variable mode prediction is needed, and skip unnecessary modes early for a given coding unit (CU). Experimental results confirm that the proposed method can greatly reduce the computational complexity of the 3D-HEVC system with only a small BD-rate loss for the texture views and the synthesized views.
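    For illustration only, a small scikit-learn sketch of an SVM gatekeeper for mode skipping; the features and labels are hypothetical stand-ins for the CU motion statistics the paper analyzes:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Hypothetical per-CU features: motion-vector variance, residual energy, neighbor depth.
        X = rng.random((500, 3))
        # Hypothetical label: 1 = "complex CU, test all modes", 0 = "simple CU, skip extra modes".
        y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

        clf = SVC(kernel="rbf").fit(X, y)
        cu = np.array([[0.1, 0.2, 0.3]])
        if clf.predict(cu)[0] == 0:
            print("skip rarely used prediction modes for this CU")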
    Hyperspectral Image Adaptive Denoising Method based on Band Selection and Elite Atomic Union Dictionary Learning
    Xiaodong Yu, Hongbin Dong, Tian Xia, and Xiaohui Li
    2018, 14(9): 1975-1984.  doi:10.23940/ijpe.18.09.p5.19751984
    The noise distribution of each band in a hyperspectral image is complex, and it is difficult for traditional denoising methods to achieve the desired effect. To address this problem, a new hyperspectral denoising method is proposed based on band selection combined with elite atomic joint dictionary learning. Firstly, the original hyperspectral data is reduced by band selection while retaining the main physical information of the spectrum. Then, K-SVD dictionary learning is performed on each band of the selected image. Finally, elite atoms are selected from each band's learned dictionary to form a joint dictionary, yielding a dictionary-learning denoising algorithm whose dictionary length adapts to the data; this algorithm is applied to denoise noisy hyperspectral images. Experiments on hyperspectral remote sensing images show that the peak signal-to-noise ratio (PSNR) of the denoised image is improved compared with CFS, CFS-SRNS, and CFS-KSVD.
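    The PSNR figure of merit used here is standard; a small Python helper for 8-bit images:

        import numpy as np

        def psnr(clean, denoised, peak=255.0):
            # PSNR = 10 * log10(peak^2 / MSE); higher is better.
            mse = np.mean((clean.astype(float) - denoised.astype(float)) ** 2)
            return 10 * np.log10(peak ** 2 / mse)

        clean = np.full((8, 8), 128, dtype=np.uint8)
        noisy = clean + np.random.default_rng(0).integers(-5, 6, clean.shape)
        print(round(psnr(clean, noisy), 2), "dB")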
    Software Trustworthiness Metric Model based on Component Weight
    Dujuan Huang, Yanfang Ma, Haiyu Pan, and Mengyue Wang
    2018, 14(9): 1985-1996.  doi:10.23940/ijpe.18.09.p6.19851996
    In recent years, the component-based development pattern has become increasingly popular with developers. Generally, a software system is made up of multiple components, and the trustworthiness of the system depends on the trustworthiness of every component. This paper studies the trustworthiness of systems from the component view. Firstly, all components in the system are divided into critical and non-critical ones according to their importance, and a weight value is assigned to each component. For every basic construction between components, a trustworthiness metric model of the subsystem is proposed by composing the trustworthiness of the components. Secondly, we prove that these metric models satisfy metric criteria such as monotonicity, non-negativity, acceleration, sensitivity, and substitution. Furthermore, the trustworthiness metric model of the whole system is derived from the trustworthiness metric models of the subsystems. Finally, an algorithm is designed to compute the trustworthiness metric of the whole system, and an example is given to verify the reasonableness of the metric model.
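    The abstract does not give the composition formula; purely as an illustration of weight-based aggregation (an assumption, not the authors' model), one common choice is a weighted product of component trustworthiness values:

        def system_trustworthiness(values, weights):
            # Hypothetical weighted-product aggregation: T = prod(T_i ** w_i), weights sum to 1.
            assert abs(sum(weights) - 1.0) < 1e-9
            t = 1.0
            for v, w in zip(values, weights):
                t *= v ** w
            return t

        # Critical components get larger weights, so their trustworthiness dominates.
        print(system_trustworthiness([0.99, 0.95, 0.90], [0.5, 0.3, 0.2]))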
    STAMP-based Hazard Analysis for Computer-Controlled Systems using Petri Nets
    Danjiang Zhu, Shuzhen Yao, and Chonghao Xu
    2018, 14(9): 1997-2007.  doi:10.23940/ijpe.18.09.p7.19972007
    Systems-Theoretic Accident Model and Processes (STAMP) is a novel accident causality model that has been used in various areas. Most STAMP-based hazard analysis methods are ad hoc, without rigorous procedures, and the process model used in STAMP is too simple to identify hazardous control actions as causes. Petri nets, which have been used to graphically model computer-controlled systems and resolve system safety issues, can make hazard analysis with STAMP more effective. To identify hazardous control actions in STAMP-based hazard analysis, extended Petri nets are proposed in this paper to model the control processes in the system control structure. Runtime control action failures are considered in the reachability graph for the hazard analysis. Furthermore, the types of hazardous control actions are studied and analyzed in the extended reachability graph.
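    To fix ideas, a toy Petri net with a hypothetical control loop and a search over its reachability graph; this is a drastically simplified stand-in for the paper's extended nets:

        # Minimal Petri net: places hold token counts; a transition fires when its
        # input places have enough tokens. Names and structure are hypothetical.
        transitions = {
            "send_cmd": ({"idle": 1}, {"waiting": 1}),
            "ack":      ({"waiting": 1}, {"idle": 1}),
            "timeout":  ({"waiting": 1}, {"hazard": 1}),  # runtime control-action failure
        }

        def fire(marking, pre, post):
            if all(marking.get(p, 0) >= n for p, n in pre.items()):
                m = dict(marking)
                for p, n in pre.items():
                    m[p] = m[p] - n
                for p, n in post.items():
                    m[p] = m.get(p, 0) + n
                return m
            return None

        def reachability(m0):
            seen, frontier = {tuple(sorted(m0.items()))}, [m0]
            while frontier:
                m = frontier.pop()
                for pre, post in transitions.values():
                    m2 = fire(m, pre, post)
                    if m2 and tuple(sorted(m2.items())) not in seen:
                        seen.add(tuple(sorted(m2.items())))
                        frontier.append(m2)
            return seen

        for marking in sorted(reachability({"idle": 1})):
            print(dict(marking))  # markings containing "hazard" expose unsafe states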
    An Improved Text Sentiment Analysis Algorithm based on TF-Gini
    Songtao Shang, Yong Gan, and Huaiguang Wu
    2018, 14(9): 2008-2014.  doi:10.23940/ijpe.18.09.p8.20082014
    With the development of social media, more and more people prefer to express their opinions on the Internet, so mining people’s emotional attitudes has become an important area of research. Text sentiment analysis is a method for mining emotional attitudes from texts and an effective tool for grasping Internet users’ emotional tendencies. Naïve Bayes is a reliable text classification algorithm whose effectiveness has been confirmed by many researchers, and feature weighting is its most important problem. Hence, this paper proposes an improved feature weighting algorithm, TF-Gini, to enhance the performance of Naïve Bayes. The experimental results demonstrate the effectiveness of the improved algorithm.
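    The abstract does not spell out the weighting formula; as a hedged illustration, here is a Gini-style class-purity weight scaled by term frequency — one common TF-Gini formulation, assumed rather than taken from the paper:

        from collections import Counter

        def gini_weight(docs, labels, term):
            # Purity Gini(t) = sum_c P(c|t)^2; equals 1 when t occurs in only one class.
            classes = [c for d, c in zip(docs, labels) if term in d]
            if not classes:
                return 0.0
            counts = Counter(classes)
            total = sum(counts.values())
            return sum((n / total) ** 2 for n in counts.values())

        docs = [{"good", "great"}, {"bad", "awful"}, {"good", "fine"}]
        labels = ["pos", "neg", "pos"]
        tf = 2  # term frequency of "good" in some document
        print(tf * gini_weight(docs, labels, "good"))  # TF-Gini style weight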
    Reliability Simulation in Cloud Computing System
    Sa Meng, Xiwei Qiu, Liang Luo, Han Xu, and Meilian Lei
    2018, 14(9): 2015-2020.  doi:10.23940/ijpe.18.09.p9.20152020
    With the rapid growth in the number of users, the reliability of cloud systems has become a challenging issue in industry and academia. Many researchers have studied the reliability mechanisms of cloud computing systems and proposed reliability-aware methods to achieve resource integration and improve system reliability. However, various hardware and software failures inevitably occur and cannot always be found and repaired in a timely manner. Moreover, since most studies cannot observe the background operation mechanisms of the cloud system, research on cloud computing reliability faces significant problems. To address this, we first extract the key features that can be used to increase system reliability in cloud computing architectures. Secondly, we present an architecture framework for reliability simulation and analyze four types of common system failures: hardware failures, virtual machine failures, data inconsistency failures, and service timeout failures. Finally, experiments based on a set of realistic configurations and operation runtimes are implemented as an extension of the well-known cloud simulation tool CloudSim, to illustrate how these failures affect the reliability of cloud computing systems and how different resource scheduling algorithms handle them.
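    A minimal sketch of the kind of failure injection such a simulation performs, assuming exponentially distributed inter-failure times (a Poisson failure process); the MTBF values below are invented for illustration, not taken from the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical mean times between failures (hours) for the four failure types analyzed.
        mtbf = {"hardware": 2000, "vm": 500, "data_inconsistency": 800, "service_timeout": 300}
        horizon = 24 * 30  # simulate one month

        for kind, m in mtbf.items():
            t, n = 0.0, 0
            while True:
                t += rng.exponential(m)  # exponential inter-arrival times
                if t > horizon:
                    break
                n += 1
            print(f"{kind}: {n} failures in 30 days")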
    Prioritizing-based Message Scheduling for Reliable Unmanned Aerial Vehicles Ad Hoc Network
    Jun Li, Ming Chen, Fei Dai, and Huibing Wang
    2018, 14(9): 2021-2029.  doi:10.23940/ijpe.18.09.p10.20212029
    An Unmanned Aerial Vehicles Ad hoc Network (UAANET) consists of multiple UAVs connected through multi-hop wireless links, and it can execute missions more efficiently than a single UAV. Due to its dynamic topology, fast-moving nodes, and unstable radio channel quality, message delivery in a UAANET often suffers from increased delays and packet loss. In particular, when command messages and coordination messages are lost or significantly delayed, the result can be uncontrolled UAVs, failed task execution, or even the crash of the UAANET. For the reliability and performance of a UAANET, the message scheduling scheme must therefore be taken into account. In this paper, we propose a Prioritizing-based Message Scheduling algorithm (PMS) to provide reliable transmission of command and coordination messages. The proposed algorithm assigns different priorities to messages based on their content and dynamic context factors, and then schedules them accordingly. Simulation results verify that PMS can substantially increase the reliability of a UAANET.
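    As a hedged sketch of content-plus-context prioritization (the composite weighting below is invented, not the paper's), a priority queue in Python:

        import heapq, itertools

        # Hypothetical priority model: command < coordination < telemetry (lower = sent first),
        # adjusted by a context factor such as remaining time-to-deadline.
        BASE = {"command": 0, "coordination": 1, "telemetry": 2}
        counter = itertools.count()
        queue = []

        def enqueue(msg_type, deadline_s, payload):
            priority = BASE[msg_type] * 100 + deadline_s  # context-aware composite priority
            heapq.heappush(queue, (priority, next(counter), payload))

        enqueue("telemetry", 50, "pos report")
        enqueue("command", 5, "change waypoint")
        enqueue("coordination", 10, "formation update")
        while queue:
            _, _, payload = heapq.heappop(queue)
            print("send:", payload)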
    A Markov Error Propagation Model for Component-based Software Systems
    Zijing Tian, Yichen Wang, and Pengyang Zong
    2018, 14(9): 2030-2039.  doi:10.23940/ijpe.18.09.p11.20302039
    In this paper, we propose a Markov chain-based error propagation model to analyze the reliability of component-based software systems and to take measures to make the critical components safer. Because it is difficult to test a whole component-based system, we apply an error propagation model to evaluate the reliability of the system, with parameters obtained from preliminary data on existing components and from integration testing between pairs of connected components. The main parameters required in our Markov model are the error probability of each component, the error tolerance probability, and the error propagation probability for every pair of connected components. Our model is applied to compute the reliability of the system, find the most suspicious component during debugging, and protect the critical components. Finally, we simulate these three applications on three different systems in MATLAB.
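    A toy numpy illustration of error propagation through connected components, using the fundamental matrix of an absorbing Markov chain; all probabilities are hypothetical and the analysis is a simplified stand-in for the paper's model:

        import numpy as np

        # Hypothetical 3-component system. Q[i][j] = probability an error in component i
        # propagates (untolerated) to component j; errors not propagated are absorbed.
        Q = np.array([[0.0, 0.6, 0.0],
                      [0.0, 0.0, 0.5],
                      [0.1, 0.0, 0.0]])
        p_err = np.array([0.01, 0.02, 0.005])  # per-component error probabilities

        # Fundamental matrix of the absorbing chain: expected visits before absorption.
        N = np.linalg.inv(np.eye(3) - Q)
        exposure = p_err @ N  # expected error "load" each component sees
        print("most suspicious component:", int(np.argmax(exposure)), exposure)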
    Research and Development of Blockchain Security
    Zengyu Cai, Chunfeng Du, Yong Gan, Jianwei Zhang, and Wanwei Huang
    2018, 14(9): 2040-2047.  doi:10.23940/ijpe.18.09.p12.20402047
    Blockchain is a distributed data structure that integrates security, reliability, and decentralization, and it is the central supporting technology of emerging digital cryptocurrencies. With the emergence of Bitcoin, blockchain as its underlying support has begun to attract great attention and has been initially applied in the fields of finance, stocks, and securities. This article briefly describes the classification, architecture, and key technologies of blockchain. At the same time, it briefly analyzes key management, the access control mechanism, the DDoS attack defense mechanism, and the fragment information leakage prevention mechanism within the blockchain security mechanism. Finally, future development trends of blockchain are predicted, with the aim of helping further studies on blockchain and its security.
    Innate-Adaptive Response and Memory based Artificial Immune System for Dynamic Optimization
    Weiwei Zhang, Menghua Zhang, Weizheng Zhang, Yinghui Meng, and Huaiguang Wu
    2018, 14(9): 2048-2055.  doi:10.23940/ijpe.18.09.p13.20482055
    Artificial immune systems (AIS) have been widely applied to optimization in static situations. Dynamic optimization problems (DOPs), due to their dynamism, pose particular challenges, and designed algorithms must overcome these challenges to achieve efficient results. In this paper, a new AIS-based algorithm denoted IAMAIS is proposed, in which the innate and adaptive responses of the immune system are elaborated: the innate response is introduced to maintain population diversity and implement global search, while the adaptive immune response is developed to locate optima locally. Moreover, a memory mechanism is presented to preserve found optima and to keep tracking them when environmental changes happen. Experiments were run on the best-known benchmark, the Moving Peaks Benchmark. Simulation results show that IAMAIS is competitive for DOPs.
    A Survey of Software Trustworthiness Measurement Validation
    Hongwei Tao and Yixiang Chen
    2018, 14(9): 2056-2065.  doi:10.23940/ijpe.18.09.p14.20562065
    Software trustworthiness measurement is an essential research subject in trustworthy software. Software trustworthiness measurement validation can show whether a measurement is adequate for measuring software trustworthiness, and there are many research results in this area. In this paper, we survey the theoretical and empirical validation of software trustworthiness measurement. The state of research on theoretical validation is summarized from the perspectives of validation based on measurement theory and validation based on axiomatic approaches, and state-of-the-art empirical validation methods are studied through case studies, surveys, and experiments. Lastly, we analyze the challenges faced in software trustworthiness measurement validation.
    From Predicate Testing to Identify Fault Location for Safety-Critical Software
    Yong Wang, Qiansong Wang, Guifu Lu, Zhiqiu Huang, Bingwu Fang, Yong Li, and Weiwei Li
    2018, 14(9): 2066-2075.  doi:10.23940/ijpe.18.09.p15.20662075
    Statistical fault localization is one of the essential tasks of program debugging, and it has been shown that the evaluation history of predicates may disclose important clues about the root cause of failures. However, especially for safety-critical software, using the same granularity to measure simple predicates and complex compound predicates introduces evaluation bias. Intuitively, we should use fine-grained predicates to evaluate the suspiciousness of complex compound predicates and thereby reduce this bias. In this paper, we propose a novel fault localization technique that moves from predicate testing to identifying fault locations. Based on the predicate fault model, we first generate constraint sets for each predicate and then calculate the suspiciousness of predicates by evaluating their constraint sets. Finally, we sort the suspicious predicates by their suspiciousness. Our preliminary results show that our approach can significantly improve the absolute ranking of faulty predicates.
    Hierarchical Bayesian Reliability Analysis of Binomial Distribution based on Zero-Failure Data
    Shixiao Xiao and Haiping Ren
    2018, 14(9): 2076-2082.  doi:10.23940/ijpe.18.09.p16.20762082
    The aim of this paper is to develop a new hierarchical Bayesian estimation method, under the symmetric entropy loss function, for the reliability of the binomial distribution. With the rapid development of manufacturing techniques, some electronic products are highly reliable, so zero-failure data often arise when they are put through censored lifetime tests; based on such zero-failure data, reliability analysis is very important for manufacturing. The hierarchical Bayesian estimator is regarded as a robust estimation method, but many existing robust Bayes estimators are complex and difficult to use in practice. The contribution of this article is an easy hierarchical Bayesian estimator for the reliability of the binomial distribution when the reliability has a negative log-gamma prior distribution. Finally, a practical example is provided to show the feasibility and robustness of different estimators.
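    For intuition only (the paper uses a negative log-gamma prior and symmetric entropy loss, not the conjugate prior below): with reliability R, zero failures in n independent trials have likelihood R^n, so a Beta(a, b) prior updates as

        \pi(R \mid \text{0 failures in } n \text{ trials}) \propto R^{n}\,R^{a-1}(1-R)^{b-1} = R^{n+a-1}(1-R)^{b-1},

    i.e., the posterior is Beta(n + a, b), with posterior mean (n + a)/(n + a + b).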
    Performability Modeling for Cloud Service with Check-Pointing Mechanism Considering Hardware and Software Failures
    Xiwei Qiu, Liang Luo, Sa Meng, and Xiaochuan Tang
    2018, 14(9): 2083-2089.  doi:10.23940/ijpe.18.09.p17.20832089
    Cloud service performance is an important metric that must be considered in detail. Most existing research studies various methods for evaluating this metric; however, these approaches are inadequate because they do not take into account the dynamic performance changes caused by reliability factors. In fact, both software failures of a virtual machine (VM) and hardware failures of a server inevitably interrupt the execution of a cloud service and eventually result in more time being spent completing it. Meanwhile, the check-pointing mechanism is an important fault-tolerance technique that is widely adopted to handle software failures. In this paper, we present a joint modeling approach combining semi-Markov processes and the Laplace-Stieltjes transform to analyze the reliability-performance correlation for cloud services that adopt the check-pointing fault recovery mechanism. Finally, we present a recursive method to evaluate the expected service time.
    A Study of Applying Fault-based Genetic-Like Programming Approaches to Automatic Software Fault Corrections
    Chia-Hao Lee, Chin-Yu Huang, and Tzu-Yang Lin
    2018, 14(9): 2090-2104.  doi:10.23940/ijpe.18.09.p18.20902104
    Correcting software bugs automatically is challenging because the process poses many uncertainties. As the size and complexity of software increase, manually correcting software bugs becomes very difficult, so automatic software repair has become increasingly essential. Genetic programming (GP) is one method for addressing this problem, and research in recent years has applied it to find ways to repair faulty programs. Nevertheless, most of the variants generated by GP are not precise in capturing repair solutions. In this paper, we propose a fault-based genetic-like programming approach that heuristically searches all possible variants as they grow with the number of modifications. Our method is able to find the best repairs for programs with fewer faults, faster than genetic programming; the cost is that heuristically searching for suitable repairs is time-consuming, so we have also optimized our approach to speed up performance. In this study, our approach was used to repair faulty C programs, and the results were compared with those generated by genetic programming. The results show that our approach successfully repaired faulty programs of up to 18,000 lines of code when the number of program faults was less than two.
    Using Cross-Entropy Value of Code for Better Defect Prediction
    Xian Zhang, Kerong Ben, and Jie Zeng
    2018, 14(9): 2105-2115.  doi:10.23940/ijpe.18.09.p19.21052115
    Defect prediction is meaningful because it can assist software inspection by predicting defective code locations, improving software reliability. Many software features have been designed for defect prediction models to identify potential bugs, but no single feature set yet performs well in most cases. To improve defect prediction, this paper proposes a new code feature, the cross-entropy of the sequence of a code’s abstract syntax tree nodes (CE-AST), and develops a neural language model for measuring it. To evaluate the effectiveness of CE-AST, we first investigate its discrimination of defect-proneness. Experiments on 12 Java projects show that CE-AST is more discriminative than 45% of twenty widely used traditional features. Furthermore, we investigate CE-AST’s contribution to defect prediction: combined with different traditional feature suites to feed prediction models, CE-AST brings average performance improvements of 4.7% in Precision, 2.5% in Recall, and 3.5% in F1.
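    Cross-entropy here measures how "surprising" a token sequence is under a language model; a minimal sketch with a hypothetical unigram model standing in for the paper's neural model over AST nodes:

        import math

        def cross_entropy(tokens, prob):
            # CE(s) = -(1/N) * sum(log2 p(t_i)); higher = more "surprising" code.
            logs = [math.log2(prob(t)) for t in tokens]
            return -sum(logs) / len(logs)

        # Hypothetical unigram stand-in for a trained language model over AST node types.
        freq = {"MethodDecl": 0.05, "If": 0.1, "Assign": 0.3, "RareNode": 0.001}
        total = sum(freq.values())
        p = lambda t: freq.get(t, 1e-4) / total
        print(cross_entropy(["MethodDecl", "If", "Assign"], p))
        print(cross_entropy(["MethodDecl", "RareNode", "RareNode"], p))  # higher CE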
    An Improved Tensor Decomposition Model for Recommendation System
    Wenqian Shang, Kaixiang Wang, and Junjie Huang
    2018, 14(9): 2116-2126.  doi:10.23940/ijpe.18.09.p20.21162126
    With the arrival of the big data age, traditional recommendation algorithms cannot fully exploit the contextual information behind users’ decisions and cannot provide satisfactory recommendations. With the development of tagging systems, using multidimensional context-aware data to provide accurate recommendations has become a hot topic. At present, the more advanced approach is to use recommendation algorithms based on tensor decomposition to mine the user-item-tag ternary relations. This paper proposes the K-Means and Time-Context based Tensor Decomposition model (KTTD). Datasets are first clustered with K-Means to improve data aggregation and algorithm efficiency. The time context of the recommendation situation is then mined, and implicit feedback in temporal context perception is used as one dimension of the tensor to establish the tensor decomposition model, improving the efficiency and quality of recommendation. Finally, we verify the model with experiments, and the results show that the improved algorithm is more accurate than traditional recommendation algorithms.
    A Solution to Make Trusted Execution Environment More Trustworthy
    Xiao Kun and Luo Lei
    2018, 14(9): 2127-2136.  doi:10.23940/ijpe.18.09.p21.21272136
    A Trusted Execution Environment (TEE) is an execution environment that resides in connected devices and ensures that sensitive data are stored, processed, and protected in isolation from a general-purpose OS such as Android. The TrustZone TEE solution achieves a medium protection level at comparatively low cost, so it is widely used. However, related research shows that the TrustZone TEE solution has security defects; for example, the hardware isolation provided by TrustZone is insufficient. In this paper, we propose a security enhancement scheme based on TEE. According to the existing problems of the TrustZone TEE scheme, corresponding protection mechanisms are established to comprehensively enhance the reliability of connected devices. In our scheme, the TEE is used alongside other security technologies, such as a secure element, a microkernel, and kernel real-time protection, to provide multi-layered defense. We introduce a secure element as the root of trust (RoT) of connected devices; it stores sensitive data such as the first-stage bootloader, various secret keys, and the certificate of the second-stage bootloader, and it also executes sensitive operations such as encryption and decryption.
    Research on the Cache Replacement Algorithm of Universal Network based on Cooling Mechanism
    Yuan Feng, Lu Wang, Jianchun Li, Chunfeng Du, Nana Li, and Jianwei Zhang
    2018, 14(9): 2137-2144.  doi:10.23940/ijpe.18.09.p22.21372144
    The universal network is a typical identity/location separation network and one of the important trends for next-generation networks. This paper analyzes the identity mapping query process of the universal network and, in view of the main delay problems occurring in that process, puts forward a pre-push (prefetching) scheme. On this basis, an access router cache partition scheme is given, and an access router cache replacement algorithm based on a cooling mechanism is proposed. Finally, a detailed simulation experiment validates that the combination of prefetching and cooling-based cache partition replacement is effective. The experimental results show that the prefetching scheme for universal network identity mappings can effectively shorten the communication delay of mobile terminals, improving the mobility support of the universal network; furthermore, the cooling-based access router cache replacement algorithm can improve the routing cache hit ratio and thus shorten routing delay.
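    As a hedged sketch of what a cooling-based replacement policy can look like (the temperature model and decay factor are invented, not the paper's), each cached identity mapping is heated on hits and cooled over time, and the coldest entry is evicted:

        class CoolingCache:
            def __init__(self, capacity, cooling=0.9):
                self.capacity, self.cooling, self.table = capacity, cooling, {}

            def tick(self):
                # Periodic cooling: every entry's temperature decays.
                for k in self.table:
                    self.table[k] = (self.table[k][0] * self.cooling, self.table[k][1])

            def get(self, key):
                if key in self.table:
                    temp, val = self.table[key]
                    self.table[key] = (temp + 1.0, val)  # heat up on hit
                    return val
                return None

            def put(self, key, val):
                if len(self.table) >= self.capacity and key not in self.table:
                    coldest = min(self.table, key=lambda k: self.table[k][0])
                    del self.table[coldest]  # evict the coldest mapping
                self.table[key] = (1.0, val)

        cache = CoolingCache(2)
        cache.put("id:A", "loc:1"); cache.put("id:B", "loc:2")
        cache.get("id:A"); cache.tick()
        cache.put("id:C", "loc:3")  # evicts the colder of A/B
        print(sorted(cache.table))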
    A Method of Encoding Coordinates on the Paper for Digitizing Handwriting
    Qingcheng Li, Guangming Zheng, Ye Lu, and Heng Cao
    2018, 14(9): 2145-2152.  doi:10.23940/ijpe.18.09.p23.21452152
    Paper has been used for more than a thousand years, and people are thoroughly accustomed to it; however, digital information is easier to share and manage, so digitizing handwriting is necessary. This article proposes a method of encoding coordinates on a two-dimensional page for digitizing handwriting, combining Anoto encoding and nCode encoding. The coding scheme is calibrated based on coordinate relations, and the feasibility of the scheme is verified through experiments.
    Measuring Surface Area of Leaf based on Multi-Angle Images
    Weizheng Zhang, Weiwei Zhang, Yan Liu, Guoqing Li, and Qiqiang Chen
    2018, 14(9): 2153-2162.  doi:10.23940/ijpe.18.09.p24.21532162
    The measurement of plant leaf area (LA) has important guiding significance for diagnosing plant growth status, yet most existing methods for measuring LA are contact measurements. This paper proposes a method to directly create a 3D model of a leaf and calculate its surface area in the natural state. Firstly, the digital camera is calibrated to obtain the camera parameters. Then, the leaf is photographed from multiple angles, and the images are processed with PhotoModeler to obtain a three-dimensional point cloud. MATLAB is used to build the 3D model of the leaf and calculate its surface area, which is validated against a reference measurement obtained with a scanner and Photoshop. The experimental results show that the proposed method measures leaves under natural conditions effectively, with an accuracy of 99%.
    HMM-based User Behavior Prediction Method in Heterogeneous Cellular Networks
    Shanshan Tu, Xinyi Huang, Yaqin Zhang, Mingyang An, Lei Liu, and Yao Huang
    2018, 14(9): 2163-2174.  doi:10.23940/ijpe.18.09.p25.21632174
    In the heterogeneous cellular networks (HCN) environment, users move between different cells, and handover management is needed to keep their network connections from being interrupted. In an HCN, adopting the same handover strategy for different cells reduces handover performance, so a reasonable handover management strategy must consider users’ movement preferences and mobility characteristics in hot spots. Aiming at these problems, this study analyzes the self-similar least-action human walk (SLAW) model and proposes a hidden Markov model (HMM) based method for perceiving user behavior in hot spots. First, users’ movement paths in hot spots are simulated based on SLAW, and user behaviors are modeled with an HMM. Then, the corresponding movement time is predicted from the user’s movement sequence. Finally, the effects of different sampling times and base station densities on behavior prediction are analyzed through simulation experiments, providing concrete parameter settings for designing a reasonable handover management plan. Meanwhile, the prediction of user movement time ensures that base stations in hot spots can make effective preparations for upcoming handover requests.
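    For background, the standard HMM forward pass scores an observed movement sequence; the two hidden states and all matrices below are hypothetical stand-ins for the trained model:

        import numpy as np

        A = np.array([[0.8, 0.2],        # state transitions (e.g., "hotspot" vs "transit")
                      [0.3, 0.7]])
        B = np.array([[0.6, 0.3, 0.1],   # emission probabilities over 3 observed cells
                      [0.1, 0.4, 0.5]])
        pi = np.array([0.5, 0.5])        # initial state distribution

        def forward_likelihood(obs):
            # Forward algorithm: alpha_t(j) = P(o_1..o_t, state_t = j).
            alpha = pi * B[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
            return alpha.sum()

        print(forward_likelihood([0, 0, 1, 2]))  # likelihood of this cell sequence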
    Adaptive Classifier based on Distance of Probabilistic Fuzzy Set for EMG Robot
    Wenjing Huang, Yaoqing Ren, Kejun Li, and Yihua Li
    2018, 14(9): 2175-2180.  doi:10.23940/ijpe.18.09.p26.21752180
    Surface electromyographic (sEMG) signals always change with the external and internal conditions of human beings. This time-varying characteristic causes the classification accuracy of fixed-parameter classifiers for EMG patterns to decrease over time. To design a control system for EMG-based artificial limbs with stable performance, it is necessary to introduce an adaptive mechanism into the classifiers for EMG patterns. In addition, there are many uncertainties in the processes of EMG signal acquisition and grasp pattern recognition. In this paper, on the basis of a distance classifier built on probabilistic fuzzy sets, we introduce an adaptive scheme into the classifiers for EMG patterns and verify its application to the classification of EMG patterns through experiments. The study shows that a self-enhancing distance classifier based on probabilistic fuzzy sets can improve recognition accuracy.
    Software Protection Algorithm based on Control Flow Obfuscation
    Yongyong Sun
    2018, 14(9): 2181-2188.  doi:10.23940/ijpe.18.09.p27.21812188
    Control flow obfuscation is a software protection technique. The traditional garbage-code control flow obfuscation algorithm suffers from uncertain obfuscation strength and extra cost. To solve this problem, a control flow obfuscation algorithm based on nested complexity is proposed. The cost introduced by obfuscation is calculated quantitatively, and the complexity of the control flow is measured by nested complexity. A knapsack decision table is constructed based on the idea of the grouped knapsack problem; considering both obfuscation strength and cost, garbage code insertion points are selected so that obfuscation strength is increased as much as possible within the cost threshold. The results show that the obfuscation strength of the proposed algorithm is higher than that of control flow obfuscation using the traditional random insertion strategy.
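    The core selection step is a knapsack decision; a minimal 0/1 knapsack sketch (strengths and costs invented for illustration) that picks insertion points maximizing total strength under a cost threshold:

        # Hypothetical insertion points: (obfuscation strength gain, cost).
        points = [(6, 3), (5, 2), (9, 5), (4, 2), (3, 1)]
        threshold = 7

        def best_insertions(points, threshold):
            # dp[c] = (best total strength within cost c, chosen point indices).
            dp = [(0, [])] * (threshold + 1)
            for i, (gain, cost) in enumerate(points):
                for c in range(threshold, cost - 1, -1):  # reverse scan: each point used once
                    cand = dp[c - cost][0] + gain
                    if cand > dp[c][0]:
                        dp[c] = (cand, dp[c - cost][1] + [i])
            return dp[threshold]

        strength, chosen = best_insertions(points, threshold)
        print("insert garbage code at points", chosen, "total strength", strength)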
    Image Stitching in Smog Weather based on MSR and SURF
    Guanghong Li, Xuande Ji, and Ming Zhang
    2018, 14(9): 2189-2196.  doi:10.23940/ijpe.18.09.p28.21892196
    Image stitching can enlarge the viewing angle and combine information from different images, and it is used in many fields, including industrial, civil, and military applications. However, smog weather is an environmental problem in our country that causes serious degradation of images, and the resulting loss of characteristic information negatively impacts the subsequent stitching process. Therefore, the smog image should first be enhanced. This paper applies the Multi-Scale Retinex (MSR) algorithm and compares it with Histogram Equalization (HE) using objective evaluation. After smog removal, the images are registered using local invariant features and the Speeded-Up Robust Features (SURF) algorithm, with the Euclidean distance used to obtain satisfactory matches. Finally, because stitching after registration may produce brightness discontinuities in the overlapping area, a progressive fade-in/fade-out blending method is used to achieve a higher-quality stitched image more quickly. Experiments and simulations show that smog images can be stitched well after smog removal.
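    MSR itself is compact: average log(I) minus log of Gaussian-blurred I over several scales. A minimal grayscale sketch with illustrative scale choices (not the paper's parameters):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def msr(image, sigmas=(5, 15, 40)):
            # Multi-Scale Retinex: mean over scales of log(I) - log(Gaussian(I)).
            img = image.astype(float) + 1.0  # avoid log(0)
            out = np.zeros_like(img)
            for s in sigmas:
                out += np.log(img) - np.log(gaussian_filter(img, sigma=s) + 1.0)
            out /= len(sigmas)
            out = (out - out.min()) / (out.max() - out.min() + 1e-9)  # stretch to [0, 1]
            return (out * 255).astype(np.uint8)

        hazy = (np.random.rand(64, 64) * 60 + 120).astype(np.uint8)  # flat, low-contrast stand-in
        print(msr(hazy).std() > hazy.std())  # MSR stretches local contrast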
    Reliable and Energy-Efficient Data Gathering in Wireless Sensor Networks via Rateless Codes and Compressed Sensing
    Xiaoxia Song, Yong Li, Ye’e Zhang, and Defa Hu
    2018, 14(9): 2197-2206.  doi:10.23940/ijpe.18.09.p29.21972206
    Data gathering with a fixed code rate in wireless sensor networks (WSNs) makes reliable recovery difficult. Compared with a fixed code rate, rateless codes can continuously send encoded symbols to the sink node until the source information is recovered, so data gathering methods based on rateless codes are effective in delivering reliable data to the sink. However, to achieve high reliability, a large amount of sensor data must be collected, which greatly increases the energy consumption of sensor nodes and the storage space of the sink nodes. Fortunately, data gathering via compressed sensing (CS) can largely reduce the number of sensor readings collected, further saving energy and storage space. This paper proposes a data gathering method that combines rateless codes and CS. The proposed method not only achieves reliable recovery but also saves data collection energy and sink-node storage space. The experimental results show that the proposed method reduces energy consumption by about 40% and storage space by about 40% compared with data gathering via LT codes, a typical rateless code.
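    For intuition, one rateless (LT-style) encoded symbol is just the XOR of a random subset of source blocks; the degree distribution below is a crude stand-in for the robust soliton distribution an actual LT code uses:

        import random

        def lt_encode_symbol(blocks, rng):
            # One LT-style encoded symbol: XOR of a randomly chosen subset of source blocks.
            d = rng.choice([1, 1, 2, 2, 3, 4])  # toy degree distribution
            idx = rng.sample(range(len(blocks)), d)
            sym = 0
            for i in idx:
                sym ^= blocks[i]
            return idx, sym  # neighbor set + payload sent toward the sink

        rng = random.Random(0)
        blocks = [0b1010, 0b0111, 0b1100, 0b0001]  # toy 4-bit sensor readings
        for _ in range(3):
            print(lt_encode_symbol(blocks, rng))   # keep sending until the sink decodes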
    Information Matrix Algorithm of Block-based Bivariate Thiele-Type Rational Interpolation
    Le Zou, Liangtu Song, Xiaofeng Wang, Qiong Zhou, Yanping Chen, and Chao Tang
    2018, 14(9): 2207-2218.  doi:10.23940/ijpe.18.09.p30.22072218
    Interpolation plays an important role in image processing, numerical computation, and engineering technology, and almost all interpolation computation is based on differences and inverse differences. This paper presents a recursive algorithm for modified bivariate block-based Thiele-type blending interpolation that handles the case where block-based partial inverse differences do not exist. Inspired by the basic ideas of transformations in linear algebra, it studies the information matrix algorithm of bivariate block-based Thiele-type blending rational interpolation; the algorithm is simple and easy to compute. The authors present interpolation theorems and a recursive algorithm, together with the modified bivariate block-based Thiele-type blending rational interpolation and its information matrix algorithm, for the case where block-based partial inverse differences do not exist. Finally, a numerical example shows the effectiveness of the proposed algorithm.
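    As background (a univariate illustration only, not the paper's bivariate block-based scheme), Thiele continued-fraction interpolation built from inverse differences:

        # phi_k(x_i) = (x_i - x_{k-1}) / (phi_{k-1}(x_i) - phi_{k-1}(x_{k-1}));
        # R(x) = a0 + (x - x0)/(a1 + (x - x1)/(a2 + ...)) with a_k = phi_k(x_k).
        def thiele_coeffs(xs, fs):
            n = len(xs)
            phi = [list(fs)]  # phi[k][i - k] = k-th inverse difference at x_i
            for k in range(1, n):
                prev = phi[k - 1]
                row = [(xs[i] - xs[k - 1]) / (prev[i - (k - 1)] - prev[0])
                       for i in range(k, n)]
                phi.append(row)
            return [phi[k][0] for k in range(n)]

        def thiele_eval(xs, coeffs, x):
            val = coeffs[-1]
            for k in range(len(coeffs) - 2, -1, -1):
                val = coeffs[k] + (x - xs[k]) / val
            return val

        xs = [0.0, 1.0, 2.0, 3.0]
        fs = [1.0, 0.5, 0.2, 0.1]  # samples of f(x) = 1/(x^2 + 1)
        a = thiele_coeffs(xs, fs)
        print(thiele_eval(xs, a, 1.5), 1.0 / (1.5 ** 2 + 1))  # interpolant vs true value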
    Cross-Media Retrieval based on Pseudo-Label Learning and Semantic Consistency Algorithm
    Gongwen Xu, Zhiqi Sang, and Zhijun Zhang
    2018, 14(9): 2219-2229.  doi:10.23940/ijpe.18.09.p31.22192229
    To retrieve heterogeneous multimodal data with the same semantics, many algorithms for retrieval over multimodal data have been suggested, and the organization and analysis of heterogeneous data have become the focus of intensive research. Here, a new and efficient algorithm for cross-media retrieval is proposed, based on pseudo-label learning and semantic consistency (PLSC). The algorithm optimizes the projection matrices through adaptive learning, fully considering the semantic information of both labeled and unlabeled samples; PLSC can therefore use more information than other methods and learn more effective projection matrices. Firstly, the class centers of the labeled text are computed, using median feature vectors as the class center vectors. Next, unlabeled images are projected onto the text space and assigned pseudo-labels by comparison with the class center vectors of the text data. Finally, a new training dataset that includes labeled and unlabeled data is generated for training the projection matrices. Using the projection matrices to project image or text data onto the same feature space, data can be compared for similarity, with distances between data points calculated using the Euclidean metric. Validation experiments suggest that PLSC outperforms other state-of-the-art algorithms.
    Automation Control System of Nuclear Track Membrane Research and Design
    Yunjie Li, Yanyu Wang, Dan Mo, Jun Yin, and Jie Liu
    2018, 14(9): 2230-2238.  doi:10.23940/ijpe.18.09.p32.22302238
    The nuclear track membrane is the most precise microporous membrane in the world. It is a porous plastic film with dense holes, each of the same shape and size. Nuclear track membranes are available in a variety of sizes ranging from 5 to 60 microns, with pore sizes ranging from 0.2 to 15 microns and hole densities from 1 to 10⁹ per square centimeter. At the Institute of Modern Physics of the Chinese Academy of Sciences, nuclear track membranes are produced by irradiation with heavy ions generated by the HIRFL, which forms micropores, followed by special chemical etching; such membranes are widely used in electronics, medicine, filtration, and analysis. The control system plays an important role in the entire production system, and because of the special beam radiation, the automation requirements of the control system are very high. This article studies the automatic control system of nuclear track membrane production, from theoretical design to actual construction, focusing on the beam homogeneity correction control system and the production automation control system. The principle of beam uniformity correction, the implementation process, and the specific implementation of the automatic control system are studied in detail, and the hardware deployment architecture and software design process are also discussed.
    Optimized VMD-Wavelet Packet Threshold Denoising based on Cross-Correlation Analysis
    Xin Wang, Xi Pang, and Yuxi Wang
    2018, 14(9): 2239-2247.  doi:10.23940/ijpe.18.09.p33.22392247
    To address the problem that wavelet packet denoising cannot handle signals with strong white noise, an optimized VMD-wavelet packet threshold denoising method based on cross-correlation analysis is proposed. The method combines the advantages of VMD and wavelet packet denoising: the noisy signal is decomposed into several modal components using VMD, and the good modal components are selected according to a critical correlation coefficient derived from cross-correlation analysis. These selected components are then processed using wavelet packet threshold denoising. Experimental results show that the proposed method is effective at denoising signals with strong white noise: it preserves the effective components of the signal, overcomes the blindness of traditional VMD denoising methods, and ensures the authenticity of the denoised signal.
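    A minimal sketch of the mode-selection step: keep the modal components whose correlation with the noisy signal exceeds a critical coefficient. The threshold rule and the hand-made "modes" below are illustrative stand-ins (a real pipeline would obtain the modes from a VMD implementation):

        import numpy as np

        def select_modes(modes, signal, scale=0.5):
            # Keep modes whose |correlation| with the noisy signal clears a critical
            # coefficient; here the threshold is a hypothetical fraction of the maximum.
            corrs = np.array([abs(np.corrcoef(m, signal)[0, 1]) for m in modes])
            keep = corrs >= scale * corrs.max()
            return [m for m, k in zip(modes, keep) if k], corrs

        t = np.linspace(0, 1, 1000)
        clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
        noisy = clean + np.random.default_rng(0).normal(0, 1.0, t.size)
        modes = [np.sin(2 * np.pi * 5 * t), 0.5 * np.sin(2 * np.pi * 40 * t),
                 np.random.default_rng(1).normal(0, 1.0, t.size)]  # stand-in components
        kept, corrs = select_modes(modes, noisy)
        print("correlations:", corrs.round(3), "- kept", len(kept), "of", len(modes))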
Online ISSN 2993-8341
Print ISSN 0973-1318