Vol. 15, No 4

  • Cover Page (PDF 979 KB)
  • Editorial Board (PDF 71 KB)
  • Table of Contents, April 2019 (PDF 190 KB)

  
  • SLA Constraint Quickest Path Problem for Data Transmission Services in Capacitated Networks
    Ashutosh Sharma, Rajiv Kumar, and Pradeep Kumar Singh
    2019, 15(4): 1061-1072.  doi:10.23940/ijpe.19.04.p1.10611072
    Abstract    PDF (726KB)   
    References | Related Articles
    In this paper, the quickest path problem (QPP) is extended with constraints on service level agreements (SLAs) and on the energy required for data transmission services. This new variant strengthens the applicability of the QPP to critical data transmission services, where criticality is measured in terms of the requested service completion time and the mean time to failure of the service. The selection of the constraint values plays an important role in the computation of the SLA-constrained quickest path problem (SLAQPP) for data transmission services. The variation of the SLA is analysed to obtain the pattern of selection of the number of SLAQPP paths. The proposed algorithm is tested on several benchmark networks and random networks. The results show that it outperforms several existing algorithms in terms of path selection and computation time.
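The underlying quickest-path computation (without the SLA and energy constraints that are this paper's contribution) can be sketched as follows: for a message of size σ, the end-to-end time of a path is its total delay plus σ divided by its bottleneck capacity, and the classic algorithm tries each distinct capacity as a bottleneck candidate. The network and values below are illustrative.

```python
import heapq

def dijkstra(nodes, edges, src, dst, min_cap):
    # Shortest-delay path from src to dst using only edges with capacity >= min_cap.
    dist = {n: float('inf') for n in nodes}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, delay, cap in edges.get(u, []):
            if cap < min_cap:
                continue
            nd = d + delay
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist[dst]

def quickest_path_time(nodes, edges, src, dst, sigma):
    # Classic QPP: end-to-end time = path delay + sigma / bottleneck capacity.
    # Try each distinct capacity as the bottleneck and keep the best total time.
    caps = {cap for adj in edges.values() for _, _, cap in adj}
    best = float('inf')
    for c in caps:
        d = dijkstra(nodes, edges, src, dst, c)
        if d < float('inf'):
            best = min(best, d + sigma / c)
    return best

# Small example network: edges[u] = [(v, delay, capacity), ...]
nodes = ['s', 'a', 'b', 't']
edges = {
    's': [('a', 1.0, 10.0), ('b', 4.0, 100.0)],
    'a': [('t', 1.0, 10.0)],
    'b': [('t', 4.0, 100.0)],
}
print(quickest_path_time(nodes, edges, 's', 't', sigma=1000.0))  # 18.0
```

For a large message (σ = 1000), the slower but high-capacity route through b wins (8 + 1000/100 = 18.0); for a small message, the low-delay route through a would win instead.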
    Radio Frequency Identification: An Apparatus Instrumental in Smart ID Applications
    Praveen Kumar Singh, Neeraj Kumar, and Bineet Kumar Gupta
    2019, 15(4): 1073-1082.  doi:10.23940/ijpe.19.04.p2.10731082
    Abstract    PDF (256KB)   
    References | Related Articles
    Optical Character Groups (OCG), optical character recognition, bar codes, security tags, Electronic Article Surveillance (EAS), magnetic stripes: radio frequency identification (RFID) is another member of this family of identification technologies. The major difference of RFID compared to the others is that it does not require a direct line of sight to operate, and the distance from which it can be read is relatively longer. The range of data that RFID tags can carry is also much wider than that of barcodes; tags can record environmental factors such as humidity and temperature in addition to product information such as the prototype or the manufacturer it belongs to. The technology facilitates real-time positioning and has received interest from numerous sectors such as logistics, manufacturing, and healthcare. The durability of an RFID chip varies, but a chip can be used effectively for more than ten years with very low maintenance expenditure. The memory capacity of current RFID tags is much larger than that of traditional barcodes, amounting to 16-64 Kbytes, and the read/write time is also drastically improved.
    Salient Bag of Feature for Skin Lesion Recognition
    Pawan Kumar Upadhyay and Satish Chandra
    2019, 15(4): 1083-1093.  doi:10.23940/ijpe.19.04.p3.10831093
    Abstract    PDF (808KB)   
    References | Related Articles
    With the rapidly increasing incidence of various types of skin cancer, there is a need for decision support systems that detect abnormalities in the early stages and help reduce the mortality rate. Several computer-aided diagnosis (CAD) systems have been proposed in the last two decades for skin melanoma recognition. Continuous improvements have been made in the accuracy of melanoma diagnosis, but other classes of cancer, such as basal cell carcinoma and squamous cell carcinoma, are not yet well covered by non-invasive diagnosis systems. In this paper, a generic diagnostic method is proposed that can classify ten classes of skin lesions. These lesion classes belong to the cancer, pre-cancerous, and tumor categories of samples, as represented in a gold standard image dataset. The key idea of the proposed approach is to optimize the bag-of-SURF features by the non-linear Hessian matrix of HSV color descriptors. These features are combined to form a salient bag-of-features, which helps recognize the skin lesion classes more accurately. Experimental results show that the proposed method of skin lesion diagnosis significantly improves the recognition accuracy to 89%, compared with the current state-of-the-art accuracy of 81.8%. It does not require any complex pre-processing of images, which would affect the performance of the recognition system.
    Efficient Inter-View Prediction Structure for Multi-View High Efficiency Video Coding
    Tao Yan, In-Ho Ra, and Qian Zhang
    2019, 15(4): 1094-1102.  doi:10.23940/ijpe.19.04.p4.10941102
    Abstract    PDF (835KB)   
    References | Related Articles
    In addition to high coding efficiency, multiview high-efficiency video coding (MV-HEVC) should provide backward compatibility and temporal random access, which are mainly determined by the prediction structure used. For the standardization of multiview video coding (MVC), MV-HEVC normally uses a fixed view-temporal prediction structure, which runs into challenges with the varied characteristics of multiview videos. This paper starts by exploring the relationship between the MV-HEVC prediction structure and coding complexity, compression efficiency, and random access performance and constructs mathematical models; it then comprehensively considers coding efficiency and user random access based on multiview video similarity analysis and adaptively adjusts the inter-view prediction structure to obtain better coding performance. The experimental results demonstrate that, compared with MV-HEVC, the proposed method has better random access performance and improved coding efficiency.
    Community Structure Division based on Immune Algorithm
    Yuling Tian
    2019, 15(4): 1103-1111.  doi:10.23940/ijpe.19.04.p5.11031111
    Abstract    PDF (495KB)   
    References | Related Articles
    Recently, the characteristics of complex networks and community structure have attracted attention from academia and society, and their research and applications have become increasingly important. Community structure division makes complex networks easier to understand. However, most community structure division methods require the number of communities in advance and have low efficiency. In this paper, an efficient method of community structure division in complex networks based on the immune algorithm is proposed. The method aims to find the core members of communities and classify other members according to the core members. The individual evaluation of a core member is obtained from the affinity degree of the immune algorithm. In addition, the clone and mutation operation of the traditional immune algorithm is improved so that it is affected not only by the affinity but also by the iterative process. The improved immune algorithm can guarantee antibody diversity in the early stage of the search and convergence in the later stage, thus achieving faster convergence and higher precision of community structure division. Compared with traditional methods, the proposed method does not need the number of communities in advance, achieves better results on real datasets, and has greater efficiency.
    Routing Protocol Research for Wireless Quantum Networks based on Resource Reservation
    Xinliang Wang, Qinggai Huang, Zhihuai Liu, Na Liu, and Wei Fang
    2019, 15(4): 1112-1121.  doi:10.23940/ijpe.19.04.p6.11121121
    Abstract    PDF (583KB)   
    References | Related Articles
    In wireless quantum communication networks, existing AODV routing protocols can guarantee the stability of quantum entangled channels. However, the protocol takes a long time on average to establish the quantum channel and also consumes many entanglement resources. In order to solve these problems, a routing protocol for wireless quantum networks based on resource reservation is proposed in this paper. Before the source node sends the quantum state information to the destination node, the proposed protocol estimates the quantum state information to be transmitted and writes the resource estimate into the routing request message. A broadcast routing request message can then not only satisfy the requirement of quantum data transmission but also ensure that the number of hops is as small as possible. Simulation results show that the proposed protocol can effectively shorten the time needed to establish quantum channels, reduce the consumption of entanglement resources, and improve the communication efficiency of wireless quantum communication networks, giving it good practical value.
    Plant Leaves Recognition Combined PCA with AdaBoost.M1
    Hui Chen, Haodong Zhu, and Xufeng Chai
    2019, 15(4): 1122-1130.  doi:10.23940/ijpe.19.04.p7.11221130
    Abstract    PDF (393KB)   
    References | Related Articles
    In order to improve the overall performance of plant leaves recognition, this paper proposes a novel method combining PCA with AdaBoost.M1 to recognize plant leaves. The proposed method firstly carries out image preprocessing, which includes gray-scale conversion, binarization, and edge extraction; it then extracts 13 features of the plant leaf with the properties of rotation, proportion, and translation invariance; subsequently, it employs PCA to reduce the dimensions of these feature parameters; and finally, it adopts the AdaBoost.M1 classifier to train on and recognize the reduced-dimension plant leaf images. Simulation experiment results indicate that the proposed method is able to effectively improve the overall performance of plant leaves recognition.
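The dimension-reduction step of such a pipeline can be sketched with a minimal PCA via SVD. Random data stands in for the 13 leaf features here, and the AdaBoost.M1 training stage is omitted.

```python
import numpy as np

def pca_reduce(X, k):
    # Center the data, then project onto the top-k principal directions
    # obtained from the SVD of the centered matrix.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T      # shape (n_samples, k)

# Toy stand-in for the 13-dimensional leaf feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 13))
Z = pca_reduce(X, k=5)
print(Z.shape)  # (50, 5)
```

The reduced matrix Z would then be fed to the boosted classifier; components come out ordered by explained variance, so the first column always captures the most.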
    Feature Selection Combined Feature Resolution with Attribute Reduction based on Correlation Matrix of Equivalence Classes
    Zhifeng Zhang and Junxia Ma
    2019, 15(4): 1131-1140.  doi:10.23940/ijpe.19.04.p8.11311140
    Abstract    PDF (381KB)   
    References | Related Articles
    Feature selection is one of the key steps in text classification and can, to some extent, affect classification performance. In this paper, we firstly propose an optimized document frequency based on word frequency and document frequency and then present a feature resolution based on the optimized document frequency. Meanwhile, we introduce rough set theory into feature selection and provide an attribute reduction algorithm based on the correlation matrix of equivalence classes. We finally put forward a feature selection method combining the presented feature resolution with the provided attribute reduction algorithm. The proposed feature selection method firstly employs the presented feature resolution to select valuable text features and filter out useless terms, reducing the sparsity of text feature spaces, and then uses the provided attribute reduction algorithm to eliminate redundant features. The comparative experimental results show that the proposed feature selection method has certain advantages in consumed time, macro-average, micro-average, and average classification accuracy.
    Four-Layer Feature Selection Method for Scientific Literature based on Optimized K-Medoids and Apriori Algorithms
    Hongchan Li and Ni Yao
    2019, 15(4): 1141-1150.  doi:10.23940/ijpe.19.04.p9.11411150
    Abstract    PDF (343KB)   
    References | Related Articles
    With the increase in scientific literature, classifying scientific literature has become an important focus. Effectively selecting representative features from scientific literature is a key step in its classification and influences classification performance. According to the structural characteristics of scientific literature, we combine an optimized K-medoids algorithm, which firstly adopts information entropy to weight the clustering objects and correct the distance function and then employs the corrected distance function to select the optimal initial clustering centres, with the Apriori algorithm to propose a four-layer feature selection method. The proposed feature selection method firstly divides scientific literature into four layers according to their structural characteristics, selects features layer by layer from the former three layers by means of the optimized K-medoids algorithm, subsequently mines the maximum frequent item sets from the fourth layer by the Apriori algorithm to act as the features of the fourth layer, and finally merges the selected features of every layer and eliminates duplicate features to obtain the final feature set. Experimental results show that the proposed four-layer feature selection method achieves higher performance in scientific literature classification.
    Adaptive Modulation Coding Method based on Minimum Packet Loss Rate in AOS Communication System
    Qingli Liu, Yanjun Yang, and Zhiguo Liu
    2019, 15(4): 1151-1160.  doi:10.23940/ijpe.19.04.p10.11511160
    Abstract    PDF (656KB)   
    References | Related Articles
    Aiming at the problem of increasing packet loss rate and decreasing throughput caused by the characteristics of high data burst and high channel error rate in the AOS space communication system, this paper considers the factors of queue packet loss and transmission error packet loss and proposes an adaptive modulation coding method based on minimum packet loss rate in the AOS communication system. Firstly, a system packet loss objective function is established, and the modulation coding mode can be determined by solving the objective function minimum. Secondly, the modulation coding mode is dynamically adjusted according to certain rules, determined by the channel state and the queue state jointly. Finally, the system packet loss rate is reduced and the transmission performance of the system is improved. The theoretical analysis and simulation results show that compared with the SLBCCQ method, this method can reduce the system packet loss rate by up to 30%. Meanwhile, compared with the AMC algorithm, it can reduce the system packet loss rate by 41.7%.
    Hybrid SVM and ARIMA Model for Failure Time Series Prediction based on EEMD
    Haiyan Sun, Jing Wu, Ji Wu, and Haiyan Yang
    2019, 15(4): 1161-1170.  doi:10.23940/ijpe.19.04.p11.11611170
    Abstract    PDF (584KB)   
    References | Related Articles
    A hybrid model of support vector regression (SVR) and autoregressive integrated moving average (ARIMA) based on ensemble empirical mode decomposition (EEMD) is proposed for failure time series prediction, taking advantage of the SVR model to forecast the nonlinear part of the failure time series and the ARIMA model to predict the linear basic part. It firstly uses EEMD to decompose the original failure sequence into several significant fluctuation components and a trend component, and then it utilizes SVR and ARIMA to forecast them separately. The performance of the presented model is measured against unitary models such as Holt-Winters, autoregressive integrated moving average, multiple linear regression, and group method of data handling on seven published nonlinear non-stationary failure datasets. The comparison results indicate that the proposed model outperforms the other techniques and can be utilized as a promising tool for failure data forecast applications.
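The decompose-forecast-recombine structure can be sketched as follows. A simple moving-average split stands in for EEMD, and ordinary least-squares AR models stand in for the SVR and ARIMA forecasters, so this illustrates the architecture rather than the paper's exact method.

```python
import numpy as np

def ar_forecast(y, p=2):
    # One-step-ahead forecast from an AR(p) model fit by ordinary least squares.
    X = np.array([y[i:i + p] for i in range(len(y) - p)])
    t = y[p:]
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    return float(y[-p:] @ coef)

def hybrid_forecast(y, window=5, p=2):
    # Decompose-forecast-recombine: a moving-average trend stands in for the
    # EEMD trend component and the remainder for the fluctuation components;
    # each part is forecast separately and the forecasts are summed.
    kernel = np.ones(window) / window
    trend = np.convolve(y, kernel, mode='same')
    resid = y - trend
    return ar_forecast(trend, p) + ar_forecast(resid, p)

# Toy non-stationary "failure count" series: seasonal pattern plus drift.
y = np.array([float(i % 7 + i * 0.1) for i in range(60)])
print(hybrid_forecast(y))
```

In the paper's setup, EEMD would yield several intrinsic mode functions forecast by SVR, with the residual trend forecast by ARIMA, but the recombination step is the same sum.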
    Similarity based on the Importance of Common Features in Random Forest
    Xiao Chen, Li Han, Meng Leng, and Xiao Pan
    2019, 15(4): 1171-1180.  doi:10.23940/ijpe.19.04.p12.11711180
    Abstract    PDF (674KB)   
    References | Related Articles
    In existing methods for calculating the similarity between samples in random forests, the only case considered is that in which different samples fall on the same leaf node of a decision tree. The cases in which leaf nodes occupy different positions in the decision tree or samples fall on different leaves are neglected, which affects the accuracy of the similarity. In this paper, firstly, according to the difference of leaf nodes in different positions of the decision tree, the importance of the sample features to which the leaf nodes belong is used as an attribute to describe the similarity. Secondly, for the case in which samples fall on different leaf nodes, the common features between samples are taken as another attribute to describe the similarity. On this basis, the measure SICF (similarity between samples based on the importance of common features) is proposed. Finally, it is applied to the K-nearest neighbor classification algorithm, and the validity and correctness of the similarity are verified by the OOB index. The experimental results show that, compared with two classical methods on UCI data sets, the similarity SICF achieves better classification results.
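The idea of weighting shared features by their importance can be illustrated with a toy similarity. The formula, feature names, and importance values below are hypothetical stand-ins for illustration, not the paper's exact SICF definition.

```python
def importance_weighted_similarity(feats_a, feats_b, importance):
    # Weight the features two samples share by their (random-forest) feature
    # importances, normalized by the total importance of all features either
    # sample uses, so more-important common features raise the similarity more.
    common = feats_a & feats_b
    union = feats_a | feats_b
    denom = sum(importance[f] for f in union)
    if denom == 0:
        return 0.0
    return sum(importance[f] for f in common) / denom

# Hypothetical feature importances and the feature sets used along two
# samples' decision paths.
importance = {'petal_len': 0.5, 'petal_wid': 0.25, 'sepal_len': 0.25}
a = {'petal_len', 'petal_wid'}
b = {'petal_len', 'sepal_len'}
print(importance_weighted_similarity(a, b, importance))  # 0.5
```

An unweighted Jaccard similarity would give 1/3 here; weighting by importance raises it to 0.5 because the shared feature is the most important one.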
    Automatic Pre-Identification Method of Navigation Tasks for Intelligent Ship
    Jie Zhang, Qinglong Hao, and Ran Dai
    2019, 15(4): 1181-1189.  doi:10.23940/ijpe.19.04.p13.11811189
    Abstract    PDF (678KB)   
    References | Related Articles
    In order to reduce the large amount of real-time computation required by automatic identification of intelligent ship navigation tasks, this paper analyzes the pattern characteristics of different navigation tasks based on the information and data automatically perceived by ships, using big data processing and analysis technology and a pick-up algorithm. A method for automatic pre-identification of intelligent ship navigation tasks is proposed: each sub-navigation task of the planned route is statically pre-identified before sailing, so that most of the real-time computation of automatic navigation-task identification is performed in advance. This lays the foundation for dynamic and accurate identification of navigation tasks. The simulation results show that the automatic pre-identification method is feasible and that its results have high reliability.
    UCM: A Novel Approach for Delay Optimization
    Rajkumar Sarma, Cherry Bhargava, Sandeep Dhariwal, and Shruti Jain
    2019, 15(4): 1190-1198.  doi:10.23940/ijpe.19.04.p14.11901198
    Abstract    PDF (1037KB)   
    References | Related Articles
    In the era of digital signal processing, such as graphics and computation systems, multiplication is one of the prime operations. A multiplier is a key component in any kind of digital system, such as the Multiply-Accumulate (MAC) unit and various FFT algorithms. The efficiency of a multiplier mainly depends upon its speed of operation and power dissipation, along with its complexity level. This paper is based on the Universal Compressor based Multiplier (UCM), which yields high-speed operation with comparable power dissipation and hence enhanced performance. The novel design of the UCM is analyzed using the Cadence Spectre tool in 90 nm CMOS technology. Finally, the UCM is implemented using a Nexys-4 Artix-7 FPGA board. The design demonstrates a significant improvement in terms of delay, which is explored in this paper.
    Average Energy Analysis in Wireless Sensor Networks using Multitier Architecture
    Hradesh Kumar and Pradeep Kumar Singh
    2019, 15(4): 1199-1208.  doi:10.23940/ijpe.19.04.p15.11991208
    Abstract    PDF (622KB)   
    References | Related Articles
    The energy of sensor nodes plays a vital role in wireless sensor networks for different purposes such as sensing events, communicating among sensor nodes, and transmitting information from one node to another. The average energy of the network is defined as the ratio of the total energy of all sensor nodes in the network to the number of nodes. In this paper, a multitier architecture is proposed for calculating the average energy and the throughput of the network in terms of the number of packets reaching the base station (BS). The proposed approach has been compared with two existing approaches, the low energy adaptive clustering hierarchy and the stable election protocol, in terms of average energy and network throughput. This paper presents the average energy of each node in the network in both 2D and 3D views for better interpretation of the results. The proposed approach is 19.79% better in terms of average energy than the stable election protocol, and a further comparison with the low energy adaptive clustering hierarchy protocol shows it to be 34.20% better.
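The average-energy metric defined above is a direct ratio; a minimal sketch with illustrative residual-energy values:

```python
def average_energy(node_energies):
    # Average network energy: total residual energy of all nodes / node count.
    return sum(node_energies) / len(node_energies)

# Illustrative residual energies (in joules) of four sensor nodes.
energies = [0.5, 0.25, 0.75, 0.5]
print(average_energy(energies))  # 0.5
```

Tracking this ratio over simulation rounds is what allows the per-node 2D/3D energy views and the percentage comparisons against SEP and LEACH reported in the paper.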
    Financial Risk Prediction for Listed Companies using IPSO-BP Neural Network
    Sha Li and Yu Quan
    2019, 15(4): 1209-1219.  doi:10.23940/ijpe.19.04.p16.12091219
    Abstract    PDF (437KB)   
    References | Related Articles
    Manufacturing is an important part of the market economy. Judgment and analysis of financial risks in the manufacturing industry help promote the healthy development of the real economy. A sample of manufacturing companies for the period 2015-2017 is selected. First, the financial indicators of the companies are screened using principal component analysis. Second, Back Propagation (BP) neural network parameters are optimized using improved particle swarm optimization (IPSO), and a financial risk early warning model based on IPSO-BP is constructed. Finally, an empirical analysis is performed. The analysis results reveal that the model can accurately predict the financial risks of manufacturing companies and provide valuable guidance in the form of a company financial risk warning.
    Fault-Section Location of Distribution Network based on Adaptive Mutation Shuffled Frog Leaping Algorithm
    Yanzhou Sun, Han Wu, Yawei Zhu, Yanfang Wei, and Tieying Zhao
    2019, 15(4): 1220-1226.  doi:10.23940/ijpe.19.04.p17.12201226
    Abstract    PDF (618KB)   
    References | Related Articles
    Fast and accurate identification of feeder fault sections plays a crucial role in improving the stability of distribution networks. We address the low accuracy and unsatisfactory effect of traditional algorithms when the fault indicator is used for fault location of distribution network lines. This paper proposes a fault location method based on an adaptive mutation shuffled frog leaping algorithm (AMSFLA), which introduces an adaptive mutation factor. The proposed method was validated by simulation with a typical IEEE 33-bus distribution network model and has been shown to effectively solve the premature convergence problem of the classical shuffled frog leaping algorithm (SFLA), as well as to speed up the calculation and accurately locate the fault section when multiple point faults and fault signal distortion occur.
    Two-Dimensional Product Warranty Cost Model under Preventive Maintenance Time Constraints
    Qian Wang, Zhonghua Cheng, Zhiyong Li, and Yongsheng Bai
    2019, 15(4): 1227-1234.  doi:10.23940/ijpe.19.04.p18.12271234
    Abstract    PDF (544KB)   
    References | Related Articles
    Providing high-quality product warranty service is an important part of the current competition among manufacturers for customers. Building on current warranty services, how to provide customers with more suitable and practical warranty service is a problem that manufacturers must consider. On the basis of the existing literature and considering the actual situation of manufacturers and customers, time constraints on preventive maintenance in warranty service are proposed for the first time. Under an incomplete preventive maintenance strategy, the time spent on preventive maintenance during the warranty period is limited, and a two-dimensional product maintenance cost model under this time constraint is established to determine how to carry out preventive maintenance so as to obtain the ideal maintenance cost. The validity of the model is verified by an example, providing a basis for two-dimensional product warranty service decision-making.
    Target Identity Recognition Method based on Trusted Information Fusion
    Lu Wang, Chenglin Wen, and Lan Wu
    2019, 15(4): 1235-1246.  doi:10.23940/ijpe.19.04.p19.12351246
    Abstract    PDF (971KB)   
    References | Related Articles
    Safe and reliable target identity recognition is an important foundation of information security. In the complex environment of multi-source target information, in view of the potential impact of many uncertain factors on target identity recognition and the performance requirements of information security in the recognition process, a trusted target identity recognition method is proposed in this paper. A BP neural network with a momentum factor is used to build an ensemble classification model, and on the basis of this model, the trusted target identity recognition model is constructed. According to the relevant information characterized by the model, the method can improve the recognition reliability of the target to a certain extent, thus providing more security and credibility for the recognition of the identified target. Finally, the effectiveness and feasibility of the proposed algorithm are verified by simulation experiments in an uncertain set environment.
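The momentum factor used in the BP network above can be illustrated with plain gradient descent on a one-dimensional quadratic; the learning rate and momentum coefficient below are illustrative, and the full BP backpropagation is omitted.

```python
def momentum_step(w, grad_fn, velocity, lr=0.1, beta=0.9):
    # Gradient step with a momentum factor: a fraction beta of the previous
    # update direction is carried forward, damping oscillation and speeding
    # convergence compared with plain gradient descent.
    velocity = beta * velocity - lr * grad_fn(w)
    return w + velocity, velocity

# Minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3).
grad = lambda w: 2.0 * (w - 3.0)
w, v = 0.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, grad, v)
print(round(w, 3))
```

In the BP network, the same rule is applied element-wise to every weight, with the gradient coming from backpropagated error rather than a closed-form derivative.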
    Compression Method of Factor Oracle by Triple-Array Structures
    Koji Bando, Takato Nakano, Kazuhiro Morita, and Masao Fuketa
    2019, 15(4): 1247-1254.  doi:10.23940/ijpe.19.04.p20.12471254
    Abstract    PDF (492KB)   
    References | Related Articles
    Pattern matching is an important technique in text processing and is used for character string replacement and search. A factor oracle is a data structure for pattern matching: it is a finite state automaton that can search substrings. This data structure consists of internal and external transitions and has the characteristic property of accepting at least all substrings. An automaton such as the factor oracle can be represented by a two-dimensional array (Table), by the Johnson method, and so on. Search using Table is fast, but the memory capacity required is large. The Johnson method represents the factor oracle using a small amount of storage, and since it achieves a transition time of O(1), it is considered as fast as Table. The memory consumption of the Johnson method depends on the size of the elements themselves and on the number of elements; by reducing these, the applicability of the factor oracle to enormous data can be further enhanced. In this research, using a compact double-array, we propose a method that compresses the size of the elements themselves, improves the Johnson method so that the construction uses external transitions only, and compresses the number of elements. As shown by the experimental results, the proposed method has higher storage efficiency than the Johnson method and is capable of high-speed search under certain conditions.
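The factor oracle itself has a well-known online construction (due to Allauzen, Crochemore, and Raffinot); a minimal sketch, independent of the Table and Johnson representations compared above:

```python
def build_factor_oracle(s):
    # Online construction: one state per text position plus the initial state.
    # Internal transitions follow the text; external transitions are added by
    # walking the suffix-link (supply) chain.
    trans = [dict() for _ in range(len(s) + 1)]
    supply = [-1] * (len(s) + 1)          # suffix links; state 0 links to -1
    for i, c in enumerate(s):
        trans[i][c] = i + 1               # internal transition
        k = supply[i]
        while k > -1 and c not in trans[k]:
            trans[k][c] = i + 1           # external transition
            k = supply[k]
        supply[i + 1] = 0 if k == -1 else trans[k][c]
    return trans

def accepts(trans, word):
    # The oracle accepts every factor (substring) of s, and possibly a few
    # extra words -- hence "at least all substrings" above.
    state = 0
    for c in word:
        state = trans[state].get(c)
        if state is None:
            return False
    return True

oracle = build_factor_oracle("abbbaab")
print(accepts(oracle, "bba"), accepts(oracle, "abc"))  # True False
```

Each state of `trans` is a dict here for clarity; the Table, Johnson, and double-array methods discussed in the paper are alternative physical encodings of exactly this transition set.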
    Detector Layout and Detection Probability Analysis for Physical Protection Systems of Nuclear Power Plants in Virtual Environments
    Junbo Wang, Ming Yang, and Yuxin Zhang
    2019, 15(4): 1255-1262.  doi:10.23940/ijpe.19.04.p21.12551262
    Abstract    PDF (498KB)   
    References | Related Articles
    Based on a nuclear power plant environment and the virtual reality engine Unreal Engine 4, a simulation platform for physical protection systems is built in this paper. The virtual reality simulation method is used to simulate the layout of the detection and defense devices of the physical protection system of the nuclear power plant under conditions closest to the real environment. In order to improve the defense level of the physical protection system, the security system designer can be assisted in identifying the weak links of the detection device layout scheme by analysing the effectiveness of the detection devices.
    Reliability and Availability Engineering: Modeling, Analysis, and Applications Kishor S. Trivedi and Andrea Bobbio. 2017, 726 pages, ISBN 9781107099500
    Reviewers: Xiwei Qiu and Yuanshun Dai
    2019, 15(4): 1263-1264.  doi:10.23940/ijpe.19.04.p22.12631264
    Abstract    PDF (261KB)   
    Related Articles
    This book provides systemic and comprehensive treatment of reliability and availability (i.e., dependability) models of complex computing, communication, and network systems. It will be known as not only a classic textbook for any scholar who studies dependability modeling and analysis but also a technical reference for any engineer who develops dependability assurance techniques. This book is divided into six parts: 'Part I. Introduction', 'Part II. Non-state-space models (combinatorial models)', 'Part III. State-space models with exponential distributions', 'Part IV. State-space models with non-exponential distributions', 'Part V. Multi-level models', and 'Part VI. Case studies'. Some parts are further sub-divided into chapters. Thus, the book is well-organized so that readers can easily and explicitly find pertinent approaches and methods and understand how to solve a concrete problem of dependability modeling, evaluation, and analysis. We believe this book makes significant contributions to dependability engineering, as detailed below:
ISSN 0973-1318