Vol. 16, No. 7

■ Cover page (PDF 4.93 MB)  ■ Editorial Board (PDF 39 KB)  ■ Table of Contents, July 2020 (PDF 73 KB)

  
  • Optimization of High Speed Rotor-Bearings System to Assess the Reliability using XLrotor
    Murgayya S B, Suresh H N, Madhusudhan N, and Saravanabavan D
    2020, 16(7): 991-998.  doi:10.23940/ijpe.20.07.p1.991998
    Abstract    PDF (889KB)   
    This research focuses on the behavior of the Nelson rotor with bearings in order to predict the dynamic forces (critical speeds, vibration levels, and load on bearings at peak amplitude) acting on it, which tend to decrease reliability. The reliability of the rotor system depends not only on static stress but also on dynamic stress, which directly affects performance. The work centers on the computation of a rotor system using the XLrotor tool, which computes results with the smallest error compared to other simulation techniques. The primary failure in rotor assemblies is due to imbalance, which leads to misalignment, looseness, bent shafts, and bearing faults. Imbalance is an inherent property of rotors that increases the centrifugal force on the rotor. The Nelson rotor with different bearing configurations (isotropic, orthotropic, and fluid-film bearings) is modeled in XLrotor to analyze rotor performance and determine undamped critical speeds (UCS), damped critical speeds (DCS), vibration levels at imbalance, and the load acting on bearings at peak amplitude. The rotor is optimized, and a reliable model is suggested using XLrotor.
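    As a rough illustration of the undamped-critical-speed concept this abstract refers to, the simplest single-disc (Jeffcott) rotor on rigid bearings has first critical speed ω_c = √(k/m). The sketch below uses hypothetical stiffness and mass values, not the Nelson rotor data or the XLrotor computation:

```python
import math

def undamped_critical_speed_rpm(stiffness_n_per_m, mass_kg):
    """First undamped critical speed of a single-disc Jeffcott rotor:
    omega_c = sqrt(k / m) in rad/s, converted to rev/min."""
    omega_c = math.sqrt(stiffness_n_per_m / mass_kg)
    return omega_c * 60.0 / (2.0 * math.pi)

# Hypothetical values: k = 1.0e6 N/m shaft stiffness, m = 10 kg disc.
print(round(undamped_critical_speed_rpm(1.0e6, 10.0)))  # → 3020
```

    For a multi-station model such as the Nelson rotor, rotordynamics tools solve the full damped eigenvalue problem of the assembled shaft/disc/bearing system; the closed form above only approximates the first mode.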
    SMART Criteria for Quality Assessment of Key Performance Indicators Used in the Oil and Gas Industry
    Jon Tømmerås Selvik, Ian Stanley, and Eirik Bjorheim Abrahamsen
    2020, 16(7): 999-1007.  doi:10.23940/ijpe.20.07.p2.9991007
    Abstract    PDF (220KB)   
    SMART criteria, referring to the acronym for the terms 'specificity', 'measurability', 'achievability', 'relevancy', and 'time-based', are commonly used as a basis for the assessment of the quality of key performance indicators (KPIs). In this article, we discuss whether it is appropriate to use these criteria for this purpose. We conclude that all the SMART criteria should be satisfied if the KPI is to be regarded as of high quality and useful for business improvement, but, in addition, there is a particular need to include the criterion 'manageability' in the assessment. Without the inclusion of this criterion, the assessment of KPI quality could be misleading. An example is used for illustrative purposes. Although our starting point is KPIs for reliability and maintenance in the oil and gas industry, the discussions are also applicable to other industries.
    Empirical Characterization of the Likelihood of Vulnerability Discovery
    Carl Wilhjelm, Taslima Kotadiya, and Awad A. Younis
    2020, 16(7): 1008-1018.  doi:10.23940/ijpe.20.07.p3.10081018
    Abstract    PDF (657KB)   
    Assessing the likelihood of vulnerability discovery is very important for decision-makers when prioritizing which vulnerability should be investigated and fixed first. Currently, the likelihood of vulnerability discovery is assessed based on expert opinion, which can hinder accuracy. In this study, we propose using Time to Vulnerability Disclosure (TTVD) as a proxy for the likelihood of vulnerability discovery. We then empirically characterize TTVD using intrinsic vulnerability attributes, including the CVSS Base metrics and vulnerability types. We examine 799 reported vulnerabilities of Chrome and 156 vulnerabilities of the Apache HTTP server. The results show that TTVD correlates at a statistically significant level with some of the intrinsic attributes, namely the access complexity, confidentiality, and integrity metrics, and the vulnerability types. Our machine learning analysis also shows that ranges of TTVD values are associated with specific combined values of the metrics under consideration.
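    The kind of correlation analysis this abstract describes can be illustrated with a small stdlib sketch of Spearman's rank correlation; the toy data below are hypothetical, not the Chrome or Apache measurements:

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0        # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical metric scores vs. TTVD in days (perfectly monotone):
print(spearman([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))  # → 1.0
```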
    Model Similarity Calculation based on Self-Adaptive Global Best Harmony Search Algorithm
    Xueyao Gao, Xinran Dong, and Chunxiang Zhang
    2020, 16(7): 1019-1026.  doi:10.23940/ijpe.20.07.p4.10191026
    Abstract    PDF (274KB)   
    In order to measure the difference between models, a method of computing 3D model similarity based on the self-adaptive global best harmony search algorithm (SGHS) is proposed. The face similarity matrix of two models is constructed according to the number of edges in the face and the face’s adjacency relationship. From the face similarity matrix, SGHS is used to search for an optimal sequence of matching faces between two models. Based on the optimal face matching sequence, similarities between source faces and target faces are accumulated to compute the two models’ similarity. Experimental results show that the proposed method can accurately measure the difference between the two models.
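    A minimal harmony search sketch on a toy continuous objective may help fix the terminology (harmony memory, HMCR, PAR). It is not the paper's SGHS variant, which additionally self-adapts these parameters and searches over discrete face-matching sequences:

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=1):
    """Basic harmony search minimizing f over a box.

    hms: harmony memory size, hmcr: memory-considering rate,
    par: pitch-adjusting rate, bw: pitch-adjustment bandwidth.
    (SGHS additionally self-adapts hmcr/par/bw; omitted here.)
    """
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                x = rng.choice(memory)[d]          # memory consideration
                if rng.random() < par:
                    x += rng.uniform(-bw, bw)      # pitch adjustment
            else:
                x = rng.uniform(lo, hi)            # random consideration
            new.append(min(hi, max(lo, x)))
        s = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                      # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy objective: minimize the 3-D sphere function.
best, score = harmony_search(lambda v: sum(x * x for x in v),
                             dim=3, bounds=(-5.0, 5.0))
print("best score:", round(score, 4))
```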
    Pedestrian Re-Identification Incorporating Multi-Information Flow Deep Learning Model
    Minghua Wei
    2020, 16(7): 1027-1037.  doi:10.23940/ijpe.20.07.p5.10271037
    Abstract    PDF (547KB)   
    In pedestrian re-identification, it is difficult to extract effective pedestrian features and improve re-identification accuracy because of changes in viewing angle, illumination, and pedestrian pose. Moreover, deep learning models are difficult to train and prone to over-fitting when training samples are few. To solve these problems, this paper proposes a multi-information flow convolutional neural network (Mif-CNN) model that contains a special convolutional structure. In this structure, the features extracted by each convolutional layer are connected to the input of all subsequent convolutional layers, which enhances the flow of feature information through the network and the back-propagation efficiency of the gradient, and makes the pedestrian features extracted by the model more discriminative. A multi-loss-function combination method is used to train the network model to better distinguish pedestrian categories. Finally, the Euclidean distance is used to rank pedestrian feature similarity. A number of experiments were carried out on the pedestrian re-identification datasets i-LIDS and PRID-2011. The results show that the proposed algorithm improves the cumulative matching characteristic (CMC) curve and the Rank-n re-identification rate compared with image-based, video-based, and deep learning models. They also suggest that the proposed algorithm not only improves the accuracy of pedestrian re-identification in various scenes, but also enhances the representation ability of pedestrian features and effectively alleviates the over-fitting problem of the deep learning model.
    A Novel Submitochondrial Localization Predictor based on Gradient Boosting Algorithm and Dataset Balancing Treatment
    Jinchao Zhao, Yinping Jin, Xi Lin, and Xiao Wang
    2020, 16(7): 1038-1045.  doi:10.23940/ijpe.20.07.p6.10381045
    Abstract    PDF (324KB)   
    Mitochondria are universal in eukaryotes. Abnormalities in their location can lead to a wide range of human diseases, especially neurodegenerative diseases. Correctly identifying submitochondrial locations is therefore critical and contributes to understanding disease pathogenesis and to drug design. Despite some important results in predicting the location of sub-subcellular structures, many problems remain. A mitochondrion has four submitochondrial compartments, but much of the available research ignores the intermembrane space. The publicly available benchmark datasets are unbalanced, and few researchers have addressed the skewed data before classification, which causes bias against some categories. In this scenario, we present a novel predictor, called CatBoost-SubMito, for protein submitochondrial location prediction. To capture valuable information from a protein, the pseudo-amino acid composition approach is exploited to acquire feature vectors. Next, the synthetic minority oversampling technique (SMOTE) is used to reduce the effects of the unbalanced datasets. Finally, the feature vectors are fed into the CatBoost classifier. The predictor is tested on three benchmark datasets (SM424-18, SubMitoPred, and M4-585). Experimental results indicate that our predictor surpasses state-of-the-art predictors.
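    The oversampling step can be sketched in pure Python: SMOTE-style methods synthesize minority samples by interpolating between a minority point and one of its k nearest minority neighbours. This is an illustration of the idea, not the original SMOTE implementation or the paper's pipeline:

```python
import random

def smote_like(minority, n_new, k=3, seed=7):
    """SMOTE-style oversampling sketch: each synthetic sample lies on
    the segment between a random minority point and one of its k
    nearest minority neighbours."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist2(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()   # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Hypothetical minority-class feature vectors:
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(len(smote_like(pts, n_new=5)))  # → 5
```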
    An Evaluation Method of Network Security Situation using Data Fusion Theory
    Zhongwei Zhao, Yong Peng, Jianhua Huang, Tingting Zhou, and Huan Wang
    2020, 16(7): 1046-1057.  doi:10.23940/ijpe.20.07.p7.10461057
    Abstract    PDF (420KB)   
    Network security situation awareness can effectively grasp the macro security situation of a network, but the evaluation process still faces problems such as reliance on a single data source and large accuracy deviations. Therefore, this paper proposes a network security situation awareness model and method based on D-S evidence theory. Using PCA clustering, the model preprocesses alarm information and eliminates useless alarms to reduce the time cost of evaluation. Based on improved D-S evidence theory, multi-source alarm data fusion rules are established to improve the accuracy of event detection. Three situation awareness indicators, vulnerability, threat, and asset importance, are set up to quantify the situation and form an intuitive situation display. Experimental comparison and analysis indicate that the proposed model can accurately assess the network security situation.
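    The fusion step rests on Dempster's rule of combination. A minimal sketch for two hypothetical alarm sources over the frame {attack, normal} follows; the detectors and mass values are invented for illustration and are not the paper's improved fusion rules:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets; the conflict mass K is
    renormalized away."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass on empty intersections
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical alarm sources over the frame {attack, normal}:
A, N = frozenset({"attack"}), frozenset({"normal"})
m1 = {A: 0.7, N: 0.3}
m2 = {A: 0.6, N: 0.4}
fused = dempster_combine(m1, m2)
print(round(fused[A], 3))  # → 0.778
```

    Agreement between the two sources raises the fused belief in "attack" above either input, which is what makes evidence fusion useful for event detection.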
    Differential Privacy Spatial Decomposition via Flattening Kd-Tree
    Guoqiang Gong, Cedric Lessoy, Chuan Lu, and Ke Lv
    2020, 16(7): 1058-1066.  doi:10.23940/ijpe.20.07.p8.10581066
    Abstract    PDF (918KB)   
    The key problem in using differential privacy is controlling sensitivity. Almost all papers focus on processing sensitivity, but the efficiency of the algorithm is also very important. Therefore, this paper aims to improve efficiency as much as possible under the premise of ensuring utility. Decomposition and reconstruction via flattening kd-tree (DRF) is proposed based on differential privacy, which applies a flattening kd-tree to process the adjacency matrix. First, by adjusting the vertex labeling, the labeled vertices form dense and sparse areas in the adjacency matrix as far as possible. The adjacency matrix is then decomposed by the flattening kd-tree, and each sub-region is anonymized using differential privacy. Finally, the sub-regions are reconstructed to obtain a complete anonymous graph. Experiments are conducted on real-world datasets. According to the results, DRF offers a significant improvement in efficiency, its time complexity is (|??|), and it performs well on degree distribution, degree centrality, and cut queries.
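    Per-sub-region anonymization of counts is commonly done with the Laplace mechanism, which adds noise of scale sensitivity/ε to each released value. A stdlib sketch under that assumption (the region and its count are hypothetical, not the paper's DRF procedure):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via inverse-CDF of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def noisy_count(true_count, sensitivity, epsilon, rng):
    """Laplace mechanism: an epsilon-DP release of a count whose
    sensitivity (max change from one record) is `sensitivity`."""
    return true_count + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
# Hypothetical edge count inside one kd-tree sub-region, sensitivity 1:
print(round(noisy_count(120, 1.0, epsilon=0.5, rng=rng), 2))
```

    Smaller ε means a larger noise scale and stronger privacy; by composition, the per-region budgets must sum to the overall budget across the decomposition.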
    An Empirical Study on the Impact of Code Contributor on Code Smell
    Junpeng Jiang, Can Zhu, and Xiaofang Zhang
    2020, 16(7): 1067-1077.  doi:10.23940/ijpe.20.07.p9.10671077
    Abstract    PDF (514KB)   
    Code smells refer to poor designs that are considered to have negative impacts on readability and maintainability during software evolution. Much research has been conducted to study their effects and the correlations between them. However, software is a product of human intelligence, and the fundamental cause of code smells is developers. As a result, research on the impact of code contributors on code smells appears particularly vital. In this paper, on 8 popular Java projects with 994 versions, we investigate the impact of code contributors on code smells from a novel perspective using five features. The empirical study indicates that the greater the number of contributors involved, the more likely code smells are to be introduced, while having more mature contributors, who participate in more versions, can avoid the introduction of code smells. These findings help developers optimize team structure and improve product quality.
    Using Genetic Algorithm to Augment Test Data for Penalty Prediction
    Chunyan Xia, Xingya Wang, Yan Zhang, and Hao Yang
    2020, 16(7): 1078-1086.  doi:10.23940/ijpe.20.07.p10.10781086
    Abstract    PDF (330KB)   
    With the development of smart court construction, deep learning methods have been introduced into the field of penalty prediction based on judicial text. As the number of parameters of the penalty prediction model increases, the size of the dataset needed to test the model's performance gradually expands. First, we use data augmentation to make small changes to the original data and obtain a large amount of augmented data with the same labels. Then, we use a multi-objective genetic algorithm to search for high-quality test data among the augmented data, so as to improve the diversity of the augmented data. Finally, experiments on actual judicial cases show that, compared with a random method, augmented test data selected by the genetic algorithm can better test the performance of the penalty prediction model.
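    A toy single-objective genetic algorithm over k-element subsets gives the flavor of such a search. The paper's algorithm is multi-objective and operates on judicial-text test data; the data and the diversity measure below are invented for illustration:

```python
import random

def ga_select(items, k, diversity, pop=20, gens=40, pmut=0.3, seed=3):
    """Toy genetic algorithm: evolve k-element index subsets of `items`
    maximizing `diversity` (a single-objective stand-in for a
    multi-objective test-data search)."""
    rng = random.Random(seed)
    n = len(items)
    fitness = lambda ind: diversity([items[i] for i in ind])
    population = [rng.sample(range(n), k) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]          # elitist selection
        while len(survivors) < pop:
            a, b = rng.sample(population[: pop // 2], 2)
            child = rng.sample(list(set(a) | set(b)), k)   # crossover
            if rng.random() < pmut:                 # mutation: swap one index
                outside = [i for i in range(n) if i not in child]
                if outside:
                    child[rng.randrange(k)] = rng.choice(outside)
            survivors.append(child)
        population = survivors
    return max(population, key=fitness)

# Hypothetical 1-D "test inputs"; diversity = spread of the chosen subset.
data = [0.0, 0.1, 0.2, 5.0, 5.1, 9.8, 9.9, 10.0]
best = ga_select(data, k=3, diversity=lambda xs: max(xs) - min(xs))
print(sorted(data[i] for i in best))
```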
    Cross-Domain Relationship Prediction by Efficient Block Matrix Completion for Social Media Applications
    Lizhi Xiao, Zheng Zhang, and Peng Sun
    2020, 16(7): 1087-1094.  doi:10.23940/ijpe.20.07.p11.10871094
    Abstract    PDF (764KB)   
    Online social media has evolved rapidly, and massive numbers of users have created diversified needs for information acquisition and retrieval on social media platforms. As application demands of all sorts meet explosive data growth, the development of effective methodologies has become urgent. By taking full advantage of rich context, we propose an efficient block matrix completion (EBMC) approach that jointly completes the relationships between heterogeneous data objects. Specifically, we detect Places-of-Interest (POIs) with the mean shift algorithm on the GPS information of the social image collection. Then, a batch matrix completion and learning method is developed by optimizing a unified objective function to learn the POI-specific user-image, image-tag, and user-tag relationships. Finally, we decompose the whole learning problem into a set of POI-specific subtasks, which correspond to the relation data blocks separated by the POI structure. Experiments on image annotation and user retrieval based on image similarity over real-world social media datasets show that the proposed method achieves good performance.
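    The POI detection step uses mean shift. A 1-D flat-kernel sketch of the idea follows; the paper applies it to 2-D GPS coordinates, and the coordinates below are hypothetical:

```python
def mean_shift_1d(points, bandwidth, iters=50):
    """Flat-kernel mean shift: shift each point to the mean of its
    neighbours within `bandwidth` until the modes stabilize, then
    merge modes that end up close together."""
    modes = list(points)
    for _ in range(iters):
        new_modes = []
        for m in modes:
            nbrs = [p for p in points if abs(p - m) <= bandwidth]
            new_modes.append(sum(nbrs) / len(nbrs))  # nbrs is never empty
        modes = new_modes
    clusters = []
    for m in modes:
        for c in clusters:
            if abs(c - m) < bandwidth / 2:   # merge near-duplicate modes
                break
        else:
            clusters.append(m)
    return sorted(clusters)

# Two hypothetical POI clusters along one GPS axis:
coords = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(mean_shift_1d(coords, bandwidth=1.0))  # two modes, near 0.1 and 5.1
```

    Unlike k-means, mean shift does not need the number of POIs in advance; the bandwidth controls how finely photo locations are grouped.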
    A Prototype for Software Refactoring Recommendation System
    Yuan Gao, Youchun Zhang, Wenpeng Lu, Jie Luo, and Daqing Hao
    2020, 16(7): 1095-1104.  doi:10.23940/ijpe.20.07.p12.10951104
    Abstract    PDF (595KB)   
    Software refactoring is used to reduce the costs and risks of software evolution. Automated software refactoring tools can reduce the risks caused by manual refactoring, improve efficiency, and reduce the difficulty of software refactoring. Researchers have made great efforts to implement and improve automated software refactoring tools. However, the results of automated refactoring tools often deviate from the intentions of the implementer. To this end, in this paper, we propose and implement a prototype tool for a software refactoring recommendation system based on previous research. The tool provides users with an optimized software refactoring scheme, and users realize their refactoring intentions by interacting with the tool. The tool has been evaluated to be effective, especially for users who are inexperienced or non-English speaking.
    A Combinatorial Method based on Machine Learning Algorithms for Enhancing Cultural Economic Value
    Yuqing Qi, Wei Ren, Meiyu Shi, and Qinyun Liu
    2020, 16(7): 1105-1117.  doi:10.23940/ijpe.20.07.p13.11051117
    Abstract    PDF (574KB)   
    Cultural heritage has been created by people all around the world over a long history. Heritage sites connect with ordinary people through tourism and are closely related to everyday life. However, problems have appeared in the establishment of cultural heritage sites, such as over-commercialization, over-development, and damage to nature. To maintain the balance between human activity and nature and to improve the popularity of cultural heritage, work should be done with all available technologies, such as AI and creative computing. AI can be used to supervise cultural heritage areas for protection purposes; in this research, a back-propagation neural network is used to supervise and protect the heritage site. Meanwhile, the tourism value of cultural heritage is a key index for evaluating people's interest in the heritage, so enhancing its popularity requires improving its tourism value. As artificial intelligence can perform complex data analysis, the value elements of cultural heritage can be fully explored by this method. The tourism elements of the cultural heritage can be expanded with an association learning algorithm connected to public databases, and novel elements could be discovered, for example, by attempting different combinations of heritages. Based on Boden's theory, the transformational method can achieve creativity; therefore, traditional elements can be replaced by novel elements to generate new tourism element sets that can be applied at cultural heritage sites. Then, creative computing theories are used to combine computer techniques with tourism activities to complete cultural heritage protection and tourism value improvement. Furthermore, performability is considered an imperative characteristic during the generation of this approach, and achieving sustainability and dependability is necessary for applying the approach (or services, systems, and so forth) in target realms. The entire workflow of this approach, evaluating tourism value and identifying novel fusions among tourism elements, has a positive influence on its performance. A system could be developed based on this approach, with stable outputs for improving tourism values.
    Maintenance Engineering for Urban Utility Tunnel using 3D Simulation
    Rui Han, Dan Shao, and Xian Lu
    2020, 16(7): 1118-1129.  doi:10.23940/ijpe.20.07.p14.11181129
    Abstract    PDF (650KB)   
    With the development and popularity of BIM (building information model), the application of 3D information models across the entire municipal engineering life cycle is gradually becoming the focus in the field of engineering operation and maintenance. This paper analyzes in depth the application prospects of an operation and maintenance system for an urban utility tunnel, while discussing the system's operating structure and major functions. Through a practical engineering case, the research team uses the core technique to realize "separation" and "interlinkage" between component blocks and parameter information, and fully demonstrates the technical route from the BIM as-built model to the visual tunnel operation and maintenance system. The research results provide a reliable basis for the construction and application of an intelligent public municipal construction management system designed on the interdisciplinary concept of art design and architectural science.
    Application of Panel Data Model to Economic Effects of High-Speed Railway
    Tiantian Wang, Baiji Li, and Gongpeng Zhang
    2020, 16(7): 1130-1138.  doi:10.23940/ijpe.20.07.p15.11301138
    Abstract    PDF (307KB)   
    Taking the Beijing-Shijiazhuang high-speed railway (HSR) as the research object, this paper selects panel data from 2000 to 2015 for 23 cities and uses the synthetic control method to construct a "counterfactual" state to analyze the economic impact of the Beijing-Shijiazhuang HSR on Baoding and Shijiazhuang along the line. The research shows that: a) the synthetic control method can effectively match the economic impact of the Beijing-Shijiazhuang HSR on Baoding and Shijiazhuang; b) the investment-pulling effect of the Beijing-Shijiazhuang HSR on Baoding is greater than the industrial driving effect, while the situation in Shijiazhuang is just the opposite; and c) regarding the influence mechanism, the Beijing-Shijiazhuang HSR has an economic spillover effect on large cities and a siphon effect on small and medium-sized cities: it has a complementary effect on small and medium-sized cities in the primary and secondary industries, and an agglomeration effect on large cities in the tertiary industry.
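    The synthetic control idea is to weight untreated donor cities so that their weighted average tracks the treated city before the intervention; the post-intervention gap then estimates the effect. A coarse two-donor sketch with invented series (real studies use many donors and a proper constrained optimizer):

```python
def synthetic_control_weights(treated_pre, donors_pre, step=0.01):
    """Grid-search convex weights over two donor series minimizing the
    pre-treatment squared gap to the treated series."""
    best_w, best_err = 0.0, float("inf")
    w = 0.0
    while w <= 1.0 + 1e-9:
        err = sum(
            (t - (w * a + (1 - w) * b)) ** 2
            for t, (a, b) in zip(treated_pre, zip(*donors_pre))
        )
        if err < best_err:
            best_w, best_err = w, err
        w += step
    return best_w, 1.0 - best_w

# Hypothetical pre-HSR GDP series: treated = 0.7*donor1 + 0.3*donor2.
donor1 = [100.0, 110.0, 120.0, 130.0]
donor2 = [200.0, 190.0, 180.0, 170.0]
treated = [0.7 * a + 0.3 * b for a, b in zip(donor1, donor2)]
w1, w2 = synthetic_control_weights(treated, [donor1, donor2])
print(round(w1, 2), round(w2, 2))  # → 0.7 0.3
```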
ISSN 0973-1318