International Journal of Performability Engineering, 2018, 14(12): 2905-2914 doi: 10.23940/ijpe.18.12.p1.29052914

Engine Life Prediction based on Degradation Data

Yanhua Cao, Jinmao Guo, Yong Li, and Huiqiang Lv

Department of Equipment Support and Remanufacture, Academy of Army Armored Forces, Beijing, 100072, China

Corresponding author e-mail address: mark_cao1983@163.com


Abstract

The motor hours (working time) of an armored vehicle’s engine reflect its technical state to a certain extent. However, engines of the same type with the same motor hours can show very different technical states in different working environments. At the same time, it is difficult to obtain the full life data or the physical failure mechanism required by traditional life prediction methods. In view of these problems, a model of engine life prediction based on degradation data and neural networks is built in this paper. Firstly, the degradation parameters are selected according to certain principles, and the sample data are standardized. Then, the principal component analysis method is used to simplify the multiple parameters into one comprehensive parameter, and interpolation is applied to obtain the parameter’s time series data as the training data of the neural network. Finally, the life prediction model of the engine based on the neural network is established. The validation results indicate that the model is accurate, practical, and worth applying more widely.

Keywords: degradation data; neural network; life prediction; principal component analysis



1. Introduction

The technical state of an engine deteriorates as its use time increases. For armored equipment engines, the technical state is traditionally measured by motor hours [1], and periodic maintenance is carried out for the same type of engine after a fixed number of motor hours. However, due to factors such as the operating environment, the users, and the intensity of use, the recorded motor hours sometimes cannot fully reflect the actual technical states of all engines of the same model. In this case, periodic maintenance can easily lead to “surplus maintenance” or “insufficient maintenance” [2], so it is necessary to consider condition-based maintenance (CBM). Life prediction is the basis for realizing CBM [3], and it is the key technology for turning periodic preventive maintenance and corrective maintenance into foreseeable condition-based maintenance.

Research on life prediction at present mainly focuses on single-parameter prediction for a single machine. The main approaches either analyze and predict based on probability or mathematical statistics from collected data, or predict the life of a certain part based on the propagation of micro cracks [4]. The former requires a large number of samples in full life tests to collect full life data, while the latter requires analyzing and determining the complex failure mechanism and failure process of the product. With the increasing level of science and technology, more and more products with long life and high reliability have appeared in the aviation, aerospace, and military fields. Such products rarely fail within a short period of time, even in accelerated life tests [5]. In this case, using a large number of samples for full life tests is time-consuming and laborious, and the failure mechanisms of the products are complex and difficult to determine. In view of this, it is no longer feasible to use traditional reliability analysis methods for life prediction.

Degradation data refers to data on equipment performance, state, and other parameters that gradually deteriorate as use time increases. In the process of equipment use and testing, degradation data are easy to obtain through in-vehicle instrumentation, BIT devices, and sensor measurements, and artificial intelligence algorithms such as neural networks can then be adopted to carry out reasoning for life prediction and maintenance decisions.

This method of life prediction based on degradation data and intelligent algorithms requires neither full life data nor knowledge of the product’s failure mechanism. Instead, it treats the degradation process as a “black box” [6]. Reliability analysis and condition-based maintenance decisions can be carried out by exploiting the temporal correlation within the degradation data and the powerful reasoning and computing ability of the intelligent algorithm, which solves the problem of “surplus maintenance” or “insufficient maintenance” in the traditional maintenance mode of equipment and meets the maintenance requirements of informatized conditions. In this paper, the Principal Component Analysis (PCA) method is used to combine multiple degradation parameters into one comprehensive parameter, and the BP neural network algorithm is then used to realize multi-parameter life prediction.

2. Theoretical Analysis

2.1. Principle of Neural Network Trend Prediction

The prediction problems addressed by neural networks are mainly of two types: trend prediction based on time series relations and regression prediction based on causality. Neural network trend prediction collects and analyzes the historical data of a prediction variable to capture its underlying pattern and then extrapolates the trend. It is a nonlinear improvement of the linear time series prediction method and is also known as time series neural network prediction [7]. The trend prediction problem is mostly single-valued, so the prediction model, which generally consists of a three-layer BP network, has only one node in its output layer, as shown in Figure 1.

Figure 1. Time series neural network prediction model


The relationship between the output ${{y}_{t}}$ and the input $[{{y}_{t-1}},{{y}_{t-2}},\cdots ,{{y}_{t-p}}]$ can be expressed by the following equation:

${{y}_{t}}=h({{\alpha }_{0}}+\sum\limits_{i=1}^{q}{{{\alpha }_{i}}g({{\theta }_{i}}+\sum\limits_{j=1}^{p}{{{\omega }_{ij}}{{y}_{t-j}}})})$

Here, p is the number of input nodes, q is the number of hidden layer nodes, ${{\omega }_{ij}}$ $(i=1,2,\cdots ,q;\ j=1,2,\cdots ,p)$ is the connection weight between the jth input node and the ith hidden layer node, ${{\theta }_{i}}$ is the offset of the ith hidden layer node, and ${{\alpha }_{i}}$ is the connection weight between the ith hidden layer node and the output. The activation function g of the hidden layer is usually a logsig- or tansig-type function, while the activation function h from the hidden layer to the output layer is generally a purelin (linear) function [8].

From this relation, it can be seen that the model realizes a nonlinear mapping from the observed samples to the predicted value, i.e.:

${{y}_{t}}=f({{y}_{t-1}},{{y}_{t-2}},\cdots ,{{y}_{t-p}},w)$

Here, $w$ represents all the parameters of the above network, and f is determined by the network topology and the activation functions.
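To make the mapping concrete, the following minimal Python sketch (with arbitrary illustrative parameter values, not the network trained in this paper) evaluates $y_t$ from the last p observations using a tansig (tanh) hidden layer and a purelin (identity) output layer:

```python
import numpy as np

def forward(y_lags, W, theta, alpha, alpha0):
    """Evaluate y_t = h(alpha_0 + sum_i alpha_i * g(theta_i + sum_j w_ij * y_{t-j})).

    y_lags : shape (p,)   -- [y_{t-1}, ..., y_{t-p}]
    W      : shape (q, p) -- input-to-hidden weights w_ij
    theta  : shape (q,)   -- hidden-layer offsets
    alpha  : shape (q,)   -- hidden-to-output weights
    alpha0 : float        -- output offset
    """
    hidden = np.tanh(W @ y_lags + theta)   # g: tansig activation
    return alpha0 + alpha @ hidden         # h: purelin (identity) activation

# Toy call with p = 3 lags and q = 2 hidden nodes (illustrative values only)
rng = np.random.default_rng(0)
print(forward(np.array([0.33, 0.26, 0.26]), rng.normal(size=(2, 3)),
              theta=np.zeros(2), alpha=np.ones(2), alpha0=0.0))
```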

2.2. Prediction Steps

The inputs of the neural network trend prediction model are sample values from the time series of a single state parameter, so multi-parameter prediction is difficult to achieve directly with this model. Instead, multi-parameter prediction can be realized indirectly. First, the multiple parameters are fused using an appropriate statistical analysis method, such as Principal Component Analysis (PCA), which simplifies the original parameters into one integrated parameter (principal component); the trend prediction is then carried out on the integrated parameter. The prediction steps are shown in Figure 2.

Figure 2. Steps of neural network trend prediction


After the sample data of the comprehensive parameter (principal component) are obtained, the limit value of the engine’s technical condition, measured in terms of the principal component, is determined. Then, the neural network trend prediction model is established, whose input is the time series data of the principal component. As use time increases, the principal component tends to decrease (or increase) monotonically; therefore, when the predicted value of the principal component exceeds its limit value, the prediction is stopped and the final prediction result is obtained.

3. Selecting Degradation Parameters and Data Processing

The test parameters are determined according to the following three principles [9]: they should reflect the power and economic performance of the engine; they should reflect the change in the engine’s technical state and wear condition; and they should be technically measurable on real vehicles without disassembly. After analysis, the selected test parameters are the cylinder compression pressure ${{\hat{p}}_{\max }}$, the fuel supply advance angle ${{\hat{\theta }}_{fd}}$, and the vibration energy ${{\hat{V}}_{p}}$. The tests were performed on a certain type of armored vehicle engine. In this paper, 16 armored vehicles whose engine motor hours are between 0 and 550 hours were selected. The data of these parameters were measured and preprocessed, and 16 samples of the three degradation parameters were obtained, as shown in Table 1.

Table 1.   Sample data

| Number | Use time/h | ${{\hat{p}}_{\max }}$/MPa | ${{\hat{\theta }}_{fd}}$/°CA | ${{\hat{V}}_{p}}$/g² |
|---|---|---|---|---|
| 1 | 11 | 2.8950 | 34.9500 | 54.3540 |
| 2 | 38 | 2.8744 | 33.9796 | 48.7860 |
| 3 | 59 | 2.8647 | 33.9000 | 46.8470 |
| 4 | 103 | 2.8228 | 32.8462 | 42.8991 |
| 5 | 140 | 2.8090 | 32.1142 | 42.0632 |
| 6 | 187 | 2.7958 | 31.1880 | 38.8610 |
| 7 | 190 | 2.7847 | 31.1671 | 36.0951 |
| 8 | 250 | 2.7736 | 31.0942 | 31.3503 |
| 9 | 300 | 2.7521 | 30.3144 | 32.9560 |
| 10 | 320 | 2.7439 | 29.5521 | 32.7874 |
| 11 | 350 | 2.7330 | 29.0114 | 30.8780 |
| 12 | 390 | 2.7132 | 28.9751 | 31.7270 |
| 13 | 450 | 2.6739 | 27.5521 | 28.4621 |
| 14 | 497 | 2.6647 | 26.6285 | 26.5810 |
| 15 | 507 | 2.6527 | 26.4883 | 27.8581 |
| 16 | 550 | 2.6429 | 25.8854 | 25.1580 |



The above parameters influence each other, and their values decrease as the engine’s use time increases. Firstly, the data in Table 1 should be standardized. The PCA method requires that the sample value vector of each parameter have a mean of 0 and a variance of 1 [10], so the data are standardized as follows [11]:

${{{x}'}_{ij}}=\frac{{{x}_{ij}}-{{{\bar{x}}}_{i}}}{{{s}_{i}}}$

Here, ${{x}_{i}}$ is the sample value vector of a characteristic parameter, ${{x}_{ij}}$ is the jth sample value of ${{x}_{i}}$, ${{{x}'}_{ij}}$ is the standardized sample value, ${{\bar{x}}_{i}}$ is the mean of the sample value vector, and ${{s}_{i}}$ is the standard deviation of the sample value vector. ${{\bar{x}}_{i}}$ and ${{s}_{i}}$ can be expressed as:

${{\bar{x}}_{i}}=\frac{1}{n}\sum\limits_{j=1}^{n}{{{x}_{ij}}}$
${{s}_{i}}=\sqrt{\frac{1}{n-1}\sum\limits_{j=1}^{n}{{{({{x}_{ij}}-{{{\bar{x}}}_{i}})}^{2}}}}$

The parameter data standardized by this method are shown in Table 4 in the next section.
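As a minimal numpy sketch of this standardization (using only the first three rows of Table 1 for brevity; a full run would use all 16 samples):

```python
import numpy as np

# Columns: cylinder compression pressure [MPa], fuel supply advance angle [°CA],
# vibration energy [g^2]; rows: the first three samples of Table 1.
data = np.array([
    [2.8950, 34.9500, 54.3540],
    [2.8744, 33.9796, 48.7860],
    [2.8647, 33.9000, 46.8470],
])

# Subtract each column's mean and divide by its sample standard deviation
# (ddof=1 matches the 1/(n-1) definition of s_i above).
z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
print(z)
```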

4. Data Fusion based on Principal Component Analysis

4.1. Principle of PCA

Principal component analysis (PCA) is a multivariate statistical method that uses the idea of dimensionality reduction to transform multiple variables into one or a few comprehensive variables with little loss of information. The transformed comprehensive indices are called principal components; each principal component is a linear combination of the original variables, and the principal components are mutually uncorrelated, which gives them superior properties compared with the original variables. In this way, when studying complex problems, we need only consider a few principal components without losing too much information, making it easier to grasp the main contradiction, reveal the regularities among the internal variables, simplify the problem, and improve the efficiency of analysis [12].

Suppose n samples are collected and p variables $({{x}_{1}},{{x}_{2}},\cdots ,{{x}_{p}})$ are observed for each sample. For simplicity, the mean of ${{x}_{i}}$ $(1\le i\le p)$ is set to 0 and its variance to 1, forming an n-by-p data matrix ${{X}_{n\times p}}$:

${{X}_{n\times p}}=\left[ \begin{matrix} {{x}_{11}} & {{x}_{12}} & \cdots & {{x}_{1p}} \\ {{x}_{21}} & {{x}_{22}} & \cdots & {{x}_{2p}} \\ \vdots & \vdots & \vdots & \vdots \\ {{x}_{n1}} & {{x}_{n2}} & \cdots & {{x}_{np}} \\ \end{matrix} \right]$

The purpose of principal component analysis is to use the p original variables $({{x}_{1}},{{x}_{2}},\cdots ,{{x}_{p}})$ to construct a few new comprehensive variables. Each new variable is a linear combination of the original variables; the new variables are mutually independent and contain most of the information of the p original variables. Let ${{x}_{1}},{{x}_{2}},\cdots ,{{x}_{p}}$ be the original variables and ${{z}_{1}},{{z}_{2}},\cdots ,{{z}_{m}}$ $(m\le p)$ be the new comprehensive variables, where each new comprehensive variable is a linear combination of the p original variables:

$\left\{ \begin{matrix} & {{z}_{1}}={{l}_{11}}{{x}_{1}}+{{l}_{12}}{{x}_{2}}+\cdots +{{l}_{1p}}{{x}_{p}} \\ & {{z}_{2}}={{l}_{21}}{{x}_{1}}+{{l}_{22}}{{x}_{2}}+\cdots +{{l}_{2p}}{{x}_{p}} \\ & \cdots \cdots \\ & {{z}_{m}}={{l}_{m1}}{{x}_{1}}+{{l}_{m2}}{{x}_{2}}+\cdots +{{l}_{mp}}{{x}_{p}} \\ \end{matrix} \right.$

It can be seen from the above analysis that the essence of principal component analysis is to determine the coefficients ${{l}_{ij}}$ $(i=1,2,\cdots ,m;\ j=1,2,\cdots ,p)$ of the original variables ${{x}_{j}}$ $(j=1,2,\cdots ,p)$ in the principal components ${{z}_{i}}$ $(i=1,2,\cdots ,m)$. It can be proven mathematically that these coefficient vectors are the eigenvectors corresponding to the m largest eigenvalues of the correlation matrix of the original variables, and the variance var(${{z}_{i}}$) of each comprehensive variable ${{z}_{i}}$ is exactly the corresponding eigenvalue ${{\lambda }_{i}}$. The variance contributions of the principal components are arranged in decreasing order of the eigenvalues, that is, ${{\lambda }_{1}}\ge {{\lambda }_{2}}\ge \cdots \ge {{\lambda }_{p}}\ge 0$.

The practical steps of PCA are as follows (a code sketch is given after the list):

(1) Standardize original data;

(2) Compute the covariance matrix of the standardized data, which equals the correlation matrix of the original data:

${{R}_{p\times p}}=\left[ \begin{matrix} {{r}_{11}} & {{r}_{12}} & \cdots & {{r}_{1p}} \\ {{r}_{21}} & {{r}_{22}} & \cdots & {{r}_{2p}} \\ \vdots & \vdots & \vdots & \vdots \\ {{r}_{p1}} & {{r}_{p2}} & \cdots & {{r}_{pp}} \\ \end{matrix} \right]$

Here, ${{r}_{ij}}$ $(i,j=1,2,\cdots ,p)$ is the correlation coefficient between the original variables ${{x}_{i}}$ and ${{x}_{j}}$, and ${{r}_{ij}}={{r}_{ji}}$. The calculation formula is:

${{r}_{ij}}=\frac{\sum\limits_{k=1}^{n}{({{x}_{ki}}-{{{\bar{x}}}_{i}})({{x}_{kj}}-{{{\bar{x}}}_{j}})}}{\sqrt{\sum\limits_{k=1}^{n}{{{({{x}_{ki}}-{{{\bar{x}}}_{i}})}^{2}}}\sum\limits_{k=1}^{n}{{{({{x}_{kj}}-{{{\bar{x}}}_{j}})}^{2}}}}}$

(3) Calculate the eigenvalues and the corresponding orthonormal eigenvectors;

(4) Calculate the principal component contribution rate (CR) and the cumulative contribution rate (CCR);

$CR={{{\lambda }_{i}}}/{\sum\limits_{k=1}^{p}{{{\lambda }_{k}}}}\;,\text{ }i=1,2,\cdots ,p$
$CCR={\sum\limits_{k=1}^{i}{{{\lambda }_{k}}}}/{\sum\limits_{k=1}^{p}{{{\lambda }_{k}}}}\;,\text{ }i=1,2,\cdots ,p$

(5) Determine the number of principal components to be retained. Generally, enough principal components are retained for the cumulative contribution rate to reach 85% to 95%.
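A minimal numpy sketch of steps (1)-(5) is given below; it follows the procedure above but is not the authors’ Matlab code, and the 85% threshold is taken from the lower end of the range mentioned in step (5).

```python
import numpy as np

def pca_steps(data, threshold=0.85):
    """Steps (1)-(5): standardize, build the correlation matrix, decompose it,
    compute CR/CCR, and choose how many principal components to keep."""
    z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)   # step (1)
    R = np.corrcoef(z, rowvar=False)                            # step (2)
    eigvals, eigvecs = np.linalg.eigh(R)                        # step (3)
    order = np.argsort(eigvals)[::-1]                           # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    cr = eigvals / eigvals.sum()                                # step (4): CR
    ccr = np.cumsum(cr)                                         # step (4): CCR
    m = int(np.searchsorted(ccr, threshold)) + 1                # step (5)
    return z, eigvals, eigvecs, cr, ccr, m
```

Applied to the 16 samples of Table 1, this procedure should reproduce the eigenvalues and contribution rates reported in Table 2; note that eigenvectors are only determined up to sign, so the signs may differ from Table 3.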

4.2. Data Fusion

According to the above PCA method, the eigenvalues, variance contribution rates (CR), and cumulative contribution rates (CCR) of the parameter correlation coefficient matrix can be obtained, as shown in Table 2.

Table 2.   Eigenvalues and variance contribution rate

| Number | Eigenvalue | CR/% | CCR/% |
|---|---|---|---|
| 1 | 2.936 | 97.880 | 97.880 |
| 2 | 0.060 | 1.985 | 99.865 |
| 3 | 0.004 | 0.135 | 100.000 |



The unit-length, mutually orthogonal eigenvectors corresponding to the eigenvalues of the correlation coefficient matrix are then obtained, as shown in Table 3.

Table 3.   Eigenvectors

| Parameters | Eigenvector No. 1 | Eigenvector No. 2 | Eigenvector No. 3 |
|---|---|---|---|
| ${{{\hat{p}}'}_{\max }}$ | 0.581 | -0.333 | -0.743 |
| $\hat{\theta }_{fd}^{'}$ | 0.579 | -0.472 | 0.665 |
| ${{{\hat{V}}'}_{p}}$ | 0.572 | 0.816 | 0.082 |



According to the criterion above for deciding how many principal components to retain, Table 2 shows that the variance contribution rate of the first eigenvalue alone is 97.88%; that is, retaining only the first principal component explains well over 90% of the information in the original parameters. Therefore, the principal component (PC) X can be worked out from the first eigenvector in Table 3 as follows:

$X=0.581{{{\hat{p}}'}_{\max }}+0.579{{{\hat{\theta }}'}_{fd}}+0.572{{{\hat{V}}'}_{p}}$
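In code, this data fusion is a single dot product of each standardized sample with the first eigenvector; a small check against row 1 of Table 4 (coefficients from Table 3):

```python
import numpy as np

v1 = np.array([0.581, 0.579, 0.572])      # first eigenvector (Table 3)
z1 = np.array([1.6627, 1.6318, 2.1069])   # standardized sample No. 1 (Table 4)
X1 = float(v1 @ z1)                       # fused principal component value
print(round(X1, 4))                       # 3.116, matching X = 3.1160 in Table 4
```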

The principal component data of the samples are calculated with this expression, as shown in Table 4.

Table 4.   Standardized data and its principal component

| Number | Use time/h | ${{{\hat{p}}'}_{\max }}$ | $\hat{\theta }_{fd}^{'}$ | ${{{\hat{V}}'}_{p}}$ | X |
|---|---|---|---|---|---|
| 1 | 11 | 1.6627 | 1.6318 | 2.1069 | 3.1160 |
| 2 | 38 | 1.4047 | 1.2874 | 1.4641 | 2.3989 |
| 3 | 59 | 1.2832 | 1.2591 | 1.2402 | 2.1840 |
| 4 | 103 | 0.7582 | 0.8850 | 0.7845 | 1.4017 |
| 5 | 140 | 0.5854 | 0.6252 | 0.6880 | 1.0956 |
| 6 | 187 | 0.4200 | 0.2964 | 0.3183 | 0.5977 |
| 7 | 190 | 0.2809 | 0.2890 | -0.0010 | 0.3300 |
| 8 | 250 | 0.1419 | 0.2631 | -0.5488 | -0.0790 |
| 9 | 300 | -0.1275 | -0.0137 | -0.3634 | -0.2898 |
| 10 | 320 | -0.2302 | -0.2843 | -0.3829 | -0.5173 |
| 11 | 350 | -0.3668 | -0.4762 | -0.6033 | -0.8339 |
| 12 | 390 | -0.6148 | -0.4891 | -0.5053 | -0.9294 |
| 13 | 450 | -1.1071 | -0.9942 | -0.8822 | -1.7235 |
| 14 | 497 | -1.2224 | -1.3220 | -1.0994 | -2.1046 |
| 15 | 507 | -1.3727 | -1.3718 | -0.9520 | -2.1364 |
| 16 | 550 | -1.4955 | -1.5858 | -1.2637 | -2.5100 |



5. Establishment of Life Prediction Model

Firstly, the limit value of the principal component is determined as the condition for stopping the prediction cycle. By testing and analyzing engines close to their overhaul time, the corresponding principal component values are obtained, as shown in Table 5.

Table 5.   The limit value of the principal component

| Number | Use time/h | X |
|---|---|---|
| 1 | 550 | -2.5100 |
| 2 | 550 | -2.5723 |
| 3 | 550 | -2.5510 |



Since the principal component X tends to decrease with use time, its limit is taken as the largest of the three values in Table 5, namely -2.5100, which leaves a certain margin.

The neural network prediction model generally requires the input time series data to be at equal time intervals, but the engine use times measured in this paper are not equally spaced. Therefore, the interpolation method is adopted to obtain principal component data at equal time intervals. The time interval cannot be too large, or important information may be missed, so it is taken as 10 hours. The interpolation point data of principal component X are shown in Table 6, and the variation of the equally spaced principal component X with use time is shown in Figure 3.

Table 6.   The interpolation point data of principal component X

| Use time/h | X | Use time/h | X | Use time/h | X |
|---|---|---|---|---|---|
| 20 | 2.8770 | 200 | 0.2618 | 380 | -0.9055 |
| 30 | 2.6114 | 210 | 0.1937 | 390 | -0.9294 |
| 40 | 2.3784 | 220 | 0.1255 | 400 | -1.0618 |
| 50 | 2.2761 | 230 | 0.0573 | 410 | -1.1941 |
| 60 | 2.1662 | 240 | -0.0108 | 420 | -1.3264 |
| 70 | 1.9884 | 250 | -0.0790 | 430 | -1.4588 |
| 80 | 1.8106 | 260 | -0.1212 | 440 | -1.5912 |
| 90 | 1.6328 | 270 | -0.1633 | 450 | -1.7235 |
| 100 | 1.4550 | 280 | -0.2055 | 460 | -1.8046 |
| 110 | 1.3438 | 290 | -0.2476 | 470 | -1.8857 |
| 120 | 1.2611 | 300 | -0.2898 | 480 | -1.9668 |
| 130 | 1.1783 | 310 | -0.4036 | 490 | -2.0478 |
| 140 | 1.0956 | 320 | -0.5173 | 500 | -2.1141 |
| 150 | 0.9897 | 330 | -0.6228 | 510 | -2.1625 |
| 160 | 0.8837 | 340 | -0.7284 | 520 | -2.2493 |
| 170 | 0.7778 | 350 | -0.8339 | 530 | -2.3362 |
| 180 | 0.6719 | 360 | -0.8578 | 540 | -2.4231 |
| 190 | 0.3300 | 370 | -0.8817 | 550 | -2.5100 |



Figure 3. The interpolation curve of PC X
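The equal-interval resampling described above can be sketched with numpy’s piecewise-linear interpolation; linear interpolation is assumed here, and it reproduces the values in Table 6 (use times and X values taken from Table 4).

```python
import numpy as np

# Use time (h) and principal component X for the 16 samples in Table 4
t = np.array([11, 38, 59, 103, 140, 187, 190, 250,
              300, 320, 350, 390, 450, 497, 507, 550])
X = np.array([3.1160, 2.3989, 2.1840, 1.4017, 1.0956, 0.5977, 0.3300, -0.0790,
              -0.2898, -0.5173, -0.8339, -0.9294, -1.7235, -2.1046, -2.1364, -2.5100])

# Resample to equal 10-hour steps from 20 h to 550 h (54 points)
t_eq = np.arange(20, 551, 10)
X_eq = np.interp(t_eq, t, X)
print(np.round(X_eq[:3], 4))   # [2.877  2.6114 2.3784], as in Table 6
```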


In this paper, the BP neural network prediction model uses a tansig transfer function in the hidden layer and a purelin (linear) transfer function in the output layer. The network is trained with the Levenberg-Marquardt algorithm, and the training error goal is set to 0.001. The input layer receives the interpolated data sequence of the principal component, representing the historical data used for prediction, and the output layer has a single neuron that gives the predicted value. Determining the number of hidden layer nodes is an important and complicated part of fixing the structure of the neural network model, and there is at present no universal theoretical method. For neural network trend prediction, the number of input layer nodes is equally important to determine, and it also lacks a general theoretical method; both can only be chosen from practical experience and by trial and error [11]. Through trial calculations in Matlab, the number of input units of the prediction model is set to 10 and the number of hidden layer neurons is set to 12, which gives good training and prediction results.

The interpolation data of principal component X in Table 6 are taken as the training data of the BP neural network prediction model. The training data are constructed as follows: the first group of inputs consists of the 10 principal component interpolation points (as a column vector) corresponding to 20-110 motor hours, and the corresponding output is the interpolation point at 120 motor hours; the second group of inputs consists of the 10 interpolation points corresponding to 30-120 motor hours, with the interpolation point at 130 motor hours as the output. By analogy, 44 groups of training data can be constructed, forming a 10-row by 44-column input matrix and a corresponding 1-row by 44-column output vector. The neural network is then built and trained in Matlab to determine the network weights and establish the life prediction model of the armored vehicle engine. The training error curve drawn by Matlab is shown in Figure 4.

Figure 4. Training error curve
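As a rough illustration of the training step, the sketch below builds the 44 sliding-window training pairs from the equal-interval series and fits a 10-12-1 network. The paper trains with Matlab’s Levenberg-Marquardt algorithm; scikit-learn has no LM solver, so its lbfgs solver is substituted here, and the 0.001 error goal is not enforced explicitly.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Rebuild the equal-interval principal component series of Table 6
t = np.array([11, 38, 59, 103, 140, 187, 190, 250,
              300, 320, 350, 390, 450, 497, 507, 550])
X = np.array([3.1160, 2.3989, 2.1840, 1.4017, 1.0956, 0.5977, 0.3300, -0.0790,
              -0.2898, -0.5173, -0.8339, -0.9294, -1.7235, -2.1046, -2.1364, -2.5100])
X_eq = np.interp(np.arange(20, 551, 10), t, X)

# Sliding window: 10 consecutive values as input, the next value as target
# (44 pairs from the 54-point series, i.e. the 10 x 44 input matrix above).
p = 10
inputs = np.array([X_eq[k:k + p] for k in range(len(X_eq) - p)])
targets = np.array([X_eq[k + p] for k in range(len(X_eq) - p)])

# 10-12-1 network: tanh (tansig) hidden layer, linear (purelin) output layer.
net = MLPRegressor(hidden_layer_sizes=(12,), activation='tanh',
                   solver='lbfgs', max_iter=5000, random_state=0)
net.fit(inputs, targets)
```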


6. Test and Use of the Model

After the neural network trend prediction model is established, it is necessary to evaluate it through testing to ensure that it can be used once the accuracy requirement is met. In this paper, the mean relative error (MRE) [13] is chosen as the evaluation index of the model, and the required error is no more than 5%. Assume that $x(n)$ $(n=1,2,\cdots ,N)$ is the actual observed value and $\hat{x}(n)$ is the predicted value at time $n$. The MRE can be expressed as:

$MRE=\frac{1}{N}\sum\limits_{n=1}^{N}{\left| \frac{x(n)-\hat{x}(n)}{x(n)} \right|}$

The test and use steps of the model are as follows. First, the interpolation point closest to the engine’s current use time and the nine preceding interpolation points form the first group of input time series data, which contains 10 values and is used to predict the value at the next time interval. To predict the value of the following interval, the predicted value is treated as known historical data and appended to the end of the input sequence, while the first value of the sequence is removed so that the input length stays unchanged. The simulation proceeds step by step, and the predicted value of each step is recorded until the output reaches the limit value. The predicted values are then compared with the actual interpolation points, and the prediction error is calculated. The number of simulation steps multiplied by the time interval is the remaining useful life (RUL) of the engine.
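A sketch of this step-by-step prediction and of the MRE index is given below, written as functions that can be applied to the `net` and `X_eq` objects from the training sketch above; the function names and the `max_steps` safeguard are illustrative additions, not part of the paper.

```python
import numpy as np

def recursive_predict(model, history, limit, p=10, max_steps=200):
    """Feed the last p values, predict the next one, append it to the window,
    and stop once the predicted principal component reaches the limit value."""
    window = list(history[-p:])
    predictions = []
    while len(predictions) < max_steps:
        y_next = float(model.predict(np.asarray(window).reshape(1, -1))[0])
        predictions.append(y_next)
        if y_next <= limit:              # X decreases with use time
            break
        window = window[1:] + [y_next]   # drop the oldest value, keep length p
    return predictions

def mean_relative_error(actual, predicted):
    """MRE = (1/N) * sum_n |x(n) - xhat(n)| / |x(n)|."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)))

# Example for engine No. 11 (350 h): history is X_eq up to the 350 h point and
# limit = -2.5100; RUL = len(predictions) * 10 motor hours, and the MRE is taken
# against the actual interpolation points that overlap the predicted steps.
```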

In the early stage of engine life, the technical state is generally good, so it is usually not necessary to predict the engine’s life at that stage; only engines with relatively long use times need to be predicted. This paper chooses the engines numbered 9, 10, 11, 12, and 13 in Table 4 (use times of 300 h, 320 h, 350 h, 390 h, and 450 h, respectively) as test samples to calculate the model prediction error and to predict their remaining useful lives.

Taking the engine with number 11 in Table 4 as an example, the prediction result with the neural network prediction model is shown in Figure 5. The MRE is 3.20%, and the RUL is 220 motor hours.

Figure 5. The prediction results of X


The engines numbered 9, 10, 12, and 13 in Table 4 are simulated with the same method, and their errors and remaining useful lives are calculated. The prediction results are shown in Table 7. It can be seen from Table 7 that the MRE is within 4%, satisfying the prediction accuracy requirement. The predicted RULs are basically consistent with the actual service life of this type of armored vehicle engine. Furthermore, the data in Table 7 show that the MRE decreases as the use time increases. As mentioned before, the engine is usually in good condition in its early period, and it is not necessary to predict its life in this period. In other words, a better and more exact result can be achieved by using this model in the middle or later period of the engine’s life.

Table 7.   RUL prediction results

| Number | Use time/h | RUL/h | MRE/% |
|---|---|---|---|
| 9 | 300 | 260 | 4.00 |
| 10 | 320 | 240 | 2.41 |
| 11 | 350 | 200 | 1.57 |
| 12 | 390 | 160 | 1.50 |
| 13 | 450 | 100 | 0.82 |



7. Conclusions

Degradation data provides information about the degradation process, which can be used to analyze the product failure process in depth, and is therefore of great practical value. The useful information contained in degradation data can be used to study product reliability and indicates a new direction for reliability technology [14]. On the basis of related research, a model of engine life prediction is constructed in this paper using the principal component analysis method and a neural network model. Based on the degradation data, the model integrates multiple degradation parameters of the engine into one comprehensive parameter. Then, the interpolation algorithm is used to construct equal-interval time series data of the integrated parameter as the input of the neural network prediction model to predict the remaining life of the engine. The comprehensive parameter not only integrates the various state information of the engine, but also allows single-parameter neural network forecasting to be applied, indirectly realizing multi-parameter prediction. The calculation results show that the predictions are accurate and that, once the limit technical state is determined, the model can predict the remaining life of the engine under actual working conditions. The model is suitable for real-time evaluation and has good application and promotion value.

Armored vehicle diesel engines are complicated and include many systems and parts, so it is difficult to obtain an ideal forecast result with a single method [15]. Conversely, combining several algorithms to make a forecast can not only combine their strengths but also offset their respective shortcomings. For example, combining a neural network with fuzzy theory can bring qualitative information into the neural network framework; this combined method is better than a single method. Studying combination forecasting methods [16] is an inevitable trend in the forecasting domain.

References

[1] Y. B. Liu, J. M. Liu, and X. Y. Qiao, “Application of Supportive Vector Machine in Technical States Evaluation of Diesel Engine,” Journal of Academy of Armored Force Engineering, Vol. 23, No. 2, pp. 38-40, 2009
[2] F. Zhao and H. W. Wang, “Research on Condition based Maintenance for Aero-Engine Using Hidden Markov,” Aeronautical Computing Technique, Vol. 40, No. 5, pp. 15-19, 2010
[3] X. Liang, X. S. Li, and L. Zhang, “Survey of Fault Prognostics Supporting Condition based Maintenance,” Measurement & Control Technology, Vol. 26, No. 6, pp. 5-8, 2007
[4] Y. H. Cao, “Research on Autonomic Logistics Key Technologies for Armored Equipment,” Ph.D. Dissertation, Academy of Armored Forces Engineering, 2012
[5] H. W. Wang and K. N. Teng, “Review of Reliability Evaluation Technology based on Accelerated Degradation Data,” Systems Engineering and Electronics, Vol. 39, No. 12, pp. 2877-2885, 2017
[6] Z. G. Guo, “Reliability Analysis of Barrel’s Life based on Performance Degradation Data,” Master’s Thesis, Nanjing University of Science and Technology, 2011
[7] X. X. Kou and X. P. He, “Time Series Prediction based on RBM Neural Network,” Mathematics in Practice and Theory, Vol. 46, No. 9, pp. 173-178, 2016
[8] W. B. Chen, “The Principle and Practice of Artificial Neural Network,” Xidian University Press, Xi’an, pp. 44-64, 2016
[9] Y. H. Cao, S. X. Zhang, and Y. Li, “Design of Engine Condition Detection for a Certain Type of Armored Vehicle,” in Proceedings of OSEC2017, pp. 15-18, 2017
[10] W. B. Zhang and H. Y. Chen, “Statistical Analysis of Practical Data and Application of SPSS12.0,” The Posts and Telecommunications Press, Beijing, 2006
[11] X. P. Zheng, J. J. Gao, and M. T. Liu, “Accident Prediction Theory and Method,” Tsinghua University Press, Beijing, 2009
[12] D. Zhang, C. S. Zu, and C. C. Zhao, “Principal Component and Neural Network Combined Fuel Consumption Forecast,” Agricultural Equipment & Vehicle Engineering, Vol. 53, No. 6, pp. 47-52, 2015
[13] J. J. Jiang, “Research on Prediction of Lead-Acid Batteries’ Remaining Discharge Time,” Electrical & Energy Management Technology, No. 7, pp. 73-78, 2017
[14] Z. L. Huang, Z. B. Wang, and J. W. Wang, “Review of Reliability Evaluation Methods based on Performance Degradation Data,” Electrical & Energy Management Technology, No. 19, pp. 35-40, 2017
[15] Y. H. Cao, Y. Li, Y. Zheng, and X. Zan, “Runtime Forecast of Military Vehicle Diesel Engine based on BP Neural Network,” in Proceedings of QR2MSE 2016, pp. 509-512, 2016
[16] H. Y. Chen, J. M. Zhu, and Z. N. Ding, “A Survey of Researches on Combination Forecasting Models and Methodologies,” College Mathematics, Vol. 33, No. 4, pp. 1-9, 2017