2014 10th International Conference on Natural Computation
Hilbert-Huang Transform and Neural Networks for Electrocardiogram Modeling and Prediction

Ricardo Rodríguez
Department of Mechatronics, Technological University of Ciudad Juarez, Ciudad Juarez, Chihuahua, México

Adriana Mexicano, Salvador Cervantes, Rafael Ponce
Postgraduate Studies and Research Division, Technological Institute of Ciudad Victoria, Cd. Victoria, Tamaulipas, México

Jiri Bila
Department of Instrumentation and Control Engineering, Czech Technical University in Prague, Prague, Czech Republic

Nghien N. B.
Department of Information Technology, Hanoi University of Industry, Hanoi, Vietnam
Abstract—This paper presents a model for the prediction and modeling of nonlinear, chaotic, and nonstationary electrocardiogram signals. The model is based on the combined use of the Hilbert-Huang transform, the False Nearest Neighbors algorithm, and a novel neural network architecture. It is intended to increase prediction accuracy by applying the Empirical Mode Decomposition to a signal and then reconstructing the signal by adding the calculated Intrinsic Mode Functions and the residue, leaving out the Intrinsic Mode Function that contains the highest-frequency oscillation. The optimal embedding dimension of the reconstructed signal is obtained with the False Nearest Neighbors algorithm. Finally, over the prediction horizon, a neural network retraining technique is applied to the reconstructed signal. The method has been validated using record 103 from the MIT-BIH arrhythmia database. The results are very promising, since the measured root mean squared errors are 0.031, 0.05, and 0.085 of the ECG amplitude for prediction horizons of 0.0028, 0.0056, and 0.0083 seconds, respectively.
Keywords—Electrocardiogram; Hilbert-Huang transform; False nearest neighbors; neural network

I. INTRODUCTION

Electrocardiogram (ECG) analysis is one of the most common techniques used by medical specialists during heart diagnostics, mainly because an electrocardiogram records the electrical changes on the skin, which are commonly measured by electrodes placed on the surface of the body. The electrocardiogram pattern and the heart rate variability may be observed over several hours; because the resulting data are enormous and their analysis is time consuming, monitoring and classification systems for cardiac diseases can help during diagnostics. Computer-based medical diagnostic systems have been developed to assist medical specialists in the analysis of patient data [1][2].

Electrocardiogram analysis systems process ECG signals that are measured under particular conditions, such as intensive care monitoring, ambulatory ECG monitoring, or stress test analysis. The ECG waveforms obtained by the acquisition system may present various shapes even when they come from the same patient, and the ECG shapes also differ from patient to patient [3]. In addition, some records may present disturbances such as baseline wander and power line interference.

Even though the ECG may present such disturbances, it can be characterized by the recurrent sequence of the P, QRS, and T waves, which are associated within each beat. The QRS complex is the most distinctive waveform in the sequence; it is caused by the ventricular depolarization of the human heart [3]. The QRS waveform has been the most analyzed, in order to extract information more efficiently; in a healthy patient, a typical QRS waveform lasts from 40 ms to 100 ms [4]. The QRS complex has been used in studies of heart rate variability, in the interpretation of arrhythmias, and in reliable heart disease diagnosis; for these purposes, the detection of the QRS complex must be accurate and reliable [1][3][4].

Several research works have been reported in the literature for the modeling, prediction, and classification of ECG beats, and for the prediction of heart diseases [1][3][4][5][6]. In [7], the authors used direct and iterated neural network methods to predict an electrocardiogram signal; they found that clinical information was preserved for three-steps-ahead forecasting with the direct method, and that neural networks have much potential for electrocardiogram forecasting. In [8], the authors presented a novel neural network architecture, the Hybrid-connected Complex Neural Network (HCNN), to capture the dynamics embedded in the highly nonlinear Mackey-Glass and electrocardiogram signals; they predicted a long-term horizon four times the size of the training signal. In [9], the authors proposed a method for modeling and predicting the electrocardiographic signals of people who suffer post-traumatic stress, combining an autoregressive moving average model, parameterized by the minimal embedding dimension, with nonlinear analysis methods; they obtained the best prediction capability for prediction horizons less than or equal to 4 samples.

This paper is concerned with the modeling and prediction of electrocardiogram signals: first, the Hilbert-Huang transform is applied; then, the minimum embedding dimension of the reconstructed signal is obtained; and finally, this embedding dimension defines the number of inputs of the neural network architecture.

The paper is organized as follows: Section II.A describes the Hilbert-Huang transform used to obtain the empirical mode decomposition; Section II.B describes the False Nearest Neighbors algorithm applied to obtain the minimum embedding dimension of the reconstructed data; and Section II.C presents the neural network architecture applied to the modeling and prediction of ECG signals. Section III describes the numerical results obtained. Finally, Section IV presents conclusions and further research.

This work was supported by PROMEP, grant No. PROMEP/103.5/13/9045, and by the Technological University of Ciudad Juarez.

978-1-4799-5151-2/14/$31.00 ©2014 IEEE
II. METHOD

An electrocardiogram is a chaotic, nonlinear, and nonstationary bio-signal, since it presents positive maximum Lyapunov exponents and strange attractors in its phase portraits or return maps [10]. These characteristics make the prediction and modeling of electrocardiogram signals a challenge. Since several cardiac diseases could be promptly treated if they were predicted, several research efforts are currently under way to predict and model the behavior of electrocardiogram signals [8][11][12].

In this research, we propose a model for the prediction and modeling of electrocardiogram signals, based on combining the Hilbert-Huang transform, False Nearest Neighbors, and a neural network architecture with a retraining technique. The details of the design and implementation of the proposed predictive model are presented next.

A. Hilbert-Huang Transform

The Hilbert-Huang transform (HHT) is an empirically based data-analysis method. Since its basis of expansion is adaptive, it can produce physically meaningful representations of nonlinear and non-stationary systems [13][14]; this adaptive capability is what allows the processing of nonlinear and non-stationary data. The Hilbert-Huang transform has two stages: 1) the Empirical Mode Decomposition (EMD), which decomposes the signal into Intrinsic Mode Functions (IMFs), and 2) the Hilbert Spectral Analysis (HSA), which calculates an instantaneous frequency and amplitude for each IMF [13][14].

An Intrinsic Mode Function (IMF) is defined as a signal that satisfies two conditions: a) the number of zero crossings and the number of extrema (over the whole dataset) must be equal or differ at most by one; and b) at any point, the mean value of the envelope defined by the local minima and the envelope defined by the local maxima is zero [13][14][15].

The IMF construction starts from the highest-frequency component, and the last extracted function usually has one extremum or is monotonic. The EMD makes it possible to observe trends, stochastic components, and periodic components. Denoting the original signal by x(k), the EMD algorithm is described as follows [14][15][16][17].

If x(k) is a non-monotonic function, all the local maxima and local minima of the data are found, and the upper and lower envelopes are obtained. The upper envelope is the interpolation between the local maxima [15][17]; the lower envelope is the interpolation between the local minima [18]. The mean of both envelopes, x0(k), is obtained as

x0(k) = ( xmax(k) + xmin(k) ) / 2    (1)

where xmax(k) stands for the upper envelope and xmin(k) for the lower envelope. The envelope mean is subtracted from the data to determine x1(k), as shown in eq. (2):

x1(k) = x(k) − x0(k)    (2)

A stopping criterion is calculated using a Cauchy-type convergence test [15][18]. The test, shown in eq. (3), computes the normalized squared difference between two successive sifting operations:

SDi = [ Σ k=0..N ( x1(i−1)(k) − x1i(k) )² ] / [ Σ k=0..N x1(i−1)(k)² ]    (3)

The sifting process stops when SDi < th, where th is a small threshold. Another stopping criterion is satisfied when the numbers of extrema and zero crossings remain the same from one sifting pass to the next and are equal or differ at most by one.

If the stopping criteria are not satisfied, the sifting step is applied to x1(k) to compute a new x1(k), and this is repeated until one of the stopping criteria is satisfied. When a stopping criterion is satisfied, imf1(k) = x1(k) is the first IMF, and the residue is obtained as shown in eq. (4):

r1(k) = x(k) − imf1(k)    (4)

Then, letting x(k) = r1(k), the steps above are repeated (for iteration i = i + 1) to produce imfi(k), until a residue rn(k) is obtained that is a monotonic function or whose amplitude is smaller than a predetermined value [15][19][20]. The final residue can still be different from zero, even for data with zero mean; when the data present a trend, the final residue is that trend [13][14][15].

Adding all the Intrinsic Mode Functions and the residue recovers the original signal, as shown in eq. (5):

x(k) = Σ i=1..n imfi(k) + rn(k)    (5)
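The sifting loop of eqs. (1)-(5) can be sketched in Python as follows. This is an illustrative sketch, not the authors' implementation: the cubic-spline envelopes, the boundary handling of the extrema, the threshold th = 0.2, and the minimum-extrema stopping rule are all assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def local_extrema(x):
    """Indices of local maxima and minima (endpoints may count as extrema)."""
    d = np.diff(x)
    maxima = np.where((np.hstack([d, -1.0]) < 0) & (np.hstack([1.0, d]) > 0))[0]
    minima = np.where((np.hstack([d, 1.0]) > 0) & (np.hstack([-1.0, d]) < 0))[0]
    return maxima, minima

def sift(x, th=0.2, max_iter=50):
    """Extract one IMF by repeated envelope-mean subtraction, eqs. (1)-(3)."""
    h = x.copy()
    for _ in range(max_iter):
        maxima, minima = local_extrema(h)
        if len(maxima) < 4 or len(minima) < 4:
            break                                   # too few extrema for envelopes
        k = np.arange(len(h))
        upper = CubicSpline(maxima, h[maxima])(k)   # interpolation between maxima
        lower = CubicSpline(minima, h[minima])(k)   # interpolation between minima
        x0 = (upper + lower) / 2.0                  # envelope mean, eq. (1)
        h_new = h - x0                              # eq. (2)
        sd = np.sum((h - h_new) ** 2) / np.sum(h ** 2)  # eq. (3)
        h = h_new
        if sd < th:
            break
    return h

def emd(x, max_imfs=10):
    """Decompose x into IMFs plus a residue; their sum recovers x, eq. (5)."""
    residue = np.asarray(x, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        maxima, minima = local_extrema(residue)
        if len(maxima) < 4 or len(minima) < 4:      # residue (nearly) monotonic
            break
        imf = sift(residue)
        imfs.append(imf)
        residue = residue - imf                     # eq. (4)
    return imfs, residue
```

By construction, the extracted IMFs and the final residue sum back exactly to the input signal, which is the identity stated in eq. (5).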
The next step of the Hilbert-Huang transform is to obtain the Hilbert transform [14][15]. The Hilbert transform is applied to each IMF; it provides an instantaneous frequency and an instantaneous amplitude for each time k. The details of the Hilbert transform are presented next.

For a real signal y(k), its Hilbert transform is obtained as

yH(k) = H(k) = FFT⁻¹( f(k) · h(i) )    (6)

where FFT⁻¹ is the inverse fast Fourier transform and the vector f stores the fast Fourier transform (FFT) of the signal y(k). The vector h is created as follows:

h(i) = 1 for i = 1 and i = (N/2) + 1;  h(i) = 2 for i = 2, 3, …, N/2;  h(i) = 0 for i = (N/2) + 2, …, N    (7)

Consequently, the analytic signal z(k) is given by eq. (8):

z(k) = y(k) + j·yH(k)    (8)

Its instantaneous magnitude, or envelope, is described by eq. (9):

a(k) = sqrt( y²(k) + yH²(k) )    (9)

The instantaneous phase angle in the complex plane is defined by eq. (10):

θ(k) = arctan( yH(k) / y(k) )    (10)

The instantaneous frequency ω(k), in discrete time, is defined by eq. (11):

ω(k) = ∂θ(k)/∂k = [ y(k)·∂yH(k)/∂k − yH(k)·∂y(k)/∂k ] / a²(k)    (11)
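The FFT-based construction of eqs. (6)-(11) can be sketched in Python as follows. This is an illustrative implementation rather than the authors' code, and it assumes an even signal length N; the 0-based indices mirror the 1-based indices of eq. (7).

```python
import numpy as np

def analytic_signal(y):
    """Analytic signal z(k) of eq. (8) via the FFT weighting of eqs. (6)-(7)."""
    N = len(y)                       # assumes even N
    f = np.fft.fft(y)                # vector f: the FFT of y(k)
    h = np.zeros(N)                  # weight vector h(i) of eq. (7)
    h[0] = 1.0                       # i = 1 in the paper's 1-based indexing
    h[N // 2] = 1.0                  # i = N/2 + 1
    h[1:N // 2] = 2.0                # i = 2, ..., N/2
    return np.fft.ifft(f * h)        # z(k) = y(k) + j*yH(k)

def hilbert_spectrum(y, fs):
    """Instantaneous envelope (eq. 9) and frequency in Hz (eqs. 10-11) of one IMF."""
    z = analytic_signal(y)
    a = np.abs(z)                                   # a(k), eq. (9)
    theta = np.unwrap(np.angle(z))                  # theta(k), eq. (10)
    omega = np.gradient(theta) * fs / (2 * np.pi)   # eq. (11), converted to Hz
    return a, omega
```

For a pure 10 Hz cosine sampled at 1 kHz, this recovers an envelope a(k) ≈ 1 and an instantaneous frequency ω(k) ≈ 10 Hz at every sample.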
B. False Nearest Neighbors

The method of False Nearest Neighbors was proposed in [21] to find the minimum embedding dimension. The principle of the method is that not all points that are close to each other remain neighbors when the embedding dimension is increased. False Nearest Neighbors is used to calculate how many dimensions are sufficient for embedding a time series. Given time series data y(k), with k = 1, 2, …, N, the idea of the method is to combine sequence values into vectors, i.e., to construct d-dimensional vectors from the observed data using a delay embedding [22][23][24]:

Y(k) = [ y(k), y(k + τ), …, y(k + (d − 1)τ) ],  k = 1, 2, 3, …, N − (d − 1)τ    (12)

In eq. (12), the number d of elements is called the embedding dimension, and the time τ is referred to as the delay.

Each vector Y(k) has a nearest neighbor YNN(k), with nearness in the sense of some distance function, in dimension d [21][22]. For each vector, its nearest neighbor is found in d-dimensional space using the Euclidean distance, as in eq. (13):

Dd(k) = sqrt( Σ i=0..d−1 [ y(k + i·τ) − yNN(k + i·τ) ]² )    (13)

The distance between the vectors in d-dimensional space is then compared with their distance when embedded in dimension d + 1, as shown in eq. (14):

| y(k + d·τ) − yNN(k + d·τ) | / Dd(k) > Rt    (14)

Here Rt is a threshold. In [22], the authors recommend the range 10 ≤ Rt ≤ 50; in our case Rt = 15 has been used, together with a second criterion of falseness of nearest neighbors suggested in [22] (see eq. (15)):

Dd+1(k) / Ra ≥ At    (15)

where Ra is the standard deviation of the given time series data and At = 2. A vector Y(k) and its nearest neighbor are declared false nearest neighbors if either criterion, eq. (14) or eq. (15), holds.
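A brute-force sketch of the false nearest neighbors test of eqs. (12)-(15) is given below. This is illustrative only: the O(N²) neighbor search and the exact boundary handling are our assumptions, not the authors' implementation.

```python
import numpy as np

def fnn_percentage(y, d, tau=1, Rt=15.0, At=2.0):
    """Percentage of false nearest neighbors at embedding dimension d, eqs. (12)-(15)."""
    N = len(y)
    M = N - d * tau          # vectors that also have a (d+1)-th delay coordinate
    if M < 2:
        return 0.0
    # eq. (12): delay-embedding vectors Y(k) = [y(k), y(k+tau), ..., y(k+(d-1)tau)]
    Y = np.column_stack([y[i * tau : i * tau + M] for i in range(d)])
    Ra = np.std(y)           # standard deviation of the series, used in eq. (15)
    false = 0
    for k in range(M):
        # eq. (13): nearest neighbor of Y(k) in d dimensions, Euclidean distance
        dist = np.linalg.norm(Y - Y[k], axis=1)
        dist[k] = np.inf
        nn = int(np.argmin(dist))
        Dd = dist[nn]
        # distance growth contributed by the added (d+1)-th coordinate
        extra = abs(y[k + d * tau] - y[nn + d * tau])
        Dd1 = np.sqrt(Dd ** 2 + extra ** 2)
        # eqs. (14)-(15): the neighbor is false if either criterion holds
        if Dd > 0 and (extra / Dd > Rt or Dd1 / Ra >= At):
            false += 1
    return 100.0 * false / M
```

On a chaotic series whose attractor needs two coordinates (e.g. the x-component of the Hénon map), the percentage is high at d = 1 and drops close to zero by d = 3, which is how the minimum embedding dimension is read off.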
C. Neural Network Predictive Model

In this work, a Multilayer Perceptron (MLP) has been used, which is a type of feedforward neural network, also called a "static" neural network. The input-output relation of our MLP model is given by

ŷ(k + ns) = wout · yk(k) = wout · φ( W(k) · x )    (16)

where x is the input vector, φ is the activation function applied to the interlayer weights W of the network, yk(k) is the vector of hidden-layer outputs, and ŷ(k + ns) is the output neuron. The output neuron ŷ(k + ns) is the prediction ns samples ahead, i.e., the neural network output for time k + ns. The weights of the output layer are denoted wout. Fig. 1 presents the MLP predictive model, which contains one hidden layer with n1 hidden neurons.

Figure 1. Multilayer Perceptron predictive model with one hidden layer of n1 hidden neurons, input vector x(k) = [1, y(k), y(k − 1), …, y(k − n + 1)]ᵀ (the first element is the bias), and one output neuron ŷ(k + ns).
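A minimal sketch of the predictive model of eq. (16) follows. Note that the paper trains with Levenberg-Marquardt and a retraining window; for brevity this sketch uses plain batch gradient descent, so the training loop is an assumption, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patterns(y, n, ns):
    """Input-output pairs: x = [1, y(k), y(k-1), ..., y(k-n+1)], target y(k+ns)."""
    X, T = [], []
    for k in range(n - 1, len(y) - ns):
        X.append(np.concatenate(([1.0], y[k - n + 1 : k + 1][::-1])))
        T.append(y[k + ns])
    return np.array(X), np.array(T)

class MLP:
    """One hidden layer, eq. (16): yhat(k+ns) = w_out . tanh(W x)."""
    def __init__(self, n_inputs, n1):
        self.W = rng.normal(0.0, 0.5, (n1, n_inputs))   # interlayer weights W
        self.w_out = rng.normal(0.0, 0.5, n1)           # output-layer weights

    def forward(self, X):
        return np.tanh(X @ self.W.T) @ self.w_out

    def train(self, X, T, epochs=300, lr=0.02):
        """Plain batch gradient descent (the paper uses Levenberg-Marquardt)."""
        for _ in range(epochs):
            H = np.tanh(X @ self.W.T)                   # hidden activations
            e = H @ self.w_out - T                      # prediction errors
            self.w_out -= lr * H.T @ e / len(T)
            self.W -= lr * ((e[:, None] * self.w_out) * (1.0 - H ** 2)).T @ X / len(T)
```

A usage pattern matching the paper's setup would be `make_patterns(signal, n=3, ns=1)` followed by `MLP(n_inputs=4, n1=2)`; the extra input is the bias.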
We also use a backpropagation technique as the learning rule, in batch adaptation; the Levenberg-Marquardt (L-M) algorithm is applied in this work for batch learning. The formula for a single weight increment is presented in eq. (17):

Δwi,j = [ ji,jᵀ · ji,j + μ · I ]⁻¹ · ji,jᵀ · e    (17)

where μ is the learning rate, which is increased or decreased at each epoch according to its performance, I is the identity matrix, and ᵀ stands for transposition. The Jacobian matrix, denoted ji,j, contains the first derivatives of the network errors with respect to the weights and biases over all the training patterns, as presented in eq. (18):

ji,j = ∂e / ∂wi,j = ∂( y(k + ns) − ŷ(k + ns) ) / ∂wi,j    (18)

where y(k + ns) is the measured value and ŷ(k + ns) is the network prediction. The vector of errors, e, is defined in eq. (19):

e = [ e(1)  e(2)  …  e(N) ]ᵀ    (19)

The number of samples is denoted by N, which is the data length.

III. NUMERICAL RESULTS

The multilayer neural network with retraining has been tested for predicting electrocardiogram signals using record 103 from the MIT-BIH arrhythmia database. The record was sampled at 360 Hz, with 11-bit resolution over a ±5 mV range [25]. Fig. 2 shows the electrocardiogram signal from record 103.

Figure 2. The ECG signal in the database MIT-BIH record 103, with the P, Q, R, S, and T waves indicated.

Our method consists, first, of decomposing the ECG signal into Intrinsic Mode Functions by applying the EMD. The EMD extracted eight IMFs and a residue signal, as shown in Fig. 3. The figure shows that most of the ECG propagation information is carried in the first and second IMFs. Each IMF represents an oscillator, and the first IMF contains the highest-frequency oscillation.

Figure 3. EMD expansion of the ECG signal in the database MIT-BIH record 103. It includes 8 IMFs and the residue r.

After that, the signal is reconstructed without the highest-frequency oscillation (i.e., without imf1(k)). Fig. 4 shows the reconstructed signal obtained by adding each Intrinsic Mode Function, except imf1(k), and the residue.
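The reconstruction step, together with the error metric used later in eq. (20), can be illustrated with synthetic stand-ins for the EMD components: white noise playing the role of the highest-frequency imf1, plus two clean tones. These components are fabricated for illustration only, not an actual EMD output of the ECG record.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)

# Stand-ins for the EMD components: imfs[0] plays the role of the
# highest-frequency IMF (mostly noise); the rest carry the signal.
imfs = [0.1 * rng.standard_normal(len(t)),
        np.sin(2 * np.pi * 5 * t),
        0.5 * np.sin(2 * np.pi * 2 * t)]
residue = np.zeros(len(t))

full = np.sum(imfs, axis=0) + residue        # eq. (5): adding every IMF and the residue
recon = np.sum(imfs[1:], axis=0) + residue   # reconstruction without imf1

def rmse(a, b):
    """Root mean squared error, as in eq. (20)."""
    return np.sqrt(np.mean((a - b) ** 2))

print(rmse(full, clean), rmse(recon, clean))  # dropping imf1 lowers the error
```

Leaving out imf1 removes most of the high-frequency noise while keeping the oscillatory components, which is the denoising rationale of this step.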
Figure 4. The reconstructed signal from the database MIT-BIH record 103.

Furthermore, the minimum embedding dimension of the reconstructed signal is found using the false nearest neighbors algorithm. Fig. 5 illustrates the percentage of false nearest neighbors of the reconstructed signal as a function of the embedding dimension; the procedure identified d = 3 as the optimal embedding dimension.

Figure 5. The percentage of false nearest neighbors versus the embedding dimension for 3 seconds (1080 samples) of the reconstructed ECG signal in the database MIT-BIH record 103; d = 1, 2, …, 10, τ = 1, and the threshold Rt is equal to 15.

We have also calculated the minimum embedding dimension of the original electrocardiogram signal (record 103 from the MIT-BIH arrhythmia database) with the false nearest neighbors algorithm. Fig. 6 illustrates the result, which identifies d = 5 as the optimal embedding dimension of the original signal.

Figure 6. The percentage of false nearest neighbors versus the embedding dimension for 3 seconds (1080 samples) of the original ECG signal in the database MIT-BIH record 103; d = 1, 2, …, 10, τ = 1, and the threshold Rt is equal to 15.

By decreasing the embedding dimension, the complexity of the system decreases as well. We therefore use the reconstructed electrocardiogram signal, and its optimal embedding dimension, to build our neural network architecture.

The configuration of the predictive model has (n + 1) inputs, where n is the number of signal samples used to feed the model and the extra input is the bias, x(k = 1) = 1 (see Fig. 1). The number n of samples that feed the model corresponds to the optimal embedding dimension obtained with the false nearest neighbors algorithm (see Fig. 5). This decreases the number of inputs to the neural network predictive model and, consequently, the time consumed in training it, owing to the reduced complexity of the dynamic system. The number of input-output training patterns for the retraining has been set to Ntrain = 93, and the number of hidden neurons to n1 = 2.

The performance of the multilayer perceptron with retraining has been measured using the root mean squared error (RMSE), defined in eq. (20):

RMSE = sqrt( (1/N) Σ k=1..N ( y(k + ns) − ŷ(k + ns) )² )    (20)

where ŷ(k + ns) denotes the predicted value.

Fig. 7 shows an example of the MLP predictive model for one sample ahead (i.e., a prediction horizon of tpred = 0.0028 s). As can be seen in Fig. 7, the RMSE for this horizon is 0.031 of the amplitude.

Figure 7. Prediction of the reconstructed electrocardiogram signal (database MIT-BIH record 103) by the MLP predictive model with retraining; tpred = 0.0028 s, n = 3, Ntrain = 93, RMSE = 0.031076, computing time = 332.0582 s. The prediction is superimposed on the reconstructed electrocardiogram signal, and the middle red line is the prediction error.

Fig. 8 presents an example of the MLP predictive model for two samples ahead (i.e., a prediction horizon of tpred = 0.0056 s). As shown in Fig. 8, the RMSE for this horizon is 0.05 of the amplitude.

Figure 8. Prediction of the reconstructed electrocardiogram signal (database MIT-BIH record 103) by the MLP predictive model with retraining; tpred = 0.0056 s, n = 3, Ntrain = 93, RMSE = 0.050056, computing time = 231.7582 s. The prediction is superimposed on the reconstructed electrocardiogram signal, and the middle red line is the prediction error.

Finally, the result of predicting three samples ahead (i.e., a prediction horizon of tpred = 0.0083 s) with the MLP predictive model is presented in Fig. 9; the RMSE for this horizon is 0.085 of the amplitude.

Figure 9. Prediction of the reconstructed electrocardiogram signal (MIT-BIH database record 103) by the MLP predictive model with retraining; tpred = 0.0083 s, n = 3, Ntrain = 93, RMSE = 0.085142, computing time = 184.1529 s. The prediction is superimposed on the reconstructed electrocardiogram signal, and the middle red line is the prediction error.

In this research, we have presented a neural network architecture based on the traditional MLP. The neural network was trained with 300 epochs in the retraining window, and then a new sample was predicted. Training started with a learning rate of μ = 0.01, which is increased or decreased according to the Levenberg-Marquardt optimization technique (see eqs. (17)-(19)). The neural network contained n = 3 inputs (according to the minimum embedding dimension, see Fig. 5), the bias x(k = 1) = 1, n1 = 2 hidden neurons, and one output neuron (see the architecture in Fig. 1). In the testing reported here, Figs. 7, 8, and 9 each show 500 prediction points. The RMSE obtained in testing is very promising: the prediction performance is 0.03, 0.05, and 0.085 RMSE of the amplitude for prediction horizons of 0.0028, 0.0056, and 0.0083 seconds, respectively.

IV. CONCLUSIONS

This paper has presented a method that combines the Hilbert-Huang transform and the false nearest neighbors algorithm to find the optimal embedding dimension space of electrocardiogram signals. A neural network retraining technique has also been presented to model and predict electrocardiogram signals; the retraining allows capturing the time-varying dynamics of the electrocardiogram. The Hilbert-Huang transform works as an adaptive filter: the signal is reconstructed without the Intrinsic Mode Function that contains the highest-frequency oscillation, which reduces the noise of the electrocardiogram signal.

The false nearest neighbors calculation has been applied to find the optimal embedding dimension space of the reconstructed electrocardiogram signal. Our experiments showed that the reconstruction decreased the optimal embedding dimension and, therefore, the complexity of the system. Our experiments have also shown that the neural network architecture achieves high prediction accuracy on the reconstructed signal: the prediction results for 1, 2, and 3 steps ahead (i.e., 0.0028, 0.0056, and 0.0083 seconds) were 0.03, 0.05, and 0.085 RMSE of the amplitude, respectively. It is important to emphasize that the number of inputs of the neural network architecture corresponds to the optimal embedding dimension space.

Future work consists of the automatic classification of cardiac arrhythmias using a multilayer perceptron neural network with the backpropagation learning technique.

ACKNOWLEDGMENT

R.R. thanks his previous Ph.D. supervisor-specialist Doc. Ivo Bukovsky for his valuable suggestions. R.R. also thanks Prof. Noriyasu Homma and Kei Ichiji from the Cyber Science Center, Tohoku University, for their suggestions and support.
REFERENCES

[1] G. N. Golpayegani, A. H. Jafari, "A novel approach in ECG beat recognition using adaptive neural fuzzy filter," Journal of Biomedical Science and Engineering, vol. 2, pp. 80-85, April 2009.
[2] Z. Jin, Y. Sun, and A. C. Cheng, "Predicting cardiovascular disease from real-time electrocardiographic monitoring: An adaptive machine learning approach on a cell phone," 31st Annual International Conference of the IEEE EMBS, Minneapolis, Minnesota, USA, pp. 6889-6892, September 2009.
[3] S. Sharma, S. S. Mehta, and D. Nagal, "Heart monitoring via wireless ECG," International Journal of Electronics Signals and Systems, vol. 2, pp. 76-81, 2012.
[4] R. Adams and A. Choi, "Using Neural Networks to Predict Cardiac Arrhythmias," 2012 Florida Conference on Recent Advances in Robotics, Boca Raton, Florida, May 2012.
[5] S.-N. Yu and K.-T. Chou, "Integration of independent component analysis and neural networks for ECG beat classification," Expert Systems with Applications, vol. 34, pp. 2841-2846, 2008.
[6] D. Patra, M. K. Das, S. Pradhan, "Integration of FCM, PCA and Neural Networks for Classification of ECG Arrhythmias," IAENG International Journal of Computer Science, vol. 36, no. 3, 2010.
[7] R. Abbas, W. Aziz, M. Arif, "Electrocardiogram Signal Forecasting using Iterated and Direct Methods on Artificial Neural Network," J. App. Em. Sc., vol. 1, issue 1, pp. 72-78, June 2004.
[8] P. Gomez-Gil, J. M. Ramirez-Cortes, S. E. Pomares, V. Alarcon-Aquino, "A Neural Network Scheme for Long-Term Forecasting of Chaotic Time Series," Neural Processing Letters, vol. 33, no. 3, pp. 215-233, June 2011.
[9] J. J. Aguila, E. Arias, M. M. Artigao, and J. J. Miralles, "A Prediction of Electrocardiography Signals by Combining ARMA Model with Nonlinear Analysis Methods," Recent Researches in Applied Computer and Applied Computational Science, Greece, pp. 31-37, 2011.
[10] L. Glass, P. Hunter, A. McCulloch, "Theory of Heart," Springer-Verlag New York, Inc., USA, 1991.
[11] J. L. Forberg, M. Green, J. Bjork, M. Ohlsson, L. Edenbrandt, H. Ohlin, U. Ekelund, "In search of the best method to predict acute coronary syndrome using only the electrocardiogram from the emergency department," Journal of Electrocardiology, vol. 42, no. 1, pp. 58-63, 2009.
[12] D. Shanthi, G. Sahoo, N. Saravanan, "Designing an artificial neural network model for the prediction of thrombo-embolic stroke," International Journal of Biometric and Bioinformatics (IJBB), vol. 3, issue 1, pp. 10-18, 2009.
[13] N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N.-C. Yen, C. C. Tung, and H. H. Liu, "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis," Proceedings of the Royal Society of London A, vol. 454, pp. 903-995, 1998.
[14] N. E. Huang, M.-L. Wu, W. Qu, S. R. Long, and S. S. P. Shen, "Applications of Hilbert-Huang transform to non-stationary financial time series analysis," Applied Stochastic Models in Business and Industry, vol. 19, issue 3, pp. 245-268, 2003.
[15] N. E. Huang, S. S. P. Shen, "Hilbert-Huang Transform and Its Applications," World Scientific Pub Co, vol. 5, 2005.
[16] G. Rilling, P. Flandrin, and P. Goncalves, "On Empirical Mode Decomposition and its Algorithms," IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP-03), Grado (I), June 2003.
[17] Y. Lu, J. Yan, and Y. Yam, "Model-based ECG Denoising Using Empirical Mode Decomposition," 2009 IEEE International Conference on Bioinformatics and Biomedicine (BIBM 2009), Washington, DC, USA, 2009.
[18] V. Kurbatsky, D. Sidorov, V. Spiryaev, N. Tomin, "Using the Hilbert-Huang transform for ANN prediction of nonstationary processes in complex power systems," 8th World Energy System Conference, vol. 1, issue 12, pp. 106-110, 2010.
[19] V. G. Kurbatsky, N. V. Tomin, "Application of Hybrid Neural Network Models for Short-Term Forecasting Parameters of Electrical Power System of Asian Region," Joint Symposium within APEC project: Energy links between Russia and East Asia: Development Strategies for XXI Century, Irkutsk, Russia, August 30-September 3, 2010.
[20] V. Kurbatsky, D. Sidorov, N. Tomin, and V. Spiryaev, "Hybrid Model for Short-Term Forecasting in Electric Power System," International Journal of Machine Learning and Computing, vol. 1, no. 2, pp. 138-147, June 2011.
[21] M. B. Kennel, R. Brown, H. D. I. Abarbanel, "Determining embedding dimension for phase-space reconstruction using a geometrical construction," Physical Review A, vol. 45, no. 6, pp. 3403-3411, 1992.
[22] H. D. I. Abarbanel, R. Brown, J. Sidorowich, and L. S. Tsimring, "The analysis of observed chaotic data in physical systems," Reviews of Modern Physics, vol. 65, no. 4, pp. 1331-1392, October 1993.
[23] C. J. Cellucci, A. M. Albano, and P. E. Rapp, "Comparative Study of Embedding Methods," Physical Review E, vol. 67, 066210, 2003.
[24] I. Marin, E. Arias, M. M. Artigao, J. J. Miralles, "A prediction method for nonlinear time series analysis by combining the false nearest neighbors and subspace identification methods," International Journal of Applied Mathematics and Informatics, vol. 5, issue 3, pp. 258-265, 2011.
[25] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, H. E. Stanley, "PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals," Circulation, vol. 101, issue 23, pp. e215-e220, June 2000.