ORIGINAL ARTICLE
Accurate and robust image registration based on radial basis neural networks
Haldun Sarnel • Yavuz Senol
Received: 7 August 2009 / Accepted: 7 February 2011 / Published online: 1 March 2011
© Springer-Verlag London Limited 2011
Abstract Neural network-based image registration using
global image features is a relatively new research subject,
and the schemes devised so far use a feedforward neural
network to find the geometrical transformation parameters.
In this work, we propose to use a radial basis function
neural network instead of a feedforward neural network to overcome the lengthy pre-registration training stage. This
modification has been tested on the neural network-based
registration approach using discrete cosine transformation
features in the presence of noise. The experimental regis-
tration work is conducted at two different levels: estima-
tion of transformation parameters from a local range for
fine registration and from a medium range for coarse reg-
istration. For both levels, the performances of the feed-
forward neural network-based and radial basis function
neural network-based schemes have been obtained and
compared to each other. The proposed scheme not only speeds up the training stage enormously but also
increases the accuracy and gives robust results in the
presence of additive Gaussian noise owing to the better
generalization ability of the radial basis function neural
networks.
Keywords Image registration · Affine transformation · Radial basis function neural network · Discrete cosine transformation
1 Introduction
Image registration is a procedure to determine the spatial
best fit between two images that overlap the same scene
and is a fundamental stage in many image processing appli-
cations, such as medical image analysis, remote sensing,
image matching-based vehicle guidance [1, 2], and super-
resolution [3]. To register two images, a transformation
must be found so that each point in one image can be
mapped to a point in the second. Registration is often a
complicated task and includes a wide range of problems to
deal with such as image distortions, scene dependency, and
determining a suitable geometric transformation model.
Although image registration methods can be classified
with respect to various criteria, the most essential one is the existence of a feature extraction stage before the registration. According to this criterion,
most image registration methods are divided into two
categories: the feature-based and the area-based [1]. Feature-
based methods require extraction of two sets of features
(landmarks, corner points, edges, etc.) represented by the
control points from both images to be registered. After
finding the pairwise correspondence between these control
point sets (control point matching), the parameters of the
transformation function, or the function itself, by which the
two images can be aligned are estimated. No features (con-
trol points) are detected in area-based methods. Instead of
control points, pixel intensities all over the two images, or
from their subimages, are matched to find the parameters of
the transformation such as in correlation-like methods.
The core of this work was presented at ISCIS 2008 (23rd International
Symposium on Computer and Information Sciences, Istanbul,
Turkey) held on October 27–29, 2008. This paper is an extended
version of the paper presented at ISCIS 2008. The additions consist of experimental results for two additional reference images and some revisions and extensions to the manuscript.
H. Sarnel (✉) · Y. Senol
Electrical and Electronics Engineering,
Dokuz Eylul University, 35160 Izmir, Turkey
e-mail: [email protected]
Y. Senol
e-mail: [email protected]
Neural Comput & Applic (2011) 20:1255–1262
DOI 10.1007/s00521-011-0564-z
A new approach that does not fall into either of the two
categories given above is of interest to us here. It is called
neural network-based image registration and adopted for this
work. A neural network is trained with a set of global image
features (not control points) at inputs, representing an image
transformed by some known parameters, and the known
parameters at the outputs. Then, a trained neural network can
estimate unknown parameters of a query image when its
features of the same type are input to the network.
The most commonly used registration transformation is
the affine transformation. In this work, a global affine transformation composed of the Cartesian operations of scaling, translation, and rotation is assumed, and
the aim is to find these parameters for two given images.
Correlation-based methods and frequency domain methods
are disadvantageous for this type of transformation when
computational complexity is concerned since these meth-
ods generally require many geometrical transformations of
one of two images to accurately find the parameters. On the
other hand, local feature-based and control points methods,
although speed up the process, cannot be relied on when
image is noisy or feature extraction is problematic due to
scene content.
Image registration based on neural networks is a relatively new approach and requires further consideration and
research. At the beginning, a Hopfield neural network was
used only in matching a set of landmark points extracted
from the images to be registered [4]. But registration
parameters were found by another approach on the basis of
matched control points in that work. Another use of neural
networks in image registration is seen in [5], where local
elastic registration is achieved by feedforward, Gaussian-
sigmoid, and radial basis function neural networks. It is
also based on detecting and matching control points. After
control points are matched, a neural network simply
interpolates the coordinates of the other points in one image
to those in the other image, using a local elastic model.
Similarly, support vector machines are also used to estab-
lish a nonlinear transformation between two images using
some control points from the images [6, 7]. Whatever
learning model is used, a neural network or a support vector
machine, these methods do not estimate transformation
parameters, but build a system that provides corresponding
transformed image coordinates given a pair of coordinates
from one image. Elhanany et al. [8] have proposed the first
image registration scheme that estimates the registration
parameters by a feedforward neural network (FNN) on its
own. Their registration scheme estimates the affine trans-
formation parameters of a test image with respect to a
reference image using discrete cosine transform (DCT)
features as a global image feature set. Selected DCT
coefficients of a test image are inputs to the trained net-
work, and estimated parameters are obtained at the output.
In a pre-registration phase, DCT features extracted from a
set of translated, rotated, and scaled copies of the reference
image are employed to train a FNN. Their work pioneered a
new category of registration schemes. Other registration
schemes using Zernike moments [9], principal components
[10], and kernel independent components [11] instead of
DCT features have followed. Although all the schemes
given above provide fairly accurate results for noisy ima-
ges, they have the following two drawbacks: the long iterative learning process of the FNN and the need to generalize it for problem-specific data to increase the accuracy of the results. A well-generalized FNN can be obtained by regularization techniques or the early stopping method at the cost of increased training times (or number of training trials). Hence, a lengthy training phase that may consist of many trials to ensure a well-generalized FNN can never be avoided. To overcome the drawbacks mentioned
above, replacing the FNN with a radial basis function
neural network (RBFNN) is the fundamental objective of this work. The proposed scheme not only avoids the drawbacks of a FNN-based scheme but also increases the
accuracy and gives robust results in the presence of additive
Gaussian noise owing to the better generalization ability
of RBFNN.
2 Neural network-based image registration
A typical neural network-based image registration scheme
consists of two separate phases as shown in Fig. 1. In the
pre-registration phase, a reference image is scaled, rotated,
and translated by several different amounts to generate a set
of affine-transformed images. After noise is added to these
images, a global feature extraction is applied to every
image in the set in order to obtain training data for a neural
network. DCT coefficients and moments can be given as examples of such global features.
The extracted features from the affine-transformed
image set are fed into a neural network together with cor-
responding parameter values at the output in a training
stage. Once a trained neural network is available, the reg-
istration phase is straightforward: extract the same global features from a test image with unknown affine transformation parameters, feed them to the network, and read the estimated parameter values at the output.

Fig. 1 Neural network-based registration scheme (pre-registration phase: generate the training set by applying affine transforms to the reference image and adding noise, extract global features, train the neural network; registration phase: extract global features from the test image, feed them to the trained network, read out the registration parameters)
This type of registration approach reduces the entire
registration problem to vector regression by a neural net-
work. The neural network used here provides as accurate a mapping as possible between the global feature space of
affine-transformed images and their affine transformation
parameters (registration parameters). Good mapping
capability and ease of use of neural networks make this
registration approach favorable for the applications which
require many fast registrations to a single reference image.
Image-based navigation, super resolution, and analysis of
time series of medical images are among those
applications.
Adding noise to the images in the training set improves the generalization and robustness of the neural net-
work. The number of affine-transformed images that rep-
resent the samples from the four-dimensional parameter
space must be sufficiently high, and this sampling must be
done suitably in order to cover the whole parameter space.
Otherwise, the parameter estimation accuracy of the net-
work will not be satisfactory.
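As a concrete illustration of the pre-registration phase just described, the following sketch (Python with NumPy and scikit-image; the original work was implemented in MATLAB) samples the four-dimensional parameter space on a grid, warps the reference image, adds white Gaussian noise at a target SNR, and collects one feature vector and one parameter vector per generated image. The function names are illustrative, the transform is taken about the image origin for brevity, and details such as cropping 128 by 128 subimages from the reference are omitted.

import itertools
import numpy as np
from skimage.transform import AffineTransform, warp

def add_noise(image, snr_db):
    # Additive white Gaussian noise at a target signal-to-noise ratio (in dB)
    signal_power = np.mean(image.astype(float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return image + np.random.normal(0.0, np.sqrt(noise_power), image.shape)

def make_training_set(reference, scales, rotations_deg, tx_values, ty_values,
                      snr_db, extract_features):
    X, Y = [], []
    for s, r, tx, ty in itertools.product(scales, rotations_deg, tx_values, ty_values):
        tform = AffineTransform(scale=(s, s), rotation=np.deg2rad(r), translation=(tx, ty))
        warped = warp(reference, tform.inverse, preserve_range=True)  # apply the affine transform
        noisy = add_noise(warped, snr_db)
        X.append(extract_features(noisy))   # global features, e.g. low-frequency DCT coefficients
        Y.append([s, r, tx, ty])            # known registration parameters (network targets)
    return np.asarray(X), np.asarray(Y)

With the LRFR grid of Table 1, for example, this procedure yields N = 4^4 = 256 training images.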
When a FNN is used, the training stage in the pre-
registration phase is lengthy. Furthermore, the output
accuracy of the scheme with FNN in the registration phase
strongly depends on how well the FNN has been trained. On the other hand, replacing the FNN with a RBFNN simplifies the training stage in terms of both training time and network generalization.
3 Radial basis function neural networks
RBFNNs, with their structural simplicity and training efficiency, are good candidates to perform a nonlinear mapping between the input and output vector spaces. An RBFNN is a fully connected feedforward structure and consists of three layers, namely an input layer, a single layer of nonlinear processing units, and an output layer [12]. The network structure is shown in Fig. 2. The input layer is composed of a number of input nodes equal to the dimension of the input vector x. The output of the jth hidden neuron with a Gaussian transfer function can be calculated as
h_j = \exp\!\left( -\lVert \mathbf{x} - \mathbf{c}_j \rVert^2 / \sigma^2 \right) \qquad (1)

where h_j is the output of the jth neuron, \mathbf{x} \in \mathbb{R}^{n \times 1} is an input vector, \mathbf{c}_j \in \mathbb{R}^{n \times 1} is the jth RBF center, \sigma is the spread parameter that controls the width of the RBF, and \lVert \cdot \rVert denotes the Euclidean norm. The output of any neuron at the output layer of the RBFNN is calculated as

y_i = \sum_{j=1}^{k} w_{ij} h_j \qquad (2)

where w_{ij} is the weight connecting hidden neuron j to output neuron i, and k is the number of hidden layer neurons.
The mapping properties of the RBFNN can be modified
through the weights in the output layer, the centers of the
RBFs, and the spread parameter of the Gaussian function. The simplest form of RBFNN training can be obtained with a fixed number of centers. If the number of centers is made equal to the number of training samples, then the error between the desired and actual network outputs for the training data set can be driven to zero. This type of network is known as an exact RBFNN. The weights w_{ij} are determined by the linear least-squares algorithm, which leads to training in closed form. This provides very fast training compared to backpropagation algorithms. In this work, the number of centers of the RBFNN is chosen to be equal to the number of training samples.
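As a concrete illustration of Eqs. (1) and (2) and of the closed-form training just described, the sketch below builds an exact RBFNN in NumPy: the centers are the training inputs themselves, and the output weights are obtained by linear least squares. The names and array shapes are illustrative (X_train holding one feature vector per row, Y_train one parameter vector per row), not taken from the original implementation.

import numpy as np

def train_exact_rbfnn(X_train, Y_train, spread):
    # One Gaussian unit per training sample; hidden-layer outputs follow Eq. (1)
    centers = X_train
    d2 = np.sum((X_train[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    H = np.exp(-d2 / spread ** 2)
    # Output weights solved in closed form by linear least squares, Eq. (2)
    W, *_ = np.linalg.lstsq(H, Y_train, rcond=None)
    return centers, W

def predict_rbfnn(X, centers, W, spread):
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / spread ** 2) @ W   # estimated registration parameters

Because the interpolation matrix H is square (one hidden unit per training sample), the least-squares solution reproduces the training targets exactly, which is what makes the network an exact RBFNN; the training cost reduces to a single linear solve.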
4 Experimental work
4.1 Coarse and fine registration modes
The entire experimental registration work was divided into
two groups: local range fine registration (LRFR) and
medium range coarse registration (MRCR). In LRFR work,
the aim was to accurately estimate the registration
parameters from a relatively small range using a moderate
size RBFNN. If one wanted to accurately estimate the
registration parameters from a much wider range, then the
RBFNN needed would be enormous and impractical since
the number of neurons in the hidden layer of an RBFNN is
equal to the number of input vectors. On the other hand, an additional RBFNN, again of moderate size, can be designed and trained to estimate the registration parameters roughly from a medium range (that is, MRCR).

Fig. 2 Structure of RBFNN: an input layer, a hidden layer of radial basis units, and an output layer

This idea
apparently allows one to register an image accurately from
a wider parameter range: first, coarsely register the image with the MRCR network, then apply the inverse (back) affine transform to the image, and finally register it accurately with the LRFR network. The actual registration parameters can later be found by joining the estimated values given by the two networks.
The joint registration parameters are extracted from a
combined affine transformation matrix [13] which is sim-
ply computed by multiplying the coarse transformation matrix from the MRCR result by the fine transformation matrix from the LRFR result. The same idea can also be applied to a
FNN-based image registration scheme.
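A minimal sketch of this composition step is given below, assuming the similarity-transform convention of scaling and rotation followed by translation (the paper cites [13] for the matrix form but does not spell it out); the multiplication order shown is one possible convention, and the function names are illustrative.

import numpy as np

def affine_matrix(scale, rotation_deg, tx, ty):
    # Homogeneous matrix for scaling and rotation followed by translation
    th = np.deg2rad(rotation_deg)
    return np.array([[scale * np.cos(th), -scale * np.sin(th), tx],
                     [scale * np.sin(th),  scale * np.cos(th), ty],
                     [0.0, 0.0, 1.0]])

def joint_parameters(coarse, fine):
    # Combine the MRCR (coarse) and LRFR (fine) estimates, each given as (scale, rotation, tx, ty)
    M = affine_matrix(*coarse) @ affine_matrix(*fine)
    scale = np.hypot(M[0, 0], M[1, 0])
    rotation_deg = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return scale, rotation_deg, M[0, 2], M[1, 2]

For example, joint_parameters((1.07, 5.0, 7.0, 4.0), (0.98, -1.2, 0.6, -0.4)) returns the joint scale, rotation, and translations of the combined transformation.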
4.2 Data set
Table 1 shows the values of affine transformation param-
eters used for experimental LRFR and MRCR modes and
the numbers of generated images for training and test data.
Both the range and the step size chosen for all parameter values are wider in MRCR than in LRFR. Transformation parameters for the test data were chosen as the midpoints of the transformation parameters used for the training data. This helps better discriminate the performances of the RBFNN-based scheme and the FNN-based scheme by testing them at the points farthest from the points at which they were trained, where they give the most accurate results anyway.
All experimental results were obtained by applying the registration schemes to three different reference images, one by one. Figure 3 shows the first reference image
and two samples in a training set generated from this by
translating, rotating, and scaling according to the values
given in Table 1. The other two reference images are shown in
Fig. 4. All reference images have 256 gray levels and a size of 400 × 400 pixels. All generated images, each of which is 128 by 128 pixels in size, were first corrupted with white Gaussian noise. The DCT of each noisy image was then taken to obtain frequency-domain coefficients. A region of 6 by 6 coefficients in the lowest frequency band of the DCT plane was cut out and used as a feature vector for each affine-transformed image. Discarding the zero-frequency coefficient, a matrix of 35 by N coefficients, where N is the number of generated images, was obtained and used to train exact RBFNNs and FNNs. Some test images with the transformation parameter values given in Table 1 were also created and corrupted with noise of the same strength as that in
the training set. Features from the test images were
obtained exactly in the same manner as explained for the
training data.
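The feature extraction just described can be sketched as follows, assuming SciPy's dctn for the 2-D DCT; the function name is illustrative.

import numpy as np
from scipy.fft import dctn

def dct_features(image, block=6):
    # 2-D DCT of the (noisy) image; keep the lowest-frequency block x block region
    coeffs = dctn(image.astype(float), norm='ortho')
    low = coeffs[:block, :block].ravel()
    # Discard the zero-frequency (DC) coefficient, leaving 35 features for a 6 x 6 block
    return low[1:]

For a 128 by 128 training or test image this yields the 35-element feature vector described above.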
4.3 Experiments and results
Obtaining optimized neural networks was the main issue in
the experiments. The optimal spread values of the transfer
function for the neurons of all RBFNNs were found
empirically as Fig. 5 shows. An optimal spread value is
found by spanning a range of spread values at which reg-
istration errors for test data are computed. The optimal
spread value is chosen as the value associated with the
minimum of the registration errors in the plot. Figure 5
shows that the search for the optimal spread value of a
RBFNN yields 2,000, using the training and test data of
reference image ‘girl’, at 5 dB signal to noise ratio (SNR).
An optimal spread value was found, in this manner, for the
data generated from each reference image, at each SNR value used in the experiments.
Table 1 Affine transformation parameter values used in the experiments for LRFR and MRCR

Transform parameter | LRFR training set | LRFR test set | MRCR training set | MRCR test set
Scaling | 0.9, 0.965, 1.035, 1.1 | 0.93, 1, 1.07 | 0.70, 0.85, 1, 1.15, 1.3 | 0.77, 0.92, 1.07, 1.24
Rotation (degrees) | -5, -2, 2, 5 | -3, 1, 4 | -30, -15, 0, 15, 30 | -25, -12, 5, 22
Vertical translation (pixels) | -5, -2, 2, 5 | -4, 0, 3 | -20, -10, 0, 10, 20 | -18, -5, 7, 16
Horizontal translation (pixels) | -5, -2, 2, 5 | -3, 1, 4 | -20, -10, 0, 10, 20 | -16, -4, 4, 17
Number of generated images, N | 256 | 81 | 625 | 256
Fig. 3 a Reference image aerial, b an image generated from (a) by scaling and translating, and c another image generated by scaling, rotating, translating, and adding noise, for use in the training set of the neural networks
Registration errors tend to
increase slowly after an abrupt fall. Please note that any
suboptimal spread value chosen within a range following
this fall will not increase estimation errors drastically.
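The empirical spread search described above amounts to a simple grid search; a sketch, reusing the exact-RBFNN routines from the earlier sketch and an illustrative range of candidate values, is given below. Here the errors of the four parameters are simply averaged, whereas the plot in Fig. 5 examines them per parameter (with scaling errors rescaled for visibility).

import numpy as np

# X_train, Y_train, X_test, Y_test as in the earlier sketches
spreads = np.linspace(250.0, 15000.0, 60)            # candidate spread values (illustrative range)
errors = []
for s in spreads:
    centers, W = train_exact_rbfnn(X_train, Y_train, s)
    Y_pred = predict_rbfnn(X_test, centers, W, s)
    errors.append(np.mean(np.abs(Y_pred - Y_test)))   # mean absolute registration error on test data
best_spread = spreads[int(np.argmin(errors))]         # spread giving the minimum test error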
The mean of the absolute LRFR errors produced by the RBFNN-based scheme was computed for each affine registration parameter and is given in Table 2. Experiments
were carried out for noise-free and noisy images at two
different SNR values, 20 and 5 dB. The fact that the registration errors in the scaling factor are very small compared to the others, as shown in Table 2, should not be misinterpreted. Since scaling factors in affine transformations are generally close to 1, for the estimate of the scaling factor to have comparable accuracy, its errors are expected to be much smaller than the registration errors in rotation and translations.
For comparison, mean errors from a FNN with 20 neurons in one hidden layer, using the same training and test data, were also computed and are given in Table 2. The FNN had a tangent-sigmoid transfer function
for the hidden layer neurons and a linear function for the
output layer neurons and was trained using the Levenberg–
Marquardt method. The target parameter values at the FNN output were normalized to speed up convergence in the training stage, and this also helped to obtain smaller net-
work output errors in the registration stage. The actual
parameter values were obtained by denormalizing the
network output values. In order to improve generalization
of the FNN for the test data, we deliberately stopped
training early during every training stage. This required us to repeat the training stage with early stopping enough times to conclude that the trial network giving the smallest registration errors for the test data was sufficiently well generalized. The registration error results for the FNN-based scheme given in Table 2 were obtained with such best networks, each of which was found after about 25 training trials with early stopping. An average FNN obtained from a single training trial gives much worse registration errors.
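For reference, the FNN baseline described above can be approximated in scikit-learn as sketched below. Note the deviations from the original setup: MLPRegressor does not offer Levenberg-Marquardt training, so the Adam optimizer with built-in early stopping is substituted, and the target normalization mirrors the description above.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Normalize the target parameters to aid convergence, as described above
y_mean, y_std = Y_train.mean(axis=0), Y_train.std(axis=0)

fnn = MLPRegressor(hidden_layer_sizes=(20,), activation='tanh',
                   solver='adam', early_stopping=True,
                   validation_fraction=0.2, max_iter=5000)
fnn.fit(X_train, (Y_train - y_mean) / y_std)

# Denormalize the network outputs to recover the estimated parameters
Y_pred = fnn.predict(X_test) * y_std + y_mean
mae = np.mean(np.abs(Y_pred - Y_test), axis=0)   # per-parameter mean absolute error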
The same procedures explained above were repeated for
MRCR with the exception that the values used for training
and test sets were different as shown in Table 1. In order to
keep the size of the RBFNN moderate, the parameter value
steps chosen for training had to be large at the cost of much
lower accuracy in estimating the registration parameters.
The mean of the absolute MRCR errors produced by both networks is given in Table 3. Although the mean
errors for each parameter are relatively high, the estimated
registration parameters are in an acceptable range for a
coarse registration task.

Fig. 4 Other reference images: a moon and b girl
Fig. 5 Mean absolute registration error versus spread value for scaling, rotation, x translation, and y translation: the optimal spread value is found by spanning a range of values at which registration errors for test data are computed and corresponds to the minimum error. Scaling errors were multiplied by 100 to make them visible.

Recall that such a task is to be
followed, after back-transforming the test image, by a fine
registration performed using a neural network trained in
LRFR mode. Coarse registration parameters estimated in
MRCR mode must be in the range for which that fine
registration network is capable of estimating fine registra-
tion parameters accurately. This can be ensured by
selecting a proper range and a sufficiently small step size for the registration parameter values used in the training stage
of that coarse registration network.
The experimental results given in Tables 2 and 3 show
that the RBFNN-based registration scheme is more accu-
rate in estimating the affine transformation parameters for
both LRFR and MRCR. LRFR accuracies of both regis-
tration schemes become nearly the same as SNR reduces to
around 20 dB. On the other hand, MRCR accuracy of the
RBFNN-based registration scheme is clearly much better.
This is explained by the fact that the parameter value steps
in the MRCR training set are so large that network
approximation errors inevitably become large and dominate noise errors at the output of either network. In this case, the RBFNN-based scheme manifests its better generalization of the affine transformation over the parameter space. As for LRFR, the parameter value steps in the training set are small enough that the parameter values are estimated very accurately. The RBFNN-based scheme still outper-
forms the FNN-based scheme at the high SNR values.
Noise errors naturally start dominating network approxi-
mation errors as the noise strength increases. This results in
similar estimation accuracies for both schemes at the rel-
atively low SNR values since an estimator in the highest
accuracy region is prone to larger degradations caused by
noise. In LRFR, although the FNN-based scheme seems to be more robust to noise, this quality does not translate into better total registration accuracy. Table 3 also shows that the
RBFNN-based scheme is more robust to additive Gaussian
noise in MRCR in addition to being more accurate.
All the experimental work was carried out in the
MATLAB environment operating on a personal computer
with a 3-GHz Pentium 4 processor. It is more logical to
compare the computational loads of the two schemes in
terms of training time. With the data size chosen in the
experiments, training a RBFNN takes much less than one
second. On the other hand, a single training of a FNN with
the same data takes from many seconds to a few minutes
depending on the convergence speed of the training attempt and the early stopping criterion. In addition to the longer duration of a single FNN training attempt, the need for several training attempts to obtain a well-generalized FNN increases the total training time and makes the FNN disadvantageous for a neural network-based registration scheme. Table 4 summarizes the comparison of the two
schemes together with the training issues mentioned above.
5 Conclusions
This paper proposes to use a RBFNN in the neural network-
based image registration approach fed by global image
features such as DCT coefficients. All those schemes (only
a few known so far) use a FNN to estimate the registration parameters.

Table 2 Mean absolute registration errors of RBFNN-based and FNN-based schemes for LRFR

Image | SNR | Neural network | Scaling | Rotation | Horizontal translation | Vertical translation
Aerial | Noise free | RBFNN | 0.0002 | 0.01 | 0.007 | 0.02
Aerial | Noise free | FNN | 0.0004 | 0.04 | 0.02 | 0.03
Aerial | 20 dB | RBFNN | 0.0006 | 0.06 | 0.03 | 0.05
Aerial | 20 dB | FNN | 0.001 | 0.1 | 0.04 | 0.07
Aerial | 5 dB | RBFNN | 0.002 | 0.18 | 0.1 | 0.15
Aerial | 5 dB | FNN | 0.003 | 0.22 | 0.17 | 0.25
Moon | Noise free | RBFNN | 0.0003 | 0.03 | 0.004 | 0.01
Moon | Noise free | FNN | 0.0006 | 0.06 | 0.01 | 0.02
Moon | 20 dB | RBFNN | 0.001 | 0.09 | 0.06 | 0.04
Moon | 20 dB | FNN | 0.001 | 0.1 | 0.05 | 0.06
Moon | 5 dB | RBFNN | 0.003 | 0.2 | 0.18 | 0.18
Moon | 5 dB | FNN | 0.004 | 0.3 | 0.24 | 0.19
Girl | Noise free | RBFNN | 0.0001 | 0.005 | 0.004 | 0.004
Girl | Noise free | FNN | 0.0003 | 0.01 | 0.01 | 0.02
Girl | 20 dB | RBFNN | 0.0007 | 0.05 | 0.05 | 0.03
Girl | 20 dB | FNN | 0.0009 | 0.05 | 0.04 | 0.05
Girl | 5 dB | RBFNN | 0.003 | 0.2 | 0.17 | 0.15
Girl | 5 dB | FNN | 0.003 | 0.2 | 0.15 | 0.19
On the other hand, it is shown here experimentally that employing a RBFNN instead of a FNN to estimate the affine registration parameters gives more accurate
results in the presence of noise. Besides, the proposed
scheme shows good robustness to noise in general. This
performance superiority of the RBFNN-based scheme can be attributed to its better generalization of the affine transformation over the parameter space. More importantly, the
proposed scheme is fast and easy to implement as a result of
avoiding the disadvantages of the FNN-based scheme, such as lengthy training iterations, multiple training attempts, and the network generalization problem, which are encountered during the training stage. The only parameter that has to be determined to train an exact RBFNN well is the spread parameter of the Gaussian function. Although there is an
optimal spread value depending on the training data for a
network in the training stage, the experiments also show
that any suboptimal spread value can be easily estimated
and used without decreasing the performance drastically.
The proposed scheme can also be applied to image features other than the DCT coefficients.
References
1. Brown LG (1992) A survey of image registration techniques.
ACM Comput Surv 24(4):325–376
2. Zitova B, Flusser J (2003) Image registration methods: a survey.
Image Vis Comput 21(11):977–1000
3. Capel D, Zisserman A (2003) Computer vision applied to super-resolution. IEEE Signal Process Mag 20(3):75–86
4. Qian Z, Li J (1997) Use of Hopfield neural network for complex
image registration. In: Proceedings of the 9th international con-
ference on tools with artificial intelligence. Newport Beach, USA,
pp 204–207
5. Wachowiak MP, Smolikova R, Zurada JM, Elmaghraby AS
(2002) A supervised learning approach to landmark-based elastic
biomedical image registration and interpolation. In: Proceedings
of the IEEE international joint conference on neural networks.
Honolulu, USA, pp 1625–1630
6. Peng DQ, Liu J, Tian JW, Zheng S (2006) Transformation model
estimation of image registration via least square support vector
machines. Pattern Recogn Lett 27(12):1397–1404
7. Davoodi-Bojd E, Soltanian-Zadeh H (2008) Grid based regis-
tration of diffusion tensor images using least square support
vector machines. In: Sarbazi-Azad H et al (eds) Advances in
computer science and engineering. 13th international CSI com-
puter conference, CSICC 2008, Kish Island, Iran, Springer,
pp 621–628
Table 3 Mean absolute registration errors of RBFNN-based and FNN-based schemes for MRCR

Image | SNR | Neural network | Scaling | Rotation | Horizontal translation | Vertical translation
Aerial | Noise free | RBFNN | 0.014 | 1.3 | 1.0 | 1.3
Aerial | Noise free | FNN | 0.022 | 1.7 | 1.2 | 1.6
Aerial | 20 dB | RBFNN | 0.014 | 1.3 | 1.0 | 1.3
Aerial | 20 dB | FNN | 0.022 | 1.7 | 1.1 | 1.4
Aerial | 5 dB | RBFNN | 0.018 | 1.5 | 1.1 | 1.5
Aerial | 5 dB | FNN | 0.032 | 2.2 | 1.7 | 2.2
Moon | Noise free | RBFNN | 0.014 | 1.0 | 0.5 | 0.5
Moon | Noise free | FNN | 0.021 | 1.7 | 0.9 | 0.8
Moon | 20 dB | RBFNN | 0.017 | 1.4 | 0.6 | 0.6
Moon | 20 dB | FNN | 0.025 | 1.8 | 1.0 | 0.9
Moon | 5 dB | RBFNN | 0.023 | 1.3 | 1.1 | 0.9
Moon | 5 dB | FNN | 0.031 | 2.4 | 1.5 | 1.3
Girl | Noise free | RBFNN | 0.011 | 0.8 | 0.9 | 0.7
Girl | Noise free | FNN | 0.018 | 1.2 | 1.2 | 1.1
Girl | 20 dB | RBFNN | 0.011 | 0.8 | 0.8 | 0.9
Girl | 20 dB | FNN | 0.020 | 1.4 | 1.3 | 1.0
Girl | 5 dB | RBFNN | 0.013 | 1.0 | 0.9 | 0.8
Girl | 5 dB | FNN | 0.021 | 1.6 | 1.4 | 1.5
Table 4 Comparison between RBFNN-based and FNN-based schemes

Issue | RBFNN-based | FNN-based
Need for input/output data normalization | No | Yes
Need for network generalization methods | No | Yes
Need for multiple training trials | No | Yes
LRFR performance | Superior for high SNR | –
MRCR performance | Superior | –
Time for single training | <1 s | Between 40 and 120 s
8. Elhanany I, Sheinfeld M, Beck A et al (2000) Robust image
registration based on feedforward neural networks. In: Proceed-
ings of the IEEE international conference on systems, man and
cybernetics. Nashville, USA, pp 1507–1511
9. Wu J, Xie J (2004) Zernike moment-based image registration
scheme utilizing feedforward neural networks. In: Proceedings of
the fifth world congress on intelligent control and automation.
Hangzhou, China, pp 4046–4048
10. Xu A, Jin X, Guo P (2006) Two-dimensional PCA combined with
PCA for neural network based image registration. In: Proceedings
of international conference on natural computation. Xi’an, China,
pp 696–705
11. Xu A, Jin X, Guo P, Bie R (2006) KICA feature extraction in
application to FNN based image registration. In: International
joint conference on neural networks, pp 3602–3608
12. Ham FM, Kostanic I (2001) Principles of neurocomputing for science and engineering. McGraw-Hill, Singapore
13. Schalkoff RJ (1989) Digital image processing and computer
vision. Wiley, London