Developing a validation metric using image
classification techniques
A thesis submitted to the Graduate School of
the University of Cincinnati
in partial fulfillment of requirements for the degree of
Master of Science
in the Department of Mechanical and Materials Engineering
of the College of Engineering & Applied Sciences
by
Murali Mohan Kolluri
Bachelor of Technology, National Institute of Technology, Calicut
May 2009
Committee Chair: Dr. Randall J Allemang
Committee Members: Dr. David L Brown, Dr. Allyn W Phillips
Abstract
The main objective of this thesis work was to investigate different image classification
and pattern recognition methods to try to develop a validation metric. A validation metric
is a means of comparison between two sets of numerical information. The numerical
information could represent a set of measurements made on a system or its internal
characteristics derived from such measurements. A validation metric (v-metric) is used
to determine the correctness with which one of the data-sets is able to describe the other
and to quantify the extent of this correctness.
A moment descriptor method has been identified from among the most widely used
image classification and pattern recognition methods as the one most likely to give
rise to an effective validation metric, for reasons discussed in subsequent chapters.
Different sets of Orthogonal Polynomials have been investigated as kernel functions
for the aforementioned method to generate descriptors that depict the most significant
features of the data-sets being compared. The algorithms developed as such have been
verified using standard gray-scale and color images to establish their ability to reconstruct
the image intensity function using a subset of the features extracted.
The above Orthogonal Polynomials have then been used to extract features from two
measured data-sets and means to develop a v-metric from these descriptors have been
explored. A study of algorithms thus developed using different Orthogonal Polynomials
has been made to compare their effectiveness as well as shortcomings as kernel functions
for developing a v-metric.
An alternate form of the existing two dimensional moments has been proposed to
generate features that are more conveniently compared against each other. This method
has been examined to determine its efficiency in reducing the amount of information
that needs to be used in the final comparison for multiple pairs of data-sets. A way to
effect such a comparison using singular values of the computed moment matrix has been
proposed.
Preface
It takes a lot of self-motivation to uproot a tranquil life with a moderately interesting
vocation and embark upon something altogether new. Especially for someone like me,
for whom being self-driven is not something that comes naturally. Taking up a Master's
course a couple of years ago, therefore, seemed more like an arbitrary musing than a
well-thought-out decision. At this juncture, as I stand on the brink of finishing up
the first bit of original research I ever partook in, I look back at this journey as a most
enriching and transformative experience.
Many people have influenced me during this time and helped me to focus on the right
things and work towards the right goals. First of all, I would like to extend my sincere
gratitude to Dr. Randall Allemang and Dr. Allyn Phillips for their patient guidance and
inspiration. Under their well structured tutelage, I have had the chance not only to learn
the fundamentals of structural dynamics, but also to better my problem solving faculty. I
would like to thank Dr. David L Brown for supporting this thesis as a committee member.
I feel privileged to have had the chance to work with such distinguished individuals and
I hope to extend this association to learn further from them.
I would also like to thank Dr. John Mottershead from the University of Liverpool
and Dr. Weizhuo Wang from the Manchester Metropolitan University for their interest
in the project without which I would have had a really hard time at the very onset.
They say "time flies by when you are having fun." I would like to take this opportunity
to show my heartfelt appreciation for my roommates and friends at UC-SDRL for making
my life both on and off the campus all the more enjoyable so far away from home.
Last but not the least, I would like to thank my family, especially my parents, for
their incessant support and encouragement.
Contents

1 Introduction
2 Theoretical Background
2.1 Validation, Verification and Calibration
2.1.1 Verification
2.1.2 Calibration
2.1.3 Validation
2.2 Margin & Uncertainty
2.3 Characteristics of a good v-metric
2.4 SVD based v-metric
2.5 Image Classification Methods
2.6 Moment Descriptor Method
2.7 Orthogonal Polynomial Moment Descriptors
2.7.1 Legendre Polynomials
2.7.2 Chebyshev Polynomials
2.7.3 Zernike Polynomials
2.7.4 Hahn and Krawtchouk polynomials
2.8 Errors in digitization
3 Implementing the 2-D moments
3.1 Legendre Moments
3.2 Zernike Moments
3.3 Hahn and Krawtchouk Moments
3.4 Chebyshev Moments
3.5 Comparison possibilities
4 Proposed alternate approach
4.1 Overview of the proposed approach
4.2 Formulation with Chebyshev polynomials
4.2.1 Spatial decomposition
4.2.2 Temporal decomposition
4.3 V-metric: Approach A
4.4 V-metric: Approach B
4.5 Comparison between metrics
4.6 Variations of the SVD based v-metric
4.7 Regression through origin for the v-metrics
5 Conclusion
5.1 Future work recommendations
5.1.1 Principal Orthogonal Polynomial Based v-metric
Appendices
A Transmissibility margin plots
B Final codes
B.1 Legendre Moments
B.2 Zernike Moments
B.3 Hahn moments
B.4 Krawtchouk moments
B.5 Chebyshev 2-D moments
B.6 Chebyshev 1-D moments
B.7 Principal Orthogonal Polynomials function
B.8 Regression through origin function
C Modified SVD based v-metric plots
List of Figures

2.1 SVD based v-metric example: Singular Value Plot
2.2 SVD based v-metric example: Primary Singular Vector Correlation
2.3 Discretization Error: PhD dissertation, Liao [16]
2.4 Geometric Error: Chandan et al. [18]
3.1 Reconstruction using Legendre moments
3.2 Reconstruction using Zernike moments
3.3 Reconstruction using recursive exact Zernike moments
3.4 FRF reconstruction: 40 ZMDs
3.5 Reconstruction using Krawtchouk moments
3.6 Reconstruction using Hahn moments
3.7 Reconstruction using Chebyshev moments
3.8 Reconstruction, transmissibility data: 100 2-D Chebyshev moments
3.9 Reconstruction, transmissibility data: V-metric: 100 2-D Chebyshev moments
3.10 Reconstruction, transmissibility data: V-metric: Slope vs. no. of moments
4.1 Reconstruction using Chebyshev moments
4.2 Reconstruction using Krawtchouk moments
4.3 Transmissibility reconstruction using 60 Chebyshev moments
4.4 Transmissibility reconstruction using 60 Chebyshev moments
4.5 Reconstruction comparison: 10 Chebyshev moments
4.6 Reconstruction comparison: 30 Chebyshev moments
4.7 Temporal Reconstruction: 60 Chebyshev moments
4.8 Temporal Reconstruction: 60 Chebyshev moments
4.9 SVD plot: Approach A
4.10 SVD plot: Approach B
4.11 Spatial Comparison: Approach B
4.12 Original SVD based v-metric
4.13 C2MD
4.14 C1DA
4.15 C1DB
4.16 Primary singular value reconstruction
4.17 Variations on the SVD based v-metric
5.1 Reconstruction comparison
A.1 Vehicle C transmissibility
A.2 Vehicle D transmissibility
A.3 Vehicle E transmissibility
C.1 Vehicle C transmissibility
C.2 Vehicle D transmissibility
C.3 Vehicle E transmissibility
List of Tables

4.1 Margin Estimation
4.2 Relative standard deviation
4.3 Relative Covariance
4.4 Modifications on SVD based v-metric
4.5 Comparison, C1DB: RTO and NLS
4.6 Comparison, SVD based v-metric: RTO and NLS
Chapter 1
Introduction
With the advent of Computer Aided Engineering (CAE) in the past few decades, the de-
sign process for a new system/component/product has taken a more preemptive stance. The
prototyping and initial testing stage has to a great extent been effectively substituted by
computer generated models and finite element entities that capture a good deal of the
physics behind real life phenomena that characterize the design of the system in question.
Intricate problems, such as the high-speed handling of road vehicles, which involves a
complex non-linear interaction between the road surface and the tire, or the surface ablation
of a hypersonic jet subjected to multiple high-speed flight routines, have only further
heightened the need for increasingly complex and accurate virtual models. Such models can
then be used to make predictions of the system's behavior in the face of an environment
that, while very plausibly present during the normal operating conditions of the system,
might still be very difficult to replicate in a controlled laboratory testing environment.
Such virtual models can be arrived upon based on the experience of the system’s
response to a given set of environments and boundary conditions or through a set of
experimental measurements made. They can be used in a pre-prototyping stage to gauge
the system's performance under certain specific "load cases" or they can be used to make
a design of experiment/optimization study to effect value addition or cost saving type
analyses.
With such virtual models becoming an increasingly integral part of the product de-
sign cycle, it is paramount that there be methods developed to gauge the correctness of
the models themselves. This "correctness" encompasses not only the use of appropriate
system properties or implementation of the right physical relations underlying the phenomenon
being simulated, but also a handle on the confidence level that can be associated
with predictions that are going to be made from such a model.
The means of making such a quantitative assessment of the correctness of a sys-
tem/model against a reference system/data-set is broadly termed as a validation metric
(v-metric). There are many deterministic and stochastic ways of making such a compar-
ison. Most of them involve decomposing the system into measurable characteristics or
features which are then compared to the corresponding features of the model.
The feature extraction process itself may vary between the different v-metrics. Most
structural problems make use of such v-metrics in capturing and comparing the most
dominant characteristics present in the data that are generally responsible for system
failure.
One such method in use involves decomposing the data-sets in comparison into their
singular value sets and then comparing the biggest contributors, i.e., the primary singular
values and their corresponding vectors against each other to find a degree of correlation
between the two [1].
There are a multitude of analogous decomposition techniques used in problems like
image classification, pattern recognition, etc., which extract features from a different
type of data and use them for comparison against a reference set. The
significant features here, however, do not necessarily represent the most dominant (in
magnitude) contributors to the data.
The main premise of this thesis work, therefore, is to investigate the latter to try to
find a way similar to the SVD based v-metric to develop an alternate metric that could
be used to compare any two data-sets. It further entails a comparison between the two
metrics to see if it is advantageous in any way to use one over the other and if so, under
what conditions.
Chapter 2
Theoretical Background
This chapter attempts to treat the theoretical background of the various concepts that
have been evaluated during the course of this thesis. A broad discussion of the premise of
Validation and Verification (V&V) has been followed up by a more detailed description
of the existing v-metric and the means proposed to develop new ones.
2.1 Validation, Verification and Calibration
The following section is aimed at qualifying the definitions for the above-mentioned terms
as they have been employed in the rest of this thesis. The term model refers to an
analytical/numerical/experimental framework put together to describe a certain reference system.
2.1.1 Verification
The process of evaluating the correctness of the implementation of a model is termed as
verification [4,5]. It is a measure taken to ensure the correctness of the method with which
the equations describing the model have been used. Verification could also represent the
means to determine the correctness of the solution of a model, i.e., evaluate the accuracy
of the inputs to the model, ascertain the numerical error introduced in the formulation
of the solution and then determine the accuracy of the outputs generated [4,5].
For instance, if a finite element (FE) model for a cantilever beam is being used to pre-
dict the deflection of an actual physical beam under bending loads, the FE model is the
model, while the actual physical beam under test is the system in question. Verification
would entail the comparison of the model being implemented against a model/implementation
which has been proven to be correct. Such a comparison could include ensuring
the usage of a correct set of element types for discretizing the FE model, ensuring that the
right type of solver is used to carry out the iterative solution and that the numerical error
resulting from each iteration keeps the solution within physically acceptable bounds en-
suring that the problem being solved remains the problem which was originally intended
to be solved.
2.1.2 Calibration
Most of the parameters that characterize the system or the model are stochastic. They are
acquired from maximum likelihood or best fit type estimates. Hence there is a probability
attached to the attribution of such a parameter to a model turning out to be a correct
parameter. Even if the parameter is fairly deterministic, most models undergo fine tuning
of the overall set of parameters in order to best match the characteristics of the system.
This is referred to as calibration [4,5].
Calibration often involves adjusting both the parameters intrinsic to the model, such
as its geometry and material properties, and those that determine the model's
interaction with its ambience, for instance the boundary conditions. The adjustment is
made so that the model best matches response from the measured physical system.
Revisiting the cantilever beam example, calibration would essentially be the part
where the underlying parameters describing the model, such as the Young’s modulus of
the beam material need to be tuned to ensure adequate replication of the system’s re-
sponse through the model output using a set of reference test conditions, before adjudging
it to be fit for making predictions regarding system responses for other conditions/load-
cases.
2.1.3 Validation
The process of determining the correctness of a model in its description of the refer-
ence system under a set of test conditions is termed as validation [4]. It is a measure
of whether the model over/under/optimally represents the system, thereby providing a
basis to decide the extent of tolerances, factor of safety etc. for the system design. To
paraphrase therefore, validation refers to the process of ensuring the implementation of
correct equations, whereas verification refers to ensuring the implementation of the equa-
tions correctly.
In context of the above quoted example an instance-wise comparison of the response
(most likely displacement) from the FE model and the measured system provides an
estimate of the extent to which the model describes the system accurately. The process
of determining this extent is what validation comprises.
2.2 Margin & Uncertainty
The primary focus of the V&V process is to determine a physically relevant set of esti-
mates for margin and uncertainty which then serve as guidelines to quantify a level of
confidence in the predictions made from the model and a factor of safety that needs to
be assigned to such predictions to avoid any possible failure.
Margin quantifies the degree of inadequacy of a model in its description of a system.
It provides insight on the amount of bias correction that needs to be introduced in the
model so that it optimally characterizes the system. In other words, it provides a measure
of distance to failure for many engineering situations. In the cantilever beam example,
if the response of the FE model when plotted against the experimental response of the
actual physical system turns out to be a linear fit with a slope of 0.98, then the margin
becomes 0.02.
Uncertainty on the other hand quantifies the spread of the model response about its
characterization equation. It signifies the extent of confidence with which predictions can
be made from the model to describe system response. In the above example, it could
be described by the standard deviation/variance of the responses about the 0.98 sloped
line, assuming all the responses are equally probable. In such a case, if there are response
values that fall above the unit sloped line, the model prediction is smaller than the actual
system’s response and hence it should be taken to mean that a possible failure for the
system is not being captured by observing the model prediction.
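The slope-and-scatter description above can be made concrete with a short numerical sketch. This is an illustration, not part of the thesis codes: a hypothetical `margin_and_uncertainty` helper fits a regression through the origin to paired model/system responses, reads the margin off the slope (as in the 0.98-slope example, giving a margin of 0.02), and takes the residual standard deviation as the uncertainty.

```python
import numpy as np

def margin_and_uncertainty(model_resp, system_resp):
    """Estimate margin and uncertainty from paired responses.

    Fits system_resp = m * model_resp (regression through the origin).
    Following the definition in the text, a fitted slope m gives a margin
    of (1 - m); uncertainty is taken as the standard deviation of the
    residuals about the fitted line.
    """
    x = np.asarray(model_resp, dtype=float)
    y = np.asarray(system_resp, dtype=float)
    m = np.dot(x, y) / np.dot(x, x)   # least-squares slope through origin
    residuals = y - m * x
    return m, 1.0 - m, residuals.std(ddof=1)

# Synthetic data mirroring the example: system response is 0.98 times the
# model response, with a small amount of scatter about that line.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = 0.98 * x + rng.normal(0.0, 0.05, x.size)
m, margin, unc = margin_and_uncertainty(x, y)
```

With this data the fitted slope lands near 0.98 and the margin near 0.02, while the residual scatter recovers the injected noise level.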
Among the various classifications present for the type of uncertainty, the most relevant
with regards to this thesis work is the following demarcation:
1. Aleatory Variability: This type of uncertainty tries to capture the stochasticity of
the model and the underlying parameters. It arises due to explicable causes arrived
upon based on prior knowledge or experience of system response [4,5].
2. Epistemic Uncertainty: This type of uncertainty arises from lack of knowledge of
system characteristics or incorrect assumptions made in describing the model [4,5].
The most critical form of this uncertainty is referred to as blind epistemic uncer-
tainty which arises from lack of awareness of any such misinformation.
An example for the former could be the variability in the Young’s modulus or the
geometric design tolerances that would introduce an uncertainty in the beam’s response.
For the latter, a good instance would be assuming the material to be isotropic while using
a beam with imperfections in its lattice structure without prior knowledge of either the
variability itself or of the extent of the uncertainty that such a variability introduces in
the final solution.
Together with an understanding of the above two concepts, the quantification of
margin provides the basis for validating a model. The v-metric, therefore, is a means of
effectively determining the same. To paraphrase, the v-metric is a framework used to
numerically or visually understand the quantification of the margin and uncertainty aspects
of a model with respect to characterizing the system response [4,5].
For subsequent discussions, a model and a system will both be represented by a set
of data each, which are being compared to establish a level of validation. Such data-sets
normally have information pertaining to the spatial location on the model or the system
at which the response is being characterized/measured and the spread of the information
along a temporal axis such as time, frequency, temperature etc.
2.3 Characteristics of a good v-metric
A validation related comparison can be made either instance to instance for each piece
of information present or by extracting the most significant portions of the data, thus
comparing data signatures rather than the data-sets themselves. The former is, more often
than not, an ill-conceived method, as it does not get rid of the part of the data that
potentially comprises noise and hence does not offer any significant insight into margin
quantification. Moreover, instance-wise comparison for large data-sets is cumbersome, and
it does not provide any additional information about the most critical spatial as well as
temporal spread of the data to help predict failure, among other things.
A good v-metric, therefore, is generally one which can:

• significantly reduce the actual information content to facilitate instance-wise
comparison.
• capture the most significant information present in the data-sets.
• if possible, keep the final features in the same physical system of units as the
quantities present in the original data-sets.
• provide an easy and effective means of extracting information related to quantifying
margin and uncertainty.
• retain the most critical information along with its spatial and temporal location to
identify a margin to failure without unnecessary intuition-based fudge factors.
2.4 SVD based v-metric
The following is a discussion of an existing v-metric that uses SVD to try to quantify the
margin and uncertainty between two data-sets [1]. The method involves decomposing the
spatial information into its singular values and singular vectors at each temporal point.
Retaining and comparing the set of largest singular values of one data-set against the
other results into a plot which in an ideal condition, i.e., when one of the data-sets is
exactly the same as the other, would result in a straight line with slope m = 1 and with
no variance about it.
The slope for the best linear fit therefore is an estimate of the margin. Assuming that
the model is the reference, a slope less than one means that the model underestimates the
measured system response. The deviation/variance of the individual data points about
the best fit line denote the extent of uncertainty. An assurance criteria (AC) computation,
analogous to the Modal Assurance Criteria (MAC) computation for the modal vectors [1],
between the primary singular vectors at each temporal point then results in a curve
that depicts the similarity in the spatial distribution of the information present in the
two data-sets. Naturally, the longer of the two sets of singular vectors is chosen for the
aforementioned comparison on account of better discernibility of any differences between
the two.
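The procedure above can be sketched in a few lines. The following is an illustrative Python/NumPy rendition under stated assumptions, not the implementation behind the thesis results: the function name `svd_vmetric` is hypothetical, and the margin slope is estimated by a regression through the origin of the paired primary singular values.

```python
import numpy as np

def svd_vmetric(A, B):
    """Sketch of the SVD based v-metric described above.

    A and B are (n_space x n_ref x n_temporal) arrays. At each temporal
    point the spatial matrix is decomposed; the primary singular values of
    the two sets are paired, and a MAC-like assurance criterion (AC) is
    computed between the primary left singular vectors.
    """
    n_t = A.shape[2]
    sv_a, sv_b, ac = np.empty(n_t), np.empty(n_t), np.empty(n_t)
    for k in range(n_t):
        Ua, sa, _ = np.linalg.svd(A[:, :, k], full_matrices=False)
        Ub, sb, _ = np.linalg.svd(B[:, :, k], full_matrices=False)
        sv_a[k], sv_b[k] = sa[0], sb[0]
        ua, ub = Ua[:, 0], Ub[:, 0]
        # AC between unit-norm singular vectors, analogous to the MAC
        ac[k] = abs(np.vdot(ua, ub)) ** 2 / (
            np.vdot(ua, ua).real * np.vdot(ub, ub).real)
    # Margin estimate: slope of the best line through the origin
    slope = np.dot(sv_a, sv_b) / np.dot(sv_a, sv_a)
    return sv_a, sv_b, slope, ac

# Self-check on the ideal condition described in the text: comparing a
# data-set against itself must give a unit slope and AC = 1 throughout.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 4, 30))
sv_a, sv_b, slope, ac = svd_vmetric(A, A)
```

For two distinct data-sets, the scatter of (sv_a, sv_b) pairs about the fitted line would then quantify the uncertainty, as described above.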
The following is an example of a pair of data-sets subjected to the SVD based v-metric
and the corresponding results. The data-sets being compared here are transmissibility
measurements taken on a vehicle being excited on a four axis road simulator rig under
different levels of excitation. Each set has dimensions 146×4×1601, which represent the
number of response measurement locations, number of reference locations and the number
of frequency lines respectively. Figure 2.1 is the plot between the primary singular values
of the two data-sets taken between frequencies 3–30 Hz. Figure 2.2 represents an AC
plot for the primary singular vectors from either data set at each frequency point for the
same range of frequencies [3].
The line along the diagonal of Figure 2.1 has a unit slope. It is readily apparent that
data-set 2 is smaller in magnitude than data-set 1, as the slope of the best
fit line between their primary singular values is about 0.636. Also, since none of the data
points on this plot ever really cross the m = 1 line, one can conclude that data-set 1
is a more conservative basis than data-set 2 for determining failure.
If the y-axis corresponds to the measured response from the system and the x-axis is
the predicted response from a model, then an envelope for failure criteria can be arrived
upon, which in this case would represent a positive factor of safety. The generally close
to 1 values for the assurance criteria in Figure 2.2 advocate a good deal of similarity
between the spatial distributions of the responses in the two data-sets.

Figure 2.1: SVD based v-metric example: Singular Value Plot

Figure 2.2: SVD based v-metric example: Primary Singular Vector Correlation
2.5 Image Classification Methods
Several feature extraction methods are in use in order to extract some underlying signa-
tures pertaining to an image which are then used in image classification/pattern recog-
nition type of problems. Some of those methods are listed below along with a brief
description of what each of those methods entails.
1. Fast Fourier Transform based image signature generation: Discrete Fourier trans-
forms of the image intensity function are used to extract and compare the most
significant features of a digital image.
2. Minutiae matching [27]: Mostly employed in fingerprint recognition. A prior collec-
tion with a classification for the different shapes of the distinguishing features in
an image is used as a reference library. These instances are located in an image
and their location and orientation with respect to a fixed reference is assimilated
into one matrix which is then compared with a database of already existing similar
matrices to determine a match. The comparison is made instance by instance.
3. Moment Descriptor Method: A set of 2-D moments are computed for the image
intensity functions using a set of basis kernels ranging from simple monomials to
orthogonal polynomials. These sets of moments form the features which are then
used for final comparison to classify the images.
Of the above methods, the moment descriptor method has been selected and explored
extensively in order to try and develop an alternate v-metric. The choice can be attributed
to the fact that the method has already been applied to similar problems such as mode
shape recognition from full field displacement data, comparing full field strain profiles for
detecting fracture etc., where the data-sets are fairly large and are more often than not
inclusive of a fair bit of noise.
The subsequent sections deal with the moment descriptor method in detail, along with
a discussion of the different constituent kernel functions that form the basis for moment
calculation.
2.6 Moment Descriptor Method
If the intensity function for an image were given as I(x, y), and the kernel function of
order n used for moment computation as f_n(x), the general form of a 2-D moment of
the pq-th order would be given as [12]:

M_{pq} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_p(x)\, f_q(y)\, I(x, y)\, dx\, dy \tag{2.1}
The kernel functions themselves can vary from simple monomials to a set of complex
polynomials that form the basis of the space on to which the intensity function is being
projected. The different orders of moments signify different characteristics of the shape
function that best fits the data, e.g., centroid, skew, averaged kurtosis, etc. Also, the set of
moments is generally found to be invariant to linear transformations such as translation,
scaling, and in the case of a few kernel functions, rotation.
The invariance is extremely useful in the pattern recognition problem as it helps
classify the intensity function, which does not need to have any underlying character to
it, into a set of numbers that have predefined physical interpretations and are not changed
by simple deformations. Hu [8] first discovered that a certain set of linear combinations
of moment descriptors generated using simple monomials, known as geometric moments,
are invariant to translation and scaling and hence could be used in discovering visual
patterns among a set of images.
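As an illustration of this invariance, the sketch below (helper names are hypothetical, not from the thesis codes) computes raw geometric moments with monomial kernels and the corresponding central moments, which are taken about the intensity centroid and are therefore unchanged when the pattern is translated within the image.

```python
import numpy as np

def raw_moment(I, p, q):
    # Raw geometric moment M_pq = sum_x sum_y x^p y^q I(x, y), where x is
    # the column index and y the row index of the pixel grid.
    y, x = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    return float(np.sum((x ** p) * (y ** q) * I))

def central_moment(I, p, q):
    # Central moments are taken about the intensity centroid (xbar, ybar),
    # which makes them invariant to translation of the pattern.
    m00 = raw_moment(I, 0, 0)
    xbar = raw_moment(I, 1, 0) / m00
    ybar = raw_moment(I, 0, 1) / m00
    y, x = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    return float(np.sum(((x - xbar) ** p) * ((y - ybar) ** q) * I))

# A small square blob, and the same blob translated within the image
I1 = np.zeros((32, 32)); I1[4:8, 6:10] = 1.0
I2 = np.zeros((32, 32)); I2[7:11, 11:15] = 1.0
mu1 = central_moment(I1, 2, 0)
mu2 = central_moment(I2, 2, 0)
```

The raw moments of the two images differ (the blob has moved), but the central moments agree, which is the property Hu's invariants build upon.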
As mentioned before, many kernel functions have been researched and each one of
them has some unique characteristics that can be exploited based on the requirements of
the problem to be solved. Of these, the ones that feature most regularly in the literature
related to image classification are the orthogonal polynomials (OPs).
OPs are so called, because over a domain of orthogonality, normally [−1, 1] or a unit
circle, the constituent polynomials forming the basis for the multi-dimensional space
in question are orthogonal to each other. The condition/check for orthogonality is not
unique across different OPs and will be dealt with as and when the polynomials themselves
are discussed.
The biggest advantage of using OPs as kernel functions is that the redundancy in the
information content in the resulting moment set is minimum. Also, the reconstruction of
the image intensity, if required, is fairly straightforward and has a generic structure to it.
Most of these functions can be generated using simple recursive relations between consecutive
orders of polynomials. Therefore, in order to deal with the v-metric problem much
the same as the image classification problem, the moment descriptors derived from OPs
seem to be the most efficient means and hence have been experimented with extensively.
2.7 Orthogonal Polynomial Moment Descriptors
Most image classification problems at hand study digital images, where the intensity is
defined at discrete pixels. For such a problem, if the image intensity is described as $I(i, j)$,
where $i$ and $j$ are the indices of pixel position on a given grid, then Eqn. 2.1 can be
re-written as [12]

$$M_{pq} = \sum_{i=1}^{N} \sum_{j=1}^{N} f_p(i)\, f_q(j)\, I(i, j) \qquad (2.2)$$
where the image size is $N \times N$. Theoretically, an infinite number of such moments
is required to represent the data with absolute accuracy. In practice, however, a cap
on the number of moments required is found by setting a threshold
on the largest allowable least-square error between the original intensity matrix and the
intensity reconstructed from the moment set. The equation to compute such an error is
given as [12]

$$\rho(I, I') = \frac{\iint_{\Omega} (I' - \bar{I'})(I - \bar{I})\, dA}{\sqrt{\left[\iint_{\Omega} (I' - \bar{I'})^2\, dA\right]\left[\iint_{\Omega} (I - \bar{I})^2\, dA\right]}} \qquad (2.3)$$

where $\rho$ represents the least-square error between the original intensity function
$I$ and the reconstructed function $I'$. In general, for a digital image and for most orthogonal
polynomials as kernel functions, $I'$ can be computed as [12]:
$$I'(i, j) = \sum_{p=0}^{n} \sum_{q=0}^{n} f_p(i)\, f_q(j)\, M_{pq} \qquad (2.4)$$

where $n$ is the largest order of moments to be computed. Eqn. 2.4 can
include a normalization constant, different limits on the summations, or even a mapping
function factored into it, based on the type of OPs being used.
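As a hedged illustration of Eqns. 2.2 and 2.4 (not the thesis code), the sketch below computes a separable 2-D moment set and reconstructs from a truncated subset. The kernel here is an arbitrary orthonormal basis obtained by QR factorization, standing in for any of the OP kernels discussed later; the image is synthetic.

```python
import numpy as np

# Sketch of Eqn. 2.2 (separable 2-D moments) and Eqn. 2.4 (reconstruction
# from a truncated moment set). The kernel F is an arbitrary orthonormal
# basis standing in for a sampled orthogonal-polynomial family (assumption).
rng = np.random.default_rng(0)
N = 16
F = np.linalg.qr(rng.standard_normal((N, N)))[0].T   # F[p, i] = f_p(i), rows orthonormal
image = rng.standard_normal((N, N))                  # synthetic intensity I(i, j)

moments = F @ image @ F.T                            # M_pq = sum_ij f_p(i) f_q(j) I(i, j)

def reconstruct(n):
    """Eqn. 2.4 truncated to the first n orders in each direction."""
    return F[:n].T @ moments[:n, :n] @ F[:n]

err_half = np.linalg.norm(image - reconstruct(N // 2))
err_full = np.linalg.norm(image - reconstruct(N))
```

With the full moment set the reconstruction is exact up to round-off; truncation leaves a residual, which is the least-square error to which the threshold described above would be applied.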
The following OPs have been tested as kernel functions for moment generation in this
thesis:
1. Legendre polynomials
2. Chebyshev polynomials
3. Zernike polynomials
4. Krawtchouk polynomials
5. Hahn polynomials
2.7.1 Legendre Polynomials
The $n$th degree Legendre polynomial is given as [12]:

$$P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n \qquad (2.5)$$
These polynomials are orthogonal over the range [−1, 1]. Based on this seed polynomial,
the following recursive relation can be used to generate higher-order Legendre
polynomials:

$$P_0(x) = 1, \qquad P_1(x) = x,$$
$$P_{n+1}(x) = \frac{2n+1}{n+1}\, x P_n(x) - \frac{n}{n+1}\, P_{n-1}(x) \qquad (2.6)$$
These polynomials obey the following orthogonality relationship:

$$\int_{-1}^{1} P_m(x)\, P_n(x)\, dx = \frac{2}{2n+1}\, \delta_{mn} \qquad (2.7)$$

where $\delta_{mn}$ is the Kronecker delta. Based on the above, the $mn$th order moment can
be calculated from the following equation:
$$\lambda_{mn} = \frac{(2m+1)(2n+1)}{4} \int_{-1}^{1}\int_{-1}^{1} P_m(x)\, P_n(y)\, I(x, y)\, dx\, dy, \qquad m, n = 0, 1, 2, \ldots \qquad (2.8)$$
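As a quick numerical sanity check (an illustrative sketch, not part of the thesis implementation), the recursion of Eqn. 2.6 and the orthogonality relation of Eqn. 2.7 can be verified with a midpoint-rule quadrature:

```python
# Sketch: evaluate Legendre polynomials via the recursion of Eqn. 2.6 and
# check the orthogonality relation of Eqn. 2.7 by midpoint-rule integration.
def legendre(n, x):
    p_prev, p = 1.0, x                      # P_0(x), P_1(x)
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

K = 20000                                   # midpoint-rule resolution
xs = [-1.0 + (i + 0.5) * (2.0 / K) for i in range(K)]

def inner(m, n):
    return sum(legendre(m, x) * legendre(n, x) for x in xs) * (2.0 / K)
```

Here `inner(3, 3)` approximates 2/(2·3+1) = 2/7, while `inner(2, 4)` comes out near zero, in line with Eqn. 2.7.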
2.7.2 Chebyshev Polynomials
Chebyshev polynomials are among the oldest kernel functions used for image
classification. They have the same domain of orthogonality as the Legendre polynomials.
They are solutions to the second-order Chebyshev differential equation and are expressed
by the following formula [12]:

$$T_n(x) = \frac{\left(x + \sqrt{x^2 - 1}\right)^n + \left(x - \sqrt{x^2 - 1}\right)^n}{2} \qquad (2.9)$$
The check for their mutual orthogonality is given as:

$$\int_{-1}^{1} T_n(x)\, T_m(x)\, \frac{1}{\sqrt{1 - x^2}}\, dx = \begin{cases} 0, & n \neq m \\ \pi, & n = m = 0 \\ \pi/2, & n = m \neq 0 \end{cases} \qquad (2.10)$$
Chebyshev moments are then expressed as:

$$\tau_{mn} = \iint_{\Omega} T_m(x)\, T_n(y)\, f(x, y)\, dx\, dy, \qquad m, n = 0, 1, 2, \ldots \qquad (2.11)$$
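The closed form of Eqn. 2.9 and the weighted orthogonality of Eqn. 2.10 can be checked numerically; the sketch below is an illustration only. For |x| ≤ 1 the square root in Eqn. 2.9 is imaginary, so complex arithmetic is used, and Gauss-Chebyshev quadrature absorbs the 1/√(1−x²) weight:

```python
import cmath
import math

# Sketch: evaluate T_n(x) from the closed form of Eqn. 2.9 (complex-safe for
# |x| <= 1; the result is real and equals cos(n * arccos x)) and verify the
# weighted orthogonality of Eqn. 2.10 using Gauss-Chebyshev nodes, which
# absorb the 1/sqrt(1 - x^2) weight exactly.
def cheb(n, x):
    s = cmath.sqrt(x * x - 1)
    return (((x + s) ** n + (x - s) ** n) / 2).real

K = 64
nodes = [math.cos((2 * k - 1) * math.pi / (2 * K)) for k in range(1, K + 1)]

def inner(n, m):
    return math.pi / K * sum(cheb(n, x) * cheb(m, x) for x in nodes)
```

The three cases of Eqn. 2.10 appear directly: `inner(0, 0)` gives π, `inner(3, 3)` gives π/2, and `inner(2, 5)` vanishes.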
2.7.3 Zernike Polynomials
Zernike polynomials are described such that they are orthogonal within the unit circle.
They were first proposed to be used in the image classification problem by Teague [9] who
found that the moment sets derived from them were invariant to orientation as well as
quite insensitive to noise. The polynomials themselves are complex, unlike the previous
two types. This set of polynomials not only has an order but is also dependent on the
number of unwrapped azimuthal rotations it takes for the polynomial to describe the
phase.
The seed polynomial is of the form [14]:

$$V_{n,m}(x, y) = V_{n,m}(\rho, \theta) = R_{n,m}(\rho)\, e^{im\theta} \qquad (2.12)$$

where $i = \sqrt{-1}$, $n$ represents the order of the polynomial, $m$ the number of
repetitions the aforementioned azimuthal rotation takes, $\rho$ is the radial vector, $\theta$ is the
phase such that $\theta \in [0, 2\pi]$, and $R_{n,m}$ is the radial polynomial given as:
$$R_{n,m}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s\, (n-s)!}{s!\left(\frac{n+|m|}{2} - s\right)!\left(\frac{n-|m|}{2} - s\right)!}\, \rho^{n-2s} \qquad (2.13)$$
The Zernike moments are then specified inside the unit circle as:

$$Z_{n,m} = \frac{n+1}{\pi} \iint_{x^2 + y^2 \le 1} I(x, y)\, V^{*}_{n,m}(x, y)\, dx\, dy \qquad (2.14)$$
The constant multiplication factor on the RHS is a direct result of the orthogonality
relation between the constituent polynomials, given as:

$$\iint_{x^2 + y^2 \le 1} V_{p,q}(x, y)\, V^{*}_{n,m}(x, y)\, dx\, dy = \frac{\pi}{n+1}\, \delta_{np}\, \delta_{mq} \qquad (2.15)$$
During reconstruction, this factor needs to be considered along with any normalization
that has been used for moment set generation.
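The factorial sum of Eqn. 2.13 translates directly into code. The sketch below (an illustration, not the thesis implementation) evaluates the radial polynomial, assuming, as the definition requires, that n − |m| is even and |m| ≤ n; known low-order values such as R₂,₀(ρ) = 2ρ² − 1 and Rₙ,ₙ(ρ) = ρⁿ serve as sanity checks.

```python
from math import factorial

# Sketch: direct evaluation of the Zernike radial polynomial of Eqn. 2.13.
# Assumes n - |m| is even and |m| <= n.
def radial(n, m, rho):
    m = abs(m)
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s)
           * factorial((n + m) // 2 - s)
           * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )
```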
2.7.4 Hahn and Krawtchouk polynomials
Hahn and Krawtchouk polynomials are both hypergeometric functions that are orthogonal
over a discrete domain of points ($x = 0, 1, \ldots, N$). While the seed polynomials are a little more complex than
the previous ones, the recursive relations make them computationally inexpensive.
The major difference in using them as kernel functions is that the entire set of
moments is computed at once, unlike the others, where a set upper limit for the order of
the moments is defined.
The $n$th degree Krawtchouk polynomial is defined as [12]:

$$K_n^{(p)}(x, N) = {}_2F_1\!\left(-n, -x; -N; \frac{1}{p}\right) \qquad (2.16)$$
where $n = 0, 1, 2, \ldots, N$ is the degree and $x$ denotes the index of the pixel on one of the
axes. The relation of orthogonality is given as:

$$\sum_{x=0}^{N} w(x; p, N)\, K_m^{(p)}(x, N)\, K_n^{(p)}(x, N) = \rho(n; p, N)\, \delta_{mn} \qquad (2.17)$$
where $p$ is a weighting parameter, normally set to 0.5, and $w$ and $\rho$ are the weight
function and the norm, respectively, given as:

$$w(x; p, N) = \binom{N}{x} p^x (1-p)^{N-x}, \qquad \rho(n; p, N) = \left(\frac{1-p}{p}\right)^{n} \frac{1}{\binom{N}{n}}.$$
From the above expressions, the following recursive relations can be derived:

$$K_0^{(p)}(x, N) = 1, \qquad K_1^{(p)}(x, N) = 1 - \frac{x}{Np},$$
$$K_{n+1}^{(p)}(x, N) = \frac{Np - 2np + n - x}{(N-n)p}\, K_n^{(p)}(x, N) - \frac{n(1-p)}{(N-n)p}\, K_{n-1}^{(p)}(x, N) \qquad (2.18)$$
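The recursion of Eqn. 2.18 and the weighted orthogonality of Eqn. 2.17 can be sketched as below (an illustration with p = 0.5, not the thesis implementation), using the weight function and norm defined above:

```python
from math import comb

# Sketch: Krawtchouk polynomials by the recursion of Eqn. 2.18, with the
# weight and norm defined above, checked against the orthogonality of Eqn. 2.17.
def krawtchouk(n, x, N, p=0.5):
    k_prev, k = 1.0, 1.0 - x / (N * p)      # K_0 and K_1
    if n == 0:
        return k_prev
    for j in range(1, n):
        k_prev, k = k, ((N * p - 2 * j * p + j - x) * k
                        - j * (1 - p) * k_prev) / ((N - j) * p)
    return k

def weight(x, N, p=0.5):
    return comb(N, x) * p ** x * (1 - p) ** (N - x)

def norm(n, N, p=0.5):
    return ((1 - p) / p) ** n / comb(N, n)

def inner(m, n, N, p=0.5):
    return sum(weight(x, N, p) * krawtchouk(m, x, N, p) * krawtchouk(n, x, N, p)
               for x in range(N + 1))
```

For a small grid (N = 8), `inner(3, 5, 8)` vanishes while `inner(3, 3, 8)` reproduces the norm ρ(3; 0.5, 8), as Eqn. 2.17 predicts.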
Hahn polynomials, on the other hand, are formulated as below [12]:

$$W_n^{(c)}(s, a, b) = \frac{(a-b+1)_n\, (a+c+1)_n}{n!}\; {}_3F_2\!\left(-n,\, a-s,\, a+s+1;\; a-b+1,\, a+c+1;\; 1\right) \qquad (2.19)$$

where $(a)_n$ is the Pochhammer symbol, $n$ is the order, and $a$, $b$ and $c$ are parameters such
that:

$$-\frac{1}{2} < a < b, \qquad |c| < 1 + a, \qquad b = a + N$$
where $N$ is the image size. The recurrence relation, in terms of their weight functions, is
given as:

$$W_0^{(c)}(s, a, b) = \sqrt{\frac{\rho(s)}{d_0^2}(2s+1)}$$
$$W_1^{(c)}(s, a, b) = -\frac{\rho(s+1)(s+1-a)(s+1-b)(s+1-c) - \rho(s)(s-a)(s-b)(s-c)}{\rho(s)(2s+1)} \sqrt{\frac{\rho(s)}{d_1^2}(2s+1)}$$
$$W_{n+1}^{(c)}(s, a, b) = A\, \frac{d_n}{d_{n+1}}\, W_n^{(c)}(s, a, b) - B\, \frac{d_{n-1}}{d_{n+1}}\, W_{n-1}^{(c)}(s, a, b) \qquad (2.20)$$

where,

$$A = \frac{1}{n+1}\left(s(s+1) - ab + ac - bc - (b-a-c-1)(2n+1) + 2n^2\right)$$
$$B = \frac{1}{n+1}(a+c+n)(b-a-n)(b-c-n)$$

and the weight function and the norm are given respectively as:

$$\rho(s) = \frac{\Gamma(a+s+1)\,\Gamma(c+s+1)}{\Gamma(s-a+1)\,\Gamma(b-s)\,\Gamma(b+s+1)\,\Gamma(s-c+1)}$$
$$d_n^2 = \frac{\Gamma(a+c+n+1)}{n!\,(b-a-n-1)!\,\Gamma(b-c-n)}$$
2.8 Errors in digitization
All of the aforementioned kernel functions, when used to characterize digital images, are
approximated by some form of summation. Each of those formulations will be discussed
in the next chapter, which deals with the implementation of the 2-D moments. Two
primary kinds of errors are introduced into the moment computation problem
by the digitization [16,17,26]:

Discretization error: The image intensity function for a pixel is assumed to be
concentrated at the center of the pixel rather than uniformly distributed across it.
Such an error is magnified when a zeroth order approximation (ZOA) is used to
discretize the double integral in Eqn. 2.1. Figure 2.3 is an example of such an error.
The most effective way around this error is to find an algorithm to compute
what are known as exact moments. No approximation is then involved in
computing the integral, and the error drops out of consideration. The problem,
however, is that not all of the above kernel functions can be remodeled to compute
exact moments for digitized data. The errors generally become
magnified at higher orders but can be controlled using different discrete integration
techniques.
Figure 2.3: Discretization Error: PhD dissertation Liao [16]
Geometric error: Of all the kernel functions used in this thesis, the Zernike polynomials
are the only ones not orthogonal over the [−1, 1] domain. It is therefore
required to map the square/rectangular digital image onto/within the unit circle,
which introduces a different digitization error, depicted in Figure 2.4
and known as the geometric error.

A resampling of pixels to estimate the intensity at all pixels within the unit
circle, along with a correction for the curvature, is employed to keep
this error from making the moment computation unstable at higher
order moments.
Figure 2.4: Geometric Error: Chandan et al [18]
Chapter 3
Implementing the 2-D moments
This chapter deals with the implementation of the different OPs as kernel functions,
verifying such implementations using grayscale images, and then attempting to develop a
v-metric using the same. Also described are the shortcomings of each of the algorithms
and the steps taken to overcome them. The choice of kernel functions has been based
on the availability of literature and research interest in the different polynomials.
3.1 Legendre Moments
Using the recursive relations presented in Eqn. 2.6, a ZOA for discretization
makes the moment computation look as below [24]:

$$\lambda_{mn} = \frac{(2m+1)(2n+1)}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} P_m(x_i)\, P_n(y_j)\, f(x_i, y_j) \qquad (3.1)$$
where the image is of size $M \times N$, and $x_i$ and $y_j$ are obtained after mapping the original
image onto the region [−1, 1]. As discussed in the previous chapter, using a ZOA for
digitization introduces a discretization error that grows with the number of moments
being computed, leading to instability of the algorithm. Hosny [24] discusses a method to
make an exact estimation of Legendre moments from discrete Legendre polynomials.
Expressing the definite integral of the moment between any two consecutive pixels on an
axis in a difference form:

$$\int_{x_i - \Delta x/2}^{x_i + \Delta x/2} P_m(x)\, dx = \left[\frac{P_{m+1}(x) - P_{m-1}(x)}{2m+1}\right]_{x_i - \Delta x/2}^{x_i + \Delta x/2}$$

and substituting the result into Eqn. 2.6, the discretization error can be eradicated. The
moments can then be computed using Eqn. 3.2.
$$\lambda_{mn} = \sum_{i=1}^{M} \sum_{j=1}^{N} I_m(x_i)\, I_n(y_j)\, f(x_i, y_j) \qquad (3.2)$$

where,

$$I_m(x_i) = \frac{2m+1}{2m+2}\left[x P_m(x) - P_{m-1}(x)\right]_{x_i - \Delta x/2}^{x_i + \Delta x/2}$$
(a) Original (b) Moments:20 (c) Moments:100
Figure 3.1: Reconstruction using Legendre moments
Using Eqn. 3.2 not only removes the discretization error, it also makes the moment
computation faster. From Figure 3.1 it is clear that the higher the number of moments
computed, the more accurate the representation of the image. However, even for a very
large number of moments, the method was unable to accurately reconstruct something as
trivial as a binary ink blot. Figure 3.1c shows a reconstruction for such an image of size
256 × 256 using 100 moments. As can be seen, there are gray patches throughout the
image. The method deteriorates further for grayscale images. While the method does
give improved results when an even higher number of moments is computed, it only retains
features whose size is more than half the size of the original data-set and hence would
not make for an effective feature extraction method for developing a v-metric.
3.2 Zernike Moments
Zernike moments have been the most researched approach in optics and image classification
because of their invariance to change in orientation and the ease of reconstructing the
image intensity function, even for noisy data. They have already been shown to work
on problems such as strain pattern recognition from full-field data [11] and mode shape
recognition [13]. Below is a ZOA of Eqn. 2.14 used to compute discrete Zernike Moment
Descriptors (ZMDs):

$$Z_{n,m} = \frac{n+1}{\pi} \sum_{x} \sum_{y} I(x, y)\, V^{*}_{n,m}(x, y) \qquad (3.3)$$
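A literal ZOA sketch of Eqn. 3.3 is given below, for illustration only. The pixel area 4/N² is included explicitly here, which is one choice among the inconsistent normalization conventions discussed next; the radial-polynomial sum of Eqn. 2.13 is repeated to keep the sketch self-contained.

```python
import cmath
from math import factorial, pi

# ZOA sketch of Eqn. 3.3: pixel centers mapped into [-1, 1] x [-1, 1], pixels
# outside the unit circle discarded (the geometric error), and the pixel area
# 4/N^2 included explicitly as one possible normalization choice (assumption).
def radial(n, m, rho):
    m = abs(m)
    return sum((-1) ** s * factorial(n - s)
               / (factorial(s) * factorial((n + m) // 2 - s)
                  * factorial((n - m) // 2 - s)) * rho ** (n - 2 * s)
               for s in range((n - m) // 2 + 1))

def zernike_moment(image, n, m):
    N = len(image)
    total = 0j
    for i in range(N):
        for j in range(N):
            x = (2 * i + 1 - N) / N
            y = (2 * j + 1 - N) / N
            rho = (x * x + y * y) ** 0.5
            if rho > 1:          # corner pixels outside the unit circle
                continue
            theta = cmath.phase(complex(x, y))
            total += image[i][j] * radial(n, m, rho) * cmath.exp(-1j * m * theta)
    return (n + 1) / pi * total * (4 / N ** 2)

flat = [[1.0] * 64 for _ in range(64)]       # constant unit-intensity test image
z00 = zernike_moment(flat, 0, 0)
z11 = zernike_moment(flat, 1, 1)
```

For the constant image, Z₀,₀ approaches 1 and Z₁,₁ vanishes by symmetry; the residual in Z₀,₀ is exactly the discretization/geometric error described in the previous chapter.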
There are several complications in using them as presented above. As mentioned
in the previous chapter, the digitization gives rise to both discretization and geometric
errors, which need to be dealt with to develop a stable moment computation
algorithm. Figure 3.2 represents a reconstruction attempt for a binary ink blot using
Eqn. 3.3 with 20 moments. Computing a higher number of moments for better accuracy
only results in divergence of the algorithm, producing very high image intensities in
the reconstructed image.
(a) Original (b) Moments:20
Figure 3.2: Reconstruction using Zernike moments
One of the biggest problems with the use of discrete ZMDs is the inconsistency in the
usage of a normalization constant used to map the moments from a digital image to within
the unit circle. Some research papers make no mention of such a constant, while
others use many different ones. Singh et al. [18,19] describe and compare most of the
prevalent algorithms used to compute the moments. The method that was finally found
to compute exact ZMDs well, while remaining stable over a very large moment
space, is presented below in brief.

The pixels of the image are mapped onto a normalized square/rectangle, as required,
which is inscribed within a unit circle. The resulting grid is then resampled so that
there are synthesized pixels at locations described by the weighting coefficients used to
describe the forward and the backward discrete grid contributors in Gaussian numerical
integration. Radial polynomials are computed for this enlarged grid using Eqn. 2.13,
which is remodeled into the following set of recursive relations in order to do so:

$$R_{n,n}(r) = r^n$$
$$R_{n,n-2}(r) = n r^n - (n-1)\, r^{n-2}$$
$$R_{n,m}(r) = H_1 R_{n,m+4}(r) + \left(H_2 + \frac{H_3}{r^2}\right) R_{n,m+2}(r), \qquad m = n-4, n-6, \ldots \qquad (3.4)$$

where,

$$H_1 = \frac{(m+4)(m+3)}{2} - (m+4)H_2 + \frac{H_3 (n+m-6)(n-m-4)}{8}$$
$$H_2 = \frac{H_3 (n+m+4)(n-m-2)}{4(m+3)} + n + 2$$
$$H_3 = -\frac{4(m+2)(m+1)}{(n+m+2)(n-m)}$$
Using the radial polynomials thus generated in Eqn. 3.4, the ZMD set can be generated
as shown in Eqn. 3.5. Trials have been carried out to generate moments up to a
maximum order of 100, and the algorithm has been found to be stable. The moment set
thus generated has been used to reconstruct sample grayscale images with percent least-square
errors below 5. One such example has been included in Figure 3.3. The size of
the image is 16 × 16.

$$Z_{pq} = \frac{p+1}{\pi N^2} \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} f(x_i, y_k) \sum_{l=0}^{n-1} \sum_{m=0}^{n-1} w_l\, w_m\, V^{*}_{pq}\!\left(\frac{t_l + 2i + 1 - N}{N},\, \frac{t_m + 2k + 1 - N}{N}\right) \qquad (3.5)$$

where $t$ is the discrete grid coefficient, $w$ is the weight assigned to each participant of
the summation based on its distance from the grid point at which the moment is being
computed, and $n = 5$ for the Gauss quadrature discrete integration. Also,

$$\left(\frac{t_l + 2i + 1 - N}{N}\right)^2 + \left(\frac{t_m + 2k + 1 - N}{N}\right)^2 \le 1$$
(a) Original (b) Moments:40
Figure 3.3: Reconstruction using recursive exact Zernike moments
The issue of the normalization coefficient can be dealt with by implementing an orthogonality
check to see whether Eqn. 2.15 holds true. As is evident from Figure 3.3,
the reconstruction of the images requires a large number of moments, sometimes greater
than the size of the image. ZMDs have an advantage over the other polynomials investigated
in that the maximum order of moments that can be computed is not limited by the
image size. Also, they are able to reconstruct very small data-sets, in contrast
to all the other polynomials. But the time taken for such computation
is very high [12]. Generating moments from images bigger than 64 × 64 has been found to
be intractable.
Figure 3.4: FRF reconstruction: 40 ZMDs
Figure 3.4 is a representation of the reconstruction of a 50 × 50 portion of an experimentally
collected FRF data-set from a circular aluminum plate. It is quite clear that, even for a
very large number of moments, the ZMDs do not begin to describe the peaks accurately.
The results improve a little if the data is converted to log scale, implying that the ZMDs
lose some of their discernibility as the dynamic range of the data-sets increases. Their
inability to reconstruct the peaks in such data-sets, in addition to their computational
demand, leads to the conclusion that ZMDs do not make for an effective feature set for
developing a v-metric.
3.3 Hahn and Krawtchouk Moments
The actual discrete formulation for Hahn and Krawtchouk moments is exactly the same
as the one shown in Section 2.7.4. The polynomial matrix thus generated is limited by
the longer dimension of the image being processed. If the digital image in question is of
size $N \times M$, the polynomials in both cases are then used to generate moments using the
matrix multiplication below [12,20]:

$$Q = K_1 D K_2^{T} \qquad (3.6)$$

where $K_1$ and $K_2$ contain the first $N$ and $M$ polynomials generated, respectively, and $D$
is the discrete image intensity matrix. The features can then be chosen as either the first
few orders of the moment matrix or the biggest moments present in the matrix, using a
2-norm or Frobenius-norm based ranking.
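The matrix form of Eqn. 3.6 can be sketched as follows. This is an illustration only: orthonormal stand-in bases (from QR factorizations) replace the weighted Krawtchouk/Hahn polynomial matrices, which lets the transform be inverted exactly so the effect of truncating the moment matrix Q is easy to see.

```python
import numpy as np

# Sketch of Eqn. 3.6, Q = K1 * D * K2^T, with orthonormal stand-in bases in
# place of the Krawtchouk/Hahn polynomial matrices (assumption for illustration).
rng = np.random.default_rng(1)
N, M = 32, 48
K1 = np.linalg.qr(rng.standard_normal((N, N)))[0].T   # rows: basis over N points
K2 = np.linalg.qr(rng.standard_normal((M, M)))[0].T   # rows: basis over M points
D = rng.standard_normal((N, M))                       # intensity matrix

Q = K1 @ D @ K2.T                                     # full moment matrix
D_full = K1.T @ Q @ K2                                # exact inverse (orthonormal rows)

def reconstruct(n):
    """Keep only the first n orders of Q in each direction, zero the rest."""
    Qt = np.zeros_like(Q)
    Qt[:n, :n] = Q[:n, :n]
    return K1.T @ Qt @ K2

err10 = np.linalg.norm(D - reconstruct(10)) / np.linalg.norm(D)
```

Because the stand-in bases are orthonormal, the full moment matrix inverts exactly, while a small leading block leaves a measurable residual, mirroring the reconstruction behavior reported for Figures 3.5 and 3.6.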
From Figure 3.5 it is quite clear that the reconstruction becomes more and more
accurate as a larger subset of the moment matrix $Q$ is used in reconstruction. The image
used is of size 128 × 128. The percent least-squares error reduces from 25.89 to 1.45 to
0.14 using 30, 60 and 120 moments respectively.

Similarly, Figure 3.6 depicts reconstruction results using different numbers of Hahn
moments. The image size used is 32 × 32. The percent least-square error using 15 and 30
moments was found to be 27.9 and 0.058 respectively. Both Hahn and Krawtchouk moments were
found to work well with an experimental data-set with a high dynamic range.
The problem with using either of the two for a v-metric is the fact that the image
intensity characteristics are spread across a set of polynomials whose highest order
equals the bigger dimension of the image. This works well for most digital images, which
are square or at least have comparable dimensions along the two principal axes.
(a) Original (b) Moments:30
(c) Moments:60 (d) Moments:120
Figure 3.5: Reconstruction using Krawtchouk moments
The experimental data-sets are not necessarily bound by such a restriction. On the
contrary, experimental data-sets generally have one dimension much smaller
than the other, depending on whether the number of response locations
or the number of references (load cases) is larger. In such a case, even if only a subset of the moment
matrix generated is used as the feature set to be compared, the number of elements in such
a matrix far exceeds the number of elements in the original data-set.
For example, in the case of the data-set used to illustrate the SVD based v-metric in
Section 2.4, the moment matrix generated would be of size 146 × 146 at each temporal
location. Even if only the first 40 moments are used as the feature set, the final set being
compared is made up of 1600 elements as against 584 in the original data-set.
(a) Original (b) Moments:30 (c) Moments:120
Figure 3.6: Reconstruction using Hahn moments
One way around this problem could be to fold the spatial dimension into a vector
and use the resulting reshaped 2-D matrix as an image. The data-set above would
then become a 584 × 1601 matrix. The complication in working with this matrix is
that the seed polynomials for both Hahn and Krawtchouk polynomials contain factorial
terms, the biggest being the $2N$ and $N$ factorials respectively, where $N$ is the larger
dimension of the image. The maximum image size that can be processed with the two is
therefore limited to about 60 and 150 respectively, after which the moment computation
diverges very quickly. If the data-set were to be divided into smaller parts to be compared
individually in order to work around this problem, the final moment/feature set would
again end up becoming very large, rendering the two polynomials ineffective for conceiving
an efficient v-metric.
3.4 Chebyshev Moments
For an image having $N$ pixels on its longer dimension, the $n$th degree discrete Chebyshev
polynomial is given by the equation [12]:

$$T_n(x) = n! \sum_{k=0}^{n} (-1)^{n-k} \binom{N-1-k}{n-k} \binom{n+k}{n} \binom{x}{k} \qquad (3.7)$$
The orthogonality check for these polynomials is given as:

$$\sum_{x=0}^{N-1} T_n(x)\, T_m(x) = \rho(n, N)\, \delta_{mn} \qquad (3.8)$$
The set of recursive relations used to compute these polynomials, after normalizing
them so that they become orthonormal, is described as [26]:

$$T_0(x) = \frac{1}{\sqrt{N}},$$
$$T_1(x) = (2x + 1 - N)\sqrt{\frac{3}{N(N^2 - 1)}},$$
$$T_n(x) = (\alpha_1 x + \alpha_2)\, T_{n-1}(x) - \alpha_3\, T_{n-2}(x), \qquad (3.9)$$

where,

$$\alpha_1 = \frac{2}{n}\sqrt{\frac{4n^2 - 1}{N^2 - n^2}}, \qquad \alpha_2 = \frac{1 - N}{n}\sqrt{\frac{4n^2 - 1}{N^2 - n^2}}, \qquad \alpha_3 = \frac{n-1}{n}\sqrt{\frac{2n+1}{2n-3}}\sqrt{\frac{N^2 - (n-1)^2}{N^2 - n^2}}$$
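The recursion of Eqn. 3.9 can be sketched and checked for orthonormality, i.e., Eqn. 3.8 with unit norm after normalization; this is an illustrative sketch only. Note the √((2n+1)/(2n−3)) norm ratio in α₃, which the orthonormality check below confirms.

```python
from math import sqrt

# Sketch: orthonormal discrete Chebyshev polynomials via the recursion of
# Eqn. 3.9, then a numerical check of the orthogonality relation (Eqn. 3.8,
# with unit norm after the normalization).
def discrete_chebyshev(N):
    """Return T[n][x] for n, x = 0 .. N-1."""
    T = [[1 / sqrt(N)] * N,
         [(2 * x + 1 - N) * sqrt(3 / (N * (N * N - 1))) for x in range(N)]]
    for n in range(2, N):
        a1 = (2 / n) * sqrt((4 * n * n - 1) / (N * N - n * n))
        a2 = ((1 - N) / n) * sqrt((4 * n * n - 1) / (N * N - n * n))
        a3 = ((n - 1) / n) * sqrt((2 * n + 1) / (2 * n - 3)) \
            * sqrt((N * N - (n - 1) ** 2) / (N * N - n * n))
        T.append([(a1 * x + a2) * T[n - 1][x] - a3 * T[n - 2][x] for x in range(N)])
    return T

T = discrete_chebyshev(16)

def inner(m, n):
    return sum(T[m][x] * T[n][x] for x in range(16))
```

On a 16-point grid, `inner(m, n)` reproduces the Kronecker delta to machine precision for all computed orders.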
The discrete orthogonal moments of $pq$th order, where $I(x, y)$ is the image
intensity function, are then computed as:

$$\tau_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} T_p(x)\, T_q(y)\, I(x, y) \qquad (3.10)$$
Among the kernel functions used to generate the two-dimensional moment
sets, the most viable for developing a v-metric are the Chebyshev
polynomials. The discretization error is fairly well controlled, even for the ZOA, for
moment orders as high as 150. Although some of the previous functions are better at estimating
image features, Chebyshev polynomials have been found to be fairly good at estimating
features from a very big data-set with a fair amount of noise. However, owing to the way
they are formulated, the highest order of moments that can be computed is limited
by the larger dimension of the image, lest the polynomials themselves become complex.

Figure 3.7 represents the usage of two-dimensional Chebyshev moments to reconstruct a
standard grayscale image. The image size used is 512 × 512 and the percent least-square
error for the reconstruction is 4.27, 3.6 and 3.03 for 30, 60 and 100 moments respectively.
The reconstruction is not as accurate as that from Krawtchouk moments of the same order for
a higher number of moments, but the least-square error is fairly small even for a smaller
number of moments, suggesting the first few orders of moments capture a good portion of the
significant features of the image intensity data-set.
As many as 180-200 moments can be computed before the method becomes truly
unstable, by which time the percent least-squares error almost always drops below 2. Figure
3.8 is a representation of a reconstruction of the reshaped transmissibility data-set
using 100 Chebyshev moments. The 584 × 1451 data-set has therefore been reduced to a
(a) Original (b) Moments:30
(c) Moments:60 (d) Moments:100
Figure 3.7: Reconstruction using Chebyshev moments
100 × 100 matrix. As is perhaps evident from the fourth plot, the dynamics of the data
have been averaged out to give a best-fit type estimate of the curve.

If only noise were removed, then a useful estimate of margin can be obtained
from the method. However, if a part of the data was removed in the denoising process,
then the margin estimate found from this process is going to be incorrect. In both cases,
valuable information about the uncertainty estimation is lost.
3.5 Comparison possibilities
Using the above formulation of Chebyshev moments, a way to compare the two data-sets
can be arrived at. One such possibility is to decompose the moment matrix into its
singular values and vectors and plot these singular values against each other.
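The comparison just described can be sketched as below. The moment matrices here are synthetic stand-ins, one a scaled and lightly perturbed copy of the other, so the expected margin is known by construction; the slope is the least-squares line through the origin fitted to the paired singular values.

```python
import numpy as np

# Sketch: estimate a margin as the slope of the best-fit line (through the
# origin) between the singular values of two moment matrices. The matrices
# are synthetic stand-ins: M_b is 0.7 * M_a plus small noise, so the slope
# should come out close to 0.7 by construction.
rng = np.random.default_rng(2)
M_a = rng.standard_normal((40, 40))
M_b = 0.7 * M_a + 0.01 * rng.standard_normal((40, 40))

s_a = np.linalg.svd(M_a, compute_uv=False)
s_b = np.linalg.svd(M_b, compute_uv=False)

# Least squares for s_b ~ slope * s_a, constrained through the origin.
slope = float(s_a @ s_b / (s_a @ s_a))
```

Plotting `s_b` against `s_a` and reading off `slope` mirrors the Figure 3.9 construction; with real moment matrices the scatter about the line carries whatever uncertainty information survives the decomposition.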
Figure 3.9 illustrates this for the two data-sets. The slope of the best-fit line
then helps estimate the margin, which in this case is 0.7321 as against 0.636 in the
SVD based v-metric. Figure 3.10 shows the variation of the slope against the number
of moments computed. The slope converges to a value before the algorithm starts to
become unstable.

Figure 3.8: Reconstruction, transmissibility data: 100 2-d Chebyshev moments
Another fairly crude estimate of the margin could be the ratio of the norms of
the two moment matrices. As the most significant portions of the data are described by
the moment sets, a comparison of the norms directly would give an estimate of the
margin. In the above case it turns out to be 0.7148, which is, perhaps unsurprisingly,
close to the slope of the line from Figure 3.9.
No logical way has been found to get a good uncertainty estimate from the above
method, as the data-set is decomposed and effectively de-noised in both the spatial and temporal
directions. As previously mentioned, therefore, this proposed method is of use only
in cases where such averaging removes only noise. The only relevant information
provided by this method is the estimate of margin. If the margin is small, i.e., if the
model is close to optimal in defining the reference system, then the user runs the risk of
underestimating the uncertainty and hence the factor of safety.
Figure 3.9: Reconstruction, transmissibility data: V-metric: 2-d Chebyshev 100 moments
Figure 3.10: Reconstruction, transmissibility data: V-metric: Slope vs no. of moments
Chapter 4
Proposed alternate approach
The 2-D moment descriptors do not seem to provide a method that can be used to
develop an effective v-metric. The one possible way, using Chebyshev moments, is also
limited in that it does not provide a good way to measure uncertainty. This chapter
proposes altering the existing moment descriptor generation a little
to acquire features that can be compared more easily, leading to a v-metric
that is both more effective and more informative in terms of determining the margin
and uncertainty between two data-sets. The idea is to compute one-dimensional (1-D)
moments over image intensity vectors and then compare the resultant moments to
identify a metric.
4.1 Overview of the proposed approach
The basic premise of the proposed approach entails remodeling Eqn. 2.1 so that it looks
as below:

$$M_p = \int_{-\infty}^{\infty} f_p(x)\, I(x)\, dx \qquad (4.1)$$

where $M_p$ is the $p$th order moment and $I(x)$ is the image intensity vector along one of the
image axes. The idea is a direct extension of the logic behind the Fourier transform of a
function, where a function is expressed as a weighted summation of mutually orthogonal
sinusoidal components. The moments here are analogous to the amplitudes of the individual
frequencies in that case. The method, as is perhaps obvious, can be used with
any set of polynomials as kernel functions that are orthogonal in the domain [−1, 1]. The
discrete version of Eqn. 4.1 can be written as:
$$M_p = \sum_{i=1}^{N} f_p(i)\, I(i) \qquad (4.2)$$

from which the original intensity vector can be reconstructed as follows:

$$I'(i) = \sum_{p=1}^{n} f_p(i)\, M_p \qquad (4.3)$$

where $n$ is the maximum order of the moments to be computed and the
image is of size $N \times N$. Similar to the two-dimensional moment descriptors, a least-square-error
based threshold can be set in order to determine how many moments are required
to obtain a set of features that sufficiently characterizes the data-sets and hence
can be used to make an effective comparison.
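Eqns. 4.2-4.3 and the least-square threshold can be sketched as below. This is an illustrative sketch only: the kernel rows are Legendre polynomials sampled on the grid and orthonormalized by QR (any OP family orthogonal on [−1, 1] plays the same role), and the intensity vector is a synthetic sine.

```python
import numpy as np

# Sketch of Eqn. 4.2 (1-D moments) and Eqn. 4.3 (reconstruction), with a
# least-square-error threshold deciding how many moments are required.
# F's rows are Legendre polynomials sampled on the grid and orthonormalized
# by QR, standing in for any OP kernel; the signal is a synthetic sine.
N = 256
x = np.linspace(-1, 1, N)
V = np.polynomial.legendre.legvander(x, N - 1)   # columns: P_0 .. P_{N-1} sampled
F = np.linalg.qr(V)[0].T                         # orthonormal rows, ordered by degree
signal = np.sin(2 * np.pi * x)

moments = F @ signal                             # Eqn. 4.2: M_p = sum_i f_p(i) I(i)

def n_required(threshold):
    """Smallest n whose truncated reconstruction (Eqn. 4.3) meets the threshold."""
    for n in range(1, N + 1):
        recon = F[:n].T @ moments[:n]
        if np.linalg.norm(signal - recon) / np.linalg.norm(signal) < threshold:
            return n
    return N
```

A smooth curve needs only a handful of low-order moments to meet a tight threshold, landing well below N; this compression of the significant features is what the comparison schemes in the following sections rely on.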
For implementing the above methods, Chebyshev and Krawtchouk polynomials were
employed. A set of simple curves was reconstructed using Chebyshev and Krawtchouk
moments. Figure 4.1 corresponds to the Chebyshev case. The size
of the vector used is 1001 and the number of moments computed is 100. Figure 4.2, in
contrast, uses 50 Krawtchouk polynomials over a vector of length 101.
(a) Sine (b) Exponent (c) Random
Figure 4.1: Reconstruction using Chebyshev moments
The Krawtchouk polynomials face the same limitations as in the case of the 2-D descriptors.
Because of the presence of factorial terms in the seed polynomial, the moments
quickly diverge, thereby limiting the length of the vectors that can be decomposed into
a set of moments using them as kernel functions. Added to the fact that they require
a large set of moments to describe even the simplest of curves, they become rather limited in
their utility for developing a metric.
(a) Sine (b) Exponent (c) Random
Figure 4.2: Reconstruction using Krawtchouk moments
The rest of the study therefore involves the use of Chebyshev polynomials only, even
though their reconstruction of random data-sets is not as desirable. As has been demonstrated
in subsequent subsections, if enough resolution is available and the data is anything
but random noise, this does not pose a problem.
4.2 Formulation with Chebyshev polynomials
Below is how Eqn. 3.10 is modified to get the linear moments:

$$\tau_p = \sum_{x=0}^{M-1} T_p(x)\, I(x) \qquad (4.4)$$
The fact that the data-sets used in lieu of the image intensity functions
normally contain information along both the spatial and temporal dimensions presents the
choice of decomposing either of these extents into moments. Either the spatial degrees of
freedom can be decomposed into moments at each temporal location, or the other way
around. Intuitively, the methods could then provide a basis for evaluating a metric for
cases where the critical temporal conditions (e.g., peak frequencies in the response
data) or the critical spatial locations (e.g., fracture locations) are to be determined/matched.
4.2.1 Spatial decomposition
Using the OPs across the spatial domain, moment vectors could be obtained at each
temporal point. These moment vectors can then be used to make an accurate comparison
close to the temporal points that have the biggest weighting.
If the reshaped 2-D transmissibility data-set used with the Chebyshev polynomials in
the previous chapter were the base data-set, such that each column represents the spatial
spread of data at its respective frequency, computing moments over the spatial vector
at each frequency point provides a moment matrix of size (n + 1) × 1451.
Figure 4.3: Transmissibility reconstruction using 60 Chebyshev moments
Figure 4.3 shows the reconstruction of one such transmissibility data-set with n = 60,
in both magnitude and phase. The blue curve represents the original data-set, as against
the green curve, which is the reconstructed data-set. Because of the characteristic shape
of the Chebyshev polynomials, which are weighted less close to the origin, the degrees of
freedom that fall closer to the center of the spatial vector are not as well estimated and
require more moments to be computed in order to be reconstructed accurately.
Figure 4.4 represents one such inaccurate reconstruction for a different data-set.
Figure 4.5 represents the reconstruction for the original transmissibility data-sets corresponding
to vehicles B1 and B2 using only 10 moments, as against 30 moments in Figure 4.6.
The figures represent the comparison between data-set A and its reconstruction,
data-set B and its reconstruction, data-set A and data-set B, and the two reconstructions,
respectively, in a clockwise sense.
Figure 4.4: Transmissibility reconstruction using 60 Chebyshev moments
Figure 4.5: Reconstruction comparison: 10 Chebyshev moments
It makes for an interesting inference that, while the reconstruction is quite poor for
the former, the characteristics captured by the sets of moments themselves
are quite similar. An argument can therefore be made that, in spite of not reconstructing
the data accurately enough, the set of moments starts to capture similar properties of the
data-sets and can be compared for making a v-metric. Similar to the
2-D metric, the margin line seems to converge to a value which is not very different
from the slope acquired after, for instance, computing 20 moment orders.
4.2.2 Temporal decomposition
Instead of computing the moments across the spatial vector, the temporal information at each
degree of freedom can be used to generate a moment set for comparison. Such a moment
matrix could come in handy if, for instance, correlating the location of failure between two
systems is the prime concern.

Using the same transmissibility data-set, the resulting moment matrix is of the size
Figure 4.6: Reconstruction comparison: 30 Chebyshev moments
Figure 4.7: Temporal Reconstruction: 60 Chebyshev moments
584 × (n+1). Although one can argue that the feature content is much smaller than in the
previous case, and hence that this decomposition provides a more attractive basis for
developing a v-metric, it has one serious disadvantage. As shown in Figure 4.7, the
moment set fits a curve that resembles a best-fit estimate of the data across
the temporal domain, much like its 2-D counterpart. Therefore, as with the
2-D formulation, if the data inherently has strongly dynamic characteristics and the best
fit discards not only noise content but some useful data as well, then the margin being
determined does not include all the relevant information.
Such an estimate is also prone to missing some of the large responses, as has been
shown in Figure 4.8. The Chebyshev 2-D formulation also handles the problem in much
Figure 4.8: Temporal Reconstruction: 60 Chebyshev moments
the same way. As a way around these shortcomings therefore, the spatial decomposition
method has been used to devise a v-metric.
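The spatial and temporal decompositions can be contrasted in a short sketch. This is an illustrative NumPy translation of the idea, not the thesis MATLAB code: a generic orthonormal polynomial basis (built here by QR-orthonormalising a Vandermonde matrix) stands in for the discrete Chebyshev polynomials of Appendix B.6, and all array sizes are assumed for illustration.

```python
import numpy as np

def poly_basis(m, n):
    """Orthonormal polynomial basis on m points, orders 0..n (rows).
    A QR-orthonormalised Vandermonde matrix stands in for the discrete
    Chebyshev polynomials used in the thesis."""
    x = np.linspace(-1.0, 1.0, m)
    V = np.vander(x, n + 1, increasing=True)   # columns x^0 .. x^n
    Q, _ = np.linalg.qr(V)                     # orthonormal columns
    return Q.T                                 # (n+1) x m

def spatial_moments(D, n):
    """One moment set (orders 0..n) per temporal point (column of D)."""
    T = poly_basis(D.shape[0], n)
    return T @ D            # (n+1) x Nf

def temporal_moments(D, n):
    """One moment set per degree of freedom (row of D)."""
    T = poly_basis(D.shape[1], n)
    return D @ T.T          # Ns x (n+1)

rng = np.random.default_rng(0)
D = rng.standard_normal((584, 128))   # Ns=584 d.o.f., Nf=128 temporal points
S = spatial_moments(D, 10)
Tm = temporal_moments(D, 10)
print(S.shape, Tm.shape)   # (11, 128) (584, 11)
```

The spatial decomposition keeps one moment set per temporal point; the temporal one keeps a single (n+1)-element set per degree of freedom, which is why its feature content is so much smaller.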
4.3 V-metric: Approach A
The resultant matrix from the spatial decomposition consists of moment sets as columns,
one at each temporal point. An attempt to rank the moments based on Plancherel's theo-
rem [30] shows that the largest moments in each data-set do not necessarily correspond
to polynomials of the same order. While at many temporal points the first two or three
moments may indeed be of the same order, it is cumbersome to establish a degree of
similarity between the two matrices in terms of the magnitudes of moments of like order.
In order to extract the most significant characteristics from the moment matrices,
they are therefore subjected to SVD. The two sets of singular values are then plotted
against each other. Figure 4.9 shows such a plot for the singular values
obtained from the moment matrices corresponding to vehicle data-sets B1 and B2. The slope
of the best fit line is 0.73, and the set of singular values lies very close to that line.
Appendix A contains similar comparisons made for the transmissibility of different vehicles.
Similar to the SVD based v-metric, the slope of the best fit line represents the margin.
However, the approach falls short in determining uncertainty bounds, because the
smaller components of the data are averaged out twice: once while decomposing
Figure 4.9: SVD plot: Approach A
the data to generate the moments and once while decomposing the resulting moments
using SVD. The set of features finally compared is also small, i.e., n + 1.
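Approach A can be sketched in a few lines. The following is a hedged NumPy illustration, not the thesis code; the synthetic moment matrices are constructed with a known scale factor of 0.73 so that the recovered slope can be checked.

```python
import numpy as np

def approach_a_margin(MA, MB):
    """Approach A (sketch): subject each moment matrix to SVD and take the
    least-squares slope of one set of singular values against the other."""
    sa = np.linalg.svd(MA, compute_uv=False)  # sorted descending
    sb = np.linalg.svd(MB, compute_uv=False)
    m, c = np.polyfit(sa, sb, 1)              # slope of the best fit line
    return m

# synthetic (n+1) x Nf moment matrices; B is A scaled by a "margin" of 0.73
rng = np.random.default_rng(1)
MA = rng.standard_normal((11, 200))
MB = 0.73 * MA + 0.01 * rng.standard_normal((11, 200))
print(approach_a_margin(MA, MB))   # close to 0.73
```

Note that only n+1 singular values survive as features, which is the small-feature-set limitation discussed above.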
4.4 V-metric: Approach B
In order to overcome the two limitations of Approach A mentioned in the previous section,
a slight modification of the same can be employed. The moments represent weights on
predefined orthogonal polynomials which, like the singular vectors, have inherent
directions; this fact provides a way to compare the moments without subjecting
them to SVD.
Each column in the moment matrix can be normalized so that it becomes unitary.
Each element of such a column is then a normalized representation of the distribution of
weights along the different orthogonal polynomial directions. Such columns can therefore
be compared against the corresponding columns from a different data-set to arrive at an
AC comparison of the kind used in the SVD based v-metric. Not only does this give a
larger set of features to compare (equal to the number of temporal points), it also
separates out the spatial comparison into a convenient scheme.
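A sketch of the column-normalization step follows, again as an illustrative NumPy translation rather than the thesis code; the comparison below regresses the normalized weight distributions of one data-set on those of the other.

```python
import numpy as np

def unit_columns(M):
    """Normalise each column (one moment set per temporal point) to unit norm."""
    return M / np.linalg.norm(M, axis=0, keepdims=True)

def approach_b_margin(MA, MB):
    """Approach B (sketch): compare the normalised weight distributions
    column by column; the best-fit slope over all entries plays the role
    of the margin."""
    UA, UB = unit_columns(MA), unit_columns(MB)
    m, c = np.polyfit(np.abs(UA).ravel(), np.abs(UB).ravel(), 1)
    return m

rng = np.random.default_rng(2)
MA = rng.standard_normal((11, 200))             # (n+1) x Nf moment matrix
MB = MA + 0.3 * rng.standard_normal((11, 200))  # perturbed second data-set
print(approach_b_margin(MA, MB))
```

Because every temporal point contributes a column, the feature set is far larger than the n+1 singular values of Approach A, and the scatter about the fit carries uncertainty information.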
Figure 4.10 consists of the same transmissibility data subjected to this approach, while
Figure 4.10: SVD plot: Approach B
Figure 4.11 represents the corresponding spatial comparison. As can be seen, the slope of
the best fit line is slightly smaller than in the previous case. There is also considerable
variation about it, which can be used to determine the uncertainty bound.
Figure 4.11: Spatial Comparison Approach B
4.5 Comparison between metrics
Based on the characteristics of a good v-metric mentioned in Chapter 2, a comparison
has been made between the four existing metrics, i.e.:
- Original SVD based v-metric
- Chebyshev 2-D moment metric (C2DM)
- Chebyshev 1-D moment metric: Approach A (C1DA)
- Chebyshev 1-D moment metric: Approach B (C1DB)
Figure 4.12: Original SVD based v-metric
Figures 4.12-4.15 represent a summary for all the v-metrics. The SVD based v-metric
is the only one among the above four whose features have the same physical units as
the original data-set; all the other metrics are somewhat removed in that respect. C2DM
combines a decomposition in the temporal domain with a decomposition along the
spatial extent. In addition, the different order polynomials have different units
because of the powers to which the reference axes are raised. In that sense, there is no
simple physical meaning attached to the margin derived.
The biggest reduction in the data-set being compared occurs in C2DM and C1DA.
The final feature set is of size (n+1) × (n+1), as against Ns × Nf in the SVD based
v-metric and (n+1) × Nf in C1DB, where Ns is the smaller of the spatial dimensions
and Nf is the number of temporal points.
The margins derived from all four algorithms are comparable. The SVD based
v-metric has the lowest slope estimate, which could be attributed to the fact that only
the most significant loadcase contributing to the data is retained in the final comparison,
as against the complete data-set in the other three cases. The pros and cons of choosing
Figure 4.13: C2MD
Figure 4.14: C1DA
Figure 4.15: C1DB
one over the other would depend on the problem to be solved. If there is only one prime
contributor (loadcase) to the data, e.g., if there is only one significant displacement input
reference to the transmissibility data-sets that have been used, the margin estimates from
all the metrics should be much closer. In such a case the simplicity of implementing the
SVD based v-metric would prevail.
A list of figures showing the margin estimations for different vehicle data-sets, as
estimated using the four metrics, has been included in Appendix A. The data-sets differ
in the level of excitation input to the system, variations in boundary conditions, etc. The
comparison has been made between two data-sets acquired from the same vehicle
configuration. Table 4.1 shows the margin slopes from those figures. The relative covariances
and the ratios of the norms of the moment matrices provide other ways to estimate margins;
Table 4.3 shows the relative covariances for the different data-sets.
In the SVD based v-metric the quantities being compared have the same physical
units as transmissibility. For the other three, however, the final features being compared
have units normalized to different powers of length, depending upon the order of the
moment in question. Some of the available literature hints at a possible transformation
Metric     SVD      C2DM     C1DA     C1DB
VehicleB   0.6337   0.7285   0.7357   0.6802
VehicleC   0.8537   0.8863   0.8903   0.8621
VehicleD   0.8415   0.8562   0.8576   0.8168
VehicleE   0.8662   0.9583   0.9505   0.9310
Table 4.1: Margin Estimation
Metric     SVD      C2DM     C1DA     C1DB
VehicleB   1.7752   0.7166   0.7034   4.8976
VehicleC   1.9801   0.6275   0.5390   7.0759
VehicleD   2.0174   0.9043   0.8509   5.9843
VehicleE   2.3900   0.9199   0.5150   8.5950
Table 4.2: Relative standard deviation
that can be used to change the feature set into quantities with physically relevant units.
Such a transformation has not been explored as a part of this thesis.
Both C2DM and C1DA smooth out the uncertainty present in the data-set because of
the two-step averaging involved. The variation about the best fit line in the SVD based
v-metric and C1DB gives an indication of the uncertainty. If it is assumed that the linear
model described by each of the metrics is correct for its respective comparison, the
relative sums of squared errors for the four metrics are as listed in Table 4.2. Between
the SVD based v-metric and the C1DB metric, the uncertainty estimate for the latter is
generally much larger, meaning it would provide for a more conservative design of the
system.
The reconstructions from C1DA and C1DB are obviously the same, and hence, as long
as spatial decomposition is used along with a significant number of moments, all the peaks
should be scaled accurately. However, for C2DM, some of the peaks can be smoothed
Metric     SVD      C2DM     C1DA     C1DB
VehicleB   0.6332   0.7166   0.7237   0.6798
VehicleC   0.8517   0.8723   0.8757   0.8615
VehicleD   0.8395   0.8422   0.8435   0.8162
VehicleE   0.8641   0.9425   0.9350   0.9303
Table 4.3: Relative Covariance
Figure 4.16: Primary singular value reconstruction
out in the same fashion as in the case of the temporal decomposition (Figure 4.7). The SVD
based v-metric reconstruction is shown in Figure 4.16. There are several instances
in such a reconstruction where a significant peak has not been estimated accurately, but
the error is still not as large as in the case of C2DM.
After considering all the differences, it is worth noting that all these metrics have an
underlying equivalence. In the case of C1DA and C1DB it is quite obvious, as they are
essentially the same scheme up until the penultimate step. For the SVD based v-metric
and C2DM, however, the equivalence is not as readily apparent. A direct comparison can
nevertheless be made if the basis data-set is the reshaped transmissibility data, which
has only two dimensions.
It is known that the singular value decomposition breaks a data-set up into a set of
singular values, which signify the magnitude characteristics, and a pair of singular
vector matrices:

A = U Σ V^H
Analogously, in C2DM the data-set is decomposed into a pair of orthogonal polynomial
sets and a moment matrix. Below is the corresponding representation for an N × M
matrix, where n is the highest order of moments to be computed:

A_(N×M) = OP_(N×n) τ_(n×n) OP^T_(M×n)    (4.5)
If, instead of applying the orthogonal polynomials to the data-set directly, the singular
vectors obtained from SVD are subjected to moment computation, then it can be shown that
the resulting left and right moment matrices can be pre- and post-multiplied with the
singular value matrix to generate the 2-D moment matrix τ in Eqn. 4.5. C1DA and C1DB
are then variations of C2DM with one of the polynomial and moment sets multiplied into
the singular value matrix to generate the 1-D moments.
U_(N×N) = OP_(N×n) M1_(n×N)

V_(M×M) = OP_(M×n) M2_(n×M)

therefore,

τ_(n×n) = M1_(n×N) Σ M2^H_(n×M)    (4.6)
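The equivalence stated in Eqns. 4.5 and 4.6 is easy to verify numerically. The NumPy sketch below uses a QR-orthonormalised Vandermonde basis as an assumed stand-in for the discrete Chebyshev polynomials (this is not the thesis code) and checks that the 2-D moments computed directly agree with M1 Σ M2^H built from the singular vectors.

```python
import numpy as np

def poly_basis(m, n):
    # hypothetical orthonormal polynomial basis (stand-in for the
    # discrete Chebyshev polynomials of Appendix B.5)
    x = np.linspace(-1.0, 1.0, m)
    Q, _ = np.linalg.qr(np.vander(x, n + 1, increasing=True))
    return Q  # m x (n+1), orthonormal columns

rng = np.random.default_rng(3)
N, M, n = 40, 30, 9
A = rng.standard_normal((N, M))

OPn, OPm = poly_basis(N, n), poly_basis(M, n)   # OP_(N×n), OP_(M×n)
tau = OPn.T @ A @ OPm                           # 2-D moments (Eqn 4.5)

U, s, Vh = np.linalg.svd(A)                     # A = U Sigma V^H
Sigma = np.zeros((N, M)); np.fill_diagonal(Sigma, s)
M1 = OPn.T @ U                                  # moments of left singular vectors
M2 = OPm.T @ Vh.conj().T                        # moments of right singular vectors
tau2 = M1 @ Sigma @ M2.conj().T                 # Eqn 4.6

print(np.allclose(tau, tau2))   # True
```

The identity holds exactly because M1 Σ M2^H = OP^T U Σ V^H OP = OP^T A OP, with no truncation involved.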
4.6 Variations of the SVD based v-metric
In order to include the entire data-set in the feature comparison, as against using only the
primary singular values, a few variations of the existing SVD based v-metric can
be employed.
1. In order to include the effect of the secondary and the further smaller singular
values, the summation of all the singular values at each temporal point can be used
to complete the feature set instead of using only the primary singular values.
2. Instead of using the summation directly, the square root of the sum of squares
of the singular values at each temporal point can be used as the final feature set.
Effectively, this is equivalent to taking the autopower of the data-sets at each spatial
reference and adding them together.
3. The reshaped dataset used in the moment descriptor method can be subjected to
SVD to generate the feature set for comparison. This algorithm has a shortcoming
in that the length of the feature set being compared is limited by the smaller of the
Metric     Original   Summation   Autopower Summation   Reshaping
VehicleB   0.633      0.648       0.634                 0.694
VehicleC   0.853      0.925       0.905                 0.922
VehicleD   0.841      0.946       0.915                 0.901
VehicleE   0.866      0.894       0.885                 0.918
Table 4.4: Modifications on SVD based v-metric
reshaped spatial vector and the temporal spread of the data. If the spatial spread
is insufficient, the feature set would be too small to get a relevant margin estimate.
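The three variations can be sketched as follows. This is an illustrative NumPy fragment under assumed array shapes: `sv` holds the singular values at each temporal point and `D2` is the reshaped data-set.

```python
import numpy as np

def variant_features(sv, D2, variant):
    """sv: singular values per temporal point, shape (Ns, Nf), rows sorted
    descending. D2: reshaped data-set, shape (Ns2, Nf), for the reshaping
    variant. Returns the feature vector compared between two data-sets."""
    if variant == "original":
        return sv[0, :]                        # primary singular value only
    if variant == "summation":
        return sv.sum(axis=0)                  # sum of all singular values
    if variant == "autopower":
        return np.sqrt((sv ** 2).sum(axis=0))  # root-sum-square of singular values
    if variant == "reshaping":
        return np.linalg.svd(D2, compute_uv=False)  # limited by min(Ns2, Nf)
    raise ValueError(variant)

rng = np.random.default_rng(4)
sv = np.sort(rng.random((4, 100)), axis=0)[::-1]  # 4 singular values x 100 points
D2 = rng.standard_normal((6, 100))
for v in ("original", "summation", "autopower", "reshaping"):
    print(v, variant_features(sv, D2, v).shape)
```

Since the singular values are non-negative, the summation feature always dominates the autopower feature, which in turn dominates the primary singular value alone; this is consistent with the original metric giving the most conservative margin.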
(a) Summation v-metric (b) Autopower v-metric (c) Reshaping v-metric
Figure 4.17: Variations on the SVD based v-metric
Figure 4.17 shows the margins from the three modified SVD based v-metrics. It can
be inferred that the original SVD based v-metric underestimates the margin as compared
to the other methods, and hence gives a more conservative estimate when determining a
distance from failure. It is also evident that, much as with C2DM and C1DA, the
uncertainty information is averaged out in the reshaping based SVD v-metric. Table 4.4
lists the margin values for the aforementioned v-metrics for all the available
transmissibility data-sets. The respective plots have been included in Appendix C.
4.7 Regression through origin for the v-metrics
All the linear regressions employed thus far to derive a margin estimate for the v-metrics
include the intercept in the estimation of the least squares best fit line. However, it is
perhaps physically more relevant for the linear best fit line to be one that passes
Fit        NLS     RTO
VehicleB   0.680   0.740
VehicleC   0.862   0.898
VehicleD   0.817   0.866
VehicleE   0.931   0.955
Table 4.5: Comparison, C1DB: RTO and NLS
Fit        NLS     RTO
VehicleB   0.634   0.684
VehicleC   0.854   0.902
VehicleD   0.842   0.866
VehicleE   0.866   0.903
Table 4.6: Comparison, SVD based v-metric: RTO and NLS
through the origin. In other words, intuitively, a y = mx line provides a more usable
estimate for the margin than a y = mx + c line.
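The difference between the two fits can be seen in a toy NumPy example (illustrative only; the thesis RTO implementation itself is the MATLAB function in Appendix B).

```python
import numpy as np

def rto_slope(x, y):
    """Regression through the origin: minimising sum((y - m*x)^2) gives
    m = sum(x*y) / sum(x^2), the same formula as the Appendix B code."""
    return np.sum(x * y) / np.sum(x ** 2)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 0.9 * x                      # data lying exactly on y = m*x
m_rto = rto_slope(x, y)
m_ols, c_ols = np.polyfit(x, y, 1)
print(m_rto, m_ols, c_ols)       # both slopes 0.9, intercept ~0
```

For data that genuinely passes through the origin the two slopes coincide; they diverge when the ordinary fit absorbs part of the trend into the intercept.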
In order to verify whether there is any significant difference between the aforementioned
fits, the margin estimates for C1DB were made using a regression through origin (RTO)
instead of a normal linear regression fit (NLS). Table 4.5 compares the margins from
the two estimates. A small code written to implement RTO has been included in
Appendix B. Other than for vehicle B, the slopes estimated for all the vehicles are
close using both NLS and RTO, suggesting that the margins estimated by the two methods
are quite similar. Extending the test to the other metrics yields similar results;
Table 4.6 compares RTO and NLS for the existing SVD based v-metric.
Chapter 5
Conclusion
Various orthogonal polynomials have been explored as possible kernel functions to extract
features from data-sets for comparison. Three possible v-metrics have been proposed
which use Chebyshev orthogonal polynomials as basis functions to generate moments
which are then used as features for comparison.
These metrics have been compared with the existing SVD based v-metric, and a couple
of variations of the existing SVD based v-metric have been tested. It can be concluded
that the results obtained for margin computation from all the metrics are comparable,
although there is still a need for a well-defined means of estimating uncertainty.
The SVD based v-metric is computationally the least expensive; C2DM is also fairly
fast, while C1DA and C1DB take a little more time. The biggest advantage of the SVD
based v-metric is that, unlike the other three moment based metrics, it compares entities
that have the same physical units as the original data-set.
One advantage of the moment based metrics over the original SVD metric is that they
qualitatively include the complete data-set in the final comparison, whereas the SVD
based v-metric does not consider the second and further smaller singular values. The
variations on the SVD based v-metric have been proposed in order to remedy this and
to formulate a comparison that includes a representation of the complete data-set.
5.1 Future work recommendations
The relative standard deviation is an insufficient means of estimating an uncertainty
bound, especially as there is no readily apparent way to determine which metric's estimate
is physically relevant. Several papers in this area discuss uncertainty quantification
through a maximum likelihood type characterization, treating the measured data-sets as
random multivariates; this method can be explored further. How the linear regression
type of modeling should be modified to suit the v-metric problem, however, poses an
interesting challenge.
The uncertainty bound needs to be represented in terms of confidence intervals based
upon the variability around the margin control axis. It should be described in terms
of the commercially accepted nσ notation to obtain a more generally acceptable
understanding of the qualitative uncertainty bounds being predicted by the model.
In most of the test cases in this project, only one type of data-set has been used
for checking the outputs from the metrics. Both the SVD and the moment based metrics
need to be subjected to more kinds of data-sets, having different types of underlying
characteristics, to validate and verify their effectiveness as v-metrics.
There are various other kernel functions that could be employed as a basis to acquire
features/signatures from the data-sets, similar to the ones used in this thesis. They can
be explored to determine whether any of those methods yields a better estimate of, or
insight into, the QMU aspects as compared to the metrics proposed. Examples include
Fourier-Mellin transforms, and Laguerre and Gegenbauer polynomials.
5.1.1 Principal Orthogonal Polynomial Based v-metric
As has been discussed in the previous chapter, unlike the SVD based v-metrics, where the
feature set is arranged in order of significance (magnitude), there is no such regularity
in the positioning of the moments in the final feature set of any of the moment based
v-metrics. As a workaround for this shortcoming, it is possible to sort the degrees
of freedom in the spatial domain of the data in such a way that the biggest responses are
weighted the highest and hence give way to a feature set in which:
- The moments are sorted in their order of significance.
- A smaller number of moments captures most of the magnitude characteristics of the
data, hence reducing the size of the feature set.
One way to achieve this has been listed below:
- Take the autopower of the two data-sets and add them.
- Sort them such that the biggest magnitude characteristics common to both data-sets
are grouped together. All the locations where the responses are small in one or both
data-sets are sorted to the tail of the information vector at each temporal point.
- Apply the same sorting to the individual data-sets and offset the information vectors
such that the x-axis passes through the mean of the responses constituting each vector.
- Subject the data-sets to spatial decomposition using C1DA or C1DB.
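The ordering and offsetting steps above can be sketched in NumPy as follows (an illustrative translation under assumed shapes; the actual MATLAB implementation is the `oopsingle` function in Appendix B):

```python
import numpy as np

def order_dofs(a, b):
    """Sort the d.o.f. at each temporal point by combined magnitude
    (largest first), then offset each column by half its range, mirroring
    the ordering step of Appendix B.7."""
    a, b = np.abs(a), np.abs(b)
    idx = np.argsort(-(a + b), axis=0)        # biggest combined responses first
    a1 = np.take_along_axis(a, idx, axis=0)
    b1 = np.take_along_axis(b, idx, axis=0)
    a1 = a1 - (a1.max(axis=0) - a1.min(axis=0)) / 2
    b1 = b1 - (b1.max(axis=0) - b1.min(axis=0)) / 2
    return a1, b1, idx

rng = np.random.default_rng(5)
a = rng.standard_normal((50, 8))   # 50 d.o.f. x 8 temporal points
b = rng.standard_normal((50, 8))
a1, b1, idx = order_dofs(a, b)
print(a1.shape, idx.shape)         # (50, 8) (50, 8)
```

Applying the same index `idx` to both data-sets keeps the comparison consistent, so the low-order moments concentrate on the largest common responses.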
The method arguably increases the accuracy of the reconstruction for the same number
of moments computed, as against the moment based v-metric with no such ordering, as
shown in Figure 5.1. The blue curve represents the original data-set and the green one
the reconstruction, for 100 Chebyshev 1-D moments. A method of comparing the moment
sets that yields a margin estimate consistent with the other v-metrics has not been
found. As such, there is a need to investigate the method further for the different types
of possible d.o.f. ordering or moment comparison that can be used to develop a v-metric.
A sample code for the ordering employed has been included in Appendix B.
(a) Without d.o.f ordering (b) With d.o.f ordering
Figure 5.1: Reconstruction comparison
Appendices
Appendix A
Transmissibility margin plots
(a) SVD-metric (b) C2DM
(c) C1DA (d) C1DB
Figure A.1: Vehicle C transmissibility
(a) SVD-metric (b) C2DM
(c) C1DA (d) C1DB
Figure A.2: Vehicle D transmissibility
(a) SVD-metric (b) C2DM
(c) C1DA (d) C1DB
Figure A.3: Vehicle E transmissibility
Appendix B
Final codes
B.1 Legendre Moments
%Legendre Polynomial
function[Drec]=legendre(x,y,D,k);
%{
D: Discrete "image" intensity
k: Highest order of moments to be computed
x,y: Image size (no. of pixels along principal image axes)
Drec: Reconstructed dataset
Note that the formulation is not a zeroth order approximation but
a recursive development to compute exact Legendre moments. A
description can be found in Reference [24].
%}
%Transforming the coordinates to between [-1,1] for both x & y axes
delx=2/length(x);dely=2/length(y);
for ii=1:length(x)
xx(1,ii)=-1+(ii-0.5)*delx;
end;
for ii=1:length(y)
yy(1,ii)=-1+(ii-0.5)*dely;
end;
n=10;
xxx=-1:delx/n:1;yyy=-1:dely/n:1;
%Computing Legendre polynomials through a recursive relationship
Px(1,1:length(xxx))=1;Px(2,:)=xxx;
Py(1,1:length(yyy))=1;Py(2,:)=yyy;
for ii=3:k+1
Px(ii,:)=(2*(ii-2)+1)/(ii-2+1).*xxx.*Px(ii-1,:)-(ii-2)/(ii-2+1)...
.*Px(ii-2,:);
Py(ii,:)=(2*(ii-2)+1)/(ii-2+1).*yyy.*Py(ii-1,:)-(ii-2)/(ii-2+1)...
.*Py(ii-2,:);
end;
%Using the above recursive relation to compute Hpq
Ix(k+1,length(xx))=0;Iy(k+1,length(yy))=0;
for ii=1:length(xx)
U1=xx(ii)-delx/2;U2=xx(ii)+delx/2;
[u1,v1]=min(abs(xxx-U1));
[u2,v2]=min(abs(xxx-U2));
for jj=2:k+1
Ix(jj,ii)=(2*(jj-1)+1)/(2*(jj-1)+2)*((xxx(v2)*Px(jj,v2)...
-Px(jj-1,v2))-(xxx(v1)*Px(jj,v1)-Px(jj-1,v1)));
end;
end;
for ii=1:length(yy)
U1=yy(ii)-dely/2;U2=yy(ii)+dely/2;
[u1,v1]=min(abs(yyy-U1));
[u2,v2]=min(abs(yyy-U2));
for jj=2:k+1
Iy(jj,ii)=(2*(jj-1)+1)/(2*(jj-1)+2)*((yyy(v2)*Py(jj,v2)...
-Py(jj-1,v2))-(yyy(v1)*Py(jj,v1)-Py(jj-1,v1)));
end;
end;
L(k+1,k+1)=0;
%Computing exact Legendre moments
for iii=2:k+1
for jjj=2:k+1
for ii=1:length(x)
for jj=1:length(y)
L(iii,jjj)=L(iii,jjj)+Ix(iii,ii)*Iy(jjj,jj)*D(ii,jj);
end;
end;
end;
end;
for iii=1:k+1
for ii=1:length(x)
for jj=1:length(y)
L(iii,1)=L(iii,1)+Ix(iii,ii)*D(ii,jj)/length(yy);
end;
end;
end;
L(1,1)=0;
for jjj=1:k+1
for ii=1:length(x)
for jj=1:length(y)
L(1,jjj)=L(1,jjj)+Iy(jjj,jj)*D(ii,jj)/length(xx);
end;
end;
end;
%Reconstruction from Legendre moments
P1x(1,1:length(xx))=1;P1x(2,:)=xx;
P1y(1,1:length(yy))=1;P1y(2,:)=yy;
for ii=3:k+1
P1x(ii,:)=(2*(ii-2)+1)/(ii-2+1).*xx.*P1x(ii-1,:)-(ii-2)/(ii-2+1)...
.*P1x(ii-2,:);
P1y(ii,:)=(2*(ii-2)+1)/(ii-2+1).*yy.*P1y(ii-1,:)-(ii-2)/(ii-2+1)...
.*P1y(ii-2,:);
end;
Drec(length(xx),length(yy))=0;
for ii=1:length(xx)
for jj=1:length(yy)
for iii=0:k
for jjj=0:iii
Drec(ii,jj)=Drec(ii,jj)+L(iii-jjj+1,jjj+1)...
*P1x(iii-jjj+1,ii)*P1y(jjj+1,jj);
end;
end;
end;
end;
B.2 Zernike Moments
%Computing ZMs using GMs
function[Z,A,phi,Drec]=ZMDrecuexact(D,k)
%{
D: Discrete "image" intensity
k: Highest order of moments to be computed
Drec: Reconstructed dataset
Z: Zernike moment matrix
A: Magnitude of moments
phi: Phase
The method adopted to obtain a stable set of exact Zernike moments is
described in Reference [18], algorithm 5.
%}
%Precomputing the factorial terms for the radial polynomial
%(the input argument k is used directly; a hard-coded k=10 has been removed)
fac(k+1,1)=0;fac(1,1)=1;
for ii=1:k
fac(ii+1)=fac(ii,1)*ii;
end;
%Creating a square that includes the original image
%{
This step is normally taken to avoid any pixels with relevant
information falling in a region that is on the border of the unit
circle domain.
%}
[a,b]=size(D);
% a=a+1;b=b+1;
aa=2*max(a,b);
DD=zeros(aa,aa);
for ii=ceil((aa-a)/2)+1:ceil((aa-a)/2)+a
for jj=ceil((aa-b)/2)+1:ceil((aa-b)/2)+b
DD(ii,jj)=D(ii-ceil((aa-a)/2),jj-ceil((aa-b)/2));
end;
end;
%Creating a mesh of the coordinates for exact computation
%Writing down weights for Gauss quadrature discrete integration
%n=5 for the quadrature(To remove discretization error)
w1(1,5)=0;w2(1,5)=0;
w1(1)=-0.9061798459;w1(2)=-0.5384693101;
w1(3)=0;w1(4)=-w1(2);w1(5)=-w1(1);
w2(1)=0.2369268851;w2(2)=0.4786286705;
w2(3)=0.5688888889;w2(4)=w2(2);w2(5)=w2(1);
[W21,W22]=meshgrid(w2,w2);
%Generating a mesh for the above
x1=linspace(-1,1,aa);
for ii=1:aa
x(5*(ii-1)+1:5*ii)=w1/aa+x1(ii);
end;
y=x;
[Y,X]=meshgrid(x,y);
r=(X.^2+Y.^2).^.5; %Radial position
% rmax=max(max(r));
% r=r/rmax;
ab=aa*5;
%Computing the Zernike moments and reconstruction
rr=zeros(ab,ab,k+1);
for ii=0:k
rr(:,:,ii+1)=r.^ii;
end;
r=r.*(r<=1);
t=atan2(Y,X); %Angular position
t=t.*(r<=1);
cnt=(ab-1)/2;
H1=zeros(k+1,k+1);
H2=H1;H3=H1;
for ii=0:k
for jj=0:k
if ii-jj~=0
H3(ii+1,jj+1)=-4*(jj+2)*(jj+1)/((ii+jj+2)*(ii-jj));
H2(ii+1,jj+1)=H3(ii+1,jj+1)*(ii+jj+4)*(ii-jj-2)/4/(jj+3)+jj+2;
H1(ii+1,jj+1)=(jj+4)*(jj+3)/2-(jj+4)*H2(ii+1,jj+1)...
+H3(ii+1,jj+1)*(ii+jj+6)*(ii-jj-4)/8;
end;
end;
end;
%Computing the Zernike polynomials for the unit-circle domain
%{
A cell structure has been used to reduce memory allocation to
the polynomial and moment matrices and hence better the
computation speed.
%}
odr='odr';azi='azi';Vnm='Vnm';count=0;
for ii=0:k
z=zeros(ab,ab,k+1);
for jj=-ii:0
if mod((ii-abs(jj)),2)==0
count=count+1;
% z=zeros(ab,ab);
if ii==abs(jj)
z(:,:,jj+k+1)=rr(:,:,abs(jj)+1);
elseif ii-abs(jj)==2
z(:,:,jj+k+1)=ii*rr(:,:,abs(jj)+3)-(ii-1)...
*rr(:,:,abs(jj)+1);
else
z(:,:,jj+k+1)=H1(ii+1,abs(jj)+1)...
*z(:,:,jj+k-3)+(H2(ii+1,abs(jj)+1)+...
H3(ii+1,abs(jj)+1)./rr(:,:,3)).*z(:,:,jj+k-1);
end;
vz=squeeze(z(:,:,jj+k+1)).*exp(1i*jj*t).*(r~=0)/cnt;
V(count)=struct('odr',ii,'azi',jj,'Vnm',vz);
end;
end;
end;
clear vz z r t;
%Computing Zernike moments
Z=zeros(k+1,2*k+1);
for mm=1:count
ii=V(mm).odr;
jj=V(mm).azi;
vv=V(mm).Vnm;
for qq=1:aa
for rr=1:aa
Vv(qq,rr)=sum(sum(W21.*W22.*conj(vv(5*(qq-1)...
+1:5*qq,5*(rr-1)+1:5*rr))));
Vvcon(qq,rr)=sum(sum(W21.*W22.*(vv(5*(qq-1)...
+1:5*qq,5*(rr-1)+1:5*rr))));
end;
end;
Z(ii+1,jj+k+1)=sum(sum(DD.*Vv))*(ii+1)*4/pi/pi;
Z(ii+1,abs(jj)+k+1)=sum(sum(DD.*Vvcon))*(ii+1)*4/pi/pi;
end;
clear vv Vv Vvcon;
A=abs(Z);
phi=angle(Z)*180/pi;
%Reconstruction using Zernike moments
Drec=zeros(aa,aa);
for mm=1:count
ii=V(mm).odr;
jj=V(mm).azi;
vv=V(mm).Vnm;
for qq=1:aa
for rr=1:aa
Vv(qq,rr)=vv(5*(qq-1)+3,5*(rr-1)+3)*5;
Vvcon(qq,rr)=conj(vv(5*(qq-1)+3,5*(rr-1)+3)*5);
end;
end;
if jj~=0
Zzz=Z(ii+1,abs(jj)+k+1).*Vvcon;
else
Zzz=0;
end;
Drec=Drec+Z(ii+1,jj+k+1).*Vv+Zzz;
end;
clear vv Vv Vvcon Zzz;
Drec=Drec(ceil((aa-a)/2)+1:ceil((aa-a)/2)+a,ceil((aa-b)/2)...
+1:ceil((aa-b)/2)+b);
B.3 Hahn moments
%Normalized Hahn moment computation
function[H,Drec]=Hahn(D,k,u,v)
%{
D: Discrete image intensity function
k: Highest order of moments to be retained
u & v: Hahn parameters to shift weightage about the principal axes
Formulation: Reference [12]
%}
N=max(size(D,1),size(D,2));
x=0:1:N-1;
xx=0:size(D,1)-1;
yy=0:size(D,2)-1;
%Generating Hahn orthogonal polynomials
fac=zeros(2*N+1,1);fac(1,1)=1;
for ii=1:2*N
fac(ii+1)=fac(ii,1)*ii;
end;
%Computing the norm for normalization
w=zeros(1,N);
for ii=0:N-1
w(ii+1)=fac(2*N+u+v-ii)/(fac(N+u+v-ii)*(2*N+u+v-2*ii-1))...
*1/(fac(N+u-ii)*fac(N+v-ii)*fac(ii+1)*fac(N-ii));
end;
%Computing the weight for normalization
rho=zeros(N,N);
for jj=0:N-1
rho(1,jj+1)=1/(fac(u+1)*fac(N+v)*fac(N));
end;
for ii=1:N-1
rho(ii+1,:)=(N-x(ii+1))*(N+v-x(ii+1))/(x(ii+1)*(x(ii+1)+u))*rho(ii,:);
end;
%Recursive relation to compute Hahn polynomials
H=zeros(N,N);
H(1,:)=(rho(:,1).'/w(1)).^.5;
H(2,:)=((N+v-1)*(N-1)-(2*N+u+v-2)*x).*(rho(:,2).'/w(2)).^.5;
for ii=2:N-1
A=-(ii*(2*N+u+v-ii))/((2*N+u+v-2*ii+1)*(2*N+u+v-2*ii));
B=x-(2*(N-1)+v-u)/4-(u^2-v^2)*(2*N+u+v)/4/(2*N+u+v-2*ii+2)...
/(2*N+u+v-2*ii);
C=(N-ii+1)*(N-ii+u+1)*(N-ii+v+1)*(N-ii+u+v+1)/...
(2*N+u+v-2*ii+2)/(2*N+u+v-2*ii+1);
H(ii+1,:)=(B.*H(ii,:)*(w(ii)/w(ii+1))^.5+C.*H(ii-1,:)...
*(w(ii-1)/w(ii+1))^.5)/A;
end;
%Orthogonality check
for ii=1:N
for jj=1:N
A(ii,jj)=sum(H(ii,:).*H(jj,:));
end;
end;
%Computing Hahn moments:
H1=H(:,1:size(D,1));
H2=H(:,1:size(D,2));
Q=H1*D*H2.';
%Reconstruction and picking the size of the moment matrix
h1=H1(1:k+1,:);
h2=H2(1:k+1,:);
q=Q(1:k+1,1:k+1);
Drec=zeros(size(D,1),size(D,2));
Drec=h1.'*q*h2;
B.4 Krawtchouk moments
%Normalized Krawtchouk moment computation
function[q,Drec]=Kraw(D,k,p)
%{
D: Discrete image intensity function
k: Highest order of moments to be retained
p: Krawtchouk parameter to shift weightage about the principal axes,
by default set to 0.5
Formulation: Reference [12]
%}
N=max(size(D,1),size(D,2));
x=0:1:N;
xx=0:size(D,1)-1;
yy=0:size(D,2)-1;
%Generating Krawtchouk orthogonal polynomials
fac=zeros(N+1,1);fac(1,1)=1;
for ii=1:N
fac(ii+1)=fac(ii,1)*ii;
end;
%Computing the weight for normalization
w=zeros(1,N+1);
for ii=1:N+1
w(ii)=fac(N+1)/(fac(N-x(ii)+1)*fac(x(ii)+1))*p^x(ii)*(1-p)^(N-x(ii));
end;
%Computing the norm for normalization
rho=zeros(1,N+1);
for ii=0:N
rho(ii+1)=((1-p)/p)^(ii)*fac(N-ii+1)*fac(ii+1)/fac(N+1);
end;
%Recursive relation to compute Krawtchouk polynomials
K=zeros(N+1,N+1);
K(1,:)=(w/rho(1)).^.5;
K(2,:)=(1-x/N/p).*(w/rho(2)).^.5;
for ii=1:N-1
A=(p*(N-ii)/(1-p)/(ii+1))^.5;
B=(p/(1-p))*((N-ii)*(N-ii+1)/ii/(ii+1))^.5;
K(ii+2,:)=A*((N*p-2*ii*p+ii-x)/((N-ii)*p)).*K(ii+1,:)-B*((ii*(1-p))...
/((N-ii)*p)).*K(ii,:);
end;
%Orthogonality check
for ii=1:N+1
for jj=1:N+1
A(ii,jj)=sum(K(ii,:).*K(jj,:));
end;
end;
%Computing Krawtchouk moments:
K1=K(:,1:size(D,1));
K2=K(:,1:size(D,2));
Q=K1*D*K2.';
%Reconstruction and picking the moment size
k1=K1(1:k+1,:);
k2=K2(1:k+1,:);
q=Q(1:k+1,1:k+1);
%Optional thresholding of small moments (disabled):
%q=q.*(abs(q)>.1*max(max(abs(q))));
Drec=zeros(size(D,1),size(D,2));
Drec=k1.'*q*k2;
B.5 Chebyshev 2-D moments
function[Drec,PP]=chebyshev(D,k)
%{
D: Discrete image intensity function
k: Highest order of moments to be retained
PP: Moment set
Drec: Reconstructed image intensity function
Formulation: Reference [12]
%}
%Mapping the data to within a unit square
M=size(D,1);N=size(D,2);
O=max(M,N);OO=O-1; %OO is the edge of the square
xi=0:1:OO;yi=xi; %Creating a unit square
T(k+1,O)=0; %Initialising Chebyshev polynomial matrix
T(1,:)=1/O^.5;
T(2,:)=(2*xi+1-O)*(3/O/(O^2-1))^.5;
%Computing the recurrence coefficients for Chebyshev polynomials
a1(1,k+1)=0;a2=a1;a3=a1;
for ii=2:k
coeff=((4*ii^2-1)/(O^2-ii^2))^.5;
a1(1,ii+1)=2/ii*coeff;
a2(1,ii+1)=(1-O)/ii*coeff;
a3(1,ii+1)=(ii-1)/ii*((2*ii+1)/(2*ii-3)*(O^2-(ii-1)^2)...
/(O^2-ii^2))^.5;
end;
for ii=2:k
T(ii+1,:)=(a1(ii+1)*xi+a2(ii+1)).*T(ii,:)-a3(ii+1)*T(ii-1,:);
end;
%Computing orthonormal Chebyshev moments
PP(k+1,k+1)=0;Drec(O,O)=0;
T1=T(:,1:M);T2=T(:,1:N);
PP=T1*D*T2.';
%Reconstruction
Drec=T1.'*PP*T2;
B.6 Chebyshev 1-D moments
function[Drec,PP]=chebsingle(D,k)
%{
D: Discrete image intensity function
k: Highest order of moments to be retained
PP: Moment set
Drec: Reconstructed image intensity function
%}
%Mapping the data to within a unit length vector
M=length(D);
MM=M-1;
xi=0:1:MM;
T=zeros(k+1,M); %Initialising Chebyshev polynomial vector
T(1,:)=1/M^.5;
T(2,:)=(2*xi+1-M)*(3/M/(M^2-1))^.5;
%Computing the recurrence coefficients for Chebyshev polynomials
a1=zeros(1,k+1);a2=a1;a3=a1;
for ii=2:k
coeff=((4*ii^2-1)/(M^2-ii^2))^.5;
a1(1,ii+1)=2/ii*coeff;
a2(1,ii+1)=(1-M)/ii*coeff;
a3(1,ii+1)=(ii-1)/ii*((2*ii+1)/(2*ii-3)*(M^2-(ii-1)^2)/(M^2-ii^2))^.5;
end;
for ii=2:k
T(ii+1,:)=(a1(ii+1)*xi+a2(ii+1)).*T(ii,:)-a3(ii+1)*T(ii-1,:);
end;
A=zeros(k+1,k+1);
% %Orthogonality check
%
% for ii=1:k+1
% for jj=1:k+1
% A(ii,jj)=sum(T(ii,:).*T(jj,:));
% end
% end;
%Computing orthonormal Chebyshev moments (vectorized form of the
%original elementwise loops)
PP=T*D(:);
%Reconstruction
Drec=(T.'*PP).';
B.7 Principal Orthogonal Polynomials function
%Ordering function
function[aa11rec,bb11rec,aaa11,bbb11,Q1,Q2]=oopsingle(a,b,n)
a=(a.*conj(a)).^.5;
b=(b.*conj(b)).^.5;
A=a+b;
[B,v]=sort(A,1,'descend');
a1=a-a;
b1=a1;
for ii=1:size(A,2)
a1(:,ii)=a(v(:,ii),ii);
b1(:,ii)=b(v(:,ii),ii);
end;
a11=zeros(1,size(A,2));
b11=a11;
for ii=1:size(A,2)
a11(ii)=(max(a1(:,ii))-min(a1(:,ii)))/2;
aa1(:,ii)=a1(:,ii)-a11(ii);
b11(ii)=(max(b1(:,ii))-min(b1(:,ii)))/2;
bb1(:,ii)=b1(:,ii)-b11(ii);
end;
for ii=1:size(aa1,2)
[aa11,q1]=chebsingle(aa1(:,ii),n);
aa1rec(:,ii)=aa11;
Q1(:,ii)=q1;
[u1,v1]=sort(q1,'descend');
U1(:,ii)=u1;
V1(:,ii)=v1;
end;
for ii=1:size(bb1,2)
[bb11,q2]=chebsingle(bb1(:,ii),n);
bb1rec(:,ii)=bb11;
Q2(:,ii)=q2;
[u2,v2]=sort(q2,'descend');
U2(:,ii)=u2;
V2(:,ii)=v2;
end;
for ii=1:size(A,2)
aaa11(v(:,ii),ii)=aa1(:,ii)+a11(ii);
bbb11(v(:,ii),ii)=bb1(:,ii)+b11(ii);
aa11rec(v(:,ii),ii)=aa1rec(:,ii)+a11(ii);
bb11rec(v(:,ii),ii)=bb1rec(:,ii)+b11(ii);
end;
B.8 Regression through origin function
function[m]=rto(x,y)
%Regression through origin
%{
let y = mx + error;
to minimise the least squares error one needs to minimise
sum((y-mx)^2),
which leads to the equation:
m = sum(x*y)/sum(x^2)
%}
%Minimising the least squares error for a line through origin.
num=sum(x.*y);
den=sum(x.^2);
m=num/den;
a=max(max(x),max(y));
A=0:a/10:a;
figure;
plot(x,y,x,x.*m,A,A);
axis square;
Appendix C
Modified SVD based v-metric plots
(a) Summation v-metric (b) Autopower v-metric (c) Reshaping v-metric
Figure C.1: Vehicle C transmissibility
Figure C.2: Vehicle D transmissibility. (a) Summation v-metric, (b) autopower v-metric, (c) reshaping v-metric.
Figure C.3: Vehicle E transmissibility. (a) Summation v-metric, (b) autopower v-metric, (c) reshaping v-metric.