1551-3203 (c) 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TII.2017.2668438, IEEE Transactions on Industrial Informatics
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, VOL. XX, NO. X. 2017 1
Accurate and Efficient Inspection of Speckle and Scratch Defects on Surfaces of Planar Products

Hui Kong, Jian Yang and Zhihua Chen

Abstract—We propose a unified framework for detecting defects in planar industrial products or planar surfaces of non-planar products based on a template-matching strategy. The framework includes three parts: automatic selection of a template image for a given test image, robust geometric alignment between the template and test images based on an approximate maximum clique approach, and an illumination-invariant image comparison method for defect detection in the aligned images. Experimental results on challenging image datasets demonstrate the excellent performance of the proposed framework.
Index Terms—Defect detection, image alignment, image matching
I. INTRODUCTION
With the development of modern manufacturing technologies, the demand for product inspection has grown. More efficient and accurate inspection systems are needed to find defects in mass-produced engineering products, such as parts for automobiles, washing machines and computers, printed circuit boards (PCBs), engine valves, internal cracks in cast parts, and the appearance of tablets or capsules.
In industrial inspection for defects on surfaces of planar
products, one commonly used framework is based on template
matching, where a defect-free template (model) of a specific
type of products is given in advance, and the image of each test
product is matched to the template image for defect detection.
A prerequisite of the template-based strategy is that the template and test images be well aligned. Figure 1
illustrates a general inspection scenario where all products to
be inspected are placed on a conveying belt, and carried one by
one over to a stationary camera for inspection. Note that, although both the camera and the light remain stationary, the relative pose between the camera and each test product is not exactly the same, because each test product can be arbitrarily placed on the conveying belt. Likewise, the relative pose between the light and each test product can differ. Therefore, even for products of the same model, the image intensity at the same surface point is generally not identical.
Obviously, if we apply the template matching strategy to
this scenario, we have to deal with two major issues: (1) the
Manuscript received Oct 26, 2016; revised Dec 30, 2016; accepted Jan 15, 2017.
Hui Kong and Jian Yang are with the School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu province, China 210094. E-mail: {konghui, csjyang}@njust.edu.cn
Zhihua Chen is with the Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China 200237. E-mail: [email protected]
Copyright (c) 2009 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected].
template image and each test image are usually related by an RST (rotation, scaling and translation) transformation; the two images cannot in general be well aligned when not enough corners (image features) are detected in both images.
(2) The corresponding pixels of the aligned template and test
images do not have the same intensities because the pose
of each specific product relative to the camera and light is
not exactly the same. This poses challenges to the subsequent
comparison stage (e.g., via image subtraction or correlation-
based matching). If both issues are addressed, we believe that
the template-matching strategy for defect detection is still a
good choice for this scenario.
In the literature, however, most works based on template matching [10], [26], [27], [30], [31], [32] focus only on the image comparison (matching) step, whereas the geometric alignment step receives much less attention. Some of these methods assume that images have been well aligned with special apparatus, or that repetitive patterns are available in the images; such repetitive patterns can be used as templates for matching, e.g., in detecting defects in wafer images. To the best of our knowledge, very few defect detection works mention the alignment step or propose an integrated framework for more general scenarios such as defect detection for products on a conveying belt.
To meet the demand of defect detection in a general
scenario, we propose a unified defect inspection framework
for planar industrial products or non-planar products with one
or multiple planar surfaces. The contributions of the proposed framework are as follows: (1) there is no need to change the model (reference) template after the inspection of one specific type of product is completed; our framework automatically selects the template image for the test products; (2) we propose a robust strategy to align the template and test images based on an approximate maximum clique method; (3) we propose a robust image comparison approach to find defects in the aligned test image.
The rest of the paper is organized as follows: related works are reviewed in Section II, the proposed framework is presented in Section III, and experimental results are reported in Section IV. The last section concludes the paper. The paper is best viewed in color.
II. RELATED WORKS
In the past decades, many automatic defect inspection
systems based on machine vision and image processing tech-
niques have been proposed. We divide the products to be
inspected into two categories according to the patterns of
their surfaces. One is the category of products that have
Fig. 1: The illustration of a very common inspection scenario
where all products to be inspected are placed on a conveying
belt and carried to one stationary camera for inspection.
periodic patterns, such as textiles or LCD images. For this kind of product, defects are usually treated as local abnormalities. Xie [1] provided a systematic overview of advances in texture inspection using computer vision and image processing techniques. Kumar [14] reviewed approaches to fabric defect detection. Kumar and Pang [13] proposed to use Gabor filters for defect detection in textured fabric. In LCD defect detection, Lu and Tsai [25], [22] proposed a singular value decomposition (SVD) approach to find defects in LCD images. Gan and Zhao [23] proposed to use the active contour model to find defective regions in LCD images. Li and Tsai [24] proposed a Hough transform-based non-stationary line detection approach to find line defects in low-contrast LCD images.
For general products that do not have periodic patterns, the
methodologies can be divided into three categories, which are
the methods based on machine learning, local abnormalities
and template matching, respectively. In the methods based
on machine learning, Jia et al. [3] proposed to employ
support vector machine to automatically learn complicated
defect patterns and locate defects. Liao et al. [4] developed a
flexible inspection system based on statistical learning which
is very effective for inspecting a variety of PCB defects.
Sannen et al. [5] presented a multilevel information fusion
approach for visual quality inspection. In [15], ICA bases are obtained to detect the presence of defects in solar cell images. Ghorai et al. [2] proposed a defect detection approach for hot-rolled flat steel products that localizes defects using a kernel support vector machine. Self-organizing neural
networks (SONNs) have been proven to have the capabilities
of unsupervised auto-clustering. In [28], an automatic wafer
inspection system based on a self-organizing neural network is
proposed. Feng [21] proposed an automatic visual inspection
system for detecting partially worn and completely missing
fasteners on rail tracks using probabilistic topic model.
The second category includes methods that treat defects as local abnormalities. To detect defects on rail tracks, Li and Ren [6], [20] proposed a local Michelson-like contrast (MLC) measure which notably improves the distinction between defects and background. Shen et al. [7] similarly enhance the defect appearance so that defects can be recognized more easily. Wavelet transforms are utilized to detect local abnormalities in [8] and [9]. Li and Tsai [30] proposed a
wavelet-based discriminant measure for defect inspection in
multicrystalline solar wafer images. A 2-D DWT approach is
proposed in [31] to extract a standard image from three defect
images. Ng proposed an automatic thresholding approach for
defect detection [17], which overcame the problem of the
Otsu method by segmentation. Li et al. [18] proposed a 3D
defect inspection framework for weld bead inspection using a
structured light-based vision inspection system. The advantage
of these methods is that they do not have to train on lots of
images and the disadvantage is that some distinctive patterns
may be mistakenly considered as defects.
The third category includes the methods that are based on
template matching. Detecting defects by matching the defect-
free template and the defective one is the most intuitive
way. Methods based on template matching vary. Normalized
cross correlation (NCC) has been used extensively for many
machine vision applications. [10] and [11] used the improved
NCC to inspect defects, respectively. Tsai and Yang [12]
proposed a quantile-quantile plot based pattern matching for
defect detection. An eigenvalue-based similarity measure pro-
posed by Tsai and Yang [26] uses a scattergram of two images
for PCB inspection. This method requires a noise-free envi-
ronment and perfect alignment. Besides, neural network can
also be utilized in some defect detection systems [13] and has
achieved a good experimental result. Tsai et al. [19] proposed
a dissimilarity measure based on the optical-flow technique
for surface defect detection, aiming at inspection of light-
emitting diode (LED) wafer die. Wang et al. [29] proposed
a partial information correlation coefficient (PICC) method to
improve the traditional normalized cross correlation coefficient
(TNCCC). The PICC uses the technique of significant points to
calculate the correlation coefficient to improve defect detection
rate when applied to image samples from the IC industry.
Zontak and Cohen [27] proposed a kernel-based approach to
multi-channel defect detection. Xie and Guan [33] proposed to
generate a template from the wafer image, and then calculated
the difference between a template and test image.
In general, two categories of methods are used for the
alignment of images of planar objects. One is region-based
and the other is feature-based. In the region-based alignment
method [34], an affine transform can be estimated between
the template and target images. However, the region-based
alignment method usually fails when the illumination con-
dition is different between template and target images. In
the feature-based alignment methods [35], key features, e.g.,
SIFT/SURF or ORB, are extracted from both template and
target images. The correspondences between these key features
are established and a robust outlier rejection scheme such as
RANSAC is applied to estimate the transformation parameters.
Our alignment method belongs to the feature-based category.
The advantage of the feature-based alignment methods over
the region-based ones is that the feature-based methods are
more robust to light variations.
In the real process of industrial-product defect inspection, products of the same model are inspected in succession. Usually, the corresponding model image should be automatically selected by the computer before the inspection of a batch of products of the same type begins. Thus, automatic selection of the model (template) image for the test product image is necessary for a fully automatic defect inspection system. Next, the registration of the test and template images is carried out. After
alignment, by comparing the aligned template and test images, we find that the difference image alone is not enough to accurately find defects. Therefore, we propose a preprocessing step that obtains an illumination-invariant image representation to find speckle defects, and we utilize the edge image to find scratch defects.
III. THE PROPOSED FRAMEWORK
A. Automatic template image selection
First, we introduce the template selection procedure for a
given test image. As shown in Fig. 1, the test-product image is captured by a camera pointing vertically downward and illuminated by four bar-shaped light sources. The bar-shaped light sources make defect areas (such as scratches) more salient and thereby make the detection task easier. Given a test product, its image and the corresponding template are generally related by an RST (rotation, scaling and translation) transformation.
To select the correct template image for each test image,
we train a bag-of-words model based on two different types
of corners and their feature representations. The first type of
corners are the ones which were proposed by Shi and Tomasi
[36] for tracking. The second type of corners are extracted
based on the salient blob method by Lindeberg [37]. The
first type of corners are mostly located at the locations which
correspond to the intersections of straight lines, while the
second type of corners correspond to the centers of some
salient blobs. The two types of features are complementary
in finding important image signatures, and can be detected
in a very fast manner. We extract the rotated BRIEF feature
[38] from each detected corner because of its efficiency and
invariance to rotation transformation.
We follow the standard bag-of-words procedure of feature extraction, vocabulary construction, image representation and classifier training. For classification, we train a linear
discriminant analysis (LDA) classifier based on the image
representation vector. The detailed procedure of automatic
template image selection is listed in Algorithm 1. Note that
the automatic template selection is needed only when the
inspection for one batch of same type of products is finished
and another type is waiting to be inspected.
B. Registration of the template and test images
Once we have chosen the template (model) image for the
test one, we need to register the two images through a 2D
geometric transformation. We have observed that the transfor-
mation between the two images can be well approximated by
an RST (rotation, scaling and translation) transform given the
imaging setup shown in Fig.1. To estimate the RST transform,
we propose a robust outlier rejection approach based on
an approximate maximal clique algorithm. Next, we give a
short introduction of the maximum clique problem, and then
we introduce our approximate maximum clique method in
estimating the RST parameters.
1) The maximum clique problem [39]: Let G = (V, E) be an undirected and weighted graph, with V = {1, 2, ..., n} being the vertex set and E ⊆ V × V being the edge set of G. For each vertex i ∈ V, a positive weight wi is associated with
Algorithm 1 A procedure for automatic template image selection based on the BoW model.
Input: k × c template images of c classes (k images per planar surface), Iij, i ∈ [1, c], j ∈ [1, k];
Output: A BoW model for template selection
1: Detect locations of corners and blob centers from all template images based on Shi's corner detector [36] and Lindeberg's blob detector [37], respectively, denoted by pi, i = 1, ..., n;
2: Extract BRIEF [38] features around pi, i = 1, ..., n, denoted by F = {f1, f2, ..., fn};
3: Apply hierarchical clustering to F to get m clusters rj, j = 1, ..., m;
4: Encode each template image Iij by assigning each BRIEF feature f to a specific cluster center r and counting the number of assigned features per cluster, represented by hij;
5: Learn a discriminant feature space, P, by linear discriminant analysis (LDA) with hij, i ∈ [1, c], j ∈ [1, k];
6: Embed hij into the learned feature space by h̃ij = P hij;
7: Given a test image I, apply Steps 4 and 6 to get the corresponding h̃;
8: Apply KNN to classify h̃;
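The encoding and classification steps of Algorithm 1 can be sketched in a few lines of NumPy. This is only an illustrative re-implementation, not the authors' code: `encode_bow` and `knn_classify` are our names, real descriptors would be binary BRIEF vectors matched by Hamming distance, and the LDA projection is assumed to be available as a precomputed matrix P.

```python
import numpy as np

def encode_bow(descriptors, vocab):
    # Step 4: assign each descriptor to its nearest vocabulary
    # center and count assignments per cluster.
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(vocab)).astype(float)
    return hist / max(hist.sum(), 1.0)   # normalized histogram h

def knn_classify(h_test, H_train, y_train, P=None, k=3):
    # Steps 6-8: optionally embed histograms with a (precomputed)
    # LDA projection P, then vote among the k nearest neighbours.
    if P is not None:
        h_test, H_train = P @ h_test, H_train @ P.T
    d = np.linalg.norm(H_train - h_test, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]
```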
Fig. 2: The illustration of maximum versus maximal cliques: the three cliques shown in red, green and blue are all maximal cliques, but only the red one is the maximum clique.
i, collected in a weight vector w ∈ R^n. The symmetric n × n matrix A_G = (a_ij), (i, j) ∈ V × V, where a_ij = 1 if (i, j) ∈ E is an edge of G and a_ij = 0 if (i, j) ∉ E, is called the adjacency matrix of G. For any node v, let N(v) = {j ∈ V : a_vj = 1} denote the neighborhood of v in G, i.e., the set of all nodes adjacent to v. A graph G = (V, E) is complete if all its vertices are pairwise adjacent, i.e., ∀i, j ∈ V with i ≠ j, we have (i, j) ∈ E. A clique C is a subset of V such that the induced subgraph G(C) is complete. The maximum clique problem asks for the clique of maximum cardinality (the cardinality of a set S is denoted by |S|):

ω(G) = max{|S| : S is a clique in G}   (1)
Note that the maximal clique is different from the maximum
clique. The maximal clique is a clique that is not a proper
subset of any other clique. The maximum clique is a maximal
clique that has the maximum cardinality. Figure 2 illustrates
the concepts of the maximal and maximum clique.
Many algorithms have been proposed to find the maximum clique in a graph. The exact approaches are computationally expensive; therefore, approximate maximum clique algorithms are sought for real-world
Fig. 3: The illustration of establishing correspondences by forward and backward matching. See text for more details.
Fig. 4: The illustration of the concept of approximate maxi-
mum clique: the red correspondences belong to the approxi-
mate maximum clique whereas the dark ones do not.
applications [39]. In this paper, we propose an approximate
maximum clique approach to estimate the RST transformation
parameters in a robust way.
2) Establishing correspondences between two sets of corners: Figure 4 illustrates the idea of utilizing the approximate maximum clique approach to estimate the transformation between the template and test images. The circles in the two images represent the detected corner locations based on the Shi-Tomasi corner detector [36] and the blob detector [37]. The correspondences between the two sets of corners are
linked by lines. Figure 3 illustrates how the correspondences are established for the detected corners in the model and test images. We take the corners p2 and p′2 as an example. A rotated BRIEF feature vector [38] is extracted for each corner in both images. In the forward matching process, the feature vector extracted at p2 is compared to the ones extracted at p′i, i = 1, ..., 6. The red arrow with a solid line represents the best forward match. In the backward matching process, the feature vector extracted at p′2 is compared to the ones extracted at pi, i = 1, ..., 6. The red arrow with a dotted line represents the best backward match. Since both matches are consistent, we regard p2 and p′2 as a pair of correspondences.
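The forward-backward check amounts to a mutual nearest-neighbour test. The sketch below is ours, not the authors' code; since the paper's rotated BRIEF descriptors are binary, Hamming distance is used on bit vectors:

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    # Hamming distance between every pair of binary descriptors.
    d = (desc_a[:, None, :] != desc_b[None, :, :]).sum(axis=2)
    fwd = d.argmin(axis=1)   # best forward match in B for each A
    bwd = d.argmin(axis=0)   # best backward match in A for each B
    # Keep only pairs where forward and backward matches agree.
    return [(int(i), int(j)) for i, j in enumerate(fwd) if bwd[j] == i]
```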
Figure 4 shows the established correspondences for all
the corners in both images. Note that some correspondences
might not be correct. Given a pair of corresponding corners, p = (p_x, p_y) and p′ = (p′_x, p′_y), they are related by an RST transformation if p and p′ satisfy

    [ p_x ]   [  s·cos(α)   s·sin(α)   t_x ] [ p′_x ]
    [ p_y ] = [ −s·sin(α)   s·cos(α)   t_y ] [ p′_y ]    (2)
    [  1  ]   [     0           0       1  ] [  1   ]
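As a concrete illustration, Eq. (2) maps a test-image point p′ to its template-frame location p. A small helper under the same sign convention (the function name is ours):

```python
import numpy as np

def apply_rst(pts, s, alpha, tx, ty):
    # p = s * R(alpha) * p' + t, with the sign convention of Eq. (2).
    c, si = np.cos(alpha), np.sin(alpha)
    R = np.array([[ s * c, s * si],
                  [-s * si, s * c]])
    return pts @ R.T + np.array([tx, ty])
```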
3) The proposed approximate maximum clique algorithm: We need two pairs of correspondences to determine the four parameters of the RST transformation exactly. With the correspondences established above, we could solve for the parameters in a least-squares manner if all correspondences were correct. However, erroneous correspondences usually cannot be avoided. Therefore, we have to seek a
Fig. 5: (a) the template and test images; (b) the enlarged local regions (highlighted in red boxes) of the test image contain defects such as single isolated dots and a scratch; (c) the established correspondences; (d) the correspondences that belong to the obtained approximate maximum clique.
robust parameter estimation approach that utilizes only the correct correspondences and ignores the wrong ones in the parameter estimation.
In Fig. 4, the correct correspondences are represented by red dotted lines, while the rest are erroneous. The common approach to such a problem is the Random Sample Consensus (RANSAC) algorithm. However, RANSAC is not robust when the number of inliers is less than 50% of the total number of correspondences: it then has to sample a large number of times to deal with this issue, which makes the estimation very inefficient.
The proposed approximate maximum clique algorithm meets this requirement in terms of efficiency and robustness. We first create a graph and its adjacency matrix based on the correspondences shown in Fig. 4. The details are listed in Algorithm 2. Note that the corners in the template image are represented by pi, i = 1, ..., n, and the corresponding corners in the test image are represented by p′i, i = 1, ..., n. In the constructed graph, each vertex represents a pair of corresponding corners; thus, the number of vertices equals the number of established correspondences. Next, we initialize an intermediate affinity matrix with zeros. The affinity matrix stores the similarity between any two correspondences. Specifically, we represent a correspondence by Li = {pi ↔ p′i}, i = 1, ..., n, where pi = (p_xi, p_yi) and p′i = (p′_xi, p′_yi). We calculate the distance between the corners pi and pj, denoted by d(pi, pj), and the distance between their counterparts p′i and p′j, denoted by d(p′i, p′j). The similarity between correspondences Li and Lj is calculated as the absolute difference between d(pi, pj) and d(p′i, p′j).
We may more easily understand the similarity between correspondences Li and Lj by re-examining Fig. 4. The RST transformation is rigid, and the distance between two corners in an image (called the intra-image corner distance) is preserved by the transformation. Therefore, for any two pairs of corresponding corners that are both correct, the intra-image corner distances should be approximately equal. For example, L1 (p1 ↔ p′1) and L3 (p3 ↔ p′3) are both correct correspondences in Fig. 4; therefore, the distance between p1 and p3 and that between p′1 and p′3 should be approximately equal. L2 (p2 ↔ p′2) is
Algorithm 2 A procedure for creating an undirected graph and its adjacency matrix based on the established correspondences of corners.
Input: n pairs of corner correspondences Li, i = 1, ..., n; a threshold for distance consistency, dt;
Output: An undirected graph, G = {V, E}, and its adjacency matrix, AJ
1: Let Li = {pi ↔ p′i}, where pi = (p_xi, p_yi) and p′i = (p′_xi, p′_yi) are the corresponding corners (Fig. 4);
2: Treat each correspondence as a vertex of G, V = {L1, L2, ..., Ln};
3: Create an n × n intermediate similarity matrix, S, and initialize S with zeros;
4: for i = 1 to n do
5:   for j = 1 to n do
6:     d(pi, pj) = ||(p_xi − p_xj, p_yi − p_yj)||2;
7:     d(p′i, p′j) = ||(p′_xi − p′_xj, p′_yi − p′_yj)||2;
8:     S(i, j) = |d(pi, pj) − d(p′i, p′j)|;
9:   end for
10: end for
11: AJ = (S ≤ dt);
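Algorithm 2 vectorizes naturally. The sketch below is our re-implementation, not the authors' code: vertex i is the correspondence P[i] ↔ Q[i], and two vertices are connected when their intra-image corner distances agree within dt (the paper treats the RST transform as rigid, i.e. distance-preserving). Zeroing the diagonal is our choice to avoid self-loops.

```python
import numpy as np

def consistency_graph(P, Q, dt):
    # Pairwise intra-image corner distances in each image.
    dP = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    dQ = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=2)
    S = np.abs(dP - dQ)            # similarity matrix S (step 8)
    AJ = (S <= dt).astype(int)     # adjacency matrix (step 11)
    np.fill_diagonal(AJ, 0)        # our choice: no self-loops
    return AJ
```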
Fig. 6: (a) and (b): the difference between the template and test images before and after alignment, respectively; (c) the enlarged highlighted area of (a); (d) the enlarged highlighted area of (b). Note that the illumination conditions of the template and test images are not exactly the same, which is why the pixel intensities in some areas (around the holes) still differ.
not a correct correspondence. The distance between p1 (or p3) and p2 and the one between p′1 (or p′3) and p′2 are quite different. By thresholding the similarity matrix S (step 11 of Algorithm 2), we obtain AJ, where AJ(i, j) = 1 means that Li and Lj are consistent with each other, and AJ(i, j) = 0 means that either Li or Lj is not a correct correspondence. In the constructed graph, G = {V, E}, AJ(i, j) = 1 means that there is an edge between vertex i and vertex j; otherwise, there is no such edge.
So far, we have constructed the graph G and its adjacency matrix AJ. Next, we propose an approximate maximum clique algorithm based on AJ to find a clique that is large enough to contain as many correct correspondences (inliers) as possible while simultaneously rejecting the false correspondences (outliers). The details of the approximate maximum
Algorithm 3 An approximate maximum clique algorithm
Input: G = {V, E}: an undirected graph; AJ: the graph's adjacency matrix;
Output: c: the obtained clique (a vector containing node indices);
1: M = sum(AJ, 1);
2: c = find(M == max(M(:)));
3: pNode = findPotentialNode(c, AJ);
4: i = 0;
5: while |pNode| > 0 do
6:   c = update_c(pNode, c, AJ);
7:   pNode = findPotentialNode(c, AJ);
8:   i = i + 1;
9: end while
Algorithm 4 The function findPotentialNode
Input: c: the current clique (a vector containing node indices); AJ: the graph's adjacency matrix
Output: pNode: a set of potential nodes that can be added to the current clique
1: pNode = AJ(:, c(1));
2: if |c| > 1 then
3:   for i = 2 to |c| do
4:     pNode = pNode & AJ(:, c(i));
5:   end for
6: end if
7: for i = 1 to |c| do
8:   pNode(c(i)) = 0;
9: end for
clique are listed in Algorithm 3. Specifically, we first find the vertex v1 with the largest degree (steps 1 and 2 of Algorithm 3). We then find the vertices (not including v1) that are connected to v1, denoted by nv1 (step 3 of Algorithm 3). Next, we choose from nv1 the vertex with the largest degree, denoted v2, and find the vertices connected to both v1 and v2 (not including v1 and v2). Iterating in this way, we obtain a set of vertices (v1, v2, ...). Algorithms 3, 4 and 5 provide Matlab-like pseudo-code for this procedure, where |a| denotes the cardinality of a set a or the number of elements in a vector a.
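The greedy procedure of Algorithms 3-5 can be condensed into one Python function. This is a compact re-implementation, not the authors' code: it seeds the clique with a highest-degree vertex, then repeatedly adds the candidate vertex that is adjacent to every current clique member and connected to the most remaining candidates.

```python
import numpy as np

def approx_max_clique(AJ):
    AJ = np.asarray(AJ, dtype=bool)
    clique = [int(np.argmax(AJ.sum(axis=1)))]   # seed: max degree
    while True:
        # Candidates adjacent to every current clique member.
        cand = np.all(AJ[:, clique], axis=1)
        cand[clique] = False
        idx = np.flatnonzero(cand)
        if idx.size == 0:
            return clique
        # Pick the candidate connected to most other candidates
        # (the nMatch criterion of Algorithm 5).
        best = idx[np.argmax(AJ[np.ix_(idx, idx)].sum(axis=1))]
        clique.append(int(best))
```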
Algorithm 5 The function update_c
Input: c: the current clique (a vector containing node indices); AJ: the graph's adjacency matrix; pNode: a set of potential nodes that can be added to the current clique
Output: c: the updated clique (a vector containing node indices);
1: p = find(pNode == 1);
2: pNode = repmat(pNode, 1, |p|);
3: nMatch = sum(pNode & AJ(:, p));
4: maxInd = p(nMatch == max(nMatch));
5: if max(nMatch) > 0 then
6:   c(|c| + 1) = maxInd(1);
7: end if

Fig. 7: The edges around scratches can be easily detected, while the edges around speckles sometimes cannot.

Once we have the approximate maximum clique of vertices, we have also obtained the inliers that can be used to estimate the RST parameters. These inliers represent the maximum set of correct correspondences, and we use them to estimate the RST parameters based on the least-square
approach. Figure 5 (c) shows the established correspondences of the detected corners in the two images, and Fig. 5 (d) shows the correspondences that belong to the obtained approximate maximum clique. Figure 6 illustrates the result of aligning the two images displayed in Fig. 5 based on the estimated RST parameters, where (a) is the difference of the two images before alignment and (b) is the difference image after alignment. The accuracy of the parameter estimation is reflected by the difference image: mathematically, the difference of the two aligned images should be close to zero at every pixel location if the parameters are correctly estimated. However, we notice that the pixels around the small holes in the difference image have non-zero intensities. This is because the pixel intensities differ slightly between the two images when they are subject to a small rigid displacement.
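The least-squares step becomes linear once the RST model of Eq. (2) is rewritten with a = s·cos(α) and b = s·sin(α). The sketch below (our naming, assuming the sign convention of Eq. (2)) recovers (s, α, tx, ty) from the inlier pairs:

```python
import numpy as np

def estimate_rst(P, Q):
    # Fit p = [a b; -b a] p' + t by linear least squares,
    # where a = s*cos(alpha), b = s*sin(alpha).
    n = len(P)
    A = np.zeros((2 * n, 4))
    rhs = P.reshape(-1)
    # Row for p_x: a*q_x + b*q_y + tx
    A[0::2] = np.c_[Q[:, 0],  Q[:, 1], np.ones(n), np.zeros(n)]
    # Row for p_y: -b*q_x + a*q_y + ty
    A[1::2] = np.c_[Q[:, 1], -Q[:, 0], np.zeros(n), np.ones(n)]
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b, tx, ty = x
    return np.hypot(a, b), np.arctan2(b, a), tx, ty
```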
4) Defect detection in the aligned image: Given two aligned images, we aim to find the defects in the test image. The defect areas are those regions that lie at corresponding locations in both images but differ in pixel intensity. As mentioned above, although the illumination condition is unchanged, the model and test products are subject to a small rigid displacement. This displacement gives rise to intensity variation at the same pixel location even where there is no defect.
Edges are insensitive to small illumination variations. When scratches are present, their edges are salient enough to be detected. However, it is a different story for very small speckles (e.g., dark dots): sometimes no edges can be detected around small speckles. Figure 7 illustrates this situation. In addition, small speckles are not distinguishable from noise in the edge field. Therefore, we treat scratch and speckle defects differently: we utilize edge information for the detection of scratches, and detect speckle defects with an image enhancement technique.
We observed that speckles on the test product surfaces are
mostly dark or white round defects. To highlight them, the
image enhancement method used in the blob detector [37]
should be a good choice. But the scale-space method [37] is
not efficient because multiple scales are needed to construct
Fig. 8: Top: the enhanced image for speckle regions (note that
the image has been zoomed out for a better view). Bottom:
the speckle candidates by a threholding segmentation.
the scale space. Therefore, we propose to enhance the images
in a simplified way. We first use a Gaussian filter to smooth
the original test image, and then compute the difference between
the original and filtered images. This difference is similar to a
difference-of-Gaussians (DoG) image, which is an approximation
of the Laplacian-of-Gaussian (LoG) image. In practice, a mean
filter could also be utilized through an efficient implementation
based on the integral image technique. In general, the filter size
is set to 7 or 9. The top image of Fig. 8 shows examples of the
enhanced speckle regions. We separate the speckle candidates
from the background by thresholding the enhanced image. In
general, the threshold, t, is set to a value between 0.03 and 0.05.
The bottom image of Fig. 8 shows the speckle candidates after
thresholding the top image.
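The enhancement-plus-thresholding step can be sketched as follows. The function name is an assumption, a mean filter (computed via an integral image, as suggested in the text) stands in for the Gaussian, and the threshold is taken from the suggested 0.03-0.05 range.

```python
import numpy as np

def enhance_speckles(img, k=7, t=0.04):
    """Hypothetical sketch of the speckle-enhancement step: a k x k mean
    filter approximates the Gaussian smoothing, and the (smoothed -
    original) difference plays the role of a DoG response. Pixel values
    are assumed to lie in [0, 1]. Returns a binary map of speckle
    candidates (dark blobs)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    # Box (mean) filter via an integral image, as suggested in the text.
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    smoothed = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
                - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)
    diff = smoothed - img          # dark dots give a positive response
    return diff > t                # threshold in the suggested range
```

A single dark dot on a uniform background yields a one-pixel candidate, while uniform regions produce no response.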
There are still many false positives in the speckle candidate
image. To remove them, we apply the same steps (enhancement
and thresholding) to the template image. Figure 9 shows the
processed template and test images. We then compare these
two processed images to remove the false positives. Specifically,
the two processed images are denoted by Tm (model) and Tt
(test), respectively. For each candidate (dark pixel) in Tt, we
check whether there is a correspondence at the same pixel
location of Tm. If there exists a dark pixel within the
neighborhood of the corresponding pixel location, which is
illustrated by the blue arrows (pointing at two corresponding
locations) and boxes in Fig. 9, this candidate is not a speckle
pixel. Otherwise, it is a speckle pixel (illustrated by the red
arrows and boxes). The size of the neighborhood is set to 7 by 7.
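The neighborhood comparison used to reject false positives can be sketched as follows. The function name and the boolean-map representation are assumptions; wn=3 gives the 7 by 7 window used in the text.

```python
import numpy as np

def remove_false_positives(cand_t, cand_m, wn=3):
    """For each candidate in the test map cand_t, keep it only if no
    candidate exists within the (2*wn+1) x (2*wn+1) neighborhood of the
    same location in the template map cand_m. Both inputs are binary
    arrays of the same, already aligned, size."""
    padded = np.pad(cand_m, wn)           # zero padding outside the image
    out = np.zeros_like(cand_t, dtype=bool)
    for y, x in zip(*np.nonzero(cand_t)):
        window = padded[y:y + 2 * wn + 1, x:x + 2 * wn + 1]
        out[y, x] = not window.any()      # template match => not a defect
    return out
```

The same routine applies unchanged to the edge maps in the scratch-detection step, with the edge images in place of the candidate maps.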
So far, we have detected the speckle defects. For the scratch
defect detection, we extract edges from both the template and
test images. We adopt a detection procedure similar to the
speckle detection: for each edge pixel in the test image, we
look for an edge pixel within the neighborhood of the
corresponding location in the template image. If there exists
such an edge pixel in the template image, the edge pixel being
checked in the test image is not a scratch pixel. Otherwise, it
belongs to a scratch. Likewise, the size of the neighborhood is
set to 7 by 7. In Fig. 10, the top image shows a detection
example for a speckle defect, the middle image shows an
example for a scratch defect, and the bottom one shows the
fusion of both results.
Fig. 9: The top and bottom images: the template and test
images after enhancement and thresholding operations, re-
spectively. For each candidate (dark pixel) in the test image
(bottom), we look for whether there is a correspondence at the
same pixel location in the template image (top).
Fig. 10: Top: a detection example for speckle defect. Middle:
a detection example for scratch defect. Bottom: the fusion of
both speckle and scratch detection results.
The Matlab-like pseudo-code for defect detection is given in
Algorithm 6.
IV. EXPERIMENTAL RESULTS
We have altogether 10 planar products to be tested. For each
test product, we need to detect defects for both top and bottom
surfaces, respectively. Correspondingly, we have collected 100
template images altogether with five images for each surface.
To test the performance of our approach in defect detection,
we have collected two sets of test images. The first set
consists of 260 test images with 13 images per surface, and
the illumination condition of these images is different from the
template images. They are used to test the performance when
the illumination condition is varying between the template
and test images. Figure 14 shows some exemplar images
of the first data set. The second one is for defect detection
when there is little illumination variation between template
and test images (Fig.15), and it consists of 400 test images
with 20 images per surface. Because the positions and poses of
the template and test products relative to the light and camera
are not exactly the same when the images are captured, the
intensities of the captured images still differ even if the lighting
condition remains unchanged. Therefore, for the second dataset,
the image intensities of the corresponding pixels of the aligned
template and test images are still different.

Fig. 11: The percentage of the number of correspondences in
the obtained approximate maximum clique as the threshold
dt changes from 1 to 10. Correspondingly, the mean absolute
errors of the aligned template and test images are also plotted.

Fig. 12: Examples of the geometric alignment with different
dt. The first row: the initial correspondences and the difference
of the template and test images before alignment. From the
second to the last row: the first and second columns show the
correspondences within the maximum cliques corresponding to
different dt (0.5, 1.0, 4.0, and 8.0, respectively); the third
column shows the difference of the aligned template and test
images.
In evaluating the performance of our framework, we have
manually checked each test image, counted the number of
defects, and recorded their positions in the image. From this
we obtain statistics of the true- and false-positive rates.
First of all, the bag-of-words (BoW) model for automatic
template image selection is very accurate for our application.
When we use 20 template images for training the BoW model
(one template image per class), and when the number of words
is 30, the template selection can achieve the best performance,
Algorithm 6 The defect detection procedure
Input: Im: the model (template) image; It: the image of the test
product; Φ: the estimated RST parameters; wn: the size
of the neighborhood
Output: Id: the output defect map (a binary image)
1: Tm = preprocessIm(Im);
2: Tt = preprocessIm(It);
3: Isp = zeros(sizeof(Im)); ⊲ The speckle image
4: Tm = 1 - Tm;
5: Tt = 1 - Tt;
6: Tm = transform(Tm, Φ); ⊲ Warping Tm based on Φ
7: Tt = transform(Tt, Φ);
8: [Y,X] = find(Tt > 0); ⊲ Nonzero pixels of Tt
9: for i = 1 to |Y| do
10:   YY = Y(i)-wn : Y(i)+wn;
11:   XX = X(i)-wn : X(i)+wn;
12:   neighbor_i = Tm(YY,XX);
13:   if sum(neighbor_i(:)) == 0 then
14:     Isp(Y(i),X(i)) = 1;
15:   end if
16: end for
17: Em = edge(Im,'canny');
18: Et = edge(It,'canny');
19: Isc = zeros(sizeof(Im)); ⊲ The scratch image
20: [Y,X] = find(Et > 0);
21: for i = 1 to |Y| do
22:   YY = Y(i)-wn : Y(i)+wn;
23:   XX = X(i)-wn : X(i)+wn;
24:   neighbor_i = Em(YY,XX);
25:   if sum(neighbor_i(:)) == 0 then
26:     Isc(Y(i),X(i)) = 1;
27:   end if
28: end for
29: Id = Isp | Isc; ⊲ The union of Isp and Isc
Algorithm 7 The procedure of preprocessIm
Input: I: the input image; t: a threshold for binarization
Output: T: the preprocessed image
1: I = (I - min(I(:))) ./ (max(I(:)) - min(I(:)));
2: If = imfilter(I, 'mean' or 'gaussian');
3: I = If - I;
4: I = (I > t);
5: T = 1 - I;
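Algorithm 7 translates naturally into a short numpy routine. This is an illustrative sketch, not the authors' code: a k x k mean filter stands in for imfilter, and k and t follow the values suggested in the text.

```python
import numpy as np

def preprocess_im(img, t=0.03, k=7):
    """Python sketch of Algorithm 7 (preprocessIm): normalize, smooth,
    take the difference, threshold, and invert."""
    img = np.asarray(img, float)
    img = (img - img.min()) / (img.max() - img.min())   # step 1: normalize
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    # step 2: k x k mean filter (a stand-in for 'mean' or 'gaussian')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    filtered = windows.mean(axis=(2, 3))
    diff = filtered - img                               # step 3: If - I
    binary = diff > t                                   # step 4: threshold
    return 1 - binary.astype(int)                       # step 5: T = 1 - I
```

On a uniform image with one dark dot, only the dot survives as a zero in the output map, matching the behavior expected by Algorithm 6 after the 1 - T inversion.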
96.8%. To ensure 100% accuracy in template selection, we
have adopted a KNN classifier, where K is set to 5. That means
that we use five template images of each surface for training.
When the number of words is 45 and the number of LDA basis
vectors used for discriminant embedding is 19, the template
selection accuracy on 660 images is 100%.
The reasons that we can achieve such a high accuracy
are twofold. First, we only have ten products for test
(20 templates), which means that the number of classes is
quite small. In fact, in real-world industrial applications,
the number of product models that need to be inspected for
defects in the same batch is quite limited. Second, all test
and template images are captured by a camera pointing
vertically downward at the product. Therefore, there is
little out-of-plane rotation variation between template and test
images. Although there is some light variation caused by
in-plane rotation between template and test images, it is still
limited. In addition, the BRIEF feature is illumination and
rotation invariant. Therefore, the template selection method
proposed in this paper is feasible for real-world applications.
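The template-selection pipeline (a BoW histogram per image, then a KNN vote) can be sketched as follows. The helper names and the plain Euclidean metric are illustrative assumptions, and the LDA discriminant embedding used in the paper is omitted for brevity.

```python
import numpy as np

def bow_histogram(descriptors, vocab):
    """Quantize local descriptors against a visual vocabulary and return
    a normalized word histogram (a minimal bag-of-words encoding).
    descriptors: (M, D); vocab: (V, D)."""
    d = np.linalg.norm(descriptors[:, None] - vocab[None, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

def knn_select_template(test_hist, train_hists, train_labels, k=5):
    """Pick the template class by a k-nearest-neighbour vote over BoW
    histograms (k=5 as in the text)."""
    dists = np.linalg.norm(train_hists - test_hist, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]
```

With five training histograms per surface, the majority vote over the five nearest neighbours picks the template class for a given test image.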
A. Performance of geometric alignment
Before we evaluate the defect detection performance, we
look at the geometric alignment performance first. Based on
our description in Section III-B2, the only parameter that we
need to tune for a good alignment performance is the threshold
dt. Figure 11 shows a plot of the percentage of the number
of correspondences in the obtained approximate maximum
clique as the threshold dt changes from 1 to 10. The
statistics are obtained by averaging over 30 alignments,
and the image resolution is 1920×1280. Correspondingly,
the mean absolute errors of the aligned template and test
images are also displayed. We observe that the alignment
becomes more accurate as dt increases within a range,
and worse when dt exceeds a certain value. This makes
sense because there are only a few correspondences
within the obtained maximum clique when dt is too small,
which leads to an inaccurate alignment based on very limited
corresponding key points. However, when dt becomes too
large, some false correspondences are included in the
estimated maximum clique, and the alignment becomes
inaccurate. In practice, the threshold dt is set to 5 pixels
when the image resolution is about 1920×1280, and may be
scaled accordingly based on the actual image size.
We have compared with the conventional image alignment
based on RANSAC and SURF (RANSAC+SURF) and the
one based on the transformation parameters estimated from
the exact maximum clique (exactMC) algorithm [39]. The
alignment time is listed in Table I. To compare the alignment
accuracy, we run each compared alignment approach to obtain
the optimal alignment results on 30 pairs of images, and
calculate the root mean squared errors (Table II). The results
show that our alignment approach is the fastest among the three
compared methods. In the RANSAC+SURF approach, in
order to achieve the equivalent alignment accuracy as our
method, the number of random sampling in RANSAC is set to
1000. The exact maximum clique approach has a complexity
of O(N³), where N is the number of correspondences. In
comparison, our approach is an approximate one, with a
complexity of only O(N²). In terms of accuracy, because our
method extracts both corner and blob centers, we can extract
more features than the RANSAC+SURF method. In addition,
RANSAC usually cannot estimate the parameters accurately
when the number of inliers is less than 30% of all
correspondences. Therefore, our method can achieve a more
accurate alignment result than the RANSAC+SURF method.
Since our method is only an approximate maximum clique
method, the cardinality of the estimated clique is no larger
Fig. 13: The enhanced illumination-invariant images by our preprocessing step (as described in Algorithm 7 of Section III-B2).
Fig. 14: Examples of defect detection when the template and
test images are captured with different illuminations. The first
and second columns: the aligned template and test images,
respectively. The third column: detected defects. The fourth
column: the difference of the aligned images.
than that of the true maximum clique. The alignment accuracy
by the exactMC method is a little better, but the time cost is
much higher.
TABLE I: Comparison of alignment time
our method   RANSAC+SURF   exactMC
140 ms       200 ms        1800 ms
TABLE II: Comparison of alignment accuracy (root mean squared error)
our method   RANSAC+SURF   exactMC
38.2         39.8          37.4
B. Performance of the defect detection
First, we look at the key preprocessing step for dealing
with image intensity variation. Algorithm 7 gives the
details of the preprocessing step, where the threshold t is used
to remove unwanted background noise and segment the
foreground regions of interest. Figure 13 shows examples of
the preprocessed images, where the aligned template and test
images are shown in the first column, and the preprocessed
images based on different values of t are shown in the other
columns, respectively. In practice, the threshold t is set to 0.03
in all of our experiments. Tables III and IV show the defect
inspection performance of the compared methods on the two
Fig. 15: Examples of defect detection. The first and second
columns: the geometrically aligned template and test images,
respectively. The third column: the detected defects.
datasets, respectively, based on the parameter settings
described in this paper.
We have observed that most of the missed defects are very
shallow (gentle) scratches that are very similar to their nearby
background, even though we have used four bar lights to
enhance the contrast. Most of the false positives are from
printed regions, such as defects on bar codes or printed text.
These, together with shallow scratches, remain the most
challenging cases for our framework. We will address these
problems in our future work.
TABLE III: Defect detection performance on dataset 1
method        true positive rate   number of false positives
ours          79.2%                5.4
RANSAC+SURF   76.5%                8.3
exactMC       79.8%                5.2
TABLE IV: Defect detection performance on dataset 2
method        true positive rate   number of false positives
ours          92.7%                0.32
RANSAC+SURF   92.1%                0.75
exactMC       93.3%                0.30
We have tested our method on images with a resolution
of 1920×1280. The whole procedure takes about 350 ms on a
PC with an i7 processor and 8 GB of memory, where the
template selection part takes about 110 ms, the alignment part
140 ms, and the detection part 100 ms. In contrast, the time
cost of the whole detection procedure based on RANSAC
alignment and exactMC alignment is 410 ms and 2010 ms,
respectively.
V. CONCLUSION
We propose a general framework for detecting defects in
planar industrial products or planar surfaces of non-planar
products based on a template-matching strategy. This
framework can handle cases where the lighting condition of the
template and test-product images is slightly different and the
template and test-product images are not aligned. This is due to
the excellent performance of three components: an automatic
selection of template image for a given test one, a robust
geometric alignment of template and test images based on
an approximate maximum clique approach, and an illumina-
tion invariant image comparison method for defect detection
in the aligned images. Experimental results on challenging
image datasets demonstrate the excellent performance of the
proposed framework.
ACKNOWLEDGMENT
Hui Kong is supported by the Jiangsu Province Natural
Science Foundation (Grant No. BK20151491), and the Natural
Science Foundation of China (Grant No. 61672287). Zhihua
Chen is supported by the National Natural Science Foundation
of China (Grants 61370174 and 61672228).
REFERENCES
[1] X. Xie, "A review of recent advances in surface defect detection using texture analysis techniques," ELCVIA Electronic Letters on Computer Vision and Image Analysis, 7(3), 2008.
[2] S. Ghorai, A. Mukherjee, M. Gangadaran et al., "Automatic defect detection on hot-rolled flat steel products," IEEE Transactions on Instrumentation and Measurement, 62(3): 612-621, 2013.
[3] H. Jia, Y. L. Murphey, J. Shi et al., "An intelligent real-time vision system for surface defect detection," Proceedings of the 17th International Conference on Pattern Recognition, 2004.
[4] C. T. Liao, W. H. Lee, and S. H. Lai, "A flexible PCB inspection system based on statistical learning," Journal of Signal Processing Systems, 67(3): 279-290, 2012.
[5] D. Sannen and H. Van Brussel, "A multilevel information fusion approach for visual quality inspection," Information Fusion, 13(1): 48-59, 2012.
[6] Q. Li and S. Ren, "A visual detection system for rail surface defects," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 42(6): 1531-1542, 2012.
[7] H. Shen, S. Li, D. Gu et al., "Bearing defect inspection based on machine vision," Measurement, 45(4): 719-733, 2012.
[8] C. H. Yeh, F. C. Wu, W. L. Ji et al., "A wavelet-based approach in detecting visual defects on semiconductor wafer dies," IEEE Transactions on Semiconductor Manufacturing, 23(2): 284-292, 2010.
[9] H. D. Lin, "Computer-aided visual inspection of surface defects in ceramic capacitor chips," Journal of Materials Processing Technology, 189(1): 19-25, 2007.
[10] D. M. Tsai and C. T. Lin, "Fast normalized cross correlation for defect detection," Pattern Recognition Letters, 24(15): 2625-2631, 2003.
[11] A. J. Crispin and V. Rankov, "Automated inspection of PCB components using a genetic algorithm template-matching approach," The International Journal of Advanced Manufacturing Technology, 35(3-4): 293-300, 2007.
[12] D. M. Tsai and C. H. Yang, "A quantile-quantile plot based pattern matching for defect detection," Pattern Recognition Letters, 26(13): 1948-1962, 2005.
[13] A. Kumar, "Neural network based detection of local textile defects," Pattern Recognition, 36(7): 1645-1659, 2003.
[14] A. Kumar, "Computer-vision-based fabric defect detection: a survey," IEEE Transactions on Industrial Electronics, 55(1): 348-363, 2008.
[15] D. M. Tsai, S. C. Wu, and W. Y. Chiu, "Defect detection in solar modules using ICA basis images," IEEE Transactions on Industrial Informatics, 9(1): 122-131, 2013.
[16] A. Kumar and G. K. Pang, "Defect detection in textured materials using Gabor filters," IEEE Transactions on Industry Applications, 38(2): 425-440, 2002.
[17] H. F. Ng, "Automatic thresholding for defect detection," Pattern Recognition Letters, 27(14): 1644-1649, 2006.
[18] Y. Li, Y. F. Li, Q. L. Wang, D. Xu, and M. Tan, "Measurement and defect detection of the weld bead based on online vision inspection," IEEE Transactions on Instrumentation and Measurement, 59(7): 1841-1849, 2010.
[19] D. M. Tsai, I. Chiang, and Y. H. Tsai, "A shift-tolerant dissimilarity measure for surface defect detection," IEEE Transactions on Industrial Informatics, 8(1): 128-137, 2012.
[20] Q. Li and S. Ren, "A real-time visual inspection system for discrete surface defects of rail heads," IEEE Transactions on Instrumentation and Measurement, 61(8): 2189-2199, 2012.
[21] H. Feng, Z. Jiang, F. Xie, P. Yang, J. Shi, and L. Chen, "Automatic fastener classification and defect detection in vision-based railway inspection systems," IEEE Transactions on Instrumentation and Measurement, 63(4): 877-888, 2014.
[22] C. J. Lu and D. M. Tsai, "Automatic defect inspection for LCDs using singular value decomposition," The International Journal of Advanced Manufacturing Technology, 25(1-2): 53-61, 2005.
[23] Y. Gan and Q. Zhao, "An effective defect inspection method for LCD using active contour model," IEEE Transactions on Instrumentation and Measurement, 62(9): 2438-2445, 2013.
[24] W. C. Li and D. M. Tsai, "Defect inspection in low-contrast LCD images using Hough transform-based nonstationary line detection," IEEE Transactions on Industrial Informatics, 7(1): 136-147, 2011.
[25] C. J. Lu and D. M. Tsai, "Defect inspection of patterned thin film transistor-liquid crystal display panels using a fast sub-image-based singular value decomposition," International Journal of Production Research, 42(20): 4331-4351, 2004.
[26] D. M. Tsai and R. H. Yang, "An eigenvalue-based similarity measure and its application in defect detection," Image and Vision Computing, 23(12): 1094-1101.
[27] M. Zontak and I. Cohen, "Defect detection in patterned wafers using multichannel scanning electron microscope," Signal Processing, 89(8): 1511-1520, 2009.
[28] C. Y. Chang, C. Li, J. W. Chang, and M. Jeng, "An unsupervised neural network approach for automatic semiconductor wafer defect inspection," Expert Systems with Applications, 36(1): 950-958, 2009.
[29] C. C. Wang, B. C. Jiang, J. Y. Lin, and C. C. Chu, "Machine vision-based defect detection in IC images using the partial information correlation coefficient," IEEE Transactions on Semiconductor Manufacturing, 26(3): 378-384, 2013.
[30] W. C. Li and D. M. Tsai, "Wavelet-based defect detection in solar wafer images with inhomogeneous texture," Pattern Recognition, 45(2): 742-756, 2012.
[31] H. Liu, W. Zhou, Q. Kuang, L. Cao, and B. Gao, "Defect detection of IC wafer based on two-dimension wavelet transform," Microelectronics Journal, 41(2): 171-177, 2010.
[32] N. G. Shankar and Z. W. Zhong, "Defect detection on semiconductor wafer surfaces," Microelectronic Engineering, 77(3): 337-346, 2005.
[33] P. Xie and S. U. Guan, "A golden-template self-generating method for patterned wafer inspection," Machine Vision and Applications, 12(3): 149-156, 2000.
[34] S. Baker and I. Matthews, "Lucas-Kanade 20 years on: A unifying framework," International Journal of Computer Vision, 56(3): 221-255, 2004.
[35] R. Szeliski, "Image alignment and stitching: A tutorial," Foundations and Trends in Computer Graphics and Vision, 2(1): 1-104, 2006.
[36] J. Shi and C. Tomasi, "Good features to track," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1994.
[37] T. Lindeberg, "Detecting salient blob-like image structures and their scales with a scale-space primal sketch: a method for focus-of-attention," International Journal of Computer Vision, 11(3): 283-318, 1993.
[38] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," IEEE International Conference on Computer Vision, pp. 2564-2571, 2011.
[39] I. M. Bomze, M. Budinich, P. M. Pardalos, and M. Pelillo, "The maximum clique problem," in Handbook of Combinatorial Optimization, pp. 1-74, Springer US.