
    Memorial University of Newfoundland

    Faculty of Engineering & Applied Science

    Engineering 7854

    Industrial Machine Vision

    INTRODUCTION TO MACHINE VISION

    Prof. Nick Krouglicof


    LECTURE OUTLINE

Elements of a machine vision system

Lens-camera model

2D versus 3D machine vision

Image segmentation: pixel classification, thresholding

Connected component labeling

Chain and crack coding for boundary representations

Contour tracking / border following

Object recognition: blob analysis, generalized moments, compactness

Evaluation of form parameters from chain and crack codes

Industrial application


    MACHINE VISION SYSTEM


    LENS-CAMERA MODEL


    HOW CAN WE RECOVER THE DEPTH

    INFORMATION?

    Stereoscopic approach: Identify the same

    point in two different views of the object and

    apply triangulation.

    Employ structured lighting.

    If the form (i.e., size) of the object is known,

    its position and orientation can be determined

    from a single perspective view.

    Employ an additional range sensor

    (ultrasonic, optical).


    3D MACHINE VISION SYSTEM

[System schematic: digital camera with its field of view, laser projector emitting a plane of laser light, granite surface plate, XY table, and a measured point P(x,y,z)]


    2D MACHINE VISION SYSTEMS

2D machine vision deals with image analysis.

The goal of this analysis is to generate a high-level description of the input image or scene that can be used (for example) to:

Identify objects in the image (e.g., character recognition)

Determine the position and orientation of the objects in the image (e.g., robot assembly)

Inspect the objects in the image (e.g., PCB inspection)

In all of these examples, the description refers to specific objects or regions in the image. To generate the description of the image, it is first necessary to segment the image into these regions.


    IMAGE SEGMENTATION

How many objects are there in the image below?

Assuming the answer is 4, what exactly defines an object?


    8 BIT GRAYSCALE IMAGE

    197 197 197 195 195 194 193 193 194 194 194 193 193 191 190 188 185 174 142 101

    191 192 193 194 195 195 194 193 191 189 187 190 191 190 186 181 158 119 86 66

    193 195 196 196 195 196 195 194 194 193 191 192 191 187 175 145 105 73 58 51

    196 197 197 197 196 195 194 193 193 193 192 188 183 161 121 86 67 59 52 48

    192 193 194 193 194 195 196 195 195 192 189 176 144 102 72 59 56 53 51 52

    192 194 196 195 195 195 195 196 195 189 167 124 87 68 57 53 52 51 51 50

    194 195 195 194 194 195 194 193 184 155 107 73 60 55 53 55 60 60 58 54

    194 193 194 194 191 191 188 172 134 94 68 56 51 51 53 57 57 58 56 54

    193 193 194 195 193 184 156 112 77 60 53 52 51 53 56 58 58 58 56 53

    192 190 189 188 178 140 92 68 57 52 50 50 52 53 56 57 60 60 58 54

    193 193 194 193 189 170 125 85 63 55 54 54 55 58 63 66 67 68 64 59

    194 195 195 195 193 191 183 153 107 76 60 55 54 54 55 57 57 56 55 53

    195 194 195 196 193 192 192 190 173 123 83 63 57 53 51 54 59 62 57 54

    196 197 196 195 197 195 195 194 192 179 143 99 69 58 56 56 59 58 55 54

    195 195 196 196 194 192 194 194 194 194 190 168 117 78 61 54 51 51 52 52

    196 195 195 193 194 195 194 191 191 192 193 193 179 134 90 66 53 50 47 46

    194 192 192 193 193 194 195 195 195 195 195 195 194 187 156 110 74 57 51 46

    194 193 192 192 192 194 194 193 192 193 193 192 192 192 189 173 129 84 62 52

    196 194 194 195 195 196 195 194 193 193 193 194 193 195 194 192 185 150 99 69

    192 190 189 189 192 192 192 191 192 190 192 194 194 194 193 192 192 187 163 114


    8 BIT GRAYSCALE IMAGE

[Figure: the grayscale image rendered with its intensity scale, 0 to 200]


    GRAY LEVEL THRESHOLDING

Many images consist of two regions that occupy different gray level ranges.

Such images are characterized by a bimodal image histogram.

An image histogram is a function h defined on the set of gray levels in a given image.

The value h(k) is given by the number of pixels in the image having image intensity k.
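A minimal C sketch of both operations (ours, not the slides'), assuming an 8-bit image stored in a 2D array with illustrative dimensions; in the example image the object is dark on a bright background, hence the "less than" test:

#include <stdint.h>

#define WIDTH  20              /* illustrative image dimensions */
#define HEIGHT 20

/* Histogram: h[k] = number of pixels with gray level k. */
void histogram(const uint8_t img[HEIGHT][WIDTH], long h[256])
{
    for (int k = 0; k < 256; k++) h[k] = 0;
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            h[img[y][x]]++;
}

/* Binarize against a threshold t chosen in the valley between the
   two modes of a bimodal histogram: dark pixels become object (1). */
void threshold(const uint8_t img[HEIGHT][WIDTH],
               uint8_t bin[HEIGHT][WIDTH], uint8_t t)
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            bin[y][x] = (img[y][x] < t) ? 1 : 0;
}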


    GRAY LEVEL THRESHOLDING (DEMO)

Software demo: Lab1.exe

    BINARY IMAGE


IMAGE SEGMENTATION: CONNECTED COMPONENT LABELING

Segmentation can be viewed as a process of pixel classification; the image is segmented into objects or regions by assigning individual pixels to classes.

Connected component labeling assigns pixels to specific classes by verifying if an adjoining pixel (i.e., a neighboring pixel) already belongs to that class.

There are two standard definitions of pixel connectivity: 4-neighbor connectivity and 8-neighbor connectivity.


IMAGE SEGMENTATION: CONNECTED COMPONENT LABELING

4-neighbor connectivity: [Figure: pixel P(i,j) with its four edge-adjacent neighbors]

8-neighbor connectivity: [Figure: pixel P(i,j) with its eight neighbors, including diagonals]


    CONNECTED COMPONENT LABELING:

    FIRST PASS

    A A

    A A A

    B B C C

    B B B C C

    B B B B B B

    EQUIVALENCE:

    B=C


    CONNECTED COMPONENT LABELING:

    SECOND PASS

A A

A A A

B B B B

B B B B B

B B B B B B

TWO OBJECTS!
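A minimal C sketch of the two passes just illustrated (ours, not the slides'), using 4-neighbor connectivity and a simple union-find array as the equivalence table; the image size, label limit, and names are illustrative:

#define H 16
#define W 16
#define MAXLBL 256             /* assumes fewer provisional labels */

static int parent[MAXLBL];     /* equivalence table (union-find)   */

static int find(int a) { while (parent[a] != a) a = parent[a]; return a; }
static void merge(int a, int b) { parent[find(a)] = find(b); }  /* e.g. B = C */

void label(const unsigned char bin[H][W], int lbl[H][W])
{
    int next = 1;
    parent[0] = 0;
    /* First pass: inherit a label from the upper or left neighbor;
       if both carry different labels, record the equivalence. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            if (!bin[y][x]) { lbl[y][x] = 0; continue; }
            int up   = (y > 0) ? lbl[y-1][x] : 0;
            int left = (x > 0) ? lbl[y][x-1] : 0;
            if (up && left) {
                lbl[y][x] = up;
                if (up != left) merge(up, left);
            } else if (up || left) {
                lbl[y][x] = up ? up : left;
            } else {
                parent[next] = next;     /* new provisional label */
                lbl[y][x] = next++;
            }
        }
    /* Second pass: replace each label by its equivalence-class root,
       which merges B and C into a single object. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            lbl[y][x] = find(lbl[y][x]);
}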


    CONNECTED COMPONENT LABELING:

    EXAMPLE (DEMO)

Software demo: CONNECT.EXE

    CONNECTED COMPONENT LABELING:

    TABLE OF EQUIVALENCES

    2 = 5 16 = 27 16 = 50 50 = 81 112 = 127

    5 = 9 5 = 28 5 = 39 50 = 86 112 = 134

    2 = 5 27 = 34 34 = 51 50 = 86 112 = 137

    5 = 10 16 = 37 5 = 39 5 = 87 112 = 138

    5 = 10 5 = 39 34 = 46 111 = 112

    5 = 10 5 = 39 5 = 66 112 = 113

    5 = 12 40 = 41 34 = 72 112 = 119

    5 = 16 5 = 39 34 = 72 112 = 120

    5 = 18 34 = 46 50 = 76 112 = 120

    16 = 23 34 = 46 50 = 81 112 = 122


    CONNECTED COMPONENT LABELING:

    TABLE OF EQUIVALENCES

    2 = 5 2 = 37 2 = 86 111 = 138

    2 = 9 2 = 39 2 = 87

    2 = 10 40 = 41 111 = 112

    2 = 12 2 = 46 111 = 113

    2 = 16 2 = 50 111 = 119

    2 = 18 2 = 51 111 = 120

    2 = 23 2 = 66 111 = 122

    2 = 27 2 = 72 111 = 127

    2 = 28 2 = 76 111 = 134

    2 = 34 2 = 81 111 = 137


    IS THERE A MORE COMPUTATIONALLY

    EFFICIENT TECHNIQUE FOR SEGMENTING

    THE OBJECTS IN THE IMAGE?

Contour tracking/border following identifies the pixels that fall on the boundaries of the objects, i.e., pixels that have a neighbor that belongs to the background class or region.

There are two standard code definitions used to represent boundaries: codes based on 4-connectivity (crack code) and codes based on 8-connectivity (chain code).


    BOUNDARY REPRESENTATIONS:

    4-CONNECTIVITY (CRACK CODE)

    CRACK CODE:

    10111211222322333300

    103300

Direction rosette:

  3
0   2
  1


BOUNDARY REPRESENTATIONS:
8-CONNECTIVITY (CHAIN CODE)

Direction rosette:

7 6 5
0   4
1 2 3

CHAIN CODE:

12232445466601760


    CONTOUR TRACKING ALGORITHM

    FOR GENERATING CRACK CODE

Identify a pixel P that belongs to the class "objects" and a neighboring pixel (4-neighbor connectivity) Q that belongs to the class "background".

Depending on the position of Q relative to P, identify pixels U and V as follows:

CODE 0    CODE 1    CODE 2    CODE 3
 V Q       Q P       P U       U V
 U P       V U       Q V       P Q


    CONTOUR TRACKING ALGORITHM

Assume that a pixel has a value of 1 if it belongs to the class "object" and 0 if it belongs to the class "background".

Pixels U and V are used to determine the next move (i.e., the next element of crack code) as summarized in the following truth table:

U V   next P   next Q   TURN    CODE*
X 1     V        Q      RIGHT   CODE-1
1 0     U        V      NONE    CODE
0 0     P        U      LEFT    CODE+1

*Implement as a modulo 4 counter
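A C sketch of one step of this follower (ours, not the slides'). It assumes y grows downward and the direction convention used by the moment-accumulation code at the end of the deck (0 = step in -x, 1 = +y, 2 = +x, 3 = -y); H, W, and bin[][] are illustrative:

#define H 16
#define W 16

static const int DX[4] = { -1, 0, +1, 0 };   /* crack-code steps */
static const int DY[4] = {  0, +1, 0, -1 };

static int inside(int x, int y) { return x >= 0 && x < W && y >= 0 && y < H; }

/* P = (*px,*py) is an object pixel; its background neighbor Q sits at
   P + (nx,ny); *d is the current code. Returns the emitted element. */
int crack_step(const unsigned char bin[H][W], int *px, int *py, int *d)
{
    int nx = -DY[*d], ny = DX[*d];             /* Q = P + (nx,ny)    */
    int ux = *px + DX[*d], uy = *py + DY[*d];  /* U: cell ahead of P */
    int vx = ux + nx,      vy = uy + ny;       /* V: cell ahead of Q */
    int U = inside(ux, uy) && bin[uy][ux];
    int V = inside(vx, vy) && bin[vy][vx];

    if (V)      { *px = vx; *py = vy; *d = (*d + 3) & 3; } /* RIGHT: P <- V, CODE-1 */
    else if (U) { *px = ux; *py = uy; }                    /* NONE:  P <- U, CODE   */
    else        { *d = (*d + 1) & 3; }                     /* LEFT:  Q <- U, CODE+1 */
    return *d;  /* the modulo-4 counter of the truth table */
}

Repeating crack_step until P, Q, and the code return to their starting configuration traces the full boundary and emits the crack code.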




    CONTOUR TRACKING ALGORITHM

    FOR GENERATING CRACK CODE

    Software Demo!

TRACK-1.EXE

    CONTOUR TRACKING ALGORITHM

    FOR GENERATING CHAIN CODE

Identify a pixel P that belongs to the class "objects" and a neighboring pixel (4-neighbor connectivity) R0 that belongs to the class "background". Assume that a pixel has a value of 1 if it belongs to the class "object" and 0 if it belongs to the class "background".

Assign the 8-connectivity neighbors of P to R0, R1, ..., R7 as follows:

R3 R2 R1
R4 P  R0
R5 R6 R7


    CONTOUR TRACKING ALGORITHM

    FOR GENERATING CHAIN CODE

[Figure: the chain-code direction rosette (0 to 7) and the R0-R7 neighbor grid, repeated for each step of the animated tracking example]

ALGORITHM:

i = 0
WHILE (Ri == 0) { i++ }
Move P to Ri
Set i = 6 for next search
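A minimal C sketch of one tracking step (ours, not the slides'), using the R0-R7 offsets from the layout above; it assumes the object has more than one pixel and does not touch the image border, and it leaves restarting the search index (the slide's "Set i = 6") to the caller:

#define H 16
#define W 16

/* Offsets of R0..R7 around P, matching the grid above (y downward). */
static const int RX[8] = { +1, +1,  0, -1, -1, -1,  0, +1 };
static const int RY[8] = {  0, -1, -1, -1,  0, +1, +1, +1 };

/* Scan R_i, R_{i+1}, ... until an object pixel is found, move P there,
   and return i as the emitted chain-code element. */
int chain_step(const unsigned char bin[H][W], int *px, int *py, int i)
{
    while (!bin[*py + RY[i & 7]][*px + RX[i & 7]])  /* WHILE (Ri == 0) */
        i++;                                        /*   { i++ }       */
    i &= 7;
    *px += RX[i];                                   /* move P to Ri    */
    *py += RY[i];
    return i;
}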


OBJECT RECOGNITION: BLOB ANALYSIS

Once the image has been segmented into classes representing the objects in the image, the next step is to generate a high-level description of the various objects.

A comprehensive set of form parameters describing each object or region in an image is useful for object recognition.

Ideally, the form parameters should be independent of the object's position and orientation as well as the distance between the camera and the object (i.e., the scale factor).


    What are some examples of form parameters

    that would be useful in identifying the objects

    in the image below?


OBJECT RECOGNITION: BLOB ANALYSIS

Examples of form parameters that are invariant with respect to position, orientation, and scale:

Number of holes in the object

Compactness or complexity: (Perimeter)²/Area (see the sketch below)

Moment invariants

All of these parameters can be evaluated during contour following.
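As a small illustration (the helper is ours, not the slides'), compactness is a dimensionless ratio, which is why it survives translation, rotation, and scaling; a disk yields the minimum value of 4*pi, and more convoluted contours yield larger values:

/* Compactness (complexity): perimeter squared over area. */
double compactness(double perimeter, double area)
{
    return perimeter * perimeter / area;
}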


    GENERALIZED MOMENTS

Shape features or form parameters provide a high-level description of objects or regions in an image.

Many shape features can be conveniently represented in terms of moments. The (p,q)th moment of a region R defined by the function f(x,y) is given by:

$$m_{pq} = \iint_{R} x^{p} y^{q} f(x,y) \, dx \, dy$$


    GENERALIZED MOMENTS

In the case of a digital image of size n by m pixels, this equation simplifies to:

$$M_{ij} = \sum_{x=1}^{n} \sum_{y=1}^{m} x^{i} y^{j} f(x,y)$$

For binary images the function f(x,y) takes a value of 1 for pixels belonging to the class "object" and 0 for the class "background".
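A minimal C sketch (ours, not the slides') evaluating M_ij by direct summation, mirroring the formula's 1-based indices; the dimensions are illustrative:

#define NCOLS 10               /* n: image width  */
#define MROWS  7               /* m: image height */

static long ipow(long b, int e) { long r = 1; while (e-- > 0) r *= b; return r; }

/* Sum x^i * y^j over all object pixels of a binary image f. */
long moment(const unsigned char f[MROWS][NCOLS], int i, int j)
{
    long mij = 0;
    for (int y = 1; y <= MROWS; y++)        /* y = 1..m as in the sum   */
        for (int x = 1; x <= NCOLS; x++)    /* x = 1..n                 */
            if (f[y-1][x-1])                /* f(x,y) = 1 on the object */
                mij += ipow(x, i) * ipow(y, j);
    return mij;
}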


    GENERALIZED MOMENTS

[Figure: a 7-pixel binary object plotted on a grid, X axis 0 to 9, Y axis 1 to 6]

i  j    Mij
0  0      7   (Area)
1  0     33
0  1     20
2  0    159   (Moment of Inertia)
0  2     64
1  1     93


    SOME USEFUL MOMENTS

The center of mass of a region can be defined in terms of generalized moments as follows:

$$\bar{X} = \frac{M_{10}}{M_{00}} \qquad \bar{Y} = \frac{M_{01}}{M_{00}}$$
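A one-line sketch of the computation (names are ours):

/* Center of mass from the raw moments; M00 (the area) must be > 0. */
void center_of_mass(long m00, long m10, long m01, double *xbar, double *ybar)
{
    *xbar = (double)m10 / (double)m00;
    *ybar = (double)m01 / (double)m00;
}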


    SOME USEFUL MOMENTS

The moments of inertia relative to the center of mass can be determined by applying the general form of the parallel axis theorem:

$$M'_{20} = M_{20} - \frac{M_{10}^{2}}{M_{00}} \qquad M'_{02} = M_{02} - \frac{M_{01}^{2}}{M_{00}} \qquad M'_{11} = M_{11} - \frac{M_{10} M_{01}}{M_{00}}$$
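A sketch of the same correction in C (names are ours); the primed, translation-invariant moments are computed from the raw ones:

/* Central second moments via the parallel axis theorem. */
void central_moments(long m00, long m10, long m01,
                     long m20, long m02, long m11,
                     double *u20, double *u02, double *u11)
{
    *u20 = m20 - (double)m10 * m10 / m00;
    *u02 = m02 - (double)m01 * m01 / m00;
    *u11 = m11 - (double)m10 * m01 / m00;
}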


    SOME USEFUL MOMENTS

The principal axis of an object is the axis passing through the center of mass which yields the minimum moment of inertia.

This axis forms an angle θ with respect to the X axis.

The principal axis is useful in robotics for determining the orientation of randomly placed objects.

$$\tan(2\theta) = \frac{2 M'_{11}}{M'_{20} - M'_{02}}$$
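A sketch in C (ours); atan2 also resolves the quadrant of 2θ, which a plain arctangent would lose:

#include <math.h>

/* Principal-axis angle in radians, measured from the X axis. */
double principal_axis(double u20, double u02, double u11)
{
    return 0.5 * atan2(2.0 * u11, u20 - u02);
}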


Example

[Figure: an object with its center of mass and principal axis overlaid on the X-Y axes]


    SOME (MORE) USEFUL MOMENTS

The minimum and maximum moments of inertia about an axis passing through the center of mass are given by:

$$I_{MIN} = \frac{M'_{20} + M'_{02}}{2} - \frac{\sqrt{(M'_{20} - M'_{02})^{2} + 4 M'^{2}_{11}}}{2}$$

$$I_{MAX} = \frac{M'_{20} + M'_{02}}{2} + \frac{\sqrt{(M'_{20} - M'_{02})^{2} + 4 M'^{2}_{11}}}{2}$$
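A sketch of both extrema in C (names are ours):

#include <math.h>

/* Min/max moments of inertia about axes through the center of mass. */
void inertia_extrema(double u20, double u02, double u11,
                     double *imin, double *imax)
{
    double mean = 0.5 * (u20 + u02);
    double dev  = 0.5 * sqrt((u20 - u02) * (u20 - u02) + 4.0 * u11 * u11);
    *imin = mean - dev;
    *imax = mean + dev;
}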


    SOME (MORE) USEFUL MOMENTS

The following moments are independent of position, orientation, and reflection. They can be used to identify the object in the image.

$$M'_{20} + M'_{02}$$

$$(M'_{20} - M'_{02})^{2} + 4 M'^{2}_{11}$$


    SOME (MORE) USEFUL MOMENTS

The following moments are normalized with respect to area. They are independent of position, orientation, reflection, and scale.

$$\frac{M'_{11}}{M_{00}^{2}}$$

$$\frac{(M'_{20} - M'_{02})^{2} + 4 M'^{2}_{11}}{M_{00}^{4}}$$
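A sketch of the normalization in C (ours; the output names are illustrative, and the formulas follow the reconstruction above):

/* Scale-normalized combinations: degree-2 terms are divided by M00^2,
   degree-4 terms by M00^4. */
void normalized_invariants(double m00, double u20, double u02, double u11,
                           double *n11, double *n2)
{
    double d  = u20 - u02;
    double a2 = m00 * m00;                          /* M00^2 */
    *n11 = u11 / a2;                                /* M'11 / M00^2 */
    *n2  = (d * d + 4.0 * u11 * u11) / (a2 * a2);   /* ... / M00^4  */
}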


EVALUATING MOMENTS DURING CONTOUR TRACKING

Generalized moments are computed by evaluating a double (i.e., surface) integral over a region of the image.

The surface integral can be transformed into a line integral around the boundary of the region by applying Green's Theorem.

The line integral can be easily evaluated during contour tracking.

The process is analogous to using a planimeter to graphically evaluate the area of a geometric figure.


EVALUATING MOMENTS DIRECTLY FROM CRACK CODE DURING CONTOUR TRACKING

/* (x,y) tracks the current boundary position; the moments and the
   running sums all start at zero. n is the length of the crack code. */
for (i = 0; i < n; i++)
{
    switch (code[i])
    {
    case 0:                          /* step in -x */
        m00 = m00 - y;
        m01 = m01 - sum_y;
        m02 = m02 - sum_y2;
        x = x - 1;
        sum_x = sum_x - x;
        sum_x2 = sum_x2 - x*x;
        m11 = m11 - (x*sum_y);
        break;

    case 1:                          /* step in +y */
        sum_y = sum_y + y;
        sum_y2 = sum_y2 + y*y;
        y = y + 1;
        m10 = m10 - sum_x;
        m20 = m20 - sum_x2;
        break;


    case 2:                          /* step in +x */
        m00 = m00 + y;
        m01 = m01 + sum_y;
        m02 = m02 + sum_y2;
        m11 = m11 + (x*sum_y);
        sum_x = sum_x + x;
        sum_x2 = sum_x2 + x*x;
        x = x + 1;
        break;

    case 3:                          /* step in -y */
        y = y - 1;
        sum_y = sum_y - y;
        sum_y2 = sum_y2 - y*y;
        m10 = m10 + sum_x;
        m20 = m20 + sum_x2;
        break;
    }
}


    QUESTIONS

    ?
