
Robotic Vision System

Robot vision may be defined as the process of extracting, characterizing, and interpreting information from images of a three-dimensional world.

The process can be divided into the following principal areas:

I. Sensing

II. Preprocessing

III. Segmentation

IV. Description

V. Recognition

VI. Interpretation


Vision system

A vision system produces a two-dimensional or three-dimensional model of the scene. According to the gray levels, images are classified as:

1. Binary Image

2. Gray Image

3. Color Image


Vision System – Stages

1. Analog-to-digital conversion

2. Noise removal

3. Finding regions (objects) in space

4. Taking relationship measurements

5. Matching the description with similar descriptions of known objects


Block Diagram Of Vision System

[Block diagram: light source, camera, frame grabber, and interface (I/F) to the computer]


Functions:

1. Sensing and digitizing image data

2. Image processing and analysis

3. Applications

Typical Techniques & Applications:

1. Sensing and digitizing: signal conversion (sampling, quantization, encoding), image storage (frame grabber), lighting (structured light, front/back lighting, beam splitter, retro-reflectors, specular illumination, other techniques)

2. Image processing and analysis: data reduction (windowing, digital conversion), segmentation (thresholding, region growing, edge detection), feature extraction (descriptors), object recognition (template matching, other algorithms)

3. Applications: inspection, identification, visual servoing and navigation


The Image and Conversion

The image presented to a vision system's camera is light, nothing more (varying in intensity and wavelength).

The designer must ensure that the pattern of light presented to the camera is one that can be interpreted easily, that the image the camera sees has minimum clutter, and that extraneous light (sunlight, etc.) that might affect the image is blocked.

Conversion: light energy is converted to electrical energy, and the image is divided into discrete pixels. Note: a color camera can be considered as three separate cameras, one for each basic color. The best portion of the image is produced by the light passing through the lens along the lens's axis.

A pixel is generally thought of as the smallest single component of a digital image


The camera

Common imaging devices used in robot vision systems:

1. Charge-coupled device (CCD)

2. Vidicon camera

3. Solid-state camera

4. Charge-injection device

5. Pinhole camera


Charge couple device (CCD)

The charge-coupled device (CCD) is a silicon-based integrated circuit, provided to the user as a single chip.


Vidicon Camera


Pin Hole Camera

A pinhole camera is a simple camera without a lens and with a single small aperture – effectively a light-proof box with a small hole in one side. Light from a scene passes through this single point and projects an inverted image on the opposite side of the box. The human eye in bright light acts similarly, as do cameras using small apertures.

Up to a certain point, the smaller the hole, the sharper the image, but the dimmer the projected image. Optimally, the size of the aperture should be 1/100 or less of the distance between it and the projected image.
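As a rough illustration, the 1/100 rule of thumb above can be written as a small helper; the function names and the use of millimetres are assumptions of this sketch, not part of the original text:

```python
def max_pinhole_diameter(focal_distance_mm):
    """Largest aperture that still satisfies the 1/100 rule of thumb:
    the hole should be 1/100 or less of the pinhole-to-image distance."""
    return focal_distance_mm / 100.0

def is_sharp(aperture_mm, focal_distance_mm):
    """True if the given aperture obeys the rule for this camera depth."""
    return aperture_mm <= max_pinhole_diameter(focal_distance_mm)
```

For example, in a box 100 mm deep, any pinhole up to 1 mm across satisfies the rule.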

Because a pinhole camera requires a lengthy exposure, its shutter may be manually operated, as with a flap of light-proof material to cover and uncover the pinhole. Typical exposures range from 5 seconds to several hours.

A common use of the pinhole camera is to capture the movement of the sun over a long period of time. This type of photography is called Solargraphy.

The image may be projected onto a translucent screen for real-time viewing (popular for observing solar eclipses; see also camera obscura), or can expose photographic film or a charge coupled device (CCD). Pinhole cameras with CCDs are often used for surveillance because they are difficult to detect.


Frame Grabber

A frame grabber is an electronic hardware device used to capture and store a digital image. It captures individual digital still frames from an analog video signal or a digital video stream. Frame grabbers were the predominant way to interface cameras to PCs.

Analog frame grabbers accept and process analog video signals.

Digital frame grabbers accept and process digital video streams.

Circuitry common to both analog and digital frame grabbers includes:

1. A bus interface through which a processor can control the acquisition and access the data

2. Memory for storing the acquired image


Functions of Machine vision system

1. Image formation

2. Processing of the image

3. Analyzing the image

4. Interpretation of the image


Image formation

There are two parts to the image formation process:

1. The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.

2. The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.

The image sensor collects light from the scene through a lens and, using a photosensitive target, converts it into an electronic signal.
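The geometric part described above can be sketched with the ideal pinhole (perspective) projection model; the function name and the convention that Z points along the optical axis are assumptions of this sketch:

```python
def project_point(X, Y, Z, f):
    """Ideal perspective projection: a scene point (X, Y, Z) maps to
    image-plane coordinates (u, v) = (f*X/Z, f*Y/Z) at focal length f.
    Assumes Z > 0, i.e. the point lies in front of the camera."""
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return f * X / Z, f * Y / Z
```

Note how points twice as far away (double Z) project to image coordinates half the size, which is exactly the familiar perspective effect.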


Processing of Image

An analog-to-digital converter is used to convert the analog voltage of each pixel into a digital value.

In a binary system, each pixel is assigned either 0 or 1 depending on a threshold value.

A gray-scale system, on the other hand, assigns each pixel one of up to 256 different values depending on its intensity.
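A minimal sketch of the binary (thresholded) conversion described above, operating on a nested list of 8-bit pixel values; the default threshold of 128 is an assumption for the example:

```python
def binarize(gray, threshold=128):
    """Map each 8-bit pixel (0-255) to 1 if it meets the threshold, else 0."""
    return [[1 if p >= threshold else 0 for p in row] for row in gray]
```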


Image digitization

Sampling means measuring the value of an image at a finite number of points.

Quantization is the representation of the measured value at the sampled point by an integer.


[Figure: image quantization example, showing the same image at 256 gray levels (8 bits/pixel), 32 gray levels (5 bits/pixel), 16 gray levels (4 bits/pixel), 8 gray levels (3 bits/pixel), 4 gray levels (2 bits/pixel), and 2 gray levels (1 bit/pixel)]
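The effect of using fewer bits per pixel can be sketched by requantizing 8-bit values; mapping each level back onto the 0-255 range for display is an assumption of this sketch:

```python
def quantize(gray, bits):
    """Requantize 8-bit pixel values to 2**bits gray levels,
    keeping each level's value on the original 0-255 scale."""
    levels = 2 ** bits          # e.g. bits=3 gives 8 gray levels
    step = 256 // levels        # width of each quantization bin
    return [[(p // step) * step for p in row] for row in gray]
```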


[Figure: image sampling example, showing the original image sampled by factors of 2, 4, and 8]
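Sampling by a factor, as in the figure, can be sketched as naive subsampling; keeping every k-th pixel with no averaging is an assumption of this sketch:

```python
def subsample(image, factor):
    """Keep every `factor`-th pixel in each direction of a 2-D image."""
    return [row[::factor] for row in image[::factor]]
```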


Analysis of Image

Image analysis is the extraction of meaningful information from images prepared by image processing techniques, in order to identify objects or facts about them or their environment.

This analysis takes place in the central processing unit of the system. Three important tasks are performed here:

1. Measuring the distance of an object (one-dimensional)

2. Determining object orientation (two-dimensional)

3. Defining object position
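One standard way to obtain object position and orientation from a binary image is via image moments; this sketch uses that common technique and is not necessarily the exact method the slides had in mind:

```python
import math

def centroid_and_orientation(binary):
    """Centroid (object position) and principal-axis angle (orientation,
    in radians) of the foreground pixels of a binary image."""
    xs, ys = [], []
    for y, row in enumerate(binary):
        for x, p in enumerate(row):
            if p:
                xs.append(x)
                ys.append(y)
    n = len(xs)
    cx, cy = sum(xs) / n, sum(ys) / n
    # Second-order central moments of the foreground region.
    mu20 = sum((x - cx) ** 2 for x in xs)
    mu02 = sum((y - cy) ** 2 for y in ys)
    mu11 = sum((x - cx) * (y - cy) for x, y in zip(xs, ys))
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta
```

A horizontal bar gives an angle of 0, a vertical bar pi/2.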


Interpretation of Image

The most common image interpretation technique is template matching.

In a binary system, the image is segmented on the basis of white and black pixels.

More complex images can be interpreted by gray-scale techniques and algorithms.
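A minimal sketch of template matching on a binary image: slide the template over every position and keep the one with the most agreeing pixels. The scoring rule (pixel-wise agreement count) is an assumption of this sketch:

```python
def match_template(image, template):
    """Return the top-left (x, y) position where the binary template
    best matches the binary image, plus the matching-pixel count."""
    th, tw = len(template), len(template[0])
    best_pos, best_score = None, -1
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            score = sum(
                image[y + i][x + j] == template[i][j]
                for i in range(th) for j in range(tw)
            )
            if score > best_score:
                best_pos, best_score = (x, y), score
    return best_pos, best_score
```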


Image Understanding

A computer needs to locate the edges of an object in order to construct drawings of the object within a scene, which lead to shapes, which lead to image understanding.

The final task of robot vision is to interpret the information (such as object edges, regions, boundaries, colour and texture) obtained during image analysis process.

This is called image understanding or machine perception.

A robot vision system must interpret what the image represents in terms of information about its environment. Thresholding decides which elements of the differentiated picture matrix should be considered as edge candidates.
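The differentiate-then-threshold idea above can be sketched with simple neighbour differences; taking the larger of the horizontal and vertical differences as the edge measure is an assumption of this sketch:

```python
def edge_candidates(gray, threshold):
    """Mark pixels whose horizontal or vertical intensity difference
    to the next pixel exceeds the threshold as edge candidates."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(gray[y][x + 1] - gray[y][x])   # horizontal difference
            gy = abs(gray[y + 1][x] - gray[y][x])   # vertical difference
            if max(gx, gy) > threshold:
                edges[y][x] = 1
    return edges
```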


Vision System and Identification of Objects

A vision system is concerned with the sensing of vision data and its interpretation by a computer.

The typical vision system consists of a camera and digitizing hardware, a digital computer, and the hardware and software necessary to interface them.

The operation of the vision system consists of the following functions: (a) sensing and digitizing image data; (b) image processing and analysis; (c) application.


Possible Sensors for Identification of Objects

Interfacing a robot system to a vision system provides an excellent opportunity to produce better-quality outputs.


Use of a sensing array to determine the orientation of an object moving on a conveyor belt


Sensing array to identify the presence of an object moving on a conveyor belt and to measure the width of the object
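Measuring the width from a single row of such a binary sensing array can be sketched as follows; the element-pitch parameter (spacing between sensor elements) is an assumption of this sketch:

```python
def object_width(sensor_row, pixel_pitch_mm=1.0):
    """Width of an object from a 1-D binary sensing-array reading:
    span between the first and last blocked elements, times the pitch."""
    blocked = [i for i, v in enumerate(sensor_row) if v]
    if not blocked:
        return 0.0  # no object present
    return (blocked[-1] - blocked[0] + 1) * pixel_pitch_mm
```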


Robot Welding System with Vision

A teach box is used to position the end-effector at various points.

The terminal is used for communicating with the robot and also for indicating system conditions and for editing and executing the robot work program.

The welding path is traversed by the robot manipulator and can be programmed using programming languages such as VAL, RAIL, etc.

The various welding parameters, such as feed rate, voltage, current, etc., can be incorporated in the program.


The data is processed by a set of algorithms; the relevant information is analyzed by the computer and compared with the programmed path for welding.

Any deviations from the programmed path can be taken care of by the system itself, giving welds of uniform and consistent quality.
