
    A REPORT

    ON

    COMPARATIVE IMPLEMENTATION OF LASER VISION BASED SEAM

    TRACKING SYSTEM FOR WELDING AUTOMATION USING PC AND FPGA

    BY

Adithya Reddy Gangidi (I.D. No. 2005P3PS055)

    Under the Guidance of

    Prof. Jagmohan Singh

In Partial Fulfillment of the Course

    Lab Oriented Project BITSGC331

    BIRLA INSTITUTE OF TECHNOLOGY AND SCIENCE, PILANI

    JULY, 2007


Table of Figures

Fig. 1: Different top-down and bottom-up methods in Simulink modeling
Fig. 2: System Generator model example
Fig. 3: System Generator design flow
Fig. 4: Spartan 3E XC3S500E
Fig. 5: Apparatus of the system with interconnections
Fig. 6: Simulink window showing the calibration process; the difference between the points of sudden rise and fall in the graph gives the length of the specimen in pixels
Fig. 7: Gray-scale intensity computation expression
Fig. 8: Five-line buffer modeled with System Generator
Fig. 9: 5x5 two-dimensional filtering modeled with 5 MACs in parallel
Fig. 10: Difference between the unfiltered and filtered images after edge detection
Fig. 11: Edge-detected image plotted with the debugger
Fig. 12: Complete block diagram with the M-code block used for parameter measurement
Fig. 13: M-code written to get parameters from the edge-detected image
Fig. 14: Co-ordinates plotted on a graph
Fig. 15: Final results: the edge and root centres generated by the FPGA for a test image, as observed from the MATLAB workspace with the help of ChipScope Pro
Fig. 16: System Generator model made for motor control
Fig. 17: Gateway In and Gateway Out blocks showing the pin locations supplied to them, used in automatic generation of the user constraints file
Fig. 18: Stepper motor inputs to the four wires; (1100), (0110), (0011) and (1001) in this order give clockwise rotation
Fig. 19: Circuit connected for driving the stepper motors
Fig. 20: Block diagram of the interface between the ADSP-BF533 kit and the FPGA Spartan-3E kit
Fig. 21: Design summary of the FPGA-based system


    ACKNOWLEDGEMENT:

First of all, my heartfelt thanks to Prof. Jagmohan Singh, who has been a constant source of guidance and inspiration.

I am also thankful to Mr. Ananthan, DGM, WRI, who provided the images used to test our system, and grateful to Abhishek Agarwal for rendering help and support at every stage of the project. We also thank Mr. Patrick Hain of Cypress Semiconductors for providing the image sensor.

    Finally, I would like to thank my parents and friends for all their support and encouragement.


    EXECUTIVE SUMMARY:

Visual checking of weld joints by operators is no longer sufficiently reliable or cost effective. Seam tracking technologies that use machine vision are often cost effective and enhance the productivity of the welding process. This project deals with designing such a seam tracking system.

PC-based or dedicated-processor-based machine vision systems can be used for online control of the welding process. In this project, a seam tracking system that uses a high-intensity laser source, an image sensor and an FPGA is designed. The FPGA-based system is compared with the PC-based system already developed at the Welding Research Institute. The system takes images at regular intervals of time and estimates the edge gap and root gap of a groove. The position of the edge centre is calculated from these, and feedback is given to the torch so as to keep it at the edge centre.

The images are processed by the FPGA chip and the corresponding feedback is given to a stepper motor. The Verilog HDL code is generated using the Simulink Video and Image Processing blockset, System Generator and Xilinx ISE 9.2. The stepper motor is interfaced to the FPGA port through a JTAG cable and an interface circuit, for which a PCB was routed. The machine vision algorithm is thus implemented in real time (27 fps) on the Spartan 3E FPGA.


    1. INTRODUCTION:

    The use of robots in manufacturing industry has increased rapidly during the past decade. Arc

    welding is an actively growing area and many new procedures have been developed for use

    with new lightweight, high strength alloys. One of the basic requirements for such

    applications is seam tracking. Seam tracking is required because of the inaccuracies in joint

fit-up and positioning, warpage, and distortion of the workpiece caused by thermal expansion and stresses during welding. These effects cannot be compensated for by most

    robotic welding machines, especially those using open-loop control, and frequently lead to

    poor welding quality. As a consequence, to maintain weld integrity, automated seam tracking

    based on real-time feedback is required.

    Robotic welding sensors, researched during the past few years, have used a variety of

    different techniques for seam tracking. The most commonly used techniques include acoustic,

    magnetic, electrical, and mechanical methods. The electrical through-the-arc sensor based on

    the welding arc properties is the dominant method, where the current (voltage) through the

    weld arc is used to control the position of the welding torch. However, the preferred systems

    are based on optical or visual sensors.

    These vision based seam trackers have many advantages. The sensor system is less sensitive

    to electrical and magnetic interferences from the welding arc compared to the traditional

    through-the-arc method. Also the vision sensor can provide more information about the joint

    than merely the seam position. It is possible for the same sensor system to achieve seam

    tracking and also obtain dimensional parameters about the seam, during a single pass.

    With the sensor mounted on the welding torch, the seam tracking can be realised online. The

problems encountered by early vision system applications were the speed of processing and the cost of the hardware needed to implement such systems. These have since become less important factors, as developments in parallel processing have decreased both the cost of vision system hardware and the processing time. A PC or dedicated hardware can

    be used to process the signal and thus control the process of welding.

Need for an FPGA-based system:

PC-based machine vision systems can be used for online control of the welding process. A seam tracking system that uses a high-intensity laser source, a camera and a PC was developed by me during a summer internship under Mr. Ananthan, DGM, Welding Research Institute. That system takes images at regular intervals of time and estimates the edge gap and root gap of a groove. The position of the edge centre is calculated and feedback is given to the torch so as to keep it at the edge centre. MATLAB is used as the coding platform.

The PC-based machine vision system takes 0.41 seconds for image acquisition and the subsequent control of the torch, which is not real time. This processing time limits the use of the system at higher welding speeds. Dedicated hardware such as an FPGA or a DSP processor, when used to process the images, can serve much higher welding speeds.

The FPGA is chosen for the implementation because of its ability to work at a very high clock rate, which is particularly useful for acquiring images at a high frame rate. A DSP processor, which shares a single bus for all its operations, would be slower, especially when acquiring images.


2. SIMULINK, SYSTEM GENERATOR AND SPARTAN 3E:

2.1 Introduction:

    A. SIMULINK:

    Simulink is software for modeling, simulating, and analyzing dynamic systems. It supports

    linear and nonlinear systems, modeled in continuous time, sampled time, or a hybrid of the

    two. Systems can also be multirate, i.e., have different parts that are sampled or updated at

    different rates.

    The following topics highlight key aspects of Simulink:

    Tool for Model-Based Design

    Tool for Simulation

    Tool for Analysis

    Tool for Model-Based Design:

    With Simulink, you can move beyond idealized linear models to explore more realistic

    nonlinear models, factoring in friction, air resistance, gear slippage, hard stops, and the other

    things that describe real-world phenomena. Simulink turns your computer into a lab for

    modeling and analyzing systems that simply wouldn't be possible or practical otherwise,

    whether the behavior of an automotive clutch system, the flutter of an airplane wing, the

    dynamics of a predator-prey model, or the effect of the monetary supply on the economy.

    For modeling, Simulink provides a graphical user interface (GUI) for building models as

block diagrams, using click-and-drag mouse operations. Simulink includes a comprehensive block library of sinks, sources, linear and nonlinear components, and connectors. You can

    also customize and create your own blocks.

    Models are hierarchical, so you can build models using both top-down and bottom-up

    approaches. You can view the system at a high level, and then double-click blocks to go

    down through the levels to see increasing levels of model detail. This approach provides

    insight into how a model is organized and how its parts interact.

Figure 1: Different top-down and bottom-up methods in Simulink modeling


Tool for Simulation:

    After you define a model, you can simulate it, using a choice of mathematical integration

    methods, either from the Simulink menus or by entering commands in the MATLAB

    Command Window. The menus are convenient for interactive work, while the command line

    is useful for running a batch of simulations (for example, if you are doing Monte Carlo

    simulations or want to sweep a parameter across a range of values). Using scopes and other

    display blocks, you can see the simulation results while the simulation runs. In addition, you

    can change many parameters and see what happens for "what if" exploration. The simulation

    results can be put in the MATLAB workspace for post-processing and visualization.

    Tool for Analysis:

    Model analysis tools include linearization and trimming tools, which can be accessed from

    the MATLAB command line, plus the many tools in MATLAB and its application toolboxes.

    Because MATLAB and Simulink are integrated, you can simulate, analyze, and revise your

    models in either environment at any point.

B. XILINX SYSTEM GENERATOR:

    System Generator is a DSP design tool from Xilinx that enables the use of The Mathworks

    model-based design environment Simulink for FPGA design. Designs are captured in the

DSP-friendly Simulink modeling environment using a Xilinx-specific blockset.

Figure 2: System Generator model example

    The System Generator Design Flow:


    System Generator works within the Simulink model-based design methodology. Often an

    executable spec is created using the standard Simulink block sets. This spec can be designed

    using floating-point numerical precision and without hardware detail. Once the functionality

    and basic dataflow issues have been defined, System Generator can be used to specify the

hardware implementation details for the Xilinx devices. System Generator uses the Xilinx DSP blockset for Simulink and will automatically invoke Xilinx Core Generator to generate

    highly-optimized netlists for the DSP building blocks. System Generator can execute all the

downstream implementation tools to produce a bitstream for programming the FPGA. An

    optional testbench can be created using test vectors extracted from the Simulink environment

    for use with ModelSim or the Xilinx ISE Simulator.

    Figure 3: System Generator Design flow

C. SPARTAN 3E FPGA (XC3S500E):


Figure 4: Spartan 3E XC3S500E

    Spartan-3E FPGA Features and Embedded Processing Functions:

    The Spartan-3E Starter Kit board highlights the unique features of the Spartan-3E FPGA

    family and provides a convenient development board for embedded processing

applications. Its features include:

    Spartan-3E specific features

Parallel NOR Flash configuration
MultiBoot FPGA configuration from Parallel NOR Flash PROM

    SPI serial Flash configuration

    Embedded development

    MicroBlaze 32-bit embedded RISC processor

    PicoBlaze 8-bit embedded controller

    DDR memory interfaces


3. APPARATUS:

    3.1 Apparatus used:

    Laser line source

Image sensor, CYII5SM1300AB-QDC
Field Programmable Gate Array (FPGA Spartan 3E XC3S500E)

    Electronic components for interfacing the motor to FPGA

    Stepper motor

    JTAG connectors

    Figure 5: Apparatus of the system with interconnections

3.2 Description of apparatus arrangement:

A laser beam is projected onto a measurement surface, where it is scattered from the surface, and its image is detected by an optical detector, a camera. The images contain the detail of the groove measurements. The FPGA processes the images, infers the details and gives feedback to the motor.

3.3 Laser source:

A monochromatic laser source is used so that the ambient light and the weld torch light do not have much effect on the source light; information at a precise point can thus be obtained correctly.

3.4 Image sensor:


The CYII5SM1300AB-QDC image sensor serves well for acquiring images of a welding groove. The maximum acquisition rate possible is around 27 fps. The sensor's digital output is SPI compatible, and JTAG cables are used for the interface.

3.5 Spartan 3E (XC3S500E) FPGA:

The Xilinx XC3S500E Spartan-3E FPGA has up to 232 user-I/O pins, a 320-pin FBGA package and over 10,000 logic cells. Ten I/O pins (8 for data and 2 for handshaking) are used for interfacing the image sensor to the FPGA. The FPGA takes the image data serially from the image sensor, using buffers to hold it temporarily while it is processed.

3.6 Interface between FPGA and stepper motor:

The electronic components are soldered on a PCB so as to make a secure interface between the FPGA's JTAG port and the stepper motor. L293D motor driver ICs form the heart of the device. The interface is connected to the port through a JTAG female-female connector. A detailed description of the circuit is given later.

3.7 Stepper motor:

The stepper motor used is a 12 V motor with a full step angle of 1.8 degrees. The four output lines of the interface drive the stepper motor, which corrects the torch position so as to align it with the edge centre.


4. ALGORITHM:

The algorithm proceeds in the following steps (a MATLAB sketch of one pass of the pipeline follows the list):

Calibrate the image with the help of a set of blocks of known dimensions.
Get the image from the image sensor and process it as it arrives.
Convert the true colour image into a gray-scale image.
Select a region of interest for further processing.
Filter the image to remove noise.
Detect the edges of the image using an edge detection operator.
Once edge detection is done, find the various parameters required.
Calculate the pixel values of the edge and root centres.
Estimate the deviation in successive values of the edge centre using the calibration measurements.
Give corresponding feedback to the motor so as to correct the deviation.
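The fragment below is an illustrative MATLAB sketch only, not the HDL model: it assumes a frame is already available as the RGB array img, that calib holds the mm-per-pixel factor from calibration, and that old_centre is the centre found in the previous frame. The crude centre estimate in the last lines is a stand-in for the parameter evaluation described in Section 5.7.

% One pass of the seam-tracking pipeline (illustrative sketch only).
% Assumed inputs: img (RGB frame), calib (mm per pixel), old_centre.
g   = rgb2gray(img);                  % true colour -> gray scale
roi = g(100:163, 100:163);            % example 64x64 region of interest
f   = imfilter(roi, ones(5,5)/25);    % 5x5 smoothing filter
e   = edge(im2bw(f, 0.48), 'canny');  % binarise, then detect edges
[r, c] = size(e);
prof = zeros(1, c);
for j = 1:c                           % bottom-most white pixel per column
    k = find(e(:, j), 1, 'last');
    if ~isempty(k)
        prof(j) = r - k;
    end
end
centre = (find(prof, 1, 'first') + find(prof, 1, 'last')) / 2;  % crude estimate
deviation_mm = (centre - old_centre) * calib;  % deviation fed back to the motor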


5. IMPLEMENTATION OF ALGORITHM USING FPGA:

5.1 Calibration:

The result of processing the acquired image is the deviation of the edge centre value in terms of number of pixels. This cannot be directly given as an input to the feedback mechanism, so we need to know the length that each pixel corresponds to. For this purpose, before starting the process we put in a specimen of known length and measure the number of pixels it corresponds to. This is repeated for different lengths and the results are combined to obtain a calibration curve.

The figure below (Fig. 6) shows the calibration process. After processing, the image of a specimen of known length is converted to a binary image. From the profile of the edge we get the value of the length in terms of pixels. The process is repeated for various lengths and a calibration factor is thus calculated.
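As an illustration, assuming the pixel counts for a few specimens of known length have already been measured, the calibration factor can be obtained as the slope of a least-squares line fit (the numbers below are made-up examples, not measured data):

% Calibration sketch: known specimen lengths versus measured pixel counts.
len_mm = [10 20 30 40];            % known specimen lengths (example values)
len_px = [37 74 112 149];          % lengths measured in pixels (example values)
p = polyfit(len_px, len_mm, 1);    % fit the calibration curve
mm_per_pixel = p(1);               % calibration factor (slope)
% a deviation of d pixels then corresponds to d*mm_per_pixel millimetres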

Figure 6: Simulink window showing the calibration process. The difference between the points of sudden rise and fall in the graph gives the length of the specimen in terms of pixels. Images are processed on the FPGA and ChipScope is used to view the processed images on the PC screen.

    5.2 Image acquisition:


The image is taken from the image sensor and transferred to the FPGA through 10 I/O pins, 8 for data and 2 for handshaking. The corresponding pin numbers are given as an input to the System Generator model used to generate the HDL code, and Xilinx ISE uses them to generate the user constraints file. The image data is thus obtained serially from the image sensor.

    5.3 Conversion to gray scale image:

RGB images are converted to gray scale by eliminating the hue and saturation information while retaining the luminance. This is achieved by constructing a block for the conversion in System Generator, taking care that a minimum number of multipliers is used.

Gray-scale intensity computation:

Y = 0.299*R + 0.587*G + 0.114*B (three multipliers)
  = 0.299*(R-G) + 0.114*(B-G) + G (two multipliers, since the coefficients sum to 1)

This stage is modeled in System Generator with the multiplier and adder blocks present in the library.
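The rearrangement can be verified numerically; the identity holds because the three weighting coefficients sum to 1:

% Check of the two-multiplier form against the three-multiplier form.
R = 200; G = 150; B = 100;                % example pixel values
y3 = 0.299*R + 0.587*G + 0.114*B;         % three multipliers
y2 = 0.299*(R - G) + 0.114*(B - G) + G;   % two multipliers
disp([y3 y2])                             % both print 159.25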


    Figure 7: Gray-scale Intensity computation expression

    5.4 Selection of region of interest:

Processing unnecessary regions of the image would increase the processing time. Further, since our region of interest remains almost constant, we can crop the region of interest and do the processing on the selected region only.

To implement this on the FPGA, an acquire bit is generated from the region of interest (taken as input from the user); it flags which of the bytes coming from the image sensor fall within the region of interest.
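A sketch of this logic, in the kind of MATLAB style accepted by the System Generator M-code block; the counter inputs and ROI bound parameters are assumptions about how the design is parameterised:

function acquire = roi_acquire(row, col, r0, r1, c0, c1)
% Raises the acquire bit only while the current pixel counters (row, col)
% fall inside the user-supplied region of interest [r0,r1] x [c0,c1].
if row >= r0 & row <= r1 & col >= c0 & col <= c1
    acquire = 1;
else
    acquire = 0;
end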


    5.5 Filtering the image:

Care has to be taken that the image is free from noise (due to surface reflections) before performing edge detection, so filtering is a very important and sensitive step in the algorithm. A simple 5x5 weighted smoothing filter serves our purpose.

To implement this on the FPGA we need to store the sequentially arriving input samples in a buffer. Since the filtering here uses a 5x5 mask, we need to buffer 5 lines of information. The end sample of each buffer line is sent as an input to a 5-tap MAC FIR filter. The 5x5 operator is computed by using 5 MAC FIR filters in parallel and then summing the results. The absolute value of the FIR output is computed and the data is narrowed to 8 bits.

Figure 8: Five-line buffer modeled with System Generator

smooth_mask = [1 1 1 1 1; ...
               1 5 5 5 1; ...
               1 5 44 5 1; ...
               1 5 5 5 1; ...
               1 1 1 1 1];
smoothDiv = 1/100;   % the mask coefficients sum to 100
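What the buffered hardware computes can be checked in software: each row of the mask is one 5-tap MAC filter, the line buffers supply the five row-delayed samples, and summing the five filter outputs is equivalent to a 2-D convolution with the whole mask. A sketch, using a standard MATLAB demo image as the input:

% Software model of the 5-line-buffer / 5-MAC filtering stage.
img = double(imread('cameraman.tif'));  % example gray-scale input
smooth_mask = [1 1 1 1 1; 1 5 5 5 1; 1 5 44 5 1; 1 5 5 5 1; 1 1 1 1 1];
smoothDiv   = 1/100;                    % coefficients sum to 100
out = abs(conv2(img, smooth_mask * smoothDiv, 'same'));
out = uint8(min(out, 255));             % narrow the result to 8 bits
% each mask row is one MAC FIR filter; conv2 sums the five row results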


Figure 9: 5x5 two-dimensional filtering modeled with 5 MACs in parallel


Figure 10: Difference between the unfiltered and filtered images after edge detection

    5.6 Edge detection:

We now carry out edge detection on the filtered image so that we get a clear profile of the focused light; this helps us in getting the required parameters.

Edge detection is a type of filtering itself. It can be implemented with the same 5x5 MAC filter used above for the averaging filter, simply by changing the mask:

edge_mask = [ 0  0  0  0  0; ...
              0 -1 -1 -1  0; ...
              0 -1  8 -1  0; ...
              0 -1 -1 -1  0; ...
              0  0  0  0  0];
edgeDiv = 1;


Figure 11: Edge-detected image plotted with the debugger

    5.7 Parameter evaluation:

The output of edge detection is a binary image with the required edges detected. We now need the exact graph of the edge for parameter evaluation; the parameter ultimately required from each image is the location of the edge centre.

To graph the edge, we search for a white pixel in each column of pixels and store the row number of that white pixel in an array. In our case we begin the search from the bottom of a column and continue until we get the first white pixel, in the nth row; it is this n that we store in an array variable. The array variable then readily graphs the edge profile.

This implementation requires looping statements to be executed on the FPGA, and constructing such a block from conventional System Generator blocks would take huge effort. As an alternative, the M-code block of System Generator is used: it can compile certain specific MATLAB constructs (including looping statements) into HDL.
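The actual M-code used is reproduced in Figure 13. As an illustration of the idea only, a looping function of the kind the block can compile might look like the sketch below; the streaming interface of the real block is omitted, and col_pixels stands for one column of the binary edge image:

function n = first_white_from_bottom(col_pixels, nrows)
% Returns the row index of the first white pixel met when scanning one
% column of the binary edge image from the bottom (0 if none is found).
n = 0;
for i = nrows:-1:1
    if n == 0 & col_pixels(i) == 1
        n = i;
    end
end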

The full System Generator block diagram with the M-code block can be seen in the following

    figure.


Figure 12: Complete block diagram with the M-code block used for parameter measurement

Figure 13: M-code written to get parameters from the edge-detected image


Figure 14: Co-ordinates plotted on a graph

Figure 15: Final results: the edge and root centres generated by the FPGA for a test image, as observed from the MATLAB workspace with the help of ChipScope Pro


    5.8 Feedback:

For every image, the edge centre value is compared to that of the previous image, and the difference, converted with the help of the calibration data, is given as a correction to the feedback mechanism. The stepper motor, by moving in either the clockwise or the anti-clockwise direction, corrects the torch position. The torch is thus always kept aligned with the edge centre.
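In outline the feedback computation is small. A sketch, where mm_per_pixel comes from the calibration of Section 5.1 and mm_per_step, the torch displacement per motor step, is an assumed property of the drive mechanics:

% Feedback sketch: pixel deviation between successive frames -> motor steps.
deviation_px = centre_new - centre_old;      % from parameter evaluation
deviation_mm = deviation_px * mm_per_pixel;  % apply calibration factor
steps = round(deviation_mm / mm_per_step);   % mm_per_step: assumed drive constant
direction = sign(steps);                     % +1 clockwise, -1 anti-clockwise
% |steps| step commands in the given direction are then issued to the motor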


6. GENERATION OF HDL CODES FOR STEPPER MOTOR CONTROL:

The stepper motor moves clockwise when the FPGA outputs four-bit data in the sequence [1100; 0110; 0011; 1001]; that is, four sets of four-bit data are sent out in sequential order, as shown in Fig. 18. If the order is reversed, the motor moves anti-clockwise.

At first the code was written directly in Verilog HDL. Though coding the sequence generator was simple, integrating this module with the HDL code generated for the other steps of the algorithm was complicated.

For this reason, the stepper motor control module was also made with System Generator blocks, and HDL code was then generated for the combined system.
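For illustration, the sequence generator can be expressed as a small MATLAB function of the kind that maps naturally onto System Generator blocks; the persistent index is an assumption about how the state is held, the real model keeping it in a register:

function out = step_sequence(enable, dir)
% Emits the next four-bit stepper pattern on each call.
% dir = +1 steps 1100 -> 0110 -> 0011 -> 1001 (clockwise);
% dir = -1 walks the sequence in reverse (anti-clockwise).
persistent k;
if isempty(k)
    k = 1;
end
patterns = [1 1 0 0; 0 1 1 0; 0 0 1 1; 1 0 0 1];
if enable
    k = mod(k - 1 + dir, 4) + 1;   % advance with wrap-around
end
out = patterns(k, :);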

The model for stepper motor control through the FPGA is as follows:

1. The model has a reset pin and bi-directional control.
2. This makes it easy to integrate with the image processing algorithm.
3. We can run the FPGA-based stepper at variable speeds by adjusting the sample times.
4. Corresponding screenshots are as follows:

Figure 16: System Generator model made for motor control


Figure 17: Gateway In and Gateway Out blocks showing the pin locations supplied to them, which are used in automatic generation of the user constraints file


7. HARDWARE INTERFACE BETWEEN STEPPER MOTOR AND FPGA:

7.1 Stepper motor:

A stepper motor is a digital motor: it moves in discrete steps as it traverses through 360 degrees, which gives it the advantage of precise rotation. The stepper motor we are using has a step angle of 1.8 degrees. It has four input wires which need to be energized in a particular order to move the motor in the desired fashion.

Figure 18: Stepper motor inputs to the four wires; (1100), (0110), (0011) and (1001) in this order give clockwise rotation. One output, for example (0011), moves the motor by one step.

7.2 The interface circuit:

As mentioned above, the stepper motor moves clockwise when its inputs are [1 1 0 0; 0 1 1 0; 0 0 1 1; 1 0 0 1]; that is, four sets of four-bit data sent out in sequential order. If the order is reversed, the motor moves anti-clockwise.

The Spartan 3E FPGA kit is programmed, using the Xilinx ISE software, with HDL code that generates the above sequence.

The sequence is output on the JTAG pins, which in turn are connected to the hardware interface circuit. The circuit's output drives the stepper motor.


Figure 19: Circuit connected for driving the stepper motors


8. INTERFACING THE FPGA SPARTAN 3E AND THE ADSP-BF533:

Systems implementing machine vision algorithms require huge amounts of processing power. Though the FPGA is fast compared to the ADSP-BF533, a system that interfaces the Blackfin processor and the Xilinx FPGA is very useful when the algorithm demands silicon resources beyond the capability of the FPGA. The FPGA, being very fast at data acquisition, acquires images at 27 fps from the image sensor, preprocesses them and hands the images over to the DSP.

The DSP, which is designed to multiply and accumulate (that is, to filter), processes the images using image processing algorithms such as RGB-to-gray conversion, filtering and edge detection. The region of interest in the image is extracted and the edge-detected region (a binary image) is sent back to the Xilinx FPGA over an interface which makes the Blackfin processor and the FPGA work in synchronization with each other.

The FPGA then calculates the parameters of the welding groove and controls the stepper motors accordingly.

Figure 20: Block diagram showing the interface between the ADSP-BF533 kit and the FPGA Spartan-3E kit


    9. APPLICATIONS OF THE PROJECT:

The project primarily aims at developing a very fast and precise system for automating the welding process. It finds particular application in the boiler construction and shipbuilding industries.

In addition, many more real-time applications can be developed using the image processing codes that have been implemented on the Spartan 3E FPGA kit, for example:

Mango grading
Fingerprint scanning
Handwriting recognition
IP-based robotics


10. RESULTS AND ANALYSIS:

For the PC-based system:

Image acquisition, processing and control are achieved in 0.41 seconds (on a PC with a 1.66 GHz Core Duo processor).

The stepper motor was controlled with a maximum speed of 50 rpm and a precision of 1.8 degrees.

For the system using the Xilinx Spartan 3E FPGA:

The Verilog HDL code was generated for the full algorithm implementation with a 512x512 image input (64x64 region of interest). The design summary and the Xilinx resource estimator show the silicon resources needed for the implementation:

Timing Summary:
---------------
Speed Grade: -5

Minimum period: 8.862ns (Maximum Frequency: 112.841MHz)
Minimum input arrival time before clock: 1.731ns
Maximum output required time after clock: 4.040ns
Maximum combinational path delay: No path found

Slack: 15.310ns
Source: default_clock_driver/xlclockdriver_5/pipelined_ce.ce_pipeline[0].ce_reg/has_latency.fd_array[1].reg_comp_1/fd_prim_array[0].rst_comp.fdre_comp (FF)
Destination: relief_x0/virtex2_5_line_buffer_1a89d64527_x0/virtex2_line_buffer_173185b8a3_x0/counter2/comp10.core_instance10/BU2/U0/the_addsub/no_pipelining.the_addsub/i_lut4.i_lut4_addsub/i_q.i_simple.qreg/fd/output_1 (FF)
Data Path Delay: 4.690ns (Levels of Logic = 3)
Source Clock: clk rising at 0.000ns
Destination Clock: clk rising at 20.000ns

Figure 21: Design summary of the FPGA-based system

The above design summary indicates that less than 50% of the resources of the Spartan 3E FPGA (XC3S500E) are used for the given model.

The system can operate at a maximum clock speed of 112 MHz. The images used are of 512x512 resolution, so each image is 512x512x3 bytes, which equals 0.786 MB; at one byte per clock cycle this corresponds to about 142 images per second.

The stepper motor was controlled using the Spartan 3E FPGA kit at 500 rpm with a precision of 1.8 degrees.

Analysis of results:

From the above results it is evident that the PC-based system can process at a maximum of 3 frames per second, while the FPGA can process at a maximum of 142 images per second. In practice this speed is limited by the image sensor, which delivers 27 fps. Even at 27 frames per second, the new system is 9 times faster than the PC-based system.

The stepper motor can be moved at speeds of around 500 rpm with the FPGA, while only 50 rpm can be achieved by the PC-based control.


11. CONCLUSION:

A real-time system for automated welding has been implemented using an FPGA. A seam tracking system which uses a high-intensity laser source, an image sensor and an FPGA has been developed. The results and analysis indicate that the FPGA-based implementation of the seam tracking system is a good alternative to the PC-based system.

The PC-based system processes 3 frames per second, while the FPGA can acquire and process 27 frames per second. Stepper motor control by the FPGA is much faster than that by the PC. This indicates that the FPGA can be used to implement complex machine vision algorithms in real time.

If the algorithm is so complex that it falls beyond the resource availability of the FPGA, it can be implemented on the designed interface between the Xilinx Spartan 3E FPGA and the ADSP-BF533. The FPGA acquires and pre-processes the image and hands it over to the DSP for any filtering involved in the algorithm; the FPGA then takes the filtered image back and controls the hardware based on the parameter measurements. This interface is well suited to the implementation of image processing algorithms.




APPENDIX A:

MATLAB CODE FOR THE PC-BASED IMPLEMENTATION:

1. Initialization of video: contents of file initial.m

%--------------------- initialisation of video for calibration ---------
vid = videoinput('winvideo', 1, 'RGB24_352x288');
% creation of the video input object
preview(vid)
% previews the information shot

2. Calibration: contents of file calibration.m

%--------------------- once the arrangement is ready, execute this code through the GUI
p = getsnapshot(vid);
% get the snapshot
figure, imview(p);
% for viewing pixel numbers; this will help in cropping
pe = imcrop(p);
% manual cropping by the user
p1 = rgb2gray(pe);
% converting the normal image into gray scale


background = imopen(p1, strel('square', 95));
% SE = strel('square',W) creates a square structuring element whose width is W pixels;
% imopen(IM,SE) performs morphological opening on the grayscale image IM with SE
p4 = imsubtract(p1, background);
% subtracts the background estimate from the image
p5 = imadjust(p4, stretchlim(p4), [0 1]);
h = ones(5,5)/25;
p6 = imfilter(p5, h);
% filters the image with a 5-by-5 filter with equal weights, called an averaging filter
level = graythresh(p6);
% graythresh(I) computes a global threshold that can be used to convert an
% intensity image to a binary image
bw = im2bw(p6, .48);
% converts the image to a binary image
s = edge(bw, 'canny');
% carries out edge detection using the specified algorithm
imview(s)

%---------------------- code to plot the edge into an array
t = 0;
c = 0;
trow = 0;


trow1 = 0;
[r1, c1] = size(s);
for j = 1:c1
    for i = 1:r1
        if s(i,j) == 1
            c = c + 1;
            if c == 1
                trow = i;
                tcol = j;
            elseif c == 2
                trow1 = i;
                tcol1 = j;
            end
            c = 1;   % keep c at 1 so trow1 tracks the bottom-most white pixel
        end
    end
    distance(j) = r1 - trow1;
    t(1,j) = distance(j);
    c = 0;
end
%--------------------- close the preview window
closepreview;

3. Image processing and measurement: contents of measurement.m

% initialise the video with the initial.m file; a frame is then available as array i
i = getsnapshot(vid);


old = (p1 + p4)/2;
i1 = rgb2gray(i);
i2 = imcrop(i1);
background = imopen(i2, strel('square', 95));
i4 = imsubtract(i2, background);
i5 = imadjust(i4, stretchlim(i4), [0 1]);
h = ones(5,5)/25;
i6 = imfilter(i5, h);
level = graythresh(i6);
bw = im2bw(i6, .48);
s = edge(bw, 'canny');
imview(s)
t = 0;
c = 0;
trow = 0;
trow1 = 0;
[r1, c1] = size(s);
for j = 1:c1
    for i = 1:r1
        if s(i,j) == 1
            c = c + 1;
            if c == 1
                trow = i;
                tcol = j;
            elseif c == 2
                trow1 = i;
                tcol1 = j;
            end
            c = 1;   % keep c at 1 so trow1 tracks the bottom-most white pixel
        end
    end
    distance(j) = r1 - trow1;
    t(1,j) = distance(j);
    c = 0;
end
new = (p1 + p4)/2;

4. Stepper motor control: contents of stepper.m

steps = [1 1 0 0; 0 1 1 0; 0 0 1 1; 1 0 0 1];
i = 1;
parport = digitalio('parallel', 'LPT1');
% creates a digital I/O object on the parallel port
line = addline(parport, 0:3, 'out');
% declares the output lines of the parallel port
deviation = new - old;
direction = sign(deviation);
% depending on the sign of the deviation, the motor is driven clockwise or
% anti-clockwise
if direction == 1
    for j = 1:deviation
        pval = steps(i,:);
        i = i + 1;
        if i > 4
            i = 1;
        end
        putvalue(parport, pval);
        % puts the row of the matrix specified by pval on the parallel port
        gval = getvalue(parport);
        tic
        pause(0.001);


        toc
    end
end
if direction == -1
    for j = 1:(-deviation)
        pval = steps(i,:);
        i = i - 1;
        if i < 1
            i = 4;
        end
        putvalue(parport, pval);
        pause(0.001);
    end
end


    APPENDIX B:

Figure 22: RTL schematic of the system

Figure 23: Screenshot showing the limitation of the ModelSim Starter Edition; no surprise, as ours is a data-dominated design


Figure 24: Xilinx ISE 9.2 environment used to simulate and dump the code onto the Xilinx FPGA. It takes the generated Verilog HDL files as input and produces the bit file needed to configure the device.