
http://irc.nrc-cnrc.gc.ca

Integration of Machine Vision and Virtual Reality for Computer-Assisted Micro Assembly

RR-244

Fauchreau, C.; Pardasani, A.; Ahamed, S.; Wong, B.

December 2007

The material in this document is covered by the provisions of the Copyright Act, by Canadian laws, policies, regulations and international agreements. Such provisions serve to identify the information source and, in specific instances, to prohibit reproduction of materials without written permission. For more information visit http://laws.justice.gc.ca/en/showtdm/cs/C-42

Les renseignements dans ce document sont protégés par la Loi sur le droit d'auteur, par les lois, les politiques et les règlements du Canada et des accords internationaux. Ces dispositions permettent d'identifier la source de l'information et, dans certains cas, d'interdire la copie de documents sans permission écrite. Pour obtenir de plus amples renseignements : http://lois.justice.gc.ca/fr/showtdm/cs/C-42


Integration of Machine Vision and Virtual Reality for Robot Assisted Micro Assembly, Page 2

Integration of Machine Vision and Virtual Reality for Computer-Assisted Micro Assembly

ABSTRACT

The technical report describes the design and implementation of the machine vision and virtual reality modules and their integration with the motion system of the in-house-developed micro assembly system. Though the underlying techniques can be applied to a range of micro assembly scenarios, they are demonstrated on a “peg in a hole” assembly that involves picking pins from holes in a block and then inserting them into empty holes at desired locations. Automating these pick-and-place operations requires addressing machine vision issues, in particular sensing the positions of the holes and pins in the block.

The work described in this technical report is a continuation of previous work on controlling the motion system hardware through a virtual reality interface and machine vision [PORCIN-RAUX, Pierre-Nicolas, Motion Control of Micro assembly System using Virtual Modeling and Machine Vision, Technical Report]. The current work advances the previous human-assisted pick-and-place procedure to achieve full automation of the “peg in a hole” assembly.

The first part of this report describes the technical issues that need to be addressed to achieve full automation of the “peg in a hole” assembly. The alternative solutions are listed with their advantages and drawbacks to help select the final solution. The second part explains the National Instruments LabVIEW® Virtual Instrument (VI) and subVIs and describes how to execute the LabVIEW® application program. The last section of the document describes the procedure for creating and using a VRML model in a LabVIEW® environment.


Table of contents

1  Introduction....................................................................................................7 

2  Machine vision assisted automated pick and place .......................................9 

2.1  Finding the parts and their orientation....................................................9 

2.1.1  Finding the position of the block.....................................................9 

2.1.2  Finding the accurate position of the block .....................................11 

2.1.3  Picking and placing the pins ..........................................................13 

2.1.4  Fine tuning the machine vision system for accuracy.....................18 

2.1.4.1  Calibration of the cameras .....................................................18 

2.1.4.2  Keeping the accuracy along the cycle....................................23 

3  Procedure to run the demonstration ............................................................24 

3.1  Initializing the system............................................................................24 

3.2  Running the demonstration...................................................................26 

3.3  Operating the “vacuum station” and the Gassmann grippers ...............28 

4  VRML models in LabVIEW and their synchronization to the motion system ..29 

4.1  Import files from SolidWorks to a 3D scene..........................................29 

4.1.1  Example 1: Importing the base and the x slide..............................31 

4.1.2  Example 2: Importing the rotary table............................................35 

4.2  Initialize the model................................................................................38 

4.2.1  Calibration step 1 ..........................................................................38 

4.2.2  Calibration step 2 ..........................................................................39 

4.2.3  Import and initialization of the X, Y, and Rotary stages. ................41 

4.3  Set up the motions................................................................................43 

5  References ..................................................................................................45 


Table of Figures

Figure 1: Template tried for the pattern matching.................................................9 

Figure 2: Same snap shot with a black and a white background........................10 

Figure 3: Template used to find the block...........................................................10 

Figure 4: Image from the top camera before the processing ..............................11 

Figure 5: Parameters of the image processing...................................................11 

Figure 6: Processed image for use in the pattern matching................................12 

Figure 7: Template used to find the accurate position of the block.....................12 

Figure 8: Issues related to the pin recognition....................................................13 

Figure 9: Position returned by LabVIEW® when it cannot find the correct matching......14 

Figure 10: Arrays for the position of the pins ......................................................14 

Figure 11: Gripper fully opened on top of a hole.................................................15 

Figure 12: Cycle without collision issue (small angle).........................................15 

Figure 13: Cycle with a collision issue (angle bigger than 0.3rad) ......................16 

Figure 14: Collision issues pictures.....................................................................16 

Figure 15: Starting window of the NI Vision acquisition software........................18 

Figure 16: Acquisition board...............................................................................18 

Figure 17: Camera choice...................................................................................19 

Figure 18: Image of the whole rotary table..........................................................19 

Figure 19: Selecting “image calibration” in the “image” menu.............................19 

Figure 20: Selecting the type of calibration.........................................................20 

Figure 21: Changing the unit to mm for the calibration and choosing the center of the grid...............................................................20 

Figure 22: Specify calibration axis ......................................................................20 

Figure 23: Specifying the user defined four other points.....................................21 

Figure 24: Saving the calibration file...................................................................21 

Figure 25: Overwrite the previous file if needed..................................................22 

Figure 26: Icon for step 1....................................................................................24 

Figure 27: Icon for step 2....................................................................................25 

Figure 28: Icons for the demonstration...............................................................26 

Figure 29: Loading all the subVIs ........................................................................26 

Figure 30: Tabs of the interface..........................................................................26 

Figure 31: Icon for outputdrive_subVI.................................................................27 

Figure 32: Icon and initialization of the outputdrive subVI...................................28 

Figure 33: Hiding parts from an assembly..........................................................31 

Figure 34: Base and x stage base to export in wrl..............................................31 

Figure 35: Saving a SolidWorks part as wrl file...................................................32 

Figure 36: Checking the VRML options ..............................................................32 

Figure 37: LabVIEW code to import VRML files..................................................33 

Figure 38: Result if the two VRML files have been created from two different files in SolidWorks.....................................................33 

Figure 39: Result if the two VRML files have been created from the same SolidWorks assembly.........................................................34 

Figure 40: Origin issue for the rotary plate..........................................................35 

Figure 41: Open a part from an assembly...........................................................35 

Figure 42: Rotary plate file with its origin............................................................36 


Figure 43: SolidWorks measurements................................................................36 

Figure 44: How to match the offset in labVIEW ..................................................37 

Figure 45: 3D scene of the bottom stages ..........................................................37 

Figure 46: Calibration step 1 code......................................................................38 

Figure 47: SolidWorks measurement between the TCP and the center of the zoom lens...........................................................39 

Figure 48: Calibration step 2 reading the first position........................................39 

Figure 49: Calibration step 2 reading the second position of Zaber and computing the transformation..............................................40 

Figure 50: SolidWorks measurement of distance between the gripper and the rotary table.........................................................40 

Figure 51: Initialization of the X and Y stages.....................................................41 

Figure 52: Initialization of the X,Y, and Rotary stages ........................................41 

Figure 53: Updating the position of the virtual model of X stage.........................43 

Figure 54: Updating the rotary table....................................................................43 

Figure 55: Updating the whole VRML model ......................................................44 

Figure 56: Creating the motions..........................................................................44 

Figure 57: Main VI part 1 ....................................................................................52 

Figure 58: Main VI part 2 ....................................................................................52 

Figure 59: Main VI part 3 ....................................................................................53 

Figure 60: Main VI part 4 .....................................................................53 

Figure 61: Main VI part 5 .....................................................................54 

Figure 62: Main VI part 6 ....................................................................................54 

Figure 63: Aerotech stages control.....................................................................55 

Figure 64: VRML importation and update ...........................................................55 

Figure 65: Link between the hardware stages and the virtual stages .................56 

Figure 66: Control of the 3D scene (Virtual Model).............................................56 


Table of Appendices

APPENDIX A: Calibration of the system.............................................................46 

APPENDIX B: Main VI ........................................................................................52 

APPENDIX C: Find block....................................................................................57 

APPENDIX D: Find the holes..............................................................................59 

APPENDIX E: Computing holes position............................................................61 


1 Introduction

The trend toward and growing demand for the miniaturization of industrial and consumer products has resulted in an urgent need for assembly technologies that support economical methods of producing micro systems in large quantities.

More powerful, next-generation micro systems will combine sensors, actuators, mechanical parts, and computation and communication subsystems in one package. This will require significant research and development of new methods for transporting, gripping, positioning, aligning, fixturing and joining micro components.

In response, a team at NRC-IMTI began an initiative to develop computer-assisted assembly processes and a production system for the assembly of micro systems. Since 2005, the project team has designed, assembled, and calibrated an experimental test system for the precision assembly of micro-devices. The system makes use of high-precision motion stages, micromanipulators, grippers, and a machine vision system to provide a platform for computer-assisted assembly of microsystems [5.1]. Examples of microsystems that the production system is intended to assemble are pumps, motors, actuators, sensors and medical devices.

The basic operations in an automated assembly involve picking, moving, and mating the relevant parts. To develop software modules that can be reused for other applications (e.g. assembly of micro pumps, actuators, etc.), a simple “peg in a hole” assembly problem was chosen. The problem scenario involves picking 210-micron pins from holes in a plastic block using a micro gripper and then placing them in other empty holes in the same block.

The following describes the control loop for the pick and place of 210-micron pins:
1. Start the execution of the program after placing the plastic block on the positioning table.
2. The user fills in a two-dimensional array on the user interface to tell the system where the pins are and their destinations.
3. The system finds the exact location and 2D orientation of the plastic block:
3.1 The machine vision system finds the position and 2D orientation of the plastic block on the positioning table. Image analysis of the picture taken by the global camera computes the location of the block. The location thus obtained is approximate, but good enough for the next step of finding the exact location of the holes. This position information is then used to place the VRML model of the block in 3D space.
3.2 Now that the approximate location of the block on the rotary plate is known, the block is brought under the objective of the vertical zoom lens.

4. Next, move the gripper to the pick position.


The motion system is commanded to lift the micro gripper up to the clearance plane and then to move over the pick position.
5. Open the gripper: the gripper is commanded to open and to advance towards the pin, keeping the centre of the opened jaw aimed at the pin.
6. Lower the gripper: the gripper is commanded to lower to the working plane.
7. Grip the pin: the gripper is commanded to close when the pin is at the centre of the open gripper.
8. Lift the gripper to the clearance plane.
9. Move the pin to the desired hole: the motion system commands the gripper to go to the pre-calculated location of the hole on the block.
10. Release the pin by opening the gripper.
11. The same operations are followed for the second pin, and then the process is reversed to put the pins back.
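The control loop above can be sketched in Python. This is a hypothetical illustration only: the class, function names, and plane heights below are assumptions standing in for the actual LabVIEW VIs and motion hardware.

```python
# Hypothetical sketch of the pick-and-place control loop; names and
# heights are illustrative, not the real LabVIEW implementation.

CLEARANCE_Z = 10.0  # clearance-plane height (assumed value, mm)
WORKING_Z = 0.5     # working-plane height (assumed value, mm)

class Gripper:
    def __init__(self):
        self.z = CLEARANCE_Z
        self.xy = (0.0, 0.0)
        self.open = False
        self.holding = None

    def move_xy(self, pos):
        self.xy = pos

def pick_and_place(gripper, pins, source, dest):
    """Move the pin at hole `source` to the empty hole `dest`.
    `pins` maps hole coordinates to pin identifiers."""
    gripper.z = CLEARANCE_Z          # lift to the clearance plane
    gripper.move_xy(source)          # move over the pick position
    gripper.open = True              # open the gripper jaws
    gripper.z = WORKING_Z            # lower to the working plane
    gripper.open = False             # close on the pin
    gripper.holding = pins.pop(source)
    gripper.z = CLEARANCE_Z          # lift back to the clearance plane
    gripper.move_xy(dest)            # travel to the destination hole
    gripper.z = WORKING_Z
    gripper.open = True              # release the pin
    pins[dest] = gripper.holding
    gripper.holding = None
    gripper.z = CLEARANCE_Z

pins = {(0, 0): "pin1", (0, 1): "pin2"}
g = Gripper()
pick_and_place(g, pins, source=(0, 0), dest=(2, 2))
print(pins)  # {(0, 1): 'pin2', (2, 2): 'pin1'}
```

Reversing the process (step 11) is just `pick_and_place` called with source and destination swapped.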

The figure above shows the architecture of the micro assembly system built at NRC London to achieve the assembly of microsystems. The components that play a key role in the assembly process are pointed out in the picture.

Automating all these steps requires numerous issues to be solved for the “peg in a hole” example to yield consistent results. Finding the position and orientation of the block through the vision system is described in Section 2. The accuracy of the machine vision system's measurements depends on accurate calibration; the calibration method is described in Section 2.1.4.

High-magnification images from the zoom lenses suffer from very low depth perception. Therefore, to assist users in the handling of parts, a virtual reality model of the system has been created to visualize the system in 3D. This virtual model is synchronized to the motion system to provide good feedback. The virtual model is integrated with the motion system through the position information obtained from machine vision and from the displacement sensors on the motion system, as described in Section 4.


2 Machine vision assisted automated pick and place

2.1 Finding the parts and their orientation

At a micro level, finding parts and their orientation is a tough challenge due to the geometry and accuracies required. Machine vision is used to locate parts on the rotary table. Two steps are needed to get the highest accuracy on the part location and orientation: the first is finding the position of the block on the positioning table, and the second is finding the positions of the holes on the block. Pattern and geometry matching were used to find the positions of the parts.

2.1.1 Finding the position of the block

Pattern matching relies on the uniqueness of the template. Therefore, either a unique shape or a color contrast that can be easily recognized is earmarked as a template.

Different templates with various shapes and colors (see Figure 1) were tried to find out which one was the best. “The best” here means that the machine vision will find it repeatedly, wherever it is located on the positioning table, without any errors.

Figure 1: Template tried for the pattern matching

During the testing of different templates, illumination was found to be a very important variable. At the micro level, adequate illumination of the right type is essential: if the light intensity varies even slightly, the machine vision will not be able to find the template consistently.

Through experiments we observed that color pattern matching was not a good solution, because the illumination greatly influences the results. Moreover, the system uses black-and-white cameras; to obtain a color image, the cameras were calibrated with Bayer filters, and this calibration is probably not accurate enough to obtain the best results. Even with the same intensity of light, the recognition did not yield similar results each time. But for a fully automated cycle, the recognition must work every single time without human intervention.


Due to the black color of both the block and the table, it is hard to find the block with the machine vision system. To distinguish the block on the rotary table, greater contrast between these two parts was needed. Placing white, blank paper on the rotary table improved the contrast significantly (Figure 2).

Another advantage of the black-and-white image is that the light intensity matters less than it does for the color image, allowing small variations in illumination, which improves the flexibility of the system. A sharp black-and-white image is obtained by fine-tuning software-based camera parameters such as frames per second and by adjusting the focus and aperture rings manually. Further, if needed, the Image Acquisition software from National Instruments allows users to change a number of parameters, such as contrast and brightness, and to process the images. By adjusting these parameters, a clear image is obtained with the block easily recognizable on the table. Using this image (Figure 3) as a template, pattern matching combined with geometry matching can find the position of the block.

Figure 2: Same snap shot with a black and a white background

Figure 3: Template used to find the block
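To illustrate the template-matching idea in isolation (the actual system uses NI Vision's pattern matching, not this code), a minimal sum-of-absolute-differences search over a synthetic grayscale scene might look like:

```python
# Toy sketch of template matching: exhaustively slide the template over
# the image and keep the window with the lowest absolute difference.
import numpy as np

def match_template(image, template):
    """Return the (row, col) of the best template match in a grayscale image."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = np.abs(image[r:r + th, c:c + tw] - template).sum()
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Synthetic scene: a bright 3x3 "block" on a dark background.
scene = np.zeros((20, 20))
scene[5:8, 12:15] = 255.0
block = np.full((3, 3), 255.0)
print(match_template(scene, block))  # (5, 12)
```

This also shows why contrast matters: the larger the difference between the block and the background, the sharper the score minimum at the true position.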

If the global camera has been properly calibrated (see section 2.1.4), the machine vision system computes the location of the block, which is used in the next step to bring the block under the zoom lens such that all holes are visible.


2.1.2 Finding the accurate position of the block

Mostly due to the small size of the parts, a location error of less than half a millimeter from the global camera is acceptable [5.2]. For higher positional accuracy, the top zoom lens camera system, which has very high magnification capability, is used. After the block has been brought under the zoom lens (using the position returned by the global camera), the exact location of the block and the holes is found using the pattern matching algorithm. Before the image is processed, its contrast and brightness are adjusted so that it will yield consistent results when processed.

Figure 4: Image from the top camera before the processing

Starting from Figure 4 and changing parameters such as contrast and brightness (Figure 5), an image that can be used in pattern matching was created (Figure 6). This image was also used to create the template (Figure 7) for pattern matching.

Figure 5: Parameters of the image processing


Figure 6: Processed image for use in pattern matching

Figure 7: Template used to find the accurate position of the block

The template is processed by the pattern matching algorithm to find the exact location of the block and its angle. The angle can be found with the aid of the mark on the block.

This step of finding the exact location of the block and its angle depends on the previous step of bringing the block under the zoom lens. If the block is not properly brought under the zoom lens (so that all holes are visible in the image), the matching may not work. That is why accurate calibration of the global camera is very important, and a calibration method has been developed for it (see section 2.1.4). Pattern matching using the template is a very powerful tool to address this issue.

(Figure 7 callout: natural mark)
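Finding both the position and the angle of the block can be pictured as a search over candidate orientations of the template. The sketch below is an assumption-laden toy (np.rot90 limits it to 90-degree steps; real pattern matching interpolates angles much more finely):

```python
# Toy orientation search: try the template at several rotations and keep
# the best (lowest sum-of-absolute-differences) match.
import numpy as np

def sad_match(image, template):
    """Best (score, (row, col)) for one template orientation."""
    ih, iw = image.shape
    th, tw = template.shape
    best = (np.inf, (0, 0))
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            s = np.abs(image[r:r + th, c:c + tw] - template).sum()
            best = min(best, (s, (r, c)))
    return best

def find_block_pose(image, template):
    """Return ((row, col), angle_deg) of the best match over rotations."""
    results = []
    for k in range(4):
        score, pos = sad_match(image, np.rot90(template, k))
        results.append((score, pos, k * 90))
    score, pos, angle = min(results)
    return pos, angle

# An L-shaped mark placed in the scene rotated by 90 degrees.
mark = np.array([[1., 1., 1.],
                 [1., 0., 0.],
                 [1., 0., 0.]])
scene = np.zeros((12, 12))
scene[4:7, 6:9] = np.rot90(mark, 1)
print(find_block_pose(scene, mark))  # ((4, 6), 90)
```

The asymmetric mark plays the same role as the natural mark on the block: without it, all orientations would score equally and the angle would be ambiguous.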


2.1.3 Picking and placing the pins

After the exact location of the block is known, the next step is finding the pins in a fully automated way. Pattern matching was tried, but the recognition did not yield consistent results each time: most of the time the system was able to find the pins, but not all of them. Geometric matching and shape recognition were tried as well, without consistent results.

The main reason is that it is hard to obtain a unique image that can be easily found by the system. When the pin is fully vertical in the hole, there is a gap between the hole's edge and the pin's edge (Figure 8). In this case the pin can be found by geometry or shape recognition, because the white circle can be easily extracted from the black background. But, on the other hand, if there is no gap between the hole's and the pin's edges, the vision system will not be able to recognize it (Figure 8): it is no longer a well-defined circle because of the white background around the holes.

Figure 8: Issues related to the pin recognition

So, if the matching does not work correctly, the system will find holes where there are none or completely miss an existing hole. Moreover, when the pattern recognition fails, LabVIEW® returns the original image (Figure 9) with the wrong hole positions.

(Figure 8 callouts: a pin with a gap between the hole and itself; a pin stuck to the hole's border.)
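One way to avoid accepting the wrong positions a failed match returns is to validate matches against a minimum correlation score and report only the positions that clear it. The sketch below is illustrative only (NI Vision exposes its own match scores; the threshold and shapes here are made up):

```python
# Toy normalized cross-correlation with a score threshold: spurious or
# featureless windows score low and are rejected instead of reported.
import numpy as np

def detect_pins(scene, template, min_score=0.9):
    """Return (row, col) positions whose match score clears min_score."""
    th, tw = template.shape
    t0 = template - template.mean()
    hits = []
    for r in range(scene.shape[0] - th + 1):
        for c in range(scene.shape[1] - tw + 1):
            w0 = scene[r:r + th, c:c + tw] - scene[r:r + th, c:c + tw].mean()
            denom = np.sqrt((w0 ** 2).sum() * (t0 ** 2).sum())
            score = (w0 * t0).sum() / denom if denom > 0 else 0.0
            if score >= min_score:
                hits.append((r, c))
    return hits

# Template: a free-standing pin (bright cross shape on dark ground).
template = 255.0 * np.array([[0, 1, 0],
                             [1, 1, 1],
                             [0, 1, 0]])
scene = np.zeros((12, 12))
scene[2:5, 2:5] = template   # a clearly visible pin (with a gap)
scene[6:9, 6:9] = 255.0      # a pin stuck to the hole's border: no shape
print(detect_pins(scene, template))  # [(2, 2)]
```

The "stuck" pin forms a featureless bright patch, so every window over it scores far below the threshold, mirroring the recognition failure described above.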


Figure 9: Position returned by LabVIEW® when it cannot find the correct matching

As a first step in the process, user input is required to indicate the initial locations of the pins and their destinations. Two arrays ask the user where the pins are located in the block and where the user wants the pins to go, as shown in Figure 10.

Figure 10: Arrays for the position of the pins

One of the developed main programs, “run_demo_finding_pins_w_matching.vi”, finds the pins automatically, but it lacks consistency: its performance depends on the orientation and angle of the pins in the holes, as explained earlier. These issues have been solved and the system is tuned to yield consistent results. With the input data of the arrays (Figure 10), combined with the accurate position of the block, the coordinates of every single pin and its destination can be computed.
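Computing a pin's table coordinates from its array indices is a rigid-body transform: rotate the hole's grid offset by the block's measured angle, then translate by the block's measured position. A minimal sketch, with an illustrative hole pitch and pose values (not the real system's numbers):

```python
# Sketch: (row, col) hole indices from the user arrays -> table
# coordinates, given the block pose found by machine vision.
import math

def hole_to_table(row, col, block_x, block_y, angle_rad, pitch=1.0):
    """Rotate the hole's grid offset by the block angle, then translate."""
    dx, dy = col * pitch, row * pitch
    x = block_x + dx * math.cos(angle_rad) - dy * math.sin(angle_rad)
    y = block_y + dx * math.sin(angle_rad) + dy * math.cos(angle_rad)
    return x, y

print(hole_to_table(2, 3, block_x=10.0, block_y=20.0, angle_rad=0.0))
# (13.0, 22.0)
```

The same function gives both pick and place coordinates: once for the hole holding the pin, once for its destination hole.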

The specific “peg in a hole” example posed several challenges. First, the holes are much bigger in diameter than the pins, so the pins are not perpendicular to the surface of the block. Grasping pins is therefore a problem, as the camera finds the location of a pin by looking at its top (as a circle). Due to the tilt of the pin, this computed position differs from the position where the gripper actually grasps the pin. Though the wide opening range of the gripper (Figure 11) helps resolve this problem, it creates a secondary problem by shifting the block. The solution to this issue is explained in 2.1.4.2.

(Figure 9 callout: wrong position returned)


Figure 11: Gripper fully opened on top of a hole

The other issue is the system knowing which pin it has to pick up first. In several cases, if the choice of pin is wrong, the gripper will most likely not be able to pick it up because it will collide with another pin. However, in the demonstration the system deals with two pins only, the only criterion being that the block has to be almost perfectly perpendicular to the axis of the lens: if the deviation is more than 0.1 rad, collision issues will result.

The following sequence of images explains these collision issues:

Step 1 Step 2 Step 3

Step 4 Step 5

Figure 12: Cycle without collision issue (small angle)

Page 16: vrml pag 29

7/27/2019 vrml pag 29

http://slidepdf.com/reader/full/vrml-pag-29 16/61

Integration of Machine Vis ion and Virtual Reality for Robot Assis ted Micro Ass embly Page 16

Step 1: Original state of the block and the pins.
Step 2: Moving the first pin.
Step 3: Moving the second pin.
Step 4: Moving back pin number 2.
Step 5: Moving back pin number 1.

With a large angle

Step 1 Step 2 Step 3

Step 4 Step 5

Figure 13: Cycle with a collision issue (angle bigger than 0.3 rad)

When step 5 is reached, the gripper is not able to pick up the last pin; it will actually collide with the first pin (Figure 14). This can either bend the pin or damage the gripper.

Figure 14: Collision issues pictures


This issue can be overcome by rotating the positioning plate. The magnitude of the rotation would depend on the angle of the block on the positioning table; it could be either 90 or 180 degrees. For this particular application, control logic could have been added to choose the pin to be picked up first, but this was not implemented; different case studies were developed instead to demonstrate the capability of the system.
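The pick-order logic mentioned above was not implemented in this work; as a hypothetical sketch, if the gripper approaches along the +y axis, simply picking pins in order of increasing y means the gripper never reaches over an unpicked pin:

```python
# Illustrative pick-order heuristic (not part of the actual system):
# take the pin nearest the approaching gripper first, so later picks
# never require reaching over a pin that is still in place.

def pick_order(pins, approach_axis=1):
    """Order (x, y) pin positions so earlier picks are closer to the
    gripper, which is assumed to approach along the given axis."""
    return sorted(pins, key=lambda p: p[approach_axis])

pins = [(0.0, 3.0), (0.0, 1.0), (1.0, 2.0)]
print(pick_order(pins))  # [(0.0, 1.0), (1.0, 2.0), (0.0, 3.0)]
```

Replacing pins in reverse order (last picked, first replaced) preserves the same no-reach-over property on the way back.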

In the long term it could be interesting to design and manufacture a tool holder that could help overcome this issue and improve the flexibility of the system.


2.1.4 Fine tuning the machine vision system for accuracy

As accuracy is crucial for the measurements and for locating the parts, a methodology has been developed for calibrating all the cameras used in the system.

2.1.4.1 Calibration of the cameras

To obtain the highest accuracy from the machine vision system, careful calibration needs to be performed. For the top camera, see the report IMTI-CTR-189 (2006/10) by Alfred H.K. Sham [5.4].

For the global camera, the NI Vision software is used for calibration. Following are the steps required in preparation for calibration of the global camera.

• Bring the rotary table to the position X=49, Y=49.
• Put the grid paper on, and then run Vision Assistant 8.5.

• Click on Acquire Image (Figure 15)

Figure 15: Starting window of the NI Vision acquisition software

• Click on Acquire image from the selected camera and image acquisition board (Figure 16).

Figure 16: Acquisition board

• Select the device NI PCI-1426 (img1) Channel 0 (Figure 17).
• Start acquiring continuous images.


Figure 17: Camera choice

• Adjust the zoom in the software to have the full image in the window.

• Make sure that the camera has a full view of the positioning table; otherwise move the flexible shaft to get it. Adjust the focus and aperture of the camera manually (by turning the rings) for the sharpest image.

Figure 18: Image of the whole rotary table

• Click on the Store Acquired Image button in the browser and close it.

If the image is not clear, adjust the brightness, contrast and gamma level values to obtain a good image.

The calibration procedure can now start as follows:

• Select the Image item from the menu bar and then select Image Calibration (Figure 19).

Figure 19: Selecting “image calibration” in the “image” menu


• In the screen shown in Figure 20, select “Calibration Using User Defined Points”.

Figure 20: Selecting the type of calibration

• In the calibration setup window, select the millimeter unit in the drop down box (Figure 21).

Figure 21: Changing the unit to mm for the calibration and choosing the center of the grid

• Click on the center of the grid, zooming in if necessary. This point is also the center of the coordinate system, so in the real world it has (0, 0) coordinates. Write down the image coordinates.

• Go to the “specify calibration axis” tab (Figure 22)

Figure 22: Specify calibration axis

• Enter the image coordinates as the “user defined” value for “Axis Origin” (Figure 22).

• Change the direction of the Axis Reference to have the same axis system as defined in the Aerotech motion stages (Figure 22).


• Adjust the angle to have the red X axis parallel to the rotary plate’s axis (Figure 22).

• Switch back to the “Define reference points” tab.
• Select four other points (as shown in Figure 23) for which the exact coordinates are known. In Figure 23, the real world coordinates are 1: (0;0), 2: (40;0), 3: (0;-40), 4: (-40;0), 5: (0;40).

Figure 23: Specifying the user defined four other points

• Once done, press the OK button and save the image (in the ___ folder as calibration_rotary_plate.png, for example) (Figure 24). Overwrite the previous file if needed (Figure 25).

Figure 24: Saving the calibration file


Figure 25: Overwrite the previous file if needed

Usually, once cameras are calibrated the procedure does not need to be repeated unless the camera position has changed. In our lab, the calibration procedure has to be repeated at least every week because the flexible shaft drops slightly every day. Since the accuracy of the measurements depends on this calibration, it has to be done very carefully.
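The user-defined-points calibration above can be checked numerically: given the five reference points, a least-squares affine map from pixel coordinates to millimetres can be fitted. The sketch below illustrates the idea in plain Python; the pixel values are invented for illustration, and NI Vision performs an equivalent (and more general, distortion-aware) fit internally.

```python
def fit_affine(px_pts, mm_pts):
    """Least-squares fit of x_mm = a*u + b*v + c (and likewise for y_mm)
    from corresponding (pixel, millimetre) point pairs."""
    def solve3(rows, rhs):
        # Gauss-Jordan elimination with partial pivoting on a 3x3 system.
        m = [row[:] + [r] for row, r in zip(rows, rhs)]
        for i in range(3):
            p = max(range(i, 3), key=lambda k: abs(m[k][i]))
            m[i], m[p] = m[p], m[i]
            for k in range(3):
                if k != i:
                    f = m[k][i] / m[i][i]
                    m[k] = [mk - f * mi for mk, mi in zip(m[k], m[i])]
        return [m[i][3] / m[i][i] for i in range(3)]

    # Accumulate the normal equations (A^T A) c = A^T b.
    ata = [[0.0] * 3 for _ in range(3)]
    atbx, atby = [0.0] * 3, [0.0] * 3
    for (u, v), (x, y) in zip(px_pts, mm_pts):
        row = (u, v, 1.0)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atbx[i] += row[i] * x
            atby[i] += row[i] * y
    return solve3(ata, atbx), solve3(ata, atby)

def px_to_mm(coeffs, u, v):
    (a, b, c), (d, e, f) = coeffs
    return a * u + b * v + c, d * u + e * v + f

# Illustrative (not measured) pixel locations of the five grid points:
# the centre and the points at +/-40 mm on each axis.
px = [(320, 240), (520, 240), (320, 440), (120, 240), (320, 40)]
mm = [(0, 0), (40, 0), (0, -40), (-40, 0), (0, 40)]
coeffs = fit_affine(px, mm)
print(px_to_mm(coeffs, 420, 140))  # ≈ (20.0, 20.0)
```

With exact correspondences the fit is exact; with real, noisy clicks the least-squares fit averages out small selection errors across the five points.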


2.1.4.2 Maintaining accuracy during the cycle

As the pick and place cycle executes, picking a pin from a hole and dropping the pin in another hole, the block shifts for the reasons described in 2.1.3. When the block shifts, the gripper drops the pins slightly off the holes.

The issue was resolved by taking a new snap-shot each time the system is picking or dropping a pin. This allows the system to compute the location of the holes through pattern matching at each new step of the cycle. Thus, even if the block has shifted slightly, its new position can be computed accurately. It slightly increases the cycle time, but it is the most accurate way to overcome this block shifting issue. Another solution that we found useful was to mount the block on a thin sponge and then secure the sponge on the positioning table. If the pin is not at the centre of the gripper, the block will tilt slightly when the gripper closes but return to its position once the pin has been lifted out of the hole.
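The re-snapshot strategy amounts to re-running pattern matching immediately before every pick and every place. The sketch below captures that control flow; all the function names (`snap_image`, `find_holes`, `pick`, `place`) are hypothetical placeholders, not the actual VIs.

```python
def run_cycle(moves, snap_image, find_holes, pick, place):
    """Pick-and-place cycle that re-locates the holes before each operation,
    so a slight shift of the block never accumulates into a placement error.

    moves: list of (source_hole_id, destination_hole_id) pairs.
    snap_image/find_holes/pick/place: hypothetical hardware callbacks.
    """
    for src, dst in moves:
        holes = find_holes(snap_image())  # fresh hole positions before picking
        pick(holes[src])
        holes = find_holes(snap_image())  # the block may have moved while gripping
        place(holes[dst])
```

The cost is one extra image acquisition per move; the gain is that each pick and place always uses coordinates measured after the most recent disturbance.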


3 Procedure to run the demonstration

To turn the system on, follow the procedure described in the document (Reference: IMTI-TR-036 (2007/01), Automated Assembly of Microsystems Start Up and Shut Down Procedures, by Brian Wong [5.5]).

All the VIs that are referred to in this report are located in the “Christophe” folder in the drive. A backup is also stored in the Christophe folder on the G:\concurrent-eng\Microassembly drive.

3.1 Initializing the system

The first step is to calibrate the gripper. Two steps are needed to complete this task. These steps are also used to initialize the VRML model.

- Calibration_gripper_step1 (Figure 26) lets the user locate the tool center point. It is the first part of the VRML initialization and finds the exact location of the gripper in relation to the centre of the positioning table. The location of the gripper is used in the main program.

Figure 26: Icon for the step 1

Before running the VI, make sure the gripper is in its folded position because every motion system axis is sent to its home position at the start of this VI. As soon as this initialization is completed, the gripper fixture can be opened to its final position by tightening the screw.

1- Turn the ring light to its maximum level.
2- Run the VI (Run button).
3- Follow the instructions on the user interface screen.
4- When the new configuration file has been saved, stop the VI (Stop button) and exit.


- Calibration_gripper_step2 (Figure 27) is used to get the vertical distance of the gripper from the positioning table. Execution of this VI is for the VRML model initialization.

Figure 27: Icon for the step 2

1- Run the VI (Run button).
2- Follow the instructions on the user interface screen.
3- When the new configuration file has been saved, stop the VI (Stop button) and exit.

These steps are not needed every time the system is run. They have to be performed at the first start of the system and each time the gripper is moved to a new position.


3.2 Running the demonstration

Open the run_demo_finding_pins_w_array VI or the run_demo_finding_pins_w_matching VI (Figure 28). The run_demo_finding_pins_w_array VI lets the user enter both the initial location and the destination of the pins, whereas the run_demo_finding_pins_w_matching VI finds the initial location of the pins itself, leaving it up to the user to indicate the destination of the pins.

Figure 28: Icons for the demonstration

Wait till all VIs have been loaded correctly (Figure 29).

Figure 29: Loading every subVI

Switch to the User_interface tab (Figure 30), if not already done. Now, everything is ready to start the cycle.

Figure 30: Tabs of the interface

1- Run the VI (Run button).
2- Wait till the VRML model is loaded (it takes approximately 30 seconds). The green light should turn off when the VRML has been loaded.
3- Make sure the ring light is at its minimum intensity.
4- Hit the “start cycle” button.


5- The motion system will first move the stages. The user interface asks the user to fill in the arrays as shown in Figure 10. If the finding-pins-with-array VI has been chosen, there are two arrays (i.e. source and destination) to fill in. Otherwise, only one array (i.e. destination) has to be filled in. The system then recognizes the block and finds its location. As soon as this has been done, the system brings the block under the zoom lens.

6- Turn the ring light to its maximum intensity.
7- The system recognizes the holes, the block center and, if the finding-pins-with-matching VI is being run, the pins. The user only has to indicate the destinations for the pins in the graphical user interface.

8- The system computes the position of the holes.
Note: If the position of the holes is not computed, stop the VI by pressing the “STOP” button and restart from step 1.
9- The system will start the cycle of moving the pins.

If the cycle has to be stopped before its end, hit the Stop button.

If the gripper is still open, open the “outputdrive_subvi” VI and run it (Figure 31). Wait till the gripper is closed and then stop the VI.

Figure 31: Icon for outputdrive_subVI

Note: When this VI is used (outputdrive_subvi), the gohomeatstart button needs to be activated before running the run_demo_finding_pins_w_array VI again.


3.3 Operating the “vacuum station” and the Gassmann grippers

To generate a vacuum for the purpose of holding the parts, or to generate mild air pressure to blow off the parts, open the “outputdrive_subvi” VI, run it, and wait till the initialized-system light turns on (Figure 32). Please refer to the section titled “Festo Vacuum System” in the “Automated Assembly of Microsystems Start Up and Shut Down Procedures” technical report for the layout of the vacuum system.

Figure 32: Icon and initialization of the output drive subVI

The switches can then be activated to open and close the stop valves, and the slider bars are used to control the vacuum regulation valve. Always turn on the Festo switch before activating the “air blow” or the vacuum. Make sure that the air blow (release pulse that generates the air pressure) and vacuum (vacuum generation) switches are NEVER turned on together. When the vacuum or air blow needs to be stopped, make sure that the slider is back to 0 and then hit the Stop button.

Note: When this VI is used (outputdrive_subvi), the gohomeatstart button needs to be activated before running the run_demo_finding_pins_w_array VI again.
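The never-both-on rule for air blow and vacuum lends itself to a software interlock. The sketch below models that rule; it is a hypothetical guard with invented names, not the logic of the actual output-drive VI.

```python
class VacuumStation:
    """Interlocked switch model: the Festo master switch must be on first,
    and air blow and vacuum can never be on at the same time."""

    def __init__(self):
        self.festo = self.air_blow = self.vacuum = False

    def set_festo(self, on):
        self.festo = on
        if not on:  # turning the master off forces both outputs off
            self.air_blow = self.vacuum = False

    def set_air_blow(self, on):
        if on and (not self.festo or self.vacuum):
            raise RuntimeError("Festo must be on and vacuum off before air blow")
        self.air_blow = on

    def set_vacuum(self, on):
        if on and (not self.festo or self.air_blow):
            raise RuntimeError("Festo must be on and air blow off before vacuum")
        self.vacuum = on
```

Encoding the rule as a guard means an operator mistake raises an error instead of energizing both valves.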


4 VRML models in LabVIEW and their synchronization to the motion system

LabVIEW provides a few basic tools to manipulate 3D scenes. Careful thought on building the VRML model minimizes many of the selection and manipulation problems. This part of the documentation explains the steps to be followed to build a virtual model.

First, Section 4.1 explains how to build a VRML model in a way that allows for the motions we need, using two examples. Then, Section 4.2 explains the initialization procedure to make the VRML model interact with the hardware. Finally, the last part explains how to build motions in the VRML model.

4.1 Import files from SolidWorks to a 3D scene

Importing parts from SolidWorks® to LabVIEW® through VRML may cause issues in linking and positioning parts in the 3D scene.

When a VRML file is created with SolidWorks®, it retains the origin of the part as it was defined in SolidWorks®: either the origin of the assembly or of the part itself (depending on whether the VRML file has been created from an assembly or a part file). When this file is imported in a LabVIEW® VI, the scene has its own origin and axes, and the imported origin is made coincident with them. Because these origins can be different, the parts sometimes need to be translated and rotated to synchronize the virtual and physical models. But these transformations are constrained along the axes in LabVIEW®. Other issues may also be found if relative motions between parts have to be set up, as discussed in Section 3.

The front view in SolidWorks® is used as a reference to describe the display orientation in LabVIEW®. This way you will know whether the axes of the part have been built so that they can be used for the transformations in LabVIEW®. So, while building the SolidWorks® model, the developer must exercise care in defining the origin of the part so that it will be the future axis of rotation and translation.

In order to limit the transformations, all VRML models should be created from the whole assembly of the system in SolidWorks®. This can easily be done by keeping only the parts that have to be exported in VRML, hiding all the other parts, and then exporting the selected parts. This way, each VRML file has the same origin: that of the assembly.


Also, to minimize the number of parts to import, all parts fixed together can be exported in the same file. The following are the assembly components used for this example:

- the base,
- the top stages base,
- the x stage,
- the y stage,
- the u stage,
- the xx stage,
- the yy stage,
- the zz stage,
- the moving parts of the gripper.

Figure 38 and Figure 39 illustrate why it is better to start from the whole assembly. But sometimes parts need a special axis of rotation. In this case, if the part itself has the origin needed, it is easy to open the part file and create the wrl file from this part. Otherwise it needs to be recreated with the right axis in SolidWorks®.


4.1.1 Example 1: Importing the base and the x slide.

Figure 33: Hiding parts from an assembly

Hide all parts which are not needed by right clicking on the component name in the browser and hitting Hide (Figure 33).

Figure 34: Base and x stage base to export in wrl

Figure 34 shows the Base that can now be exported as a VRML file.


To export this part in VRML format, go to File > Save As (Figure 35).

Figure 35: Saving a SolidWorks® part as wrl file

Then, change the format following the steps shown below (Figure 36).

Figure 36: Checking the VRML options

In the drop down box, select *.wrl. Hit the Options button and make sure the version is VRML97 and the length unit is set to meters.


Follow the above steps for exporting the x slide.

Two files are finally obtained: the Base and the X slide. Now, in LabVIEW®, create a VI to import these two files. The Base has to be imported first, and then the X stage has to be added to the scene using an “Invoke Node” from the application palette (Figure 37).

Figure 37: LabVIEW code to import VRML files

The following two figures (Figure 38, Figure 39) highlight the origin issue if the VRML files are created from two different SolidWorks® files. This illustrates why creating the files from the same assembly model is much easier.

Figure 38: Result if the two VRML files have been created from two different files in SolidWorks®


Figure 39: Result if the two VRML files have been created from the same SolidWorks® assembly

This example was simple because there are no rotations and the translation axes remain the same in SolidWorks® and LabVIEW®.


4.1.2 Example 2: Importing the rotary table

If the same methodology is applied to import the rotary table model into the micro assembly system assembly model, another issue is raised because the origin of the system assembly is not at the center of the rotary plate (Figure 40). That means that if the rotary table is imported and a rotation has to be set up, the table is going to rotate around the assembly origin. Thus, a way to overcome this issue has to be found.

Figure 40: Origin issue for the rotary plate

Figure 41: Open a part from an assembly

(Figure 40 labels the origin of the assembly and the origin of the rotary plate.)


Thus, a way to overcome this issue is to set up the model in such a way that the rotations are made around the rotary plate origin. So the rotary table model has to be exported from SolidWorks® with its own origin and not the system origin. To do so, the part file can be opened in SolidWorks® by right clicking on the file in the browser and selecting the open part option from the pop-up menu, as shown in Figure 41.

Figure 42: Rotary plate file with its origin

The origin of the part is at the center of the rotary plate and it has the same axes as the system assembly (Figure 42).

It can now be exported as *.wrl following the same steps as in the previous example.

The next step is to define, in LabVIEW®, the offset of the origin of the system model from the origin of the rotary table. So, the distance between the two origins has to be measured. This can be done using the measurement tool of SolidWorks®, as shown in Figure 43.

Figure 43: SolidWorks measurements


To import the VRML file in LabVIEW®, the procedure is exactly the same as in the previous example. The rotary plate has to be imported last, because it is the last part mounted on the lower motion stages (i.e. the X and Y stages).

Figure 44: How to match the offset in LabVIEW

The only difference from the previous example is that the origins of the system and the rotary plate have to be made coincident by using a “Transformation Invoke” node from the Application Palette of the LabVIEW application (Figure 44). Be careful because the translation vector uses metric units.

Figure 45: 3D scene of the bottom stages

The entire virtual model can now be built, importing each part or assembly.
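The unit caution above is worth making concrete: SolidWorks® measures the offset in millimetres while the VRML export uses metres, so the translation vector must be divided by 1000. A minimal sketch, with a placeholder offset rather than the measured one:

```python
def vrml_translation(offset_mm):
    """Convert a SolidWorks offset (mm) into the translation vector (m)
    to apply to the imported rotary-plate node so both origins coincide."""
    return tuple(v / 1000.0 for v in offset_mm)

# Hypothetical measured offset between the assembly origin and the
# rotary-plate origin, in millimetres.
print(vrml_translation((150.0, -75.0, 0.0)))  # → (0.15, -0.075, 0.0)
```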


4.2 Initialize the model

To make the motion system hardware and the VRML model correspond to each other, several transformations need to be applied. To achieve the highest accuracy, the top camera is used to measure the distance between the gripper Tool Center Point (TCP) and the origin of the rotary table. In SolidWorks®, the measurement tool is used to make similar measurements.

Once the gripper has been installed properly, the system is ready for calibration. Two separate VIs have been created to get the position of the gripper with high accuracy. Once these VIs have been run, the data is stored in a configuration file which is used to update the 3D scene automatically in the main program. That means the calibration steps have to be done only when you re-install the gripper or disturb its location (i.e. fold it and put it back, for example).

4.2.1 Calibration step 1

Figure 46: Calibration step 1 code

Callouts in Figure 46: calculating the offset between the TCP and the center of the image from the top camera; using the SolidWorks® measurements to compute the translation offset to be applied to the VRML model; saving the data in a configuration file.


Figure 47: SolidWorks® measurement between the TCP and the center of the zoom lens.

4.2.2 Calibration step 2

The second step in the calibration process is to compute the height of the gripper from the rotary table. This is done by finding the position of the Zaber® motion slide. Since the zoom lens is mounted on the Zaber® slide, the position of the slide is an accurate measure of the position of the zoom lens. The working distance of the top camera depends on the focal length, hence it is fixed. Therefore, if the camera is focused first on the rotary table and then on the TCP of the gripper, the lower and upper positions of the Zaber® slide are recorded (let us call them Z1 and Z2). Hence the height of the gripper from the rotary table is H = Z1 - Z2.
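The height computation can be checked with a two-line example (the slide readings here are illustrative, not measured values):

```python
def gripper_height(z_table, z_tcp):
    """Height of the gripper TCP above the rotary table, from the two
    Zaber slide positions at which the fixed-working-distance camera
    is in focus: H = Z1 - Z2."""
    return z_table - z_tcp

# Illustrative Zaber readings: focused on the table (Z1), then on the TCP (Z2).
print(gripper_height(42.5, 30.0))  # → 12.5
```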

Figure 48: Calibration step 2 reading the first position

Callout in Figure 48: reading the first position of the Zaber® slide (Z1).


Figure 49: Calibration step 2 reading the second position of Zaber® and computing thetransformation

Figure 50: SolidWorks® measurement of distance between the gripper and the rotary table.

Callouts in Figure 49: reading the second position of the Zaber® slide (Z2), combining these values with the SolidWorks® measurements to calculate the transformation to apply in the VRML model, then saving these values in the configuration file.


4.2.3 Import and initialization of the X, Y, and Rotary stages

Figure 51: Initialization of the X and Y stages

Once all the files for the X, Y, and Rotary motion stages have been imported in LabVIEW®, the transformations according to the configuration values have to be performed. To do so, the configuration file is read and then the transformations are applied to the scene (Figure 51 and Figure 52).

Figure 52: Initialization of the X,Y, and Rotary stages

Callouts in Figure 51 and Figure 52: reading the values stored in the configuration file to apply the transformation to the scene.


It is exactly the same procedure for the top stages (XX, YY, ZZ).

The virtual model is now initialized in synchronization with the motion system hardware. Now, the motions can be set up to make the virtual model update as the motion hardware moves.


4.3 Set up the motions

Once all parts have been imported and the VRML model has been initialized, the motions between the parts have to be set up. This is done by updating the position of each part every time the motion stage is moved.

 The following section explains the way it has been built.

First, for each virtual model corresponding to a motion stage, a VI has been created to track exactly where the motion stage is and to update the scene. The VI reads the position of the stage and, if it is different from the previous one, applies the transformation to bring the virtual model to the most recent location of the hardware (Figure 53).
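The per-stage update VI behaves like the polling sketch below. The real implementation is LabVIEW G code; the function names here are hypothetical placeholders for the stage-position read and the scene transformation.

```python
def make_stage_updater(read_position, apply_transform):
    """Return an update function that re-transforms the virtual part
    only when the hardware stage has actually moved."""
    last = [None]  # last position seen, kept in a closure

    def update():
        pos = read_position()
        if pos != last[0]:        # skip the 3D-scene transform if nothing changed
            apply_transform(pos)
            last[0] = pos
        return pos

    return update
```

Skipping unchanged positions keeps the 3D scene responsive, since the transform is only recomputed when the stage reports a new location.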

Figure 53: Updating the position of virtual model of X stage

The VI for the X stage is shown in Figure 53.

Figure 54: Updating the rotary table

The VI for the rotary stage is shown in Figure 54.


Figure 55: Updating the whole VRML model

The VI shown in Figure 55 updates the positions of all motion stages.

Figure 56: Creating the motions

Finally, a main VI calling the previous subVIs has been created to allow the user to control each stage by sliders (Figure 56) or rotary knobs on the user interface. This VI first imports the scene and then updates the motion stages when one of the sliders or knobs is moved.


5 References

5.1 Ahamed, Shafee, Pardasani, Ajit, and Kingston, David, “Design Modeling of Micro-assembly System,” Controlled Technical Report, IMTI-CTR-186 (2006/06).

5.2 Ahamed, Shafee, Pardasani, Ajit, and Kowala, Stan, “Geometrical Inaccuracies and Tolerances in Microassembly,” FAIM 2007, June 18-20, 2007, Philadelphia, USA.

5.3 Alfred H.K. Sham, “Technical Specifications for Machine Vision System in Automated Microassembly Project,” Controlled Technical Report, IMTI-CTR-188 (2006/07).

5.4 Alfred H.K. Sham, “Two-Dimensional Calibration of Machine Vision System in Automated Microassembly Project,” Controlled Technical Report, IMTI-CTR-189 (2006/10).

5.5 Brian Wong, “Automated Assembly of Microsystems Start Up and Shut Down Procedures,” Technical Report, IMTI-TR-036 (2007/01).


APPENDIX A: Calibration of the system

- Calibration of the rotary table

The top camera is used to make the measurements to initialize the VRML model. The offset between this camera and the center of the rotary table has to be known because, when one of the cameras is used for pattern matching, it returns a position calculated from the origin of the image. But if a position of the Aerotech stages has to be computed, the origin must be translated to the Aerotech origin. That is why the offset between the vision system and the motion system has to be measured.
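Once the pixel calibration has been applied, this origin change is a single vector addition. A sketch with placeholder numbers (the offset value is illustrative, not the measured one):

```python
def image_to_stage(p_img_mm, offset_mm):
    """Translate a position measured from the image origin (already in mm,
    i.e. after pixel calibration) into the Aerotech stage frame by adding
    the measured offset between the two origins."""
    return (p_img_mm[0] + offset_mm[0], p_img_mm[1] + offset_mm[1])

# Hypothetical offset of the image origin expressed in the Aerotech frame.
print(image_to_stage((12.0, -3.0), (49.0, 49.0)))  # → (61.0, 46.0)
```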



1. Acquiring an image from the top camera.

2. When the start button is pressed, a window pops up with the snap shot from the top camera. The user is asked to select the center of the rotary table. When done, it marks this point with a red mark and records the position.

3. The offset between the center of the image and the center of the rotary table is calculated. Then, it is converted from pixels to mm. Finally, the transformations to apply to the virtual model are computed based on SolidWorks® measurements.

4. Saving all the data in a configuration file which will be used in the main program. This is to avoid redoing this step each time you want to run the system.

5. Aerotech motion system control. It allows the user to drive the stages.


- Calibration gripper step 1

This step is run to know the exact location of the tool center point. It has a double utility, as it is used to initialize the virtual model and to compute the tool displacements in the cycle.



1. Acquiring an image from the top camera.

2. Zaber control to allow the user to focus on the tool.

3. When the start button is pressed, a window pops up with the snap shot from the top camera. The user is asked to select the tool center point. When done, it records the position. Then it computes the offset between this position and the center of the image. Finally, it computes another transformation to apply to the virtual system, also based on SolidWorks® measurements.

4. Saving all the data in a configuration file which will be used in the main program. This is to avoid redoing this step each time you want to run the system.

5. Aerotech motion system control. It allows the user to drive the stages.


- Calibration gripper step 2

This step is only to initialize the virtual model. Thanks to this step, the system knows the distance between the tool and the rotary table and can update the VRML model.

1. Acquiring an image from the top camera and getting a snap shot when the start button is pressed. The Zaber control allows the user to focus on the rotary table and the tool.



2. Recording the measurement first when the camera is focused on the rotary table and then when it is focused on the tool. This way the system can compute the distance between the table and the tool thanks to the constant working distance of the camera. Then it computes the transformation to make in the virtual scene and saves the data in the configuration file.

3. Aerotech motion system control. It allows the user to drive the stages.


 APPENDIX B: Main VI

Figure 57: Main VI part 1

As soon as the start button is pressed (1), the rotary table goes under the global camera (2). The user fills in the arrays to tell the system where the pins are and where he wants them to go (3). Pattern matching on the global camera image is processed to find the block (4). Its position is recorded and the block is brought under the zoom lens (5). This camera acquires an image and pattern matching is processed on this image to find the block center. The tool center point is brought under the zoom lens so that it is coincident with the center of the image.

Figure 58: Main VI part 2

Every hole position is computed thanks to the angle of the block and its center position (8). The TCP and the center of the image are made coincident, which means that the TCP is now located exactly on top of the block center (9). The camera is focused on the block by changing the Zaber position (10). And the VRML model is updated by moving the pins to their positions on the rotary table (11, 12, 13).



Figure 59: Main VI part 3

Picking up the first pin and acquiring another snap shot of the block to redo the pattern matching and update the block center position (picking up the pin sometimes makes the block move) (14). The new hole positions are computed so the system can go and drop the pin into the desired hole (15). Again, a snap shot from the global camera is processed with pattern matching and the hole positions are recalculated (16, 17). The first pin is now moved.

Figure 60: Main VI part 4

Moving the second pin follows exactly the same procedure. The system then grips the second pin back and uses pattern matching to re-learn the hole positions.



Figure 61: Main VI part 5

The system drops pin number 2 back into the hole it came from and applies pattern matching to get the position of the holes. The angle of the block is now tested to make sure no collisions will occur while accessing pin number 2 (18). If the angle is too big, the rotary table is rotated to have clear access to pin number 2. The system then brings the block back under the zoom lens and applies pattern matching to get the new hole positions.
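The angle test described above reduces to a threshold check followed by a corrective table rotation. This is a hedged sketch: the threshold value and function names are assumptions, as the report does not give the actual limit used in the VI.

```python
# Illustrative sketch of the collision check in Figure 61.
ANGLE_LIMIT_DEG = 15.0  # assumed maximum block angle for clear access

def needs_table_rotation(block_angle_deg, limit=ANGLE_LIMIT_DEG):
    """True when the block angle is too big for the gripper to reach
    pin number 2 without a collision, so the rotary table must move."""
    return abs(block_angle_deg) > limit

def corrective_rotation(block_angle_deg):
    """Rotation to apply to the rotary table: cancel the block angle
    when it exceeds the limit, otherwise do nothing."""
    return -block_angle_deg if needs_table_rotation(block_angle_deg) else 0.0
```

After such a rotation the block has moved in the image, which is why the VI re-runs pattern matching before touching the pin again.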

Figure 62: Main VI part 6

Once the new hole coordinates are known, the system picks up the pin and does the last pattern matching to update the positions in case the block has moved. Then it drops the pin into the hole where it came from. At this point the sequence is closed.



Figure 63: Aerotech stages control

This part of the VI controls the Aerotech stages. It gives the user both manual and automatic control, using subVIs from Aerotech for the initialization of the stages and for the control itself.

Figure 64: VRML importation and update

This sequence is for the importation and update of the VRML model in the main VI. It is also used to import the pins when needed. The placement of these parts is done in the main sequence.


Figure 65: Link between the hardware stages and the virtual stages

Above is the link between the hardware part of the Aerotech stages and the virtual stages. The virtual stages move at the same time as the hardware (nearly the same time: the update is not exactly real time, but the lag is less than 1 second).
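The hardware-to-virtual link can be sketched as a polling loop that reads the stage positions and mirrors them into the scene. This is a minimal sketch under stated assumptions: the callables `read_stage_positions`, `update_vrml` and `stop` are hypothetical stand-ins for what the real VI does with Aerotech subVIs and the VRML nodes.

```python
import time

# Hedged sketch of the link in Figure 65: poll the hardware stages and
# mirror their positions into the virtual scene each cycle.
def sync_virtual_stages(read_stage_positions, update_vrml, stop, period_s=0.5):
    """Loop until stop() is true, pushing the latest hardware stage
    positions to the VRML model. A sub-second period keeps the visual
    lag under roughly one second, matching the behaviour described."""
    while not stop():
        update_vrml(read_stage_positions())
        time.sleep(period_s)
```

A shorter period gives a smoother virtual display at the cost of more polling traffic to the motion controller.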

Figure 66: Control of the 3D scene (Virtual Model)

This part allows the user to navigate in the 3D scene. The user can zoom in and out of the area of interest, and there are three basic views (front view, top view and left view) that can be used.


APPENDIX C: Find block



1. Taking a snap shot from the top camera.

2. Performing geometric matching on the acquired image with the block template. This VI has been created with the NI Vision software, which is simple to implement. Lots of parameters can be changed to get consistent results. The template can be created from this software as well.

3. Translating the position returned by the software from pixels to millimeters and marking with a red dot where the software found the template.
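The pixel-to-millimeter translation in step 3 above can be sketched as a scale-and-offset mapping. The scale factor and origin here are assumed calibration values for illustration only; the real numbers come from the camera calibration, not from this report.

```python
# Hedged sketch of step 3: map a matched position from image pixels to
# stage millimetres. MM_PER_PIXEL is an assumed calibration constant.
MM_PER_PIXEL = 0.01  # illustrative camera scale, mm per pixel

def pixels_to_mm(px, py, origin_px=(0.0, 0.0), scale=MM_PER_PIXEL):
    """Convert the (px, py) match position to millimetres, measured
    from an assumed pixel origin (e.g. the image corner or centre)."""
    return ((px - origin_px[0]) * scale,
            (py - origin_px[1]) * scale)
```

The millimetre coordinates are what the motion system needs to bring the block under the zoom lens.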


APPENDIX D: Find the holes



1. Taking a snap shot from the top camera and processing the image to get a suitable image for pattern matching (contrast, brightness and gamma parameters).

2. Processing pattern matching on the acquired image with the template specified in the path. This VI has been created with the NI Vision software, which is simple to implement. Lots of parameters can be changed to get consistent results. The template can be created from this software as well. Rotation of the template and the offset with the pattern are examples of parameters which can be changed.

3. Marking the position returned by the matching with a blue dot. It should be the center of the block.
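The contrast/brightness/gamma preprocessing in step 1 can be sketched per pixel as follows. This is an illustrative sketch only: the actual VI uses NI Vision functions, and the parameter values shown are assumptions.

```python
# Hedged sketch of the image preprocessing named in step 1: a linear
# contrast/brightness adjustment followed by gamma correction, applied
# to one 8-bit grayscale pixel value.
def preprocess_pixel(value, contrast=1.0, brightness=0.0, gamma=1.0):
    """Return the adjusted pixel: clamp(contrast*v + brightness) to
    [0, 255], then apply the gamma curve on the normalized value."""
    v = max(0.0, min(255.0, contrast * value + brightness))
    return 255.0 * (v / 255.0) ** gamma
```

Stretching contrast and tuning gamma this way makes the block edges stand out, which is what gives the pattern matching consistent results.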


APPENDIX E: Computing hole positions

With the information given by the user, the block center and angle returned by the system, and the characteristics of the block, it is easy to find the exact location of each hole.

The MathScript function from LabVIEW is used to make use of the trigonometric functions. This VI returns the coordinates that need to be applied to the xx stage and the yy stage to pick up one of the pins.
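The trigonometry described above can be sketched as a 2D rotation plus translation: each hole has a known offset from the block center, and rotating that offset by the block angle gives its location on the table. The offsets and values below are illustrative, not the real block geometry.

```python
import math

# Hedged sketch of the hole-position computation in Appendix E.
def hole_positions(center_xy, angle_deg, hole_offsets_mm):
    """Rotate each nominal hole offset (dx, dy) by the block angle and
    translate by the block center, returning (x, y) stage coordinates
    for every hole."""
    a = math.radians(angle_deg)
    cx, cy = center_xy
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(cx + dx * cos_a - dy * sin_a,
             cy + dx * sin_a + dy * cos_a)
            for dx, dy in hole_offsets_mm]

# Example: a block centered at (10, 5) mm, rotated 90 degrees, with one
# hole nominally 1 mm to the right of the center.
holes = hole_positions((10.0, 5.0), 90.0, [(1.0, 0.0)])
```

These are the coordinates the VI would hand to the xx and yy stages to position the gripper over a pin.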

MathScript window