256 JOURNAL OF MICROELECTROMECHANICAL SYSTEMS, VOL. 21, NO. 2, APRIL 2012
JMEMS Letters

Four d.o.f. Piezoelectric Microgripper Equipped With a Smart CMOS Camera

B. Tamadazte, M. Paindavoine, J. Agnus, V. Pétrini, and N. Le-Fort Piat
Abstract—This letter deals with the design of a micro-eye-in-hand architecture. It consists of a smart camera embedded on a gripper. The camera is a high-speed (10 000 frames/s) complementary metal–oxide–semiconductor sensor of 64 × 64 pixels. Each pixel measures 35 µm × 35 µm and includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit. The gripper consists of a 4-degree-of-freedom (y+, y−, z+, z−) microprehensile system based on piezoelectric actuators. [2011-0288]
Index Terms—Complementary metal–oxide–semiconductor (CMOS) sensor, eye-in-hand system, microgripper, micromanipulation, microrobotics.
I. OVERVIEW
There is a growing interest in 3-D complex hybrid microelectromechanical systems (MEMS) and microoptoelectromechanical systems devices. They are used in a variety of domains: automotive, household, information technology peripherals, and biomedical devices.
To obtain 3-D MEMS, it is necessary to use microassembly processes. Two microassembly techniques exist in the literature: self-assembly and robotic assembly. In the latter, a robotic system combined with a microhandling system [1] and an imaging system is used to reach the same objective [2]–[4]. The vision sensors used in this case are mainly optical microscopes or conventional cameras fitted with a high-magnification objective lens. This type of imaging system is inconveniently large and heavy.
Traditionally, a camera can be used in a robot control loop in two types of architecture: eye-in-hand, when the camera is rigidly mounted on the robot end-effectors, and eye-to-hand, when the camera observes the robot within its workspace. In microrobotic applications, the eye-in-hand architecture is normally not possible because of the size and weight of an optical microscope, although this configuration would be of great interest for vision feedback control [5]. An eye-in-hand system relieves many constraints related to the use of an optical microscope, notably the small depth of field (DOF), the small field of view, and the small working distance.
Manuscript received September 26, 2011; accepted November 13, 2011. Date of publication January 18, 2012; date of current version April 4, 2012. This work was supported in part by the LEMAC Project from PRES Bourgogne Franche-Comté. Subject Editor H. Seidel.
B. Tamadazte, J. Agnus, V. Pétrini, and N. Le-Fort Piat are with Franche-Comté Electronique Mécanique Thermique et Optique—Sciences et Technologies (UMR 6174), Centre National de la Recherche Scientifique/Université de Franche-Comté/École Nationale Supérieure de Mécanique et des Microtechniques/Université de Technologie de Belfort-Montbéliard, 25000 Besançon, France (e-mail: [email protected]).
M. Paindavoine is with the Laboratoire d’Etude de l’Apprentissage et du Développement (UMR 5022), Centre National de la Recherche Scientifique/Université de Bourgogne, 21065 Dijon, France.
Color versions of one or more of the figures in this paper are available onlineat http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/JMEMS.2011.2180363
Fig. 1. Representation of the gripper functionalities.
TABLE I
MMOC MICROGRIPPER FEATURES
This letter presents a new concept of a 4-degree-of-freedom (dof) microgripper equipped with a small, intelligent complementary metal–oxide–semiconductor (CMOS) camera. This concept replaces the cumbersome and expensive imaging systems that equip traditional microassembly workcells. The embedded imaging system consists of a 64 × 64 pixel CMOS analog sensor with a per-pixel programmable processing element in a standard 0.35-µm double-poly quadruple-metal CMOS technology (refer to [6] for more details).
This letter is organized as follows. Section II presents the 4-dof piezoelectric-actuator-based gripping system and the CMOS camera characteristics. The final designed eye-in-hand system is presented in Section III, along with its integration on a robotic MEMS microassembly station. Section IV presents the results of the characterization of the eye-in-hand system (i.e., the CMOS sensor and microgripper).
II. DEVELOPMENTS
In this section, we present the developed 4-dof microprehensile system and the smart CMOS camera used to design the novel eye-in-hand system dedicated to MEMS manipulation and assembly tasks.
A. Piezoelectric Microgripper
The gripper developed in our laboratory is used for handling. It has 4 dofs and allows open–close motions as well as up–down motions. It is based on piezoelectric actuators consisting of two parallel piezoceramic PZT PIC 151 bimorphs (see Fig. 1). Each parallel bimorph is independent and able to provide two uncoupled dofs (details summarized in Table I). Each piezoceramic bimorph contains two superimposed actuated 200-µm layers to produce independent movements along the y− and y+ directions and the z− and z+ directions [7]. The shape of the microgripper’s end-effectors can be adapted according to the type of handled MEMS.
Fig. 1 shows the drawings of the functioning of the developed gripper’s dofs: Fig. 1(a) shows the open–close motions, also called in this letter the y− and y+ directions, used to grasp and release the handled
1057-7157/$31.00 © 2012 IEEE
Fig. 2. (a) Layout of 4 pixels and (b) full layout of the retina (64 × 64 pixels).
TABLE II
CMOS SENSOR CHARACTERISTICS
micro-objects; Fig. 1(b) shows the up–down motions, i.e., the z− and z+ directions, used for fine displacements (insertion tasks) of the micro-object; and Fig. 1(c) shows the combined motions (e.g., y + z+ or y + z− displacements) used to adjust the spatial position of the grasped micro-object.
B. CMOS Sensor
There are two primary types of electronic image sensors: charge-coupled devices and CMOS sensors. CMOS imaging technology has increasingly become the dominant technology because of several advantages: simple operation, low fabrication cost, and low power consumption.
In this letter, we develop a smart CMOS camera based on the Austria MikroSystem CMOS 0.35-µm technology. The described sensor size is 64 × 64 pixels, with a chip size of 11 mm² and 160 000 transistors (38 transistors per pixel and a pixel fill factor of around 25%), as shown in the layout of a 4-pixel block in Fig. 2.
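As a quick illustration of how these figures fit together, the arithmetic below re-derives the transistor count and the pixel-array area from the numbers quoted above (illustrative only; no value here is new measured data):

```python
# Sanity check of the reported sensor figures, using only numbers from the text:
# a 64 x 64 array, 38 transistors per pixel, 35 um x 35 um pixels, 11 mm^2 chip.
pixels = 64 * 64                          # 4096 pixels
transistors = pixels * 38                 # per-pixel processing element included
pixel_area_um2 = 35 * 35                  # pixel pitch squared
array_area_mm2 = pixels * pixel_area_um2 / 1e6

print(transistors)       # 155648, consistent with the ~160 000 quoted
print(array_area_mm2)    # ~5.0 mm^2 of pixel array within the 11 mm^2 chip
```

The remaining chip area accounts for pads, readout, and control circuitry, which is consistent with a per-pixel fill factor of only about 25%.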
Fig. 2 shows the mask design of the pixel (four metals, double poly), with its four related photodiodes. The center of the analog arithmetic unit ([AM]²) is equidistant from the centers of the photodiodes, which has a direct impact on the sensor’s spatial noise. For automatic micromanipulation of a micro-object using vision-based techniques, the integration of low-level hardware image processing inside the sensor is clearly an asset. Thus, various in situ image processing techniques have been integrated using local neighborhoods (such as spatial gradients and Sobel and Laplacian filters). The developed CMOS sensor characteristics are summarized in Table II.
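As a software point of reference for the on-chip neighborhood operators, the sketch below applies 3 × 3 masks to a gray-level image. The exact coefficient sets programmed into the analog arithmetic unit are not listed in this letter, so the classic Sobel and 4-neighbor Laplacian kernels used here are an assumption:

```python
# Classic 3x3 masks (assumed; the chip's actual coefficients are reconfigurable).
SOBEL_H = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]      # horizontal edge response
SOBEL_V = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]      # vertical edge response
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]      # second-order detector

def filter3x3(image, kernel):
    """Valid-mode 3x3 filtering (correlation, no kernel flip) on a 2-D list."""
    h, w = len(image), len(image[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y - 1 + ky][x - 1 + kx]
            out[y - 1][x - 1] = acc
    return out

# A vertical step edge responds strongly to the vertical-Sobel mask:
img = [[0, 0, 255, 255]] * 4
print(filter3x3(img, SOBEL_V))   # [[1020, 1020], [1020, 1020]]
```

On-chip, each pixel evaluates such a weighted sum over its local neighborhood in analog, so the result is available without reading the raw frame off the sensor first.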
III. EYE-IN-HAND SYSTEM DEVELOPED
As mentioned previously, the gripper end-effectors can be adapted to the type (i.e., size and shape) of micro-objects to be manipulated. Therefore, for a better functioning of the developed eye-in-hand system, it is necessary to be able to control the embedded camera position relative to the gripper end-effectors. Thus, 2 dofs have been added to the camera support: a linear stage x and an angular stage α (see Fig. 3). These dofs have a dual interest in our application. First, adjusting the translation and rotation stages allows a targeted visualization of the scene (i.e., the end-effector tips, the handled MEMS, etc.). Second,
Fig. 3. Final developed eye-in-hand system.
Fig. 4. Photographs of the developed CMOS camera connected to and integrated in the printed-circuit-board support.
these 2 dofs allow a precise adjustment of the camera focus in order to get a sharp view of the scene.
Fig. 4(a) shows the top view of the CMOS sensor connected to its support, which measures 30 mm × 22 mm × 1 mm (length, width, and depth, respectively). Fig. 4(b) shows the side view of the connected sensor. It can be seen that it is very compact.
The eye-in-hand system presented in this letter can solve the following constraints related to the use of optical microscopy in MEMS microassembly workcells: the traditional problem of integrating optical microscopes in microrobotic stations, the impossibility of using an eye-in-hand architecture in MEMS manipulation and assembly processes, and the small DOF often encountered with optical microscopes in MEMS assembly stations.
Fig. 5 shows the 5-dof homemade MEMS microassembly workcell including the developed eye-in-hand system, where (a) represents the 4-dof piezoelectric-actuator-based gripper equipped with nickel end-effectors, (b) shows the embedded smart CMOS camera and its support, (c) represents the 2-dof (zφ) micromanipulator, (d) shows the silicon micro-objects to be visualized and manipulated, (e) illustrates the compliant micro-object support, and (f) represents the 3-dof (xyθ) positioning platform.
IV. VALIDATION
A. Gripper Performances
The performances of the developed gripper are tested using a homemade 5-dof MEMS microassembly station (Fig. 5). Two methods to control the gripper dofs are studied: an all-or-none method, particularly
Fig. 5. Integration of the developed system into a MEMS microassembly station.
TABLE III
CHARACTERIZATION OF THE CMOS SENSOR
in the case of manipulation of rigid micro-objects, and a closed-loop approach using a vision-based control law adequate for biological cell manipulation. For the first type of micro-object, a control strategy consisting of a look-and-move vision-based control law is implemented to validate the developed active gripper. The obtained accuracy over the different dofs is 1.36 µm (average over ten micromanipulation cycles) with a standard deviation of 0.34 µm (refer to [8] for more details).
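The accuracy figures above are ordinary sample statistics over the ten cycles. A minimal sketch of that computation, with hypothetical per-cycle errors chosen purely for illustration (not the actual measurements reported in [8]):

```python
import statistics

# Hypothetical per-cycle positioning errors (um) standing in for the ten
# micromanipulation cycles; the letter reports a 1.36 um mean accuracy.
errors_um = [1.1, 1.5, 1.8, 1.0, 1.3, 1.6, 1.2, 1.7, 1.4, 1.0]

mean = statistics.mean(errors_um)
std = statistics.stdev(errors_um)      # sample standard deviation (n - 1)
print(f"accuracy: {mean:.2f} um +/- {std:.2f} um")
```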
B. Sensor Characteristics
The developed CMOS sensor was tested in terms of conversion gain, sensitivity, fixed-pattern and thermal reset noises, and dynamic range. The obtained results are summarized in Table III.
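For readers less familiar with these metrics, the snippet below shows the usual definition of dynamic range for such a characterization: the ratio of the saturation signal to the noise floor, expressed in decibels. The parameter values are placeholders, not the measured data of Table III:

```python
import math

# Common image-sensor figure of merit (values below are illustrative
# placeholders, not this chip's measured characteristics).
full_well_e = 20000      # saturation capacity, in electrons
noise_floor_e = 20       # temporal noise floor, in electrons

dynamic_range_db = 20 * math.log10(full_well_e / noise_floor_e)
print(f"{dynamic_range_db:.1f} dB")   # 20*log10(1000) = 60.0 dB
```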
Fig. 6 shows the experimental image results. Fig. 6(a) shows an image acquired at 10 000 frames/s (integration time of 100 µs). Except for amplification of the photodiode signal, no other processing is performed on this raw image. Fig. 6(b)–(d) shows different images with pixel-level image processing at a frame rate of about 2500 frames/s. From left to right, the horizontal Sobel, vertical Sobel, and Laplacian operator images are displayed. Some of these image processing algorithms imply a dynamic reconfiguration of the coefficients. We can note that no energy is spent transferring information from one level of processing to another because only a frame acquisition is needed before the image processing takes place. In order to estimate the quality of our embedded image processing approach, we have compared the results of the horizontal and vertical Sobel and Laplacian operators obtained with our chip with those of digital operators implemented on a computer. In each case, the image processing is applied to real images obtained by our chip. For the comparison of the results, we have evaluated the likelihood between the resulting images by using the cross-correlation coefficient. In our case, this coefficient is 93.2%
Fig. 6. (a) Raw image at 10 000 frames/s, (b) output Sobel horizontal image,(c) output Sobel vertical image, and (d) output Laplacian image.
on average, which demonstrates that the analog arithmetic unit has good performance compared to external operators implemented on a computer.
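The likelihood measure used in this comparison is the normalized cross-correlation coefficient between the two result images. A minimal software sketch (the gray-level samples below are invented for illustration, not the experimental images):

```python
import math

def cross_correlation(a, b):
    """Zero-mean normalized cross-correlation of two equally sized
    gray-level images given as flat lists; 1.0 means identical shape."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

# An image correlates perfectly with itself; a slightly perturbed copy
# (standing in for the analog chip output vs. the digital reference) less so.
chip_out = [10, 50, 200, 90, 30, 120]
digital_ref = [12, 48, 198, 92, 31, 118]
print(round(cross_correlation(chip_out, chip_out), 3))   # 1.0
print(round(cross_correlation(chip_out, digital_ref), 3))
```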
V. CONCLUSION
In this letter, we have presented a novel smart eye-in-hand system for MEMS microassembly applications. This system was developed to solve many problems and constraints related to the use of optical microscopes in MEMS microassembly processes. First, to overcome these constraints, we have designed a 64 × 64 pixel smart microcamera based on CMOS technology with high-speed acquisition (10 000 frames/s). With basic image processing techniques, the maximal frame rate is about 5000 frames/s. Each pixel measures 35 µm × 35 µm and includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit. Some image processing techniques, such as spatial gradients, the Sobel operator, and a second-order detector (Laplacian), have been implemented. Second, a novel version of the homemade 4-dof piezoelectric-actuator-based microprehensile system has been designed in order to integrate the developed smart camera.
The next step of our research concerns the implementation of more sophisticated image processing operators, e.g., recognition and vision-based operations like the integration of low-level visual servoing control laws (look-and-move techniques), directly in the developed CMOS sensor.
REFERENCES
[1] F. Beyeler, A. Neild, S. Oberti, D. Bell, Y. Sun, J. Dual, and B. Nelson, “Monolithically fabricated micro-gripper with integrated force sensor for manipulating micro-objects and biological cells aligned in an ultrasonic field,” J. Microelectromech. Syst., vol. 16, no. 1, pp. 7–15, Feb. 2007.
[2] N. Dechev, W. Cleghorn, and J. Mills, “Microassembly of 3D microstructures using a compliant, passive microgripper,” J. Microelectromech. Syst., vol. 13, no. 2, pp. 176–189, Apr. 2004.
[3] B. Donald, C. Levey, and I. Paprotny, “Planar microassembly by parallel actuation of MEMS microrobots,” J. Microelectromech. Syst., vol. 17, no. 4, pp. 789–808, Aug. 2008.
[4] M. Probst, C. Hurzeler, R. Borer, and B. Nelson, “A microassembly system for the flexible assembly of hybrid robotic MEMS devices,” Int. J. Optomechatron., vol. 3, no. 2, pp. 69–90, Apr. 2009.
[5] B. Tamadazte, E. Marchand, S. Dembélé, and N. Le Fort-Piat, “CAD model-based tracking and 3D visual-based control for MEMS microassembly,” Int. J. Robot. Res., vol. 29, no. 11, pp. 1416–1434, Sep. 2010.
[6] R. Mosqueron, J. Dubois, and M. Paindavoine, “High-speed smart camera with high resolution,” EURASIP J. Embed. Syst., vol. 2007, no. 1, p. 23, Jan. 2007.
[7] J. Agnus, D. Hériban, M. Gauthier, and V. Pétrini, “Silicon end-effectors for microgripping tasks,” Precis. Eng., vol. 33, no. 4, pp. 542–548, Oct. 2009.
[8] B. Tamadazte, N. Le Fort-Piat, and S. Dembélé, “Robotic micromanipulation and microassembly using monoview and multiscale visual servoing,” IEEE/ASME Trans. Mechatronics, vol. 16, no. 2, pp. 277–287, Apr. 2011.