Bio-Inspired Optic Flow Sensors Based on FPGA Application

  • Upload
    alfonso

  • View
    33

  • Download
    0

Embed Size (px)

DESCRIPTION

Bio-Inspired Optic Flow Sensors Based on FPGA Application

Citation preview

  • Bio-inspired optic ow sensorsto Micro-Ai

    .

    CN

    rsei

    27

    perform these tasks would give MAVs some degree of deci-sion-making autonomy, while relieving ground operatorsof the arduous task of constantly piloting and guiding aparticularly agile craft that is invisible most of the time.

    One lesson we have learned from insects is that they areable to sense and avoid obstacles and to navigate swiftly

    Look-Up Table; MAV, Micro-Air-Vehicles; lC, Micro-Controller; OF,optic ow; UAV, Unmanned Air Vehicles; VHDL, Very High speed In-tegrated Circuit Description Language; VLSI, Very Large Scale Integra-tion.* Corresponding authors. Tel.: +33 491 28 94 52; fax: +33 491 28 94 03.E-mail addresses: [email protected] (F. Aubepart),

    [email protected] (N. Franceschini).

    Microprocessors and Microsystem1. Introduction

    One recent trend in the eld of Unmanned Air Vehicle(UAV) and robotic aircraft design has been the develop-ment of Micro-Air-Vehicles (MAVs) in the 150 cm size

    range. MAVs could be used as scouts in many dangerouscivil and military missions without any risk to human life,and they also have many potential industrial applicationssuch as plant supervision, power line [1] and constructionsite inspection, pollution and weather monitoring, forestre and disaster control, etc. Missions of this kind requirereactive vehicles equipped with onboard sensors and ightcontrol systems capable of performing the lowly tasks ofattitude stabilization, obstacle sensing and avoidance, ter-rain following and automatic landing [1,2]. The ability to

    Abbreviations: ASF, Angular Sensitivity Function; AWHH, AngularWidth at Half Height; EMD, Elementary Motion Detector; FOV, Field ofView; FPAA, eld programmable analog array; FPGA, Field Program-mable Gate Array; IC, integrated circuit; IP, Intellectual Property; LUT,Abstract

    Tomorrows Micro-Air-Vehicles (MAVs) could be used as scouts in many civil and military missions without any risk to human life.MAVs have to be equipped with sensors of several kinds for stabilization and guidance purposes. Many recent ndings have shown, forexample, that complex tasks such as 3-D navigation can be performed by insects using optic ow (OF) sensors although insects eyes havea rather poor spatial resolution. At our Laboratory, we have been performing electrophysiological, micro-optical, neuroanatomical andbehavioral studies for several decades on the houseys visual system, with a view to understanding the neural principles underlying OFdetection and establishing how OF sensors might contribute to performing basic navigational tasks. Based on these studies, we developeda functional model for an Elementary Motion Detector (EMD), which we rst transcribed into electronic terms in 1986 and subsequentlyused onboard several terrestrial and aerial robots. Here we present a Field Programmable Gate Array (FPGA) implementation of anEMD array, which was designed for estimating the OF in various parts of the visual eld of a MAV. FPGA technology is particularlysuitable for applications of this kind, where a single Integrated Circuit (IC) can receive inputs from several photoreceptors of similar (ordierent) shapes and sizes located in various parts of the visual eld. In addition, the remarkable characteristics of present-day FPGAapplications (their high clock frequency, large number of system gates, embedded RAM blocks and Intellectual Property (IP) functions,small size, light weight, low cost, etc.) make for the exible design of a multi-EMD visual system and its installation onboard MAVs withextremely low permissible avionic payloads. 2007 Elsevier B.V. All rights reserved.

    Keywords: Optic ow sensor; Elementary Motion Detector; Field Programmable Gate Array; Micro-Air-Vehicle; BioroboticsF. Aubepart *, N

    Biorobotics Laboratory, Movement and Perception Institute,

    CP 938, F-13288 Ma

    Available online0141-9331/$ - see front matter 2007 Elsevier B.V. All rights reserved.doi:10.1016/j.micpro.2007.02.004based on FPGA: Applicationr-Vehicles

    Franceschini *

    RS & University of the Mediterranean, 163 Avenue Luminy,

    lle cedex 09, France

    February 2007

    www.elsevier.com/locate/micpro

    s 31 (2007) 408419

  • RAM blocks, embedded multipliers, and embedded Intel-lectual Property functions, and their small size and lightweight, make it possible to design a exible multi-EMDsystem and mount it onboard a MAV with a very low avi-onic payload [26].

    In the next section, we present the bio-inspired visualsystem and the principles underlying EMD operation.The photodiode conguration and its use with a lineararray are explained. Section 3 presents the specic top-down method used for FPGA integration of the EMDsusing the Matlab (The Mathworks software) and ISE(Xilinx software) programs. In Section 4, details of thedesign specications (sampling frequency limits, digitaltechniques, architecture) are explained. Lastly, in Section5, we describe the hardware implementation and presentthe experimental results obtained on a real test bed on

    essothrough the most unpredictable environments without anyneed for sonars or laser range-nders. Insects visuallyguided behaviour depends on optic ow (OF) sensing pro-cesses. The optic ow perceived by a moving animal,human or robot is a vector eld that gives the angular speed(direction in degrees; magnitude in rad/s) at which any con-trasting object in the environment is moving past the eye.Measuring this angular speed is not a trivial task. Onboardinsects such as the y, this angular speed is not givendirectly but is computed locally by a neuron called anElementary Motion Detector (EMD), which is driven byat least two photoreceptors facing in dierent directions.The ys eye has long been known to be equipped with awhole array of these smart sensors, which contribute toassessing the OF [3,4]. The ys eye is therefore one ofthe best animal models available for studies on motiondetecting neurons [5]. Our EMD model was inspired bythe results of studies in which microelectrode recordingswere performed while applying microstimulation to indi-vidual photoreceptor cells on the ys retinal mosaics [5,6].

    Psychophysical studies on motion detection in humansand neurobiological studies on motion detection in variousanimals have led to the development of two main kinds ofmodels for directionally selective motion detectors. Thesemodels are based on what is known as intensity-basedschemes (correlation techniques and gradient methods)and token-matching schemes [7,8]. Our y-inspired elec-tronic EMDs are of the second kind. We have been usingthem for 20 years onboard various mobile robots.

    Our biorobotic approach consists in building terrestrialand aerial robots [913] based on optic ow sensing tech-niques. The roboy (le robot-mouche) started o as asmall, completely autonomous robot equipped with a com-pound eye and 114 electronic EMDs implemented in ana-log technology using Surface Mounted Devices (SMD).This robot was able to steer its way through an unknowneld full of obstacles at a relatively high speed (50 cm/s)[10,13]. During the last 10 years, we have further usedEMDs for the visual guidance of other miniature (mass

  • 2. Bio-inspired visual system

    The principle of the Elementary Motion Detector wasoriginally based on the results of experiments in which acombined electrophysiological and micro-optical approachwas used. The activity of large eld motion detecting neu-rons in the houseys eye was recorded with a microelec-trode while applying optical microstimuli to a single pair

    tors. Each single EMD is driven by two neighbouring

    XEMD e Dts () K X K DuDt K 0

    Dt2

    Our original EMD functional scheme [28,29] consists ofve processing steps giving XEMD (Fig. 2):

    1. A rst-order high-pass temporal lter (fc = 20 Hz) pro-duces a transient response whenever a contrasting bor-der crosses the photoreceptors visual eld. This lter

    410 F. Aubepart, N. Franceschini / Microprocessors and Microsystems 31 (2007) 408419where Du is the inter-receptor angle and X is the relativeangular speed (the optic ow, OF). An electronic devicebased on some linear and nonlinear functions estimatesthe angular speed XEMD:receptors and can only detect movements occurring withinthe narrow visual eld corresponding to these two photore-ceptors. The latter are mounted slightly defocused behind alens, which creates a bell-shaped Angular Sensitivity Func-tion (ASF) for each of them [31]. The ASF, which is oftenmodeled in the form of a truncated Gaussian curve [11], ischaracterized by the acceptance angle Dq, i.e. the AngularWidth at Half Height (AWHH). The ASF plays an impor-tant role in the visual processing chain, because it serves asan eective low pass anti-aliasing spatial lter.

    2.2. Elementary Motion Detector (EMD)

    In each of the EMDs forming the array, the lens/photo-receptor combination transforms the motion of a contrast-ing object into two successive photoreceptor signalsseparated by a delay Dt:

    Dt DuX

    1of photoreceptor cells located behind a single facet [5,27].Based on the results of these experiments, a principle wasdrawn up for designing an articial EMD capable of mea-suring the angular speed X of a contrasting object [28,29].

    2.1. Spatial sampling and spatial ltering

    The visual systems of advanced creatures include arraysof motion detecting neurons, which compute the relativemotion of any contrasting objects that cross their visualelds [27,30]. In the physical model that we have con-structed, an array of articial photosensors is connectedto an array of neuromorphic Elementary Motion Detec-Fig. 2. Functional scheme of the Elementary Motion Denhances the contrast information while eliminatingthe DC components of the photoreceptor signals. Inaddition, it makes a distinction between ON andOFF contrasting edges (i.e., dark-to-light and light-to-dark edges, respectively).

    2. A higher order low-pass temporal lter (fc = 30 Hz)attenuates any high frequency noise, as well as any inter-ferences brought about by the articial indoor lighting(100 Hz) used.

    3. A hysteresis thresholding device/step separates ON andOFF transitions and normalizes the signals in eachchannel.

    4. A time delay circuit is triggered by one channel andstopped by the neighbouring channel. This functionmeasures the delay time Dt elapsing between similar(ON or OFF) transitions occurring in two adjacentphotoreceptors.

    5. A converter translates the delay Dt measured into amonotonic function that will approximate the angularspeed XEMD. A simple inverse exponential functionmakes for a relatively large dynamic range (Eq. (2)).

    2.3. Photoreceptor conguration

    In an embedded system such as a Micro-Air-Vehicle,which has to be as lightweight as possible, the number ofcomponents, the Printed Circuit Board (PCB), and the sizeand mass of all the electronic devices are crucial parame-ters. Yet some functions, such as the currentvoltage con-verter, the gain control, the anti-aliasing lter, theAnalog-to-Digital Converter (ADC) and the analog or dig-ital multiplexer can be advantageously implemented out-side the FPGA when using a photoreceptor array.

    A conguration involving photodiodes in the current-integrator mode was used, (Fig. 3) [32]. The photodiodemodel is approximately equivalent to a current generatorIG in parallel with a junction capacitance CJ. The currentetector (EMD) principle (adapted from [20,28,29]).

  • generated is the sum of the signal current whichdepends on the number of photons detected the leak-age current and the noise current. The photodiode junc-tion capacitance depends on the depth of the depletionlayer and on the reverse bias voltage (CJ 5 pF at3.3 V in the case of the 12-photodiode linear array withthe reference code Centronic LD12A-5T). The loadcapacitance caused by the circuit routing and by thecomparator input has to be taken into account to obtainthe brightness sensitivity. The value of the threshold volt-

    TINT. A refresh pulse lasting few nanoseconds closes theanalog switch K, thus temporarily dumping the photodiodecharges of the junction capacitance, while starting thedown-counting process. The comparator output stops thedown-counter when the voltage at the anode reaches Vth.The down-counter bit number N is dened as:

    T INT 6 2N TDC 6 T S 3

    where TDC is the down-counter clock period and TS is thesampling time. Since each photoreceptor drives its owndown-counter, we took N = 12 bits, thus keeping thedown-counter size to a minimum in the FPGA. In addi-tion, the sampling time TS was taken to be equal to theintegration time TINT with a view to obtaining the opti-mum sampling frequency.

    By using the photodiode array in the current-integratormode, we limit the number of ancillary components on thePCB (each photodiode corresponds to just one switch andone comparator). Moreover, we avoid the problems arisingwith commercial CMOS cameras, where timing constraints

    Fig. 3. Photoreceptor in the Current-Integrator mode.

    F. Aubepart, N. Franceschini / Microprocessors and Microsystems 31 (2007) 408419 411age Vth relative to the photodiode anode voltage can beused to adjust the sensitivity so as to be able to copewith the widest possible illuminance range. A Digital-Analog Converter (DAC) managed by an I2C bus fromthe FPGA, carries out a controlled direct voltage Vth ofaround 1 V, for example.

    The digital data corresponding to the photoreceptor sig-nal originating from each channel is obtained from the out-put delivered by a down-counter integrated into the FPGA(where high intensities correspond to high digital values).These data are recovered at the end of an Integration TimeFig. 4. The Angular Sensitivity Function (ASF) of two adjacent photorecep(Angular Width at Half Height: AWHH) is 1.65.are imposed on both the photosensor and the FPGA.Although CMOS cameras are available nowadays in small,inexpensive packages, they do not have high frame ratesbecause they still need to scan every pixel internally [25,33].

    Fig. 4 shows the Angular Sensitivity Function (ASF)obtained from two neighbouring photodiodes equippedwith a lens with a focal length of 30 mm. The ASF of thelens/photoreceptor system was determined by moving apoint light source across the FOV of two neighbouringphotoreceptors while measuring their relative output volt-ages with a Digital-to-Analog Converter (DAC) in the cur-rent-integrator mode. Defocusing the lens by +2.25 mmtors. The inter-receptor angle is Du = 1.05, and the acceptance angle

  • yielded appropriate Gaussian ASFs with Du 1.05 andDq 1.65.

    3. Top-down methodology

    The digital signal processing methods used and thelinked framework required a top-down methodologyadapted to the hardware implementation of ElementaryMotion Detectors, Fig. 5. This methodology simpliesthe integration problems by using detailed descriptions[34,35].

    First, the functional approach involves dividing the sys-tem into elementary functional blocks. In our case, thesefunctions are dened by the functional scheme presentedin Fig. 2, to which the lens/photoreceptor model has beenadded. This approach can be described at two levels, interms of the system model, which validates the principleof the visual sensor (lens/photoreceptors/EMD), and thehigh level behavioural model, which denes the computa-tion sequences and timing data (sampling frequency, etc.).

    With each function, the operating approach identiesthe data type and the procedures used in the algorithm.At this stage, the algorithm implementation model takesinto account some factors, such as the binary format, in

    facilities provided by this software were used to plot theoutput signals. The low level hardware descriptions were

    412 F. Aubepart, N. Franceschini / Microprocessoorder to optimize the digital calculations.The logical approach introduces hardware constraints

    on the blocks that are to be integrated. These are studiedin the architectural model, which denes one or severalimplementation architectures that comply with the opti-mized digital algorithm. During this stage, the processingFig. 5. Top-down method of designing and simulating the EMD principlewith a view to implementing it in FPGA.simulated only with digital stimuli.

    4. EMD implementation

    4.1. Photoreceptor conguration

    The system model was easy to develop because the var-ious functional blocks of an EMD were dened twentyyears ago [28,29]. With the high level behavioural model,the sampling frequency has to be carefully chosen becauseseveral parameters in the EMD design, such as the digitallter coecients and the number of possible EMD chan-nels that can be integrated into the FPGA, will dependon the sampling time.

    In aerial robotic applications, the sampling time mustcomply with the requirements imposed on the EMD so thatthe Micro-Air-Vehicle (MAV) can be controlled through-out its ight envelope. The maximum sampling timeTSMAX will depend on the minimum delay Dtmin encoun-tered by the MAVs EMDs during the fastest maneuversin the most critical applications. One example of a fastmaneuver is automatic terrain-following, which is per-formed by measuring the optic ow in the downward direc-tion [10,11,1517]. When an eye-bearing MAV is ying inpure translation at speed vx and height h over an unknownterrain, the image of the terrain underneath slips at anangular speed X that depends on both vx and h:time required by the algorithm is analyzed. Also, the archi-tecture can be optimized as far as the hardware parameters(the number of basic logic cells) are concerned. Finally, asuitable compromise must be made between the processingspeed and the hardware functions.

    The integration stage is that involving the physicalimplementation in the FPGA. Two levels of descriptionlevels are applied here: that of the logical model (RTLmodel) and that of the electrical model. The logical modeldescribes the architecture as a netlist of interconnectedbasic logic cells after a logic synthesis. The electrical modelis a low level hardware description obtained after the plac-ing and routing of cells in the FPGA. In the framework ofthis approach, a digital simulation deals with the electricaland timing problems caused by the physicalimplementation.

    At the end, a le is set up for the hardware congurationin the FPGA, which will be used to perform tests in a phys-ical environment. The Matlab software program was usedto study the functional approach and operating approachmodels presented above. However, this software is not suit-able for use in the integration stage. We therefore chose theISE platform of CAD Xilinx tools for this purpose.

    The logic approach stage was validated using stimuliobtained from the Matlab environment and the graphical

    rs and Microsystems 31 (2007) 408419X vxh

    4

  • essoIf we take an extreme case where the MAV is allowed to yat the minimum height h = 0.5 m at the maximum speedvx = 10 m/s, Eqs. (1) and (4) show that the downwardoriented EMD onboard the MAV, with its inter-receptorangle Du = 1.05 (Fig. 4), will be subject to a minimum de-lay Dtmin 0.9 ms. Accordingly, the sampling frequencyfSmin will have to be set at values of at least 1 kHz.

    The maximum sampling frequency fSMAX must be inkeeping with the timing specications to which the lens/pho-toreceptors devices are subject, especially in the case ofCMOS sensors equipped with digital outputs [22] or photo-receptors using the current-integrator mode [23]. On theother hand, the maximum sampling frequency fSMAX, is lim-ited by the lower end of the illuminance range over which thesensor is intended to operate, because at low illuminance lev-els, the integration of the photoreceptor signal takes a rela-tively long time and the sampling procedure will have towait for this integration process to be completed (Eq. (3)).Taking the range [1002000 Lux] to be a reasonable workingilluminance range for the MAV, this gives fS = 2.5 kHz,whichwe call the nominal sampling frequency.At twice thissampling frequency (5 kHz), the MAV would still operateeciently in the [2002000 Lux] range, but it would thenbe dicult for it to detect low contrasts under articialindoor lighting conditions (see Section 5).

    4.2. Digital specications

    The digital specications were dened during the lterdesign. Due to the low values of the high-pass and low-passlter corner frequencies (fCHP = 20 Hz, fCLP = 30 Hz) incomparison with the sampling frequencies (fS = 2.5 kHzor 5 kHz), it was not possible to obtain a digital band-passlter meeting the Bode specications. The high-pass ltersection and low-pass lter section were therefore designedseparately and cascaded.

    Innite Impulse Response (IIR) lters were synthesized(see Eq. (5) below) because they require far fewer coe-cients than Finite Impulse Response (FIR) lters, giventhe low cut-o frequencies and short sampling timesinvolved:

    yn Xn

    i1bi xi

    Xn1

    i1ai yi 5

    A Direct-Form II structure was used because this struc-ture reduces the number of delay-cells and decreases thequantization errors. Ripples on the low-pass lter tempo-ral response were prevented by using a 4th-order Butter-worth Filter, the phase of which was linearized over thefrequency range of interest. The lters require 17 coe-cients in all (4 coecients for the 1st-order high-pass sec-tion, 12 for the 4th-order low-pass section, and 1 for theadjustment between the two lters). Three Direct-Form IIlters suce in fact to perform all the ltering, including

    F. Aubepart, N. Franceschini / Microprocthat carried out by the two cascaded 2nd order low-passlters.A specic binary format was developed and used to pre-vent oset and stabilization problems. A two-complementxed-point binary format, denoted [s,mI,mD], was dened.The bit number of integer parts, mI, and the decimal part,mD, were dened so as to ensure maximum accuracy andto eliminate overow from the lter calculations. Basedon the results of a study carried out with Filter Design andAnalysis and Fixed-point Blockset of the Mathworks tools,6 bits were selected for the integer part mI and 29 bits forthe decimal part mD. The large mD bit number is due tothe low value required to make the coecients in the low-pass lter section comply with a Bode template character-ized by a low cut-o frequency at high sampling frequencies.

    Other digital specications were dened as regards (i)the bit number of the counter output giving the delay timeDt, and (ii) the inverse exponential function giving theangular speed XEMD. The delay time Dt is measured interms of a count number at a given clock period. The min-imum delay to be measured determines the minimum clockperiod (200 ls for fS = 5 kHz, or 400 ls for fS = 2.5 kHz).The maximum delay to be measured is taken to be100 ms, which is compatible with the wide range of angu-lar speed values encountered by the MAV (Eq. (4)):X 10/s to 5000/s, for Du = 1.05. Using a 9 bits coun-ter at fS = 5 kHz or an 8 bits counter at fS = 2.5 kHz givesan delay of 102.4 ms.

    The measured angular speed X measured is a hyperbolicfunction of Dt (Eq. (1)), but we used a function thatdecreases more slowly: an inverse exponential functionwith a time constant s = 30 ms. A Look-Up Table (LUT)was used to convert the delay Dt into an output thatdecreases monotonically (exponentially) with the delayand therefore approximately reects the angular speedXEMD (Eq. (2)). The Look-Up Table features an 8-bit inputresolution (at fS = 2.5 kHz), or a 9-bit input resolution (atfS = 5 kHz), and a 12-bit output resolution for memorizingthe results of the conversion.

    An algorithm implementation model using the Matlablanguage was used to check the digital specications.Fig. 6 presents the results of simulations carried out withthis model when the EMD, with its eld of view (FOV asdened in Fig. 4) oriented vertically downwards, was trav-elling horizontally at a constant speed of 2 m/s above agently rising terrain (Fig. 6a) covered with a randomly con-trasting texture (Fig. 6b). The nal curve (Fig. 6i) is a plotof the estimated angular speed XEMD , which is reminiscentof the hilly relief shown in Fig. 6a.

    Fig. 7 gives a magnication of the signals from two adja-cent photoreceptors and their lter outputs between dis-tances 0.8 and 2 m. The ON dark-to-light edges andOFF light-to-dark edges are highlighted.

    4.3. Architecture

    Fig. 8 shows the system architecture described by the

    rs and Microsystems 31 (2007) 408419 413architecture model and designed for the processing of eachEMD. This architecture has several important features,

  • esso414 F. Aubepart, N. Franceschini / Microprocsuch as the optimization of the digital lters, the simplicityof its design thanks to the use of Intellectual Property (IP)cores, and the exibility of the circuit design. Special care

    Fig. 6. Simulation results of the EMD algorithm implementation (fS = 2.5 kH(from 0 to 0.6 m), (b) one-dimensional ground texture consisting of randomly dtwo adjacent photoreceptors, (e) and (f) band-pass ltered outputs, (g) and (h)estimated by the EMD facing vertically downwards over the terrain shown in

    Fig. 7. Zoom on the signals from two adjacent photoreceptors (see Figrs and Microsystems 31 (2007) 408419was taken to restrict the space taken up by the digital lterimplementation, in order to maximize the possible numberof EMDs in the FPGA.

    z, vx = 2 m/s, h = 1 m, 36-bit xed-point binary format). (a) Shallow reliefistributed, variously contrasting rectangles, (c) and (d) output signals fromoutputs from the hysteresis comparators, (i) relative angular speed XEMD(a) while translating at a constant speed.

    . 6c and d) and their band-pass ltered versions (see Fig. 6e and f).

  • tion

    essoFig. 8. Elementary Mo

    F. Aubepart, N. Franceschini / MicroprocA single structure called the Filter Compute Unit wasdeveloped, with which high speed sequential processingcan be performed, as shown in Fig. 9. This unit consistsof just one multiplier, one adder, one Read Only Memory(ROM), one Random Access Memory (RAM), two multi-plexers, three registers and two binary transformationfunctions. These components are Xilinx Intellectual Prop-erty (IP) blocks or synthesizable VHDL (Very High speedIntegrated Circuit Hardware Description Language)descriptions. The ROM contains the 17 lter coecientsobtained at a sampling frequency fS = 2.5 kHz orfS = 5 kHz. These coecients can be quickly and easilychanged using CAD tools (ISE Xilinx) as required, in orderto run tests at other sampling frequencies. The RAM isused to store the intermediate values computed. Multiplex-ers minimize the number of operators.

    In this unit, each photoreceptor signal is processedduring the sampling time TS. When the processing has beencompleted, the digital values are memorized in a registerand the intervals between the excitation of two neighbour-ing photoreceptor channels start to be measured. Ahysteresis comparator determines the instant at which theband-pass ltered signal from each photoreceptor channel

    Fig. 9. Filter ComputeDetector architecture.

    rs and Microsystems 31 (2007) 408419 415reaches the threshold value. The resulting logical signal isused to trigger the measurement of the delays Dt in ques-tion. The logical signal delivered by channel i starts a coun-ter which is stopped by the logical signal delivered by theneighbouring channel i + 1.

    One of the most interesting facts learned from our studiesin which electrophysiological analysis of the y EMD wascombined with single photoreceptor microstimulation hasbeen that the motions of ON and OFF contrasting edgesare detected separately by the nervous system and measuredby two separate neural circuits operating in parallel [27].

    These ON and OFF signals are not necessarily redun-dant and actually improve the refresh rate of motiondetection while alleviating the correspondence problem.Each EMD channel was therefore split into two parallelchannels, one devoted to measuring the motion of ONcontrasting edges, and the other to measuring the motionof OFF contrasting edges. This required multiplying thenumber of comparators and counters by two, as shownin Fig. 8.

    Useful information about the delay Dt, and hence aboutthe angular speed X, can be obtained by suitably mergingthe count data corresponding to the various intervals

    Unit architecture.

  • measured. The data fusion procedure consisted here inincreasing the measurement accuracy by arithmeticallyaveraging the 14 ON and OFF channel intervals mea-sured by a linear array of 7 EMDs (8 neighbouring photo-receptors) covering a total visual eld of 8 1.05 = 8.4.

    The nal piece of the architecture is the inverse exponen-tial function with a time constant s = 30 ms, which allows

    the photodiode outputs (Fig. 12a and b) is caused by the

    tion noise due to the binary adaptation between the digitallter output signal and the 12-bit DAC (36 bits) 12 bits)used to monitor the signals.

    Fig. 13 gives the normalized EMD output with respectto the delay time Dt, which was estimated from the variousspeeds XC at which the moving pattern was presented. Thefollowing parameters were used in this experimental test:maximum delay range 102.4 ms, sampling frequencyfS = 2.5 kHz, contrast m 0.8, measured illuminance420 Lux (corresponding to the usual illuminance of indoorenvironments). The curve (solid line) is the theoreticalinverse exponential and the circles are the output measure-ment points at each angular speed XC. The slight mismatch

    416 F. Aubepart, N. Franceschini / Microprocessofor a wide range of delays ranging up to 102.4 ms. Thiscomponent was implemented in the form of a Look-UpTable (IP block in FPGA) that was used to convert thefused delay data into an estimated angular speed XEMD.

    5. Experimental results

    5.1. Multi-EMD hardware integration

    Among the members of the Virtex2 Xilinx FPGA fam-ily, we selected the XC2V250 type for the hardware imple-mentation phase because its small size (12 12 mm) andsmall mass (0.5 g) meet the stringent payload constraintsinvolved in designing Micro-Air-Vehicles, while it provides250.000 possible system gates available for the computa-tion. In addition, the hardware implementation phasewas carried out using the convenient ISE Xilinx CADTools on a dedicated evaluation board, which made theFPGA design relatively quick, easy and exible.

    Table 1 shows the working characteristics of the deviceobtained after the logical synthesis of a 7-EMD architec-ture (based on 8 photoreceptor inputs). The processing per-formed by each EMD is carried out in 163 clock cycles.Final calculations after the placing and routing steps hadbeen carried out showed that a maximum FPGA frequencyas high as 100 MHz would be acceptable. This is compati-ble with processing 245 EMD channels in the FPGA at asampling frequency of 2.5 kHz.

    Fig. 10 shows the two Printed Circuit Boards (PCB)which we designed to test the visual sensor. A 12-photore-ceptor linear array (Centronic LD12A-5T) was mountedonto a miniature circular PCB (diameter 24 mm, total mass1.25 g) incorporating the 12 SMD current integratingampliers and a DAC. This circular board was mountedbehind a lens (f = 30 mm) and nely shifted axially (bymeans of a micrometer) to obtain the defocus describedin Section 2.3. The second board (measuring 33 60 mmand weighing 5.5 g) consists of a Xilinx FPGA, a ReadOnly Memory for the conguration architecture, and volt-age regulators. Power consumption is 150 mW at low illu-minance levels (indoor environments with articial light)

    Table 1Working characteristics of the XC2V250 device

    Slices 924 out of 1536 60%Slice ip ops 988 out of 3072 32%Four input LUTs 1563 out of 3072 50%Bonded IOBs 33 out of 92 35%BRAMs 4 out of 24 16%

    MULT18 18s 9 out of 24 37%GCLKs 1 out of 16 6%100-Hz uctuation due to the articial light. Noise in theband-pass outputs (Fig. 12c and d) is mainly a quantica-and reaches 400 mW at high illuminance levels (sunny out-door environments).

    5.2. Experimental test-bench

    The experimental test-bench used to assess the perfor-mances of the experimental eye is shown in Fig. 11. Theeye consists here of a lens (focal length 30 mm) and thelinear photoreceptor array LD12A-5T with its current-inte-grator electronics. The eye is placed at a distance D = 1 mfrom a white wall, in front of which a contrasting pattern (astrip of black or grey cardboard) is moved. This pattern ismounted onto the pen-holder arm of an analog plotterwhich moves it linearly at a constant speed v0. Data fromeach of the comparators in the photoreceptor board(Fig. 3) are sent to the FPGA.

    Fig. 12 shows the signals delivered by the two photore-ceptor channels of an EMD when a dark stripe with con-trast m = 0.8 crossed their visual eld at an angularspeed X = 58.2/s. The contrast m = 0.8 was determinedfrom the relative luminance of the contrasting pattern (I1)and that of the white background (I2), as follows:

    m I2 I1I1 I2 6

    The apparent noise (amplitude peak-to-peak 60 mV) in

    Fig. 10. Photoreceptor linear array board (left) and electronic board(right) with the 12 12 mm FPGA.

    rs and Microsystems 31 (2007) 408419errors are mainly due to inaccurate estimates of the actualdelays, which were deduced from the linear speed v0 at

  • essoF. Aubepart, N. Franceschini / Microprocwhich the analog plotter (used to displace the pattern) wasdriven. In this conguration, the smallest angular speeddetected was approximately 8.4/s and the highest angularspeed measured was 82/s (the speed was limited by theplotter used to move the stripe).

    Fig. 11. Experimen

    Fig. 12. (a and b) Real signals from the 4th and 5th photoreceptors in the arraoutputs, (e, f, g and h) ON and OFF comparator outputs (e and f correspcorrespond to the ON and OFF transitions in the 5th photoreceptor). Thespeed of X = 58.2 of the moving stripe presented.rs and Microsystems 31 (2007) 408419 417Similar experiments were carried out with various greyscale patterns to test the robustness of the digital EMDto contrast. Table 2 indicates the minimum angular speeddetected by the digital EMD with 3 contrast values ata mean illuminance of 420 Lux, at the two sampling

    tal test-bench.

    y, corresponding to the down-counter (Fig. 3), (c and d) band-pass lteredond to the ON and OFF transitions in the 4th photoreceptor; g and hinterval e measured was Dt = 18.04 ms, which corresponds to the angular

  • ualovi

    essoFig. 13. Normalized EMD output versus delay Dt (circles). This delay is eqphotoreceptors. Dt was inferred from the speed of the analog plotter arm mgood match with the inverse exponential curve.418 F. Aubepart, N. Franceschini / Microprocfrequencies selected: fS = 2.5 kHz and fS = 5 kHz. As canbe seen from this table, the motion of highly contrastingpatterns is measurable down to lower speeds than themotion of slightly contrasting patterns. This table alsoshows that reducing the sampling period to 2.5 kHz helpsto detect the motion of low-contrast objects.

    6. Conclusion

    In this paper, we presented a bio-inspired visual systembased on optical ow (OF) measurements. The OF sensors(Elementary Motion Detectors) are implemented in termsof a miniature FPGA, which makes them small enoughto be embedded into miniature autonomous systems suchas Micro-Air-Vehicles, where they can be used for obstacleavoidance, terrain following and stabilization purposes.

    A top-down methodology was used to determine howthe EMDs would function at each stage of developmentand to integrate and optimize the architecture. Our specicEMD architecture is integrated into a Virtex2 XilinxFPGA (XC2V250), which is only 12 12 mm in size, whilefeaturing no less than 250.000 system gates. This devicewas tested successfully using an experimental electro-opti-cal test-bench. Using this same FPGA at a 100 MHz clock

    Table 2Minimum angular speeds measured, depending on the contrast m and thesampling frequency fS (Illuminance = 420 Lux)

    fS = 2.5 kHz fS = 5 kHz

    m = 18% 40.8/s No detectionm = 50% 15.2/s 32.5/sm = 80% 8.4/s 24/sfrequency, it would be possible to implement up to 245 Ele-mentary Motion Detectors on a less-than-one-gram pieceof integrated digital electronics that requires only a fewexternal components.

    The maximum sampling frequency of 5 kHz makes itpossible to operate in a relatively large illuminance range.The most suitable sampling frequency was found to be

    to the time taken by an edge to cross the visual axes of two neighbouringng the dark pattern (Fig. 11). The continuous curve shows that there was ars and Microsystems 31 (2007) 4084192.5 kHz when the Centronic LD12A-5T photoreceptor lin-ear array was used with a lens with a focal length of 30 mm.These high sampling frequencies are compatible with thefast dynamics of Micro-Air-Vehicles.

    The FPGA solution is highly versatile, as it can accom-modate photoreceptor arrays of various sizes and shapesassociated with lenses of various focal lengths covering var-ious elds of view, in much the same way as the eyes ofmany arthropods are able to do.

    References

    [1] M. Williams, D.I. Jones, G.K. Earp, Obstacle avoidance during aerialinspection of power lines, Aircraft Engineering and AerospaceTechnology 73 (5) (2001) 472479.

    [2] V.H.L. Cheng, B. Sridhar, Technologies for automating rotorcraftNap-of-the-earth ight, Journal of American Helicopter Society(1993) 7887.

    [3] W. Reichardt, R. Poggio, Visual control of orientation behavior inthe y, Quarterly Reviews of Biophysics 9 (3) (1976) 311375.

    [4] K. Hausen, The lobula complex of the y: structure, function andsignicance in visual behaviour, in: M.A. Ali (Ed.), Photoreceptionand Vision in Invertebrates, New York, 1984, pp. 523559.

    [5] N. Franceschini, Early processing of color and motion in a mosaicvisual system, Neuroscience Research (Suppl. 2) (1985) 1749.

    [6] N. Franceschini, Combined optical, neuroanatomical, electrophysio-logical and behavioural studies on signal processing in the ycompound eye, in: C. Taddei-Ferretti (Ed.), Biocybernetics of Vision:

  • Integrative Mechanisms and Cognitive Processes, World Scientic,London, pp. 341361.

    [7] S. Ullman, Analysis of visual motion by biological and computersystems, IEEE Computer 14 (1981) 5767.

    in: Proceedings of the IEEE International Conference on FieldProgrammable Technology, Tokyo, 2003.

    [26] D.S. Katz, R.R. Some, NASA advances robotic exploration, IEEE

    F. Aubepart, N. Franceschini / Microprocessors and Microsystems 31 (2007) 408419 419[8] S.S. Beauchemin, J. Barron, The computation of optical ow, ACMComputing Surveys 27 (3) (1995) 433467.

    [9] J.-M. Pichon, C. Blanes, N. Franceschini, Visual guidance of a mobilerobot equipped with a network of self-motion sensors, in: W. Wolfe,W. Chun (Eds.), Mobile Robots. SPIE, vol. 1195, Bellingham, USA,1989, pp. 4453.

    [10] N. Franceschini, J.-M. Pichon, C. Blanes, From insect vision to robotvision, Philosophical Transactions Royal Society of London B (337)(1992) 283294.

    [11] T. Netter, N. Franceschini, A Robotic aircraft that follows terrainusing a neuromorphic eye, in: Proceeding IEEE InternationalConference on Robots and Systems (IROS), Lausanne, Swiss, 2002,pp. 129134.

    [12] F. Ruer, S. Viollet, N. Franceschini, Visual control of two aerialmicro-robots by insect-based autopilots, Advanced Robotics 18 (8)(2004) 771786.

    [13] N. Franceschini, J-M. Pichon, C. Blanes, Bionics of visuomotorcontrol, in: T. Gomi (Ed.), Evolutionary Robotics: From IntelligentRobots to Articial Life, AAAI books, Ottawa, 1997, pp. 4967.

    [14] F. Mura, N. Franceschini, Obstacle avoidance in a terrestrialmobile robot provided with a scanning retina, in: M. Aoki, I.Masaki (Eds.), Intelligent Vehicles, vol. II, MIT Press, Cambridge,1996, pp. 4752.

    [15] T. Netter, N. Franceschini, Neuromorphic optical ow sensing fornap-of-the-earth ight, in: Proceeding of SPIE Conference on MobileRobots XIV, vol. 3838, Boston, USA, 1999, pp. 208216.

    [16] F. Ruer, N. Franceschini, OCTAVE, a bio-inspired visuo-motorcontrol system for the guidance of Micro-Air-Vehicles, in: Proceedingof SPIE Conference on Bioengineered and Bioinspired Systems, vol.5119, Bellingham, USA, 2003, pp. 112.

    [17] F. Ruer, N. Franceschini, Optic ow regulation: the key to aircraftautomatic guidance, Journal of Robotics and Autonomous Systems(50) (2005) 177194.

    [18] S. Viollet, N. Franceschini, Aerial Minirobot that stabilizes andtracks with a bio-inspired visual scanning sensor, in: B. Webb, T.Consi (Eds.), Biorobotics, MIT Press, Cambridge, 2001, pp. 6783.

    [19] J. Serres, F. Ruer, N. Franceschini, Two optic ow regulators forspeed control and obstacle avoidance, in: Proceedings of the rstIEEE International Conference on Biomedical and Biomechatronics(Biorob), Pisa, Italy, 2006, pp. 750757.

    [20] F. Ruer, S. Viollet, S. Amic, N. Franceschini, Bio-inspired opticalow circuits for the visual guidance of Micro-Air-Vehicles, in:Proceeding of the IEEE International Symposium on Circuits AndSystems, vol. III, Bangkok, Thailand, 2003, pp. 846849.

    [21] R.R.Harrison,C.Koch,A robust analogVLSImotion sensor basedonthe visual system of the y, Autonomous Robots 7 (3) (1999) 211224.

    [22] S.C. Liu, A. Usseglio-Viretta, Fly-like visuomotor responses of arobot using a VLSI motion-sensitive chips, Biological Cybernetics 85(6) (2001) 449457.

    [23] J. Kramer, R. Sarpeshkar, C. Kock, Pulse-based analog VLSIvelocity sensors, IEEE Transactions on Circuits and Systems II (44)(1997) 86101.

    [24] G. Barrows, C. Neely, Mixed-mode VLSI optic ow sensors for in-ight control of a Micro Air Vehicle, Proceedings of SPIE 4109 (2000)5263.

    [25] H. Yamada, T. Tominaga and M. Ichikawa, An autonomous yingobject navigated by real-time optical ow and visual target detection,[33] J.C. Zuerey, A. Beyeler, D. Floreano, Vision-based navigation fromwheels to wings, in: Proceeding of IEEE International Conference onintelligent Robots and Systems, Las Vegas, USA, 2003, pp. 29682973.

    [34] F. Aubepart, N. Franceschini, Optic ow sensors for robots:Elementary Motion Detectors based on FPGA, in: Proceeding ofIEEE International Workshop on Signal Processing Systems, Athens,Greece, 2005, pp. 182187.

    [35] C. Browy, G. Gullikson, M. Indovina, A Top-Down approach toIC design, .

    Fabrice Aubepart was born in Chaumont, France.In 1999, he obtained his Ph.D degree in Micro-electronics at the Universite Louis Pasteur inStrasbourg, France. He joined the BioroboticsLaboratory at the Motion and Perception Insti-tute, CNRS and University of the Mediterraneanin Marseille, France, in 2001. His research focu-ses mainly on Algorithms and Architectures forRobotics, Computer Vision and VLSI Design.

    Nicolas Franceschini was born in Macon, France.He graduated in Electronics and Control Theoryat the National Polytechnic Institute in Grenoblebefore studying Biophysics, Neurophysiologyand behavioural analysis at the University andMax-Planck Institute for Biological Cyberneticsin Tubingen (Germany). He obtained his PhD atthe National Polytechnic Institute in Grenoble in1972 and spent nine years as a research worker atthe Max-Planck Institute. He then settled downin Marseille, where he set up a Neurocybernetics

    Research Group at the National Centre for Scientic Research(C.N.R.S.). He is now a C.N.R.S Research Director and Head of theBiorobotics Laboratory at the Motion and Perception Institute, CNRSand Univ. of the Mediterranean, Marseille, France.Industrial Electronics, Ajaccio, France, 2004, vol. 1, pp. 7176.[28] N. Franceschini, C. Blanes, L. Oufar, Passive non-contact opticalvelocity sensor, Dossier ANVAR/DVAR Nb 51,549, Paris, 1986.

    [29] C. Blanes, Appareil visuel elementaire pour la navigation a` vue dunrobot mobile autonome, DEA thesis (in French), Univ. Aix-Marseille,1986.

    [30] K. Hausen, M. Egelhaaf, Neural mechanisms of visual course controlin insects, in: Facets of Vision, Springer, Berlin, 1989, pp. 391424.

    [31] R.C. Hardie, Functional organisation of y retina, in: D. Ottoson(Ed.), Progress in Sensory Physiology, Berlin, vol. 5, pp. 179.

    [32] F. Aubepart, M. El Farji, N. Franceschini, FPGA implementation ofElementary Motion Detectors for the visual guidance of Micro-Air-Vehicles, in: Proceeding of IEEE International Symposium onComputer 36 (1) (2003) 5261.[27] N. Franceschini, A. Riehle, A. Le Nestour, Directionally selective

    motion detection by insect neurons, in: D.G. Stavenga, R.C. Hardie(Eds.), Facets of Vision, Springer, Berlin, 1989, pp. 360390.

    Bio-inspired optic flow sensors based on FPGA: Application to Micro-Air-VehiclesIntroductionBio-inspired visual systemSpatial sampling and spatial filteringElementary Motion Detector (EMD)Photoreceptor configuration

    Top-down methodologyEMD implementationPhotoreceptor configurationDigital specificationsArchitecture

    Experimental resultsMulti-EMD hardware integrationExperimental test-bench

    ConclusionReferences