
Visual Processing in Free Flight

Martin Egelhaaf*
Neurobiology & CITEC, Bielefeld University, Bielefeld, Germany
*Email: [email protected]

Definition

With their miniature brains, many insect groups are able to control highly aerobatic flight maneuvers and to solve spatial vision tasks, such as avoiding collisions with stationary obstacles as well as moving objects, landing on environmental structures, pursuing rapidly moving animals, or localizing a previously learned inconspicuous goal on the basis of environmental cues. With regard to solving such tasks, these insects outperform man-made autonomous flying systems, especially if computational costs and energy efficiency are taken as benchmarks. To accomplish their extraordinary performance, several insect groups have been shown to actively shape the dynamics of the image flow on their eyes ("optic flow") by the characteristic way they move when solving behavioral tasks. The neural processing of spatial information is greatly facilitated, for instance, by segregating the rotational from the translational optic flow component by way of a saccadic flight and gaze strategy. Flying insects acquire at least part of their strength as autonomous systems through active interactions with their environment, which lead to adaptive behavior in surroundings of a wide range of complexity. Model simulations and robotic implementations show that the smart biological mechanisms of motion computation and visually guided flight control might be helpful to find technical solutions.

Detailed Description

Flying animals, but also autonomous man-made flying systems, have to solve a number of fundamental tasks. Most basically, they need to control their attitude in three-dimensional space and to stabilize their intended flight course against disturbances. Moreover, to accomplish any task in the environment, a flying animal relies on information about the spatial layout of its surroundings. In many situations the animal may encounter other moving animals and either needs to avoid them in the case of predators or may pursue one of them, for instance, when it is searching for prey or a mate. Visual information and its processing by the nervous system are crucial for solving such tasks in free flight. Self-motion through three-dimensional environments, but also movements of other animals, induces image displacements on the eyes and, thus, brightness changes on the two-dimensional array of photoreceptors, i.e., the input site of the visual system. All visual information required for solving any of the abovementioned fundamental tasks needs to be processed from the changes in these two-dimensional intensity distributions at the photoreceptor level.

There are many ways in which this may be accomplished. One particularly well-analyzed and functionally highly relevant possibility is based on the concept of optic flow that results from the image displacements on the eyes induced by self-motion and/or object motion. The optic flow fields depend on the dynamic characteristics of self-motion and object motion, as well as on the three-dimensional structure of the environment, in a very specific way. Therefore, they contain rich information both on the way the animal moves around and on its surroundings. Since the optic flow vectors defined by the retinal image displacements are not immediately available to the brain of the animal, representations of optic flow fields and of the behaviorally relevant information that may be contained in them need to be computed by the nervous system from the time-varying brightness changes of the retinal image as sensed by the lattice of photoreceptors. This processing is done in a sequence of steps before the extracted information can be used by the animal to solve visual tasks.

Optic flow as a source of information is particularly relevant for fast-flying animals, including many insects: (1) Given that during flight animals do not have any direct contact with the ground surface, they rely much more on visual information and, in particular, optic flow information for controlling their attitude in space than when walking. (2) Since flight takes place in three dimensions, the resulting optic flow may be much more complex than during locomotion on the ground: Movements are characterized by six degrees of freedom, three rotational and three translational components, reflected in the corresponding optic flow components (Fig. 1). (3) Especially during the often highly aerobatic flight maneuvers of many insects, the optic flow fields may have complex dynamical properties that are related both to the characteristic dynamics of the flights and to the animal's spatial relation to objects and surfaces in its surroundings. (4) In contrast to most current technical flying systems, which frequently employ systems such as infrared sensors and the global positioning system for obtaining positional, distance, and range information, animals apparently do not have access to such systems and have to rely on optic flow information for solving spatial tasks during flight.

Fig. 1 Self-motion components and corresponding optic flow fields. Movements in free flight have six degrees of freedom, three translatory (forward, upward/downward, sideways) and three rotatory ones (roll, yaw, pitch). The overall geometrical structure of the corresponding optic flow fields is sketched on a sphere. Note that the lengths of the flow vectors during translatory movements depend, in addition to flight direction, on the distance between the animal and the surfaces and objects in the environment and, thus, can be regarded as a signature of its depth structure (not taken into account in the figure)


Constraints Imposed on the Visual Input During Free Flight

The dynamical properties of retinal image displacements that are generated during free flight are largely shaped by the animal's flight style. The flight style of some insect species, especially of flies and flying hymenopterans, such as bees, has been analyzed in detail. It is characterized by sequences of rapid saccade-like turns of body and head interspersed with translational flight phases during which the gaze direction is kept largely constant (Fig. 2) (Land 1999; Zeil et al. 2008; Egelhaaf et al. 2012). Saccadic turns have a rather uniform time course and are shorter than 50 ms. Angular velocities of up to several thousand degrees per second can occur during saccades. Since roll movements of the body that are performed for steering purposes during saccades, and also during sideways translations, are compensated by counter-directed head movements, the animal's gaze direction is kept virtually constant during the intersaccadic intervals. Hence, turns that are inevitable to reach behavioral goals are minimized in duration and separated from translational flight phases.
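The saccade/intersaccade segmentation that underlies such analyses can be made operational with a simple angular-velocity threshold on a recorded yaw trace. The following is a minimal sketch; the threshold value and the toy trajectory are illustrative assumptions rather than measured parameters:

```python
import numpy as np

def segment_saccades(yaw_deg, dt, vel_threshold=300.0):
    """Split a yaw trace (degrees) into saccadic and intersaccadic samples
    by thresholding absolute angular velocity (deg/s). The 300 deg/s
    threshold is illustrative, not a measured constant."""
    yaw_vel = np.gradient(yaw_deg, dt)          # angular velocity, deg/s
    saccadic = np.abs(yaw_vel) > vel_threshold  # True during putative saccades
    return yaw_vel, saccadic

# Toy trajectory: stable intersaccadic gaze interrupted by one 50 ms turn
dt = 0.001                                      # 1 ms sampling
t = np.arange(0.0, 0.3, dt)
yaw = 50.0 * np.clip((t - 0.15) / 0.05, 0.0, 1.0)  # 50 deg turn in 50 ms
vel, sacc = segment_saccades(yaw, dt)
print(f"saccadic fraction: {sacc.mean():.2f}")  # short turns, long stable gaze
```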

This peculiar time structure facilitates the processing of spatial information, because only the optic flow component induced by translations contains information about the relative distance of environmental surfaces and objects from the animal: surfaces and objects nearby pass quickly, while those far off appear virtually stationary (Fig. 3; see below).

A saccadic flight and gaze strategy can be observed during cruising flight in stationary environments in a variety of behavioral contexts (see below). It actively constrains the geometry and dynamics of the optic flow on the eyes and, thus, the visual input, and it facilitates visual information processing in free flight in spatial tasks as diverse as collision avoidance, flight speed control, and local navigation (Egelhaaf et al. 2012). However, in some behavioral situations the motion patterns perceived by the eyes are not generated by the animal's self-motion alone. If another object, such as a predator, a prey, or a potential mate, moves in the animal's visual field, the visual input is shaped by both the animal's self-motion and the direction and velocity of the moving object relative to the animal.

Fig. 2 Saccadic flight and gaze strategy of free-flying blowflies. (a) Sample flight trajectory of a blowfly flying in a flight arena with textured walls. Downward view of a flight trajectory (dotted line), with head position and orientation shown every 50 ms (time color coded: start, red; finish, green). (b) Upper panel: orientation of the fly's longitudinal body axis (solid red line) and flight direction (dotted blue line) in the external coordinate system. Bottom panel: angular velocity of the body orientation of the fly (solid red line) and of the flight trajectory (dotted blue line). The fly changed its gaze and heading direction through a series of short and fast body turns. Flight direction and body axis orientation frequently deviate: The body axis already points in the new flight direction while the fly is still moving on its previous course (Adapted from van Hateren et al. 2005)


There is another relevant issue that constrains information processing in free flight: the short time intervals of only up to some 10 ms that are available for extracting behaviorally relevant information from the retinal input. These are a consequence of the characteristic dynamics of saccadic flights, but also of objects moving through the visual field in pursuit or escape maneuvers. Understanding visual processing in free flight needs to take these characteristic dynamical constraints into account. Given the limited reliability of all neural mechanisms, it is, thus, hardly possible to draw definite conclusions that are functionally valid for free-flight situations from experiments with purely experimenter-designed optic flow stimuli that are frequently constant for several hundreds of milliseconds. During such time intervals insects may have performed complete flight maneuvers with more than ten saccadic turns and straight flight segments of variable duration and direction in between.

Geometry of the Retinal Motion Patterns Induced by Self-Motion and Object Motion

The optic flow generated by self-motion in a stationary environment has characteristic geometrical properties which depend on the movement vector as well as on the three-dimensional layout of the environment. A point on a surface in the environment is given by the position vector P = (X, Y, Z)^T relative to the eye-centered coordinate system (T denotes the transpose of the vector). When the eye moves in the environment, P moves in time t along the corresponding three-dimensional path P(t) = (X(t), Y(t), Z(t))^T in the eye-centered coordinate system (Fig. 4).

Its velocity is given by the time derivative of this path:

V(t) = dP(t)/dt = (dX(t)/dt, dY(t)/dt, dZ(t)/dt)^T    (1)

If we assume that the eye moves in a stationary and rigid environment, all points of an object share the same three rotational and translational motion vectors in the eye-centered coordinate system. Then the time-dependent velocity vector V(t) can be expressed in terms of the instantaneous translation vector T(t) = (T_x(t), T_y(t), T_z(t))^T and rotation vector Ω(t) = (Ω_x(t), Ω_y(t), Ω_z(t))^T, respectively (Longuet-Higgins and Prazdny 1980; Heeger and Jepson 1992):

V(t) = dP(t)/dt = (dX/dt, dY/dt, dZ/dt)^T = −(Ω × P + T)    (2)

Fig. 3 Schematic illustration of the consequences of rotational (left diagram) or translational self-motion (right diagram) for the resulting optic flow. Superimposed images were either generated by rotating a camera around its vertical axis or by translating it forward. Rotational self-motion leads to image movements (green arrows) of the same velocity (indicated by arrow length) irrespective of the distance of environmental objects from the observer. In contrast, the optic flow elicited by translational self-motion (red arrows) depends on the distance of objects from the observer. Hence, translational optic flow contains information about the spatial layout of the environment

Any motion of the eye in the environment results in a time-dependent displacement of the brightness values on the retina, i.e., the image plane of the eye. The retinal image sequence can be described as a function L(x, y, t), with the brightness value L at image position (x, y) and time t. The velocities of the image points are then given in the most general form by

v = (dx(t)/dt, dy(t)/dt)^T    (3)

The optic flow vectors are geometrically related to the movement vector of the eye and the location of the surface points in the environment as well as their spatial relationship. The surface points in the environment, P = (X, Y, Z)^T, project to the image points (x, y)^T according to

x = fX/Z and y = fY/Z    (4)

where f represents the distance between the eye's nodal point and the image plane. Substituting Eqs. 4 and 2 into Eq. 3 leads to the following expression for the image velocity after some rearrangements:
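The step from Eqs. 2 and 4 to Eq. 5 can be made explicit by differentiating the projection equations and substituting Eq. 2. The following reconstruction follows the standard formulation of Longuet-Higgins and Prazdny (1980); the matrices M1 and M2 are written out in one common convention, and their exact form depends on the chosen coordinate conventions:

$$
\begin{aligned}
\dot{x} &= \frac{f\,\dot{X}}{Z}-\frac{x\,\dot{Z}}{Z}
 =\frac{x\,T_z-f\,T_x}{Z}+\Omega_x\,\frac{xy}{f}-\Omega_y\left(f+\frac{x^2}{f}\right)+\Omega_z\,y \\
\dot{y} &= \frac{f\,\dot{Y}}{Z}-\frac{y\,\dot{Z}}{Z}
 =\frac{y\,T_z-f\,T_y}{Z}+\Omega_x\left(f+\frac{y^2}{f}\right)-\Omega_y\,\frac{xy}{f}-\Omega_z\,x
\end{aligned}
$$

so that

$$
M_1(x,y)=\begin{pmatrix}-f & 0 & x\\ 0 & -f & y\end{pmatrix},\qquad
M_2(x,y)=\begin{pmatrix}xy/f & -(f+x^2/f) & y\\ f+y^2/f & -xy/f & -x\end{pmatrix}
$$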

Fig. 4 Relationship between self-motion and the motion of the retinal image of an object in the environment. Retinal motion generated by self-motion in a stationary environment has characteristic geometrical properties which depend on the movement vector as well as on the three-dimensional layout of the environment. A point on an object in the environment is given by the position vector P in the eye-centered coordinate system. When the eye moves in the environment according to a translation vector T(t) = (T_x(t), T_y(t), T_z(t))^T and a rotation vector Ω(t) = (Ω_x(t), Ω_y(t), Ω_z(t))^T, P moves with velocity V(t) in the eye-centered coordinate system. This results in a time-dependent displacement of the image of the object on the retina (i.e., the image plane of the eye) at a velocity v(t). f represents the distance between the eye's nodal point and the image plane


v = (dx(t)/dt, dy(t)/dt)^T = (1/Z(x, y)) M1(x, y) T + M2(x, y) Ω    (5)

In this equation, Z(x, y) is the distance from each image point on the eye to its corresponding surface point in the environment. M1 and M2 are matrix expressions that depend only on the image position and the distance of the image plane, f, from the nodal point of the eye (for details see Heeger and Jepson 1992). The field of velocity vectors v at all points in the image plane of the eye represents the optic flow field.

From a functional point of view, the most important conclusion that can be drawn from Eq. 5 is that the velocity vectors consist of two terms: The first term is referred to as the translational component of the flow field; it depends on the translation velocity, T, of the eye in three dimensions and on the depth structure of the environment, i.e., the distance to environmental objects. The second term reflects the rotational component; it depends only on the rotation velocity of the eye, Ω, in the environment, but not on the distance to objects. Since the distance term 1/Z and the translation velocity are multiplied, both a larger distance and a slower translation velocity result in a slower image velocity. This implies that the spatial information in the translational optic flow component is ambiguous. Accordingly, spatial information can be extracted from the optic flow field only in relative terms unless the translation velocity is known (Longuet-Higgins and Prazdny 1980).
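The decomposition in Eq. 5 is easy to verify numerically. The sketch below uses the matrix forms written out above and computes the flow vector at a single image location; all parameter values are merely illustrative. It shows that changing the distance Z affects only the translational term:

```python
import numpy as np

def flow_vector(x, y, Z, T, Omega, f=1.0):
    """Image velocity v = (1/Z) * M1 @ T + M2 @ Omega (Eq. 5).
    x, y: image coordinates; Z: distance to the viewed surface point;
    T, Omega: translation and rotation vectors of the eye."""
    M1 = np.array([[-f, 0.0, x],
                   [0.0, -f, y]])
    M2 = np.array([[x * y / f, -(f + x**2 / f), y],
                   [f + y**2 / f, -x * y / f, -x]])
    v_trans = (1.0 / Z) * M1 @ T   # depends on depth Z
    v_rot = M2 @ Omega             # independent of Z
    return v_trans + v_rot, v_trans, v_rot

T = np.array([0.0, 0.0, 1.0])      # forward translation, 1 m/s (illustrative)
Omega = np.array([0.0, 0.3, 0.0])  # slow yaw rotation, rad/s (illustrative)
for Z in (1.0, 2.0, 4.0):
    v, vt, vr = flow_vector(0.2, 0.1, Z, T, Omega)
    print(f"Z={Z}: translational {vt}, rotational {vr}")
# Doubling Z halves the translational term; the rotational term is unchanged.
```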

Additional complexities may arise in natural environments where, for instance, brightness often varies over time, as when clouds temporarily occlude the sun. In that case, not every change in brightness at the level of the photoreceptors is generated by motion. Moreover, not every motion yields a change in brightness values: If the eye passes an untextured surface, the optic flow within the object area cannot be recovered without additional information. Even at object edges, where the brightness of the background differs from that of the object, the direction of object motion can only be recovered if more information is available, for example, at object corners. As a consequence of this aperture problem and the other aspects sketched above, it is not possible to derive the geometrically correct optic flow vectors from the displacements of individual image points without additional information about the physical structure of the environment and the way the observer moves (Heeger and Jepson 1992; Borst and Egelhaaf 1993).

The degrees of freedom in stimulus space and, thus, the abovementioned ambiguities are constrained by physics. Environmental structures tend to change only on a coarse spatial scale; as a consequence, the probability is very large that neighboring image points are similar ("spatial smoothness constraint"). Moreover, animals cannot change their direction of motion instantaneously, but are limited by the physics of force production through muscles as well as by friction and inertia, depending much on the shape and mass of the body ("temporal smoothness constraint"). However, input dynamics might also be shaped, beyond the constraints imposed by physics, by active flight and gaze strategies (see above). All these features may help to reduce ambiguities when it comes to optic flow estimation and may facilitate the processing of information about the animal's self-motion and the three-dimensional layout of the environment.

If the environment is not entirely stationary, but contains moving objects, the motion patterns generated on the eyes are constrained not only by the animal's self-motion but also by the velocity and trajectory of the moving objects. Assume for convenience that the eye is stationary and a single object of half size, l, approaches the eye on a straight collision course at a constant velocity v. Further assume that x > 0 is the object's position relative to the eye (i.e., x = 0 corresponds to the eye's position) and that t < 0 represents the time before an impending collision (i.e., t = 0 corresponds to the time of expected collision). Consequently, the object's velocity, v, is < 0 when the object is approaching. The object's position during such a collision course is then given by x(t) = vt, and, by trigonometry, its retinal angular size θ can be determined from θ(t) = 2 tan⁻¹[l/(vt)]. Hence, the temporal dynamics of the retinal image is fully determined by the size-to-speed ratio, l/|v|. For a given l/|v|, the retinal size of an object on a constant collision course initially increases only slowly, i.e., the retinal expansion velocity is small, but then its size appears to grow exponentially with decreasing distance to the eye, while the velocity vectors always point away from the center of the approaching object (Fig. 5). Exactly the same time course of θ is obtained for all pairs of l and v as long as the ratio l/|v| is kept constant (Fotowat and Gabbiani 2011).
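A minimal numerical check of this invariance, using the expression for θ(t) given above (the object sizes and velocities are arbitrary illustrative values):

```python
import numpy as np

def angular_size(t, l, v):
    """Retinal angular size theta(t) = 2*arctan(l / (v*t)) of an object of
    half size l approaching at constant velocity v < 0, for times t < 0."""
    return 2.0 * np.arctan(l / (v * t))

t = np.linspace(-2.0, -0.01, 5)                # seconds before expected collision
theta_small = angular_size(t, l=0.1, v=-1.0)   # l/|v| = 0.1 s
theta_large = angular_size(t, l=0.4, v=-4.0)   # same l/|v| = 0.1 s
print(np.allclose(theta_small, theta_large))   # True: identical time courses
```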

Processing of Visual Motion Information

The velocity vectors characterizing the retinal optic flow patterns during both self-motion and object motion are not directly available to the visual system. Rather, the two-dimensional array of photoreceptors has only the two-dimensional pattern of time-dependent brightness changes at its disposal. The first stage of visual motion processing is generally thought to be the computation of local motion information from the brightness changes at neighboring locations in the retinal lattice (Fig. 6). The second stage of visual motion processing is the interpretation of this neural representation of optic flow in terms of the direction and velocity of the animal's self-motion and of object motion, on the one hand, but also in terms of the properties and distances of stationary surfaces in the three-dimensional world, on the other hand. Although much is known about how these visual processing steps may be accomplished by flying animals, there are still many open questions.

Fig. 5 Escape from an impending collision with a moving object. (a) Variables characterizing looming stimuli generated by an approaching object viewed by an insect eye. A solid object (red disk) with half size, l, approaches the eye at a constant speed, v; t = 0 is the time of expected collision and t < 0 the time before collision. Consequently, the object's velocity, v, is < 0 when the object is approaching, and −v = |v| is the absolute value of v. θ(t) is the angular size of the object as seen by the eye. The time course of θ is fully determined by the size-to-speed ratio, l/|v|. (b) Kinematics of looming stimuli. Both the angular size, θ(t) (blue curve), and speed, θ′(t) (green curve), of an approaching object grow nonlinearly with time. (c) If a locust is approached by a moving object, it initiates an escape response. The time to collision at takeoff is linearly related to the size-to-speed ratio of the looming stimulus. (d) The response to a looming stimulus of an identified wide-field neuron in the locust third visual neuropile increases with increasing retinal size of the object and reaches a peak before the impending collision. The relation between the peak time and the size-to-speed ratio of the looming stimulus is close to linear (Adapted from Fotowat and Gabbiani 2011)

Biological motion detection mechanisms get their input from a retinotopic array of discrete sampling points, given in insects by the ommatidial lattice. The number of ommatidia (facets) ranges from only a few tens up to 10,000 (Land 1997). Thus, compared with technical imaging systems, the number of sampling points and the spatial resolution of insect eyes are very low. Nonetheless, this low spatial resolution appears not to be detrimental to the overall performance of visual motion processing, given the fact that many insects are well able to perform highly aerobatic flight maneuvers that are based on optic flow information (see below).

Fig. 6 Major processing steps of visual motion computation in insects. (a) Schematic of the visual motion pathway. Images of the moving environment are projected on the array of photoreceptors. The input is spatially and temporally filtered before signals originating from neighboring points in visual space interact with each other. These interactions lead to local motion measurements. The outputs of many retinotopically organized local movement detectors are spatially pooled by wide-field neurons. Their output signals are involved in the control of various motion-driven behavioral components. (b) Organization of a local movement detector in its simplest form. The movement detector receives spatially and temporally filtered signals from neighboring points in space. The detector consists of two mirror-symmetrical subunits. In each subunit one of the inputs is temporally delayed (τ) before it interacts nonlinearly with the undelayed signal of the other detector input. A multiplication-like interaction (M) is the lowest-order nonlinearity that is sufficient to account for many aspects of motion detection. The subunit outputs contribute to the response of the wide-field neurons with opposite polarity, i.e., the two signals are subtracted. (c) Example of a wide-field neuron in the third visual neuropile of the blowfly. The cell was filled with a fluorescent dye before it was visualized under a fluorescence microscope

The most prominent computational mechanism proposed as the basis of local visual motion processing in flying insects, but also in birds and even humans, is the correlation-type motion detector (Reichardt 1961; Borst and Egelhaaf 1989, 1993; Egelhaaf and Borst 1993). This mechanism was originally proposed on the basis of behavioral experiments with beetles but has been characterized in detail by electrophysiological analyses in the visual system of flies. In its simplest form, a local motion detector is composed of two mirror-symmetrical subunits (Fig. 6b). In each subunit, the signals of adjacent light-sensitive cells receiving spatially and temporally filtered brightness signals from neighboring points in visual space are multiplied after one of them has been delayed. The final detector response is obtained by subtracting the outputs of two such subunits with opposite preferred directions, thereby considerably enhancing the direction selectivity of the motion detection circuit. Each motion detector responds with a positive signal to motion in a given direction, i.e., either horizontally or vertically, and with a negative signal to motion in the respective opposite direction. Various elaborations of this basic motion detection scheme have been proposed to account for the responses of insect motion-sensitive neurons under a wide range of stimulus conditions, including even natural optic flow as experienced under free-flight conditions (Egelhaaf et al. 2012). During the last years, much progress has been made by combining the sophisticated repertoire of genetic and molecular approaches in Drosophila with electrophysiological and imaging techniques to identify the different components of the neural circuits underlying the correlation motion detection scheme. From a computational point of view, an especially relevant result is the separation of the input circuits of the movement detector into an ON and an OFF pathway that process brightness increments and decrements separately (Borst 2009; Borst et al. 2010).
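A minimal discrete-time sketch of this correlation scheme is given below; a first-order low-pass filter stands in for the delay stage, and the time constant, sampling step, and phase offsets are illustrative assumptions rather than fitted parameters:

```python
import numpy as np

def reichardt_detector(left, right, dt=0.001, tau=0.035):
    """Correlation-type motion detector in its simplest form.
    left, right: brightness signals from two neighboring sampling points.
    Each subunit multiplies the low-pass-filtered ("delayed") signal of one
    input with the undelayed signal of the other; the two mirror-symmetrical
    subunit outputs are subtracted, yielding a direction-selective response."""
    alpha = dt / (tau + dt)                  # first-order low-pass coefficient
    lp_left = np.zeros_like(left)
    lp_right = np.zeros_like(right)
    for i in range(1, len(left)):            # running low-pass filters
        lp_left[i] = lp_left[i-1] + alpha * (left[i] - lp_left[i-1])
        lp_right[i] = lp_right[i-1] + alpha * (right[i] - lp_right[i-1])
    return lp_left * right - lp_right * left  # subunit difference

# Drifting sine grating sampled at two points separated by a phase offset
dt = 0.001
t = np.arange(0.0, 2.0, dt)
phase = 2 * np.pi * 1.0 * t                   # 1 Hz temporal frequency
resp_pd = reichardt_detector(np.sin(phase), np.sin(phase - 0.5), dt)
resp_nd = reichardt_detector(np.sin(phase), np.sin(phase + 0.5), dt)
print(resp_pd.mean() > 0, resp_nd.mean() < 0)  # opposite signs for opposite directions
```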

The local motion measurements provided by correlation-type movement detectors do not provide a veridical representation of the optic flow vectors. Even if at each retinal location the corresponding pair of local movement detectors most sensitive to horizontal and vertical motion, respectively, is combined into a local motion vector, its direction and amplitude may deviate considerably from the geometrically correct velocity vector. Several aspects of biological motion detectors are of particular functional relevance (Egelhaaf and Borst 1993; Egelhaaf et al. 2012): (1) Velocity dependence: Local motion detectors do not operate like odometers, even if the time average or spatial average of their responses is taken into account. Their mean response increases with increasing velocity, reaches a maximum, and then decreases again and, thus, does not reflect pattern velocity unambiguously. The response characteristics of biological motion detection systems are even more complex, since their velocity maximum depends on the textural properties of the moving stimulus pattern (Fig. 7a). The pattern dependence of the velocity tuning is reduced if the stimulus pattern consists of a broad range of spatial frequencies, as is characteristic of natural scenes. Despite these ambiguities, free-flying flies and bees appear to regulate their translation velocity so as to keep the retinal velocities in that part of the operating range of the motion detection system in which its response increases monotonically with retinal velocity. (2) Time course of local motion responses: The responses of the local movement detectors do not represent the local pattern velocity unambiguously because they prominently depend on the local pattern properties that are perceived by their input elements. Since these response modulations of neighboring movement detectors are phase shifted with respect to each other, spatial pooling of many of them, especially along the direction of motion, mainly reduces, depending on the spatial extent of pooling, those pattern-dependent response modulations that originate from the high spatial frequencies of the stimulus pattern (Fig. 7b).
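The wavelength dependence of the velocity optimum (Fig. 7a) follows directly from the correlation scheme: for a drifting sine grating the time-averaged detector response is commonly written as R ∝ [ωτ/(1 + ω²τ²)]·sin(2πΔφ/λ), with temporal frequency ω = 2πv/λ (see, e.g., Borst and Egelhaaf 1989). The sketch below evaluates this expression; the values of τ and Δφ are illustrative assumptions:

```python
import numpy as np

def mean_emd_response(v, wavelength, tau=0.035, delta_phi=2.0):
    """Time-averaged response of a basic correlation detector to a sine
    grating drifting at velocity v (deg/s); wavelength and the sampling
    base delta_phi are in degrees, tau in seconds."""
    omega = 2 * np.pi * v / wavelength     # temporal frequency (rad/s)
    return (omega * tau / (1 + (omega * tau) ** 2)) * np.sin(2 * np.pi * delta_phi / wavelength)

v = np.linspace(1.0, 500.0, 1000)
for lam in (6.6, 21.5, 36.3):              # spatial wavelengths as in Fig. 7a
    v_opt = v[np.argmax(mean_emd_response(v, lam))]
    print(f"wavelength {lam} deg -> velocity optimum ~ {v_opt:.0f} deg/s")
# The optimum sits at a fixed temporal frequency (omega * tau = 1), so it
# shifts to higher velocities as the spatial wavelength increases.
```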

Optic flow processing in flying insects is not in all instances believed to be based on the directionally selective movement detectors described by the correlation model. In a specialized circuit tuning wide-field neurons in locusts and, probably, also in the fruit fly Drosophila to looming stimuli as evoked by an approaching object, the retinotopic excitatory inputs of these neurons have been concluded not to rely on the direction of motion. Rather, their responses depend on the brightness changes evoked by retinal image displacements and on the increase of these changes when the object comes closer to the eye. With increasing retinal speed, as the object comes closer, the latency of the inputs decreases, favoring synchronization of the inputs sequentially activated throughout the looming sequence and making the neuron maximally sensitive to an impending collision (Fotowat and Gabbiani 2011; Dewell and Gabbiani 2012). Whereas this computationally parsimonious principle serves a special circuit for detecting an object on a collision course, correlation-type movement detectors are thought to be employed by flying insects in solving a broad variety of motion-dependent orientation tasks, such as course control, object-induced orientation, collision avoidance with stationary surfaces in the environment, or even landmark-based local navigation, which require robust motion information and need to combine local movement detectors in versatile, task-dependent arrangements.

Optic flow elicited by self-motion is specified by global, rather than merely local, features, depending on flight direction and on whether the animal basically translates or rotates, but also on the three-dimensional layout of the environment (see above). This implies that mechanisms extracting optic flow information from the retinal input need to combine local motion measurements from large areas of the visual field. Accordingly, local motion information has been shown to be spatially pooled on the extended dendrites of wide-field neurons in the insect visual system or, more specifically, the third visual neuropile, the lobula complex (Fig. 6). How this is accomplished is likely to depend on the task that is solved by the respective system. In flies, where the mechanisms of task-dependent spatial pooling of local motion information have been analyzed in detail, a population of some tens of individually identifiable neurons and their synaptic interactions has been characterized in the third visual neuropile. Due to their spatial input organization and specific synaptic interactions, these neurons respond preferentially to retinal motion patterns that may be relevant in different behavioral contexts (Hausen 1984; Krapp 2000; Borst and Haag 2002; Egelhaaf et al. 2002; Egelhaaf 2006; Borst et al. 2010; Borst 2012; Egelhaaf et al. 2012).

Fig. 7 (a) Time-averaged velocity responses of a wide-field neuron in the blowfly third visual neuropile to grating patterns of different spatial wavelengths (red curve, 6.6°; blue curve, 21.5°; green curve, 36.3°) moving horizontally in the neuron's preferred direction. Velocity-response curves strongly depend on the pattern properties; their optima are shifted to higher velocities with increasing spatial wavelength of the pattern (Data from Eckert 1980). (b) Consequences of dendritic integration on the representation of visual motion: schematic of a directionally selective wide-field neuron with two branches of its dendrite, the axon, and the axon terminal. The wide-field neuron receives retinotopically organized input from many local motion detectors (vertical lines terminating with "synapses"; red dots represent excitatory synapses and blue dots inhibitory synapses on the dendrite). As a consequence of this input, the cell is excited by motion in its preferred direction and inhibited by motion in the null direction. Even when the velocity of motion is constant, the activity of the local movement detectors is modulated depending on the texture of the surround in their receptive fields. Traces on the right indicate the time-dependent signals of three local input elements of the wide-field neuron. By dendritic pooling of many local elements, this pattern dependence in the time course of the responses is reduced (left trace)

Local motion information from different parts of the visual field also needs to be combined in an appropriate way if, for instance, an approaching object is to be detected. To a large extent, this situation is characterized by looming stimulation, i.e., by motion vectors pointing away from the edges of the object while its retinal size increases as a consequence of the approach (see above). Again in locusts and, most recently, in flies, where flight behavior induced by approaching objects has been analyzed, wide-field neurons have been characterized in detail in the third visual neuropile and at later processing stages that appear to process this kind of retinal image expansion and to be tuned to such looming stimuli (Fotowat and Gabbiani 2011; Dewell and Gabbiani 2012).

Information Extracted from the Retinal Motion Patterns During Different Free-Flight Behaviors

Flying animals have to solve a variety of behavioral tasks in a variety of contexts that all rely on different aspects of visual motion information. A selection of particularly well-analyzed behaviors will be addressed in the following, whenever possible with respect to the neural mechanisms involved.

Estimation of Self-Motion in the Context of Attitude Control and Course Stabilization

Wide-field neurons that are thought to sense self-motion of the animal and to play a role in course control have been characterized in a number of insect species. The underlying mechanisms have been investigated in flies in great detail. The preferred directions of the local motion detectors that synapse onto a given wide-field cell appear to coincide to some extent with the directions of the velocity vectors characterizing the optic flow induced during particular types of self-motion. The specificity of wide-field neurons for certain types of optic flow has been shown to be much enhanced by synaptic interactions with other wide-field cells in the ipsilateral and/or contralateral half of the visual system. As a consequence, individual wide-field cells respond strongly to certain types of self-motion, such as rotations around the vertical axis of the visual system or around different axes in the horizontal plane (Hausen 1984; Krapp 2000; Taylor and Krapp 2008).

These wide-field cells are commonly thought to mediate compensatory optomotor turning movements of the head and/or of the entire body. These behavioral responses have commonly been analyzed in flight simulators in tethered flight rather than in free flight. The fly generates turning movements of the head and the body to follow the moving pattern. These are usually interpreted as reflexes that stabilize gaze direction and/or the flight course by minimizing the retinal velocities resulting, for instance, from external and/or internal disturbances (Hengstenberg 1993; Taylor and Krapp 2008). As a general feature, compensatory optomotor turning responses are relatively slow: During maintained motion stimulation they build up over several hundreds of milliseconds. Given the extraordinarily rapid flight dynamics and, as a consequence, the continuously changing optic flow pattern on the eyes during free-flight maneuvers of many insects, the functional significance of such slow compensatory optomotor responses under natural behavioral conditions is still unresolved. Since intersaccadic gaze stabilization in free flight appears to be very fast, it is unlikely to be controlled by optomotor feedback. Rather, optomotor feedback is likely to play a role only on a much slower timescale, for instance, to compensate for steady asymmetries at the level of the sensory input (e.g., internal gain differences) or the motor output (e.g., worn-out wings). Hence, the motion-sensitive wide-field neurons in the third visual neuropile may play a role in compensating unintended slow deviations of the animal from its flight course after their output signals have been considerably low-pass filtered in time (Egelhaaf et al. 2012). So far, it is not clear where in the nervous system and by what mechanisms this filtering is accomplished.

In addition to the body, the head of many analyzed insects, such as flies and bees, performs compensatory optomotor movements. In free flight, compensatory head movements are most prominent during roll rotations of the body, as are generated during banked saccadic turns and during sideways translations. Given the extremely rapid timescale at which gaze direction is stabilized during saccadic flight maneuvers, the response latencies, and the relatively slow time course of visually mediated head responses, the functional role of visual motion for mediating compensatory head movements under free-flight conditions is still not entirely clear.

Optic Flow as a Source of Spatial Information

The time that flies and bees keep their gaze straight during cruising flights amounts to more than 80 % of the overall flight time. Rotations are comparatively short and very rapid (Fig. 2). This saccadic flight and gaze strategy has been interpreted as a way to facilitate gathering environmental information and, in particular, spatial information from the retinal image flow during intersaccadic translatory self-motion (see above). The motion-sensitive wide-field neurons in the third visual neuropile are well suited to play this role. Accordingly, responses of such cells in flies and bees have been found to depend on the distance of environmental surfaces and objects passing their receptive fields during self-motion of the animal (Egelhaaf et al. 2012). These object-induced responses are augmented by adaptation mechanisms which depend on stimulus history and, thus, on the previously perceived optic flow (Egelhaaf et al. 2012; Kurtz 2012).

Two features of distance-dependent object-induced responses are of special functional significance. (1) Since the retinal velocity of an object scales inversely with its distance, an object nearby will lead to larger intersaccadic responses than a more distant one. A cluttered spatial scenery is segmented in this way, without much computational effort, into nearby and distant objects. Thus, in behavioral contexts where nearby objects are especially functionally relevant (see below), object detection via optic flow automatically weighs objects according to their functional relevance. (2) The spatial range that leads to significant intersaccadic response changes in motion-sensitive wide-field neurons depends on flight velocity. Under spatially constrained conditions where flies flew at translational velocities of only slightly more than 0.5 m/s, the spatial range within which significant distance-dependent intersaccadic responses are evoked amounts to approximately 2 m. Since a given retinal velocity is determined in a reciprocal way by the distance of objects and the velocity of self-motion, respectively, the spatial range that is neurally represented increases with increasing translational velocity, and, hence, the behaviorally relevant spatial range scales with intersaccadic flight velocity. From an ecological point of view, this scaling of the behaviorally relevant depth range is economical and efficient: A fast-moving animal should, for instance, initiate a turn leading to an avoidance maneuver at an earlier point in time and at a greater distance from an obstacle than when moving slowly.
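The reciprocity of distance and self-motion velocity can be stated compactly: during pure translation, an object at distance d and bearing γ relative to the flight direction moves across the retina at ω = (v/d)·sin γ, so the responses reflect the relative nearness v/d rather than absolute distance. A small sketch of this relation (all values illustrative):

```python
import numpy as np

def retinal_speed(v, d, gamma):
    """Angular velocity (rad/s) of an object at distance d (m) and bearing
    gamma (rad, relative to flight direction) during pure translation at v (m/s)."""
    return (v / d) * np.sin(gamma)

gamma = np.deg2rad(45.0)
print(retinal_speed(v=0.5, d=1.0, gamma=gamma))   # slow flight, near object
print(retinal_speed(v=1.0, d=2.0, gamma=gamma))   # same v/d: identical retinal speed
# Doubling flight speed doubles the distance at which an object produces the
# same intersaccadic response, scaling the represented depth range with velocity.
```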

Spatial information derived from optic flow and represented in the visual motion pathway is used by free-flying insects in a variety of behavioral contexts. Four of these will be sketched here:

Avoiding Collisions with Environmental Surfaces by Controlling Saccadic Turns

In many situations, objects or other structures in the environment, such as extended surfaces, may interfere with the animal's trajectory as obstacles that need to be avoided. Thus, collision avoidance represents a basic but highly relevant spatial task. Optic flow information is the most relevant cue involved in mediating collision avoidance with environmental structures in fast-flying insects. This has been shown in detail both for tethered and free-flying flies. The evasive responses in free flight are based on the characteristic saccadic turns of insects, whereas those observed in tethered flight are much slower and, thus, only reflect to some extent what is going on under free-flight conditions.

There is consensus among studies that the largely translational optic flow during intersaccadic intervals is relevant in controlling the direction and amplitude of the saccades that lead to collision avoidance. However, it is still unresolved which optic flow parameters are most relevant.


Nevertheless, all proposed mechanisms of evasive saccadic responses rely on some sort of asymmetry in the optic flow pattern in front of the two eyes. The asymmetry may be due to the location of the expansion focus of the flow pattern in front of one eye or to a difference between the overall optic flows in the visual fields of the two eyes.

Not all parts of the visual field appear to be involved in saccade control. In blowflies, for instance, the optic flow in the lateral visual field does not play a role in determining saccade direction. This feature is likely to be related to the flight style of blowflies (Egelhaaf et al. 2012). During intersaccadic intervals they predominantly fly forward, with some sideways components immediately after saccades that shift the pole of expansion of the flow field slightly towards frontolateral locations. In contrast, Drosophila and hoverflies, but also bees, are able to hover to some extent and to fly sideways. Here, also lateral and even rear parts of the eye may be involved in eliciting evasive saccades in the context of collision avoidance.

Control of Translatory Flight

Whereas the time course of saccades is fairly stereotyped, with only their amplitude and direction depending on the behavioral context, the intersaccadic translational movements during free flight vary to a much larger extent. In particular, they depend on the spatial layout of the environment but also on the behavioral task. Two systematically analyzed examples of how translational free-flight components are controlled by optic flow in two different behavioral contexts will be sketched: (1) the dependence of translation velocity on the spatial layout of the environment and (2) the control of translational movements during visual landmark navigation in the vicinity of a barely visible goal.

All flying insects investigated in this regard decelerate when their flight path is obstructed. Flight speed is thought to be controlled by the optic flow generated during translational flight. Flies, bees, and moths were concluded to keep the optic flow on their eyes at a "preset" total level by adjusting their flight speed. Accordingly, they decelerate when the translational optic flow increases, for instance, while passing a narrow gap or flying in a narrow tunnel (Srinivasan and Zhang 2004; Srinivasan 2011; Egelhaaf et al. 2012). Not all parts of the visual field contribute equally to the input of the flight velocity controller: The most prominent role can be attributed to the intersaccadic optic flow generated in eye regions looking in front of the insect (Fig. 8). In these regions of the visual field, the intersaccadic retinal velocities are kept in a range of the motion vision system where the responses of the wide-field cells in the third visual neuropile still increase monotonically with increasing velocity and decrease with decreasing velocity (Egelhaaf et al. 2012).
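A minimal control-loop sketch of this idea: a proportional controller that adjusts forward speed so as to hold the perceived translational flow at a set point. Gain, set point, and the simplified tunnel geometry are illustrative assumptions, not a model fitted to fly data:

```python
def simulate_speed_control(widths, setpoint=2.0, gain=0.5, steps=500, dt=0.01):
    """Flight speed regulation by holding perceived optic flow constant.
    In a tunnel of half-width w, sideways-looking translational flow is
    roughly v / w; the controller adjusts v to keep it at the set point."""
    final_speeds = {}
    for w in widths:
        v = 1.0                                  # initial speed, m/s
        for _ in range(steps):
            flow = v / w                         # perceived flow (rad/s)
            v += gain * (setpoint - flow) * dt   # speed up if flow too low
            v = max(v, 0.0)
        final_speeds[w] = round(v, 3)
    return final_speeds

print(simulate_speed_control(widths=[0.1, 0.2, 0.4]))
# Equilibrium is v = setpoint * w: the simulated fly slows down in narrow
# tunnels and speeds up in wide ones, keeping retinal flow roughly constant.
```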

During spatial navigation of bees, translational flight maneuvers have a particularly elaborate fine structure in the vicinity of the goal that can be described by a distinct set of prototypical movements. The optic flow generated during these flight sequences appears to be exploited to gather spatial and textural information about the environment and, in particular, the landmark constellation. Not only the overall velocity but also the sideways and forward translational movements depend on the bee's distance and orientation relative to the landmarks and the goal. For instance, bees tend to perform translational movements with a strong sideways component close to landmarks, as if they wanted to scrutinize them in detail (Dittmar 2011; Egelhaaf et al. 2012).

Estimation of Travelled Distance

Optic flow information is also used, at least by bees, for spatial tasks in the context of long-range navigation, such as the estimation of distances travelled during flights. Bees need to acquire distance (and direction) information on foraging excursions in an unknown environment to be able to return to their nest and to communicate the location of rewarding food sources to their fellow foragers. They gauge distance in terms of the optic flow experienced during the flight to a food source and use this information (in addition to direction information) during their return flights. Since the optic flow generated during translational movements depends on the three-dimensional layout of the environment, distance information gathered in this way is ambiguous. These ambiguities do not lead to problems as long as the recruited bees tend to fly on the same route as the forager and the environment does not change much between the flight of the forager and that of the recruited bees (Fig. 9). Since distinctive changes in the spatial layout of the environment occur only rarely in natural settings over a couple of days, such a relatively simple mechanism of distance estimation is likely to be sufficient for the specific needs of navigating bees under normal behavioral conditions (Srinivasan and Zhang 2004; Srinivasan 2011; Wolf 2011).
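The ambiguity of such flow-based odometry can be stated in one line: the odometer reading is the accumulated image motion ∫ v(t)/d(t) dt, which for constant speed and lateral clearance reduces to distance/clearance. A toy sketch of why the tunnel experiments of Fig. 9 work (the clearance values are illustrative):

```python
def odometer_reading(distance, speed, clearance):
    """Flow-based odometer: integral of lateral image velocity over travel
    time. For constant speed and clearance this is distance / clearance."""
    travel_time = distance / speed
    image_velocity = speed / clearance     # lateral flow, rad/s
    return image_velocity * travel_time    # accumulated image motion, rad

print(odometer_reading(distance=6.0, speed=1.0, clearance=0.055))  # narrow tunnel
print(odometer_reading(distance=6.0, speed=1.0, clearance=2.0))    # open terrain
# A 6 m flight in an 11 cm wide tunnel accumulates far more image motion
# than 6 m outdoors, which is why tunnel bees signal a long distance (Exp. 2).
```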

Fig. 8 (a) Control of translational velocity in free-flying blowflies. Box plot of the translational velocity in flight tunnels of different widths, in a flight arena with two obstacles, and in a cubic flight arena (sketched below the data). Translation velocity strongly depends on the geometry of the flight arena. (b) Box plot of the retinal image velocities within intersaccadic intervals experienced in the fronto-ventral visual field (see sketches above the data diagram) in the different flight arenas. In this area of the visual field, the intersaccadic retinal velocities are kept roughly constant by regulating the translation velocity according to the clearance with respect to environmental structures. The upper and lower margins of the boxes in a and b indicate the 75th and 25th percentiles and the whiskers the data range (Data from Kern et al. 2012)

Local Navigation Based on Landmark Information

Whereas all flying insects should be able to avoid collisions with obstacles and probably rely on optic flow information for solving this task, local navigation is a special ability of particular insects, such as bees, some wasps, and ants, which care for their brood and, thus, have to return to their nest after foraging (Collett and Collett 2002; Zeil et al. 2009; Zeil 2012). Nevertheless, basic elements of local navigation could also be found in Drosophila. Visual landmarks represent crucial spatial cues and are employed to localize a goal, especially if it is barely visible. Information about the landmark constellation around the goal is memorized during elaborate learning flights: The animals perform characteristic flight sequences while facing the area around the goal. During these learning flights the animal somehow gathers relevant information about the landmark constellation that is subsequently used to relocate the goal when returning to it after an excursion. Although a variety of visual cues, such as contrast, texture, and color, are relevant in defining landmarks and are employed to find the goal, it is becoming increasingly clear that the spatial layout of the landmark constellation also plays an important role and that landmarks defined by motion cues alone can be used by bees to locate the goal (Dittmar 2011; Egelhaaf et al. 2012). Although the mechanisms by which the landmark constellation is learned and the memorized information is eventually used to locate the goal are not fully understood so far, it is clear that the optic flow information actively generated during the bees' typical learning and searching flights is essential for the acquisition of a spatial memory of the goal environment (Zeil et al. 2009; Zeil 2012). Moreover, in the vicinity of landmarks, the animals were found to adjust their flight movements depending on the specific textural landmark properties (see above). Since landmarks close to the goal are, for geometrical reasons, most suitable to pinpoint the goal location, because the retinal locations of close landmarks are displaced more than those of distant ones during a given translational movement, the relevance of environmental objects to serve as landmarks for local navigation in the vicinity of the goal is weighed without any additional computational effort, just as a consequence of the geometrical properties of the optic flow induced during intersaccadic translational movements (see above). Processing of behaviorally relevant visual information is, thus, facilitated by the characteristic active gaze strategy during free flight (Egelhaaf et al. 2012).

The neuronal mechanisms underlying visual landmark navigation are not yet known, althougha variety of models have been proposed on the basis of behavioral experiments (Vardy and Möller

Fig. 9 Honeybees measure distances in terms of optic flow generated during flight and communicate this information totheir hive mates by the waggle dance. Behavioral analysis revealed how honeybees estimate the distance travelledbetween their hive and a food source. (a) Layout for the experiments using tunnels and probabilities of waggle dance (W,orange bars) and round dance (R, blue bars). A tunnel with a length of 6 m and a width of 11 cmwas positioned either ata distance to the hive of 35 m (not drawn to scale) or at a distance of only 6 m. The walls of the tunnel were covered witha texture that contained either vertically oriented (Exp.1, Exp.2, Exp.4) or horizontally aligned stripes (Exp.3). The beeswere trained to collect sugar water from a food source (indicated by red object). When the food source was placed at theentrance of the tunnel (Exp.1), the bees performed mainly round dances after returning to their hive, signaling a shortdistance to the food source. When the food source was placed at the end of the tunnel containing vertically orientedtexture (Exp.2), the returning bees performed mainly waggle dances, signaling much larger distances to the hive,although the actual travel distance was not much larger. A food source at the same distance, however, located in a tunnelwith horizontally oriented stripes (Exp.3) again led mainly to round dances. The main difference between Exp.2 andExp.3 is that in the former much optic flow is evoked on the eyes of the honeybee while flying along the tunnel, whereasin the latter case there is only little optic flow, because the contours are oriented along the flight direction. When thetunnel covered with vertical contours and the food source close to its end is placed near to the hive (Exp.4), mainlywaggle dances are performed, which are shorter than those performed in Exp.2 (compare green bars). These experi-ments suggest that travelled distance is measured in terms of optic flow. (b) Calibration of the odometer of the bee. Meanduration of waggle dances elicited by outdoor feeders at various distances to the hive. Also shown are the mean durationsof waggle dances measured in Exp.2 and Exp.4 and their equivalent outdoor flight distances, as read from the regressionline (Adapted from Srinivasan et al. 2000)


Nevertheless, the responses of bee wide-field neurons to visual stimuli as experienced during navigation flights in the vicinity of an invisible goal were found to depend on the spatial layout of the environment. The spatial landmark constellation that guides the bees to their goal leads to a characteristic time-dependent neural response profile during the intersaccadic intervals of navigation flights (Egelhaaf et al. 2012).

Visual Motion as a Source for Information About Other Moving Objects
In special free-flight situations, retinal image motion does not originate primarily from the self-motion of the animal, but from moving objects, such as a predator, a prey, or a mate.

Escape from Impending Collisions with Moving Objects
Objects that may elicit an escape from an impending collision are not necessarily stationary in the environment. Rather, they may also move towards the animal; examples are predators, but also conspecifics in a swarm. The size of the retinal image of an object increases in a characteristic way when it directly approaches the animal. Under the simplifying assumption that the object moves on a straight collision course at a constant velocity, the temporal dynamics of its retinal image can be fully characterized by its size-to-speed ratio (see above; Fig. 5). For a given object size, a faster approach implies a faster expansion rate and a smaller size-to-speed ratio. In the locust, but also in Drosophila, escape behavior has been concluded not to occur at a fixed time before collision, but rather at a fixed delay after the stimulus reaches a threshold angular size on the retina. This may be different in other systems and behavioral contexts, where the critical variable is the time that would remain until the object collides with the animal if no evasive action were taken (Fotowat and Gabbiani 2011).
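The geometry behind the size-to-speed ratio can be written down directly: for an object of half-size l approaching at constant speed v, with collision at t = 0, the angular size on the retina is θ(t) = 2·arctan(l/(v·|t|)), so the entire expansion time course is governed by l/|v|. A minimal sketch of a fixed angular-size trigger follows; the object sizes, speeds, and threshold angle are illustrative assumptions:

```python
import numpy as np

def angular_size(t, half_size, speed):
    """Full angular extent (rad) of an object of half-size `half_size` (m)
    approaching at `speed` (m/s); collision occurs at t = 0, so t < 0 here."""
    return 2.0 * np.arctan(half_size / (speed * np.abs(t)))

t = np.linspace(-2.0, -0.01, 2000)           # seconds before collision
theta_slow = angular_size(t, half_size=0.05, speed=1.0)   # l/|v| = 50 ms
theta_fast = angular_size(t, half_size=0.10, speed=2.0)   # l/|v| = 50 ms
threshold = np.deg2rad(15.0)                 # hypothetical threshold size
print(t[np.argmax(theta_slow > threshold)],  # identical crossing times:
      t[np.argmax(theta_fast > threshold)])  # dynamics depend only on l/|v|
```

Two approaches with the same l/|v| cross any given angular threshold at the same time before collision, which is why this single ratio fully characterizes the stimulus.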

There is good evidence, mainly from detailed studies on flight behavior and visual processing in locusts and, most recently, Drosophila, that a dedicated pathway in their visual system is tuned to the characteristics of looming stimuli during an impending collision. The computations performed by the underlying sensory and motor processes have been analyzed in great detail, both experimentally and by computational modeling. An identified wide-field neuron plays a key role in the neural circuit: the characteristic time course of its response encodes an impending collision with an approaching object. This information is conveyed via further identified neurons to motor centers that eventually generate the escape behavior. The neural mechanism of the angular threshold computation relevant for collision avoidance relies on three distinct processes: (1) motion-sensitive excitation, evoked by the brightness changes that the expanding retinal image of the approaching object produces, which activates a dendritic subfield of the wide-field neuron retinotopically; (2) an inhibitory network acting on the motion-sensitive pathway presynaptically to the wide-field neuron; and (3) feedforward inhibition impinging on two additional dendritic subfields of the wide-field neuron. Eventually, the wide-field neuron's output stage performs a kind of multiplication by nonlinearly transforming the overall postsynaptic potential into spike activity. This multiplication of an input that depends on the angular size of the approaching object with another input that depends on the angular velocity of its edges has been concluded to be decisive in computing object approach (Fotowat and Gabbiani 2011).
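The multiplicative scheme just described has a compact mathematical counterpart, often written as η(t) ∝ θ'(t−δ)·exp(−α·θ(t−δ)): the product of the angular velocity of the object's edges and an exponentially decaying function of its angular size, evaluated with a fixed neural delay δ (Fotowat and Gabbiani 2011). A property worth checking numerically is that the response peak occurs a fixed delay after the stimulus crosses the angular size 2·arctan(1/α), independently of the size-to-speed ratio, which is exactly the threshold behavior described above. The sketch below uses illustrative values for α and δ, not the published fits:

```python
import numpy as np

def eta_response(t, half_size, speed, alpha=5.0, delay=0.03):
    """Sketch of an eta-type looming detector: angular edge velocity multiplied
    by an exponential function of angular size, both with a fixed neural delay.
    Biophysically, the product can be realized by summing a logarithmic size
    term and a velocity term and passing the sum through an expansive
    spike-generating nonlinearity, since exp(log a + log b) = a * b."""
    ts = t - delay                                             # delayed time
    theta = 2.0 * np.arctan(half_size / (speed * np.abs(ts)))  # angular size
    theta_dot = np.gradient(theta, t)                          # edge velocity
    return theta_dot * np.exp(-alpha * theta)

t = np.linspace(-2.0, -0.005, 4000)                            # collision at t = 0
for half_size, speed in [(0.05, 1.0), (0.05, 2.0), (0.10, 1.0)]:
    r = eta_response(t, half_size, speed)
    t_peak = t[np.argmax(r)]
    theta_then = 2.0 * np.arctan(half_size / (speed * abs(t_peak - 0.03)))
    print(f"l/|v| = {1000 * half_size / speed:3.0f} ms: peak at {t_peak:6.3f} s, "
          f"size at peak minus delay = {np.degrees(theta_then):4.1f} deg")
```

With α = 5 all three approaches peak when the delayed stimulus has reached about 22.6 degrees (= 2·arctan(1/5)), even though their absolute peak times differ widely.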

Pursuit of Moving Targets
Many insects follow moving objects, such as potential prey or mates. Dragonflies pursue other insects in highly aerobatic flight maneuvers to catch and eventually eat them. They appear to compute an interception course, which requires some form of prediction of the moving target's future locations. In their third visual neuropile, neurons that are sensitive to small targets in restricted parts of the visual field have been characterized electrophysiologically and modeled.


The outputs of such small-target-selective local neurons are pooled by a population of individually identified descending neurons, which have been concluded to represent a vector that reflects the direction of target motion with high accuracy (Olberg 2011; Nordström 2012).

In the context of mating behavior, male flies chase females in aerobatic, visually controlled flight maneuvers. The forward velocity of the chasing fly is controlled by the angular size of the target, whereas the turning velocity depends on the bearing angle under which the target is seen as well as on its angular speed. The pursuing fly generates catch-up saccades when the target changes its direction too rapidly for the pursuer to follow smoothly. Pursuit of moving objects is, so far, the only flight behavior described in detail where, at least in flies, no clear flight phases characterized by almost pure translational movements are observed. During their rapid pursuit maneuvers, male flies thus forgo the possibility to gather information about the environment beyond information about their moving target. This may be functional, given that a pursuer closely following the course of its moving target will hardly collide with stationary obstacles in its surroundings (Egelhaaf et al. 2012).
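The chasing rules described above can be summarized as two parallel proportional controllers plus a saccade trigger. The sketch below is a toy version of such a smooth-pursuit rule; the gains, the set-point angular size, and the saccade threshold are hypothetical values chosen only to illustrate the structure, not measured parameters:

```python
import math

def pursuit_command(bearing, bearing_rate, target_angular_size,
                    k_psi=4.0, k_psidot=0.5, k_size=2.0,
                    saccade_threshold=math.radians(60)):
    """One step of a toy chasing controller (hypothetical gains).
    bearing: angle of the target from the fly's long axis (rad)
    bearing_rate: its rate of change (rad/s)
    target_angular_size: retinal size of the target (rad)
    Returns (turn_rate, forward_speed, is_saccade)."""
    # Smooth pursuit: turn toward the target, driven by the error angle
    # and its rate of change, as described for chasing male flies.
    turn_rate = k_psi * bearing + k_psidot * bearing_rate
    # Forward speed shrinks as the target looms larger, so a large target
    # image (i.e., a close target) slows the pursuer down.
    forward_speed = max(0.0, k_size * (math.radians(20) - target_angular_size))
    # Catch-up saccade when the target slips too far to the side.
    is_saccade = abs(bearing) > saccade_threshold
    return turn_rate, forward_speed, is_saccade
```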

Constraints Imposed on Visual Information Processing by the Timescale of Natural Behavior
During free flight the retinal motion patterns change continually. As a consequence of the typical saccadic flight and gaze strategy of insects (see above), the optic flow dynamics during natural locomotion deviate considerably from the motion stimuli (e.g., constant-velocity motion or white-noise velocity fluctuations) that are often employed to characterize neural computations. In the context of spatial vision, the intersaccadic intervals are of particular interest because they are the most likely times during which the flying insect can gather information about the outside world. Although intersaccadic intervals account for the largest share of the entire flight time, an individual interval may be as short as 30 ms. Hence, information processing during free flight needs to take place on very short timescales. Rapid information processing is not only an issue in the context of intersaccadic spatial vision, but is often also highly relevant when the flying insect encounters moving objects and has to respond to them rapidly in an appropriate way (i.e., either escape or pursue; see above).

Why are rapid information processing during flight and the short timescales on which information about the environment needs to be computed an issue at all? One reason is that neurons are relatively unreliable computing devices. In insects the problem of reliability is particularly daunting, as there is little redundancy at the output level of the visual system that would allow pooling of information across equivalent neurons (de Ruyter van Steveninck and Bialek 1995; Warzecha and Egelhaaf 2001; Warzecha et al. 2013).

When the same stimulus is presented repeatedly to a neuron, the responses may vary considerably between trials, and neuronal activity fluctuates continually even during constant-velocity motion. On the basis of individual response traces, it is not easy to discern stimulus-driven activity changes from those that are due to sources not associated with the stimulus ("noise"). The unreliability of neural signals can be attributed in part to noise at the input of the visual system, i.e., photon noise, but arises mainly from a host of biophysical and cellular processes within the nervous system, such as the mechanisms of synaptic transmission and action potential generation. The impact of these noise sources on the precision with which motion information can be encoded is still not entirely clear. However, one aspect that is especially relevant in the context of computing spatial information during intersaccadic intervals could be resolved to some extent: given that neuronal responses are noisy, it takes time to reliably infer behaviorally relevant environmental information from neuronal activity.


Statistical analyses of noisy intersaccadic responses of individual fly wide-field neurons in the third visual neuropile, and of populations of such neurons, reveal that sufficiently reliable information about translatory self-motion and, thus, about spatial parameters of the environment can already be decoded on a timescale of little more than 5 ms, i.e., on a timescale relevant for processing behaviorally relevant information (Egelhaaf et al. 2005).
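How noise, population size, and readout time trade off against each other can be illustrated with a toy decoding exercise. All numbers below (firing rates, Poisson noise, population size) are assumptions for illustration and not the parameters of the cited analyses; the point is simply that pooling across neurons shortens the integration time needed for a given reliability:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminability(rate_a, rate_b, n_neurons, window_s, n_trials=2000):
    """d' for telling two stimulus conditions apart from the summed Poisson
    spike counts of n_neurons independent neurons in a window of window_s."""
    counts_a = rng.poisson(rate_a * window_s, (n_trials, n_neurons)).sum(axis=1)
    counts_b = rng.poisson(rate_b * window_s, (n_trials, n_neurons)).sum(axis=1)
    pooled_sd = np.sqrt(0.5 * (counts_a.var() + counts_b.var()))
    return abs(counts_a.mean() - counts_b.mean()) / pooled_sd

# Two hypothetical intersaccadic conditions (e.g., near vs. far wall),
# with assumed rates of 100 vs. 150 spikes/s per neuron:
for window_ms in (2, 5, 10, 20):
    d1 = discriminability(100, 150, n_neurons=1, window_s=window_ms / 1000)
    d10 = discriminability(100, 150, n_neurons=10, window_s=window_ms / 1000)
    print(f"{window_ms:2d} ms window: d' = {d1:.2f} (1 neuron), {d10:.2f} (10 neurons)")
```

Under these assumptions a single neuron needs tens of milliseconds to reach d' near 1, whereas a pool of ten reaches it within about 5 ms, the order of magnitude reported for intersaccadic decoding.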

Conclusions
Vision is only one source of information for behavioral control in free flight. Other sensory modalities, especially a wide variety of mechanosensors, are also important, for instance, for stabilizing the attitude of the animal. Moreover, odor has been shown to be a relevant external cue in several behavioral contexts (Hengstenberg 1993; Frye 2010). However, behavior is a phenomenon that takes place in space and is intricately entangled with it; its control therefore requires information about the environment that, in many situations, only the visual system can provide. It is safe to conclude that flying insects, such as flies, bees, locusts, and dragonflies, rely to a large extent on their visual system to perform their highly aerobatic flight maneuvers and to solve complex spatial tasks, such as avoiding collisions with obstacles, landing on objects, or even finding hardly visible goals on the basis of spatial landmark information, but also to detect and pursue other moving insects that may serve, for instance, as prey or a mate. Insects accomplish all this with tiny brains of less than a million neurons and with eyes whose spatial resolution is far below that of technical camera systems. Hence, if resource efficiency with respect to computational effort and energy consumption is taken as a benchmark, insects outperform man-made autonomous flying systems in the abovementioned tasks. Moreover, insects solve these tasks at flight velocities that entail rapidly time-varying image flow on their eyes. Processing this rapid retinal image flow poses great challenges for the neuronal machinery, given the limited reliability of neurons as computing devices. Evidently, over millions of years of evolution, insect nervous systems have become well adapted to cope with these challenges and to solve the computational tasks relevant for the success of the species efficiently and parsimoniously.

One means to accomplish such extraordinary performance is that insects actively shape the image flow on their eyes by their characteristic flight behavior, for instance, by strictly segregating, through a saccadic flight and gaze strategy, their translational and rotational movements and, thus, the corresponding optic flow components on their eyes during cruising flight. In this way the neural processing of spatial information about surfaces and objects in the environment is greatly facilitated. More generally, and also taking into account other behavioral contexts, such as the encounter with other moving animals, it is suggested that tuning the neural networks for visual information processing to the characteristic spatiotemporal properties of the retinal motion patterns generated during free flight allows the nervous system to solve apparently complex vision tasks efficiently and parsimoniously.

Most conclusions on how visual information is processed in free flight are based on electrophysiological experiments performed on completely restrained animals, because recording from most of the analyzed cells has not been possible during flight. In recent analyses, conventional experimenter-defined visual stimulation was abandoned. Instead, analyses at the neuronal level were carried out by simulating the complex spatiotemporal visual stimulus conditions of free flight, i.e., by presenting to the tethered animal the motion sequences that free-flying animals had previously experienced in various behavioral contexts. Only from this type of experiment has it been possible to draw conclusions about the significance of intersaccadic neuronal responses for providing the animal with spatial information (Egelhaaf and Kern 2002; Egelhaaf et al. 2012).


Although recent studies suggest that the behavioral state of the animal influences motion-sensitive wide-field neurons at the output level of the visual system, by increasing their response level and slightly shifting their velocity tuning (Maimon 2011), the overall specificity of these neurons for different optic flow patterns, and thus the major conclusions on visual processing in free flight, appears not to be affected (Egelhaaf et al. 2012). Recent successful attempts to monitor neural activity in large free-flying insects (i.e., locusts and dragonflies) show great promise for qualifying these conclusions (Harrison et al. 2011).

Nevertheless, it is reasonable to conclude from the experimental work on visually guided free-flight behavior of insects and the underlying neural mechanisms, in combination with model simulations, that insects with their tiny brains derive part of their power as autonomous systems from dynamic animal–environment interactions that lead to adaptive behavior in environments of a wide range of complexity. Model simulations and robotic implementations reveal that the smart biological mechanisms of motion computation and flight control might be helpful when designing micro air vehicles that can carry only an onboard processor of relatively small size and weight (Floreano et al. 2009).

References

Borst A (2009) Drosophila's view on insect vision. Curr Biol 19(1):R36
Borst A (2012) Fly motion vision: from optic flow to visual course control. e-Neuroforum 3(3):59–66
Borst A, Egelhaaf M (1989) Principles of visual motion detection. Trends Neurosci 12:297–306
Borst A, Egelhaaf M (1993) Detecting visual motion: theory and models. In: Miles FA, Wallman J (eds) Visual motion and its role in the stabilization of gaze. Elsevier, Amsterdam, pp 3–27
Borst A, Haag J (2002) Neural networks in the cockpit of the fly. J Comp Physiol A 188:419–437
Borst A, Haag J, Reiff DF (2010) Fly motion vision. Annu Rev Neurosci 33:49–70
Collett TS, Collett M (2002) Memory use in insect visual navigation. Nat Rev Neurosci 3:542–552
de Ruyter van Steveninck R, Bialek W (1995) Reliability and statistical efficiency of a blowfly movement-sensitive neuron. Phil Trans R Soc B Biol Sci 348:321–340
Dewell RB, Gabbiani F (2012) Escape behavior: linking neural computation to action. Curr Biol 22(5):R152
Dittmar L (2011) Static and dynamic snapshots for goal localization in insects? Commun Integr Biol 4:17–20
Eckert H (1980) Functional properties of the H1-neurone in the third optic ganglion of the blowfly, Phaenicia. J Comp Physiol 135:29–39
Egelhaaf M (2006) The neural computation of visual motion. In: Warrant E, Nilsson DE (eds) Invertebrate vision. Cambridge University Press, Cambridge, UK, pp 399–461
Egelhaaf M, Borst A (1993) Movement detection in arthropods. In: Miles FA, Wallman J (eds) Visual motion and its role in the stabilization of gaze. Elsevier, Amsterdam, pp 53–77
Egelhaaf M, Kern R (2002) Vision in flying insects. Curr Opin Neurobiol 12:699–706
Egelhaaf M, Kern R, Kurtz R, Krapp HG, Kretzberg J, Warzecha AK (2002) Neural encoding of behaviourally relevant motion information in the fly. Trends Neurosci 25:96–102
Egelhaaf M, Grewe J, Karmeier K, Kern R, Kurtz R, Warzecha AK (2005) Novel approaches to visual information processing in insects: case studies on neuronal computations in the blowfly. In: Christensen TA (ed) Methods in insect sensory neuroscience. CRC Press (Frontiers in Neuroscience), Boca Raton, pp 185–212
Egelhaaf M, Boeddeker N, Kern R, Lindemann JP (2012) Spatial vision in insects is facilitated by shaping the dynamics of visual input through behavioral action. Front Neural Circ 6:108
Floreano D, Zufferey JC, Srinivasan MV, Ellington C (2009) Flying insects and robots. Springer, Heidelberg
Fotowat H, Gabbiani F (2011) Collision detection as a model for sensory-motor integration. Annu Rev Neurosci 34(1):1–19
Frye MA (2010) Multisensory systems integration for high-performance motor control in flies. Curr Opin Neurobiol 20(3):347–352
Harrison RR, Fotowat H, Chan R, Kier RJ, Olberg R, Leonardo A, Gabbiani F (2011) Wireless neural/EMG telemetry systems for small freely moving animals. IEEE Trans Biomed Circ Syst 5(2):103–111
Hausen K (1984) The lobula-complex of the fly: structure, function and significance in visual behaviour. In: Ali MA (ed) Photoreception and vision in invertebrates. Plenum Press, New York, pp 523–559
Heeger DJ, Jepson AD (1992) Subspace methods for recovering rigid motion I: algorithm and implementation. Int J Comput Vis 7(2):95–117
Hengstenberg R (1993) Multisensory control in insect oculomotor systems. In: Miles FA, Wallman J (eds) Visual motion and its role in the stabilization of gaze, vol 1. Elsevier, Amsterdam, pp 285–298
Kern R, Boeddeker N, Dittmar L, Egelhaaf M (2012) Blowfly flight characteristics are shaped by environmental features and controlled by optic flow information. J Exp Biol 215:2501–2514
Krapp HG (2000) Neuronal matched filters for optic flow processing in flying insects. In: Lappe M (ed) Neuronal processing of optic flow. International review of neurobiology, vol 44. Academic Press, San Diego, pp 93–120
Kurtz R (2012) Adaptive encoding of motion information in the fly visual system. In: Barth F, Humphrey J, Srinivasan MV (eds) Frontiers in sensing. Springer, Wien, pp 115–128
Land MF (1997) Visual acuity in insects. Annu Rev Entomol 42:147–177
Land MF (1999) Motion and vision: why animals move their eyes. J Comp Physiol A 185:341–352
Longuet-Higgins HC, Prazdny K (1980) The interpretation of a moving retinal image. Proc R Soc Lond B Biol Sci 208:385–397
Maimon G (2011) Modulation of visual physiology by behavioral state in monkeys, mice, and flies. Curr Opin Neurobiol 21(4):559–564
Nordström K (2012) Neural specializations for small target detection in insects. Curr Opin Neurobiol 22:272–278
Olberg RM (2011) Visual control of prey-capture flight in dragonflies. Curr Opin Neurobiol 22(2):267–271
Reichardt W (1961) Autocorrelation, a principle for the evaluation of sensory information by the central nervous system. In: Rosenblith WA (ed) Sensory communication. MIT Press/Wiley, New York, pp 303–317
Srinivasan MV (2011) Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics. Physiol Rev 91(2):413–460
Srinivasan MV, Zhang S (2004) Visual motor computations in insects. Annu Rev Neurosci 27:679–696
Srinivasan MV, Zhang S, Altwein M, Tautz J (2000) Honeybee navigation: nature and calibration of the "odometer". Science 287:851–853
Taylor GK, Krapp HG (2008) Sensory systems and flight stability: what do insects measure and why? Adv Insect Physiol 34:231–316
van Hateren JH, Kern R, Schwerdtfeger G, Egelhaaf M (2005) Function and coding in the blowfly H1 neuron during naturalistic optic flow. J Neurosci 25:4343–4352
Vardy A, Möller R (2005) Biologically plausible visual homing methods based on optical flow techniques. Connect Sci 17(1–2):47–89
Warzecha AK, Egelhaaf M (2001) Neuronal encoding of visual motion in real-time. In: Zanker JM, Zeil J (eds) Processing visual motion in the real world: a survey of computational, neural, and ecological constraints. Springer, Berlin, pp 239–277
Warzecha AK, Rosner R, Grewe J (2013) Impact and sources of neuronal variability in the fly's motion vision pathway. J Physiol Paris 107(1–2):26–40
Wolf H (2011) Odometry and insect navigation. J Exp Biol 214(10):1629–1641
Zeil J (2012) Visual homing: an insect perspective. Curr Opin Neurobiol 22:285–293
Zeil J, Boeddeker N, Hemmi JM (2008) Vision and the organization of behaviour. Curr Biol 18(8):R320
Zeil J, Boeddeker N, Stürzl W (2009) Visual homing in insects and robots. In: Floreano D, Zufferey JC, Srinivasan MV, Ellington CP (eds) Flying insects and robots. Springer, Heidelberg, pp 87–99
