
Fast Illumination-invariant Background Subtraction using Two Views: Error Analysis, Sensor Placement and Applications

Ser-Nam Lim, Anurag Mittal, Larry S. Davis and Nikos Paragios

Problem Description

Single-camera background subtraction:

• Shadows.
• Illumination changes.
• Specularities.

Stereo-based background subtraction:

• Can overcome many of these problems, but
• Slow, and
• Inaccurate online matches.

Project Goals

1. Develop a fast two-camera background subtraction algorithm that doesn’t require solving the correspondence problem online.
2. Analyze the advantages of various camera configurations with respect to the robustness of background subtraction:

– We assume the objects to be detected move on a known ground plane.

Fast Illumination-Invariant Multi-Camera Approach

A clever idea:

• Yuri A. Ivanov, Aaron F. Bobick and John Liu, “Fast Lighting Independent Background Subtraction”, IEEE Workshop on Visual Surveillance, ICCV'98, Bombay, India, January 1998.

Background model:
• Conjugate pixels established offline.
• Color dissimilarity measure between conjugate pixels.
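As a rough illustration of this kind of test (not the exact measure of Ivanov et al.), the sketch below flags a pixel as foreground when its color disagrees with that of its offline-established conjugate pixel; the chromaticity-based dissimilarity and the threshold are illustrative choices.

```python
import numpy as np

def detect_foreground(ref_img, sec_img, conj_map, thresh=0.1):
    """Flag pixels whose color disagrees with their conjugate pixel in the second view.

    ref_img, sec_img : (H, W, 3) float images in [0, 1]
    conj_map         : (H, W, 2) integer array; conj_map[y, x] = (y', x'), the
                       conjugate pixel in the second view, established offline
    """
    ys, xs = conj_map[..., 0], conj_map[..., 1]
    conj_colors = sec_img[ys, xs]                      # colors at the conjugate pixels
    eps = 1e-6
    # Chromaticity (normalized color) is insensitive to pure intensity changes,
    # which is what makes the two-view test illumination invariant.
    ref_chroma = ref_img / (ref_img.sum(axis=-1, keepdims=True) + eps)
    conj_chroma = conj_colors / (conj_colors.sum(axis=-1, keepdims=True) + eps)
    dissim = np.linalg.norm(ref_chroma - conj_chroma, axis=-1)
    return dissim > thresh                             # True = foreground candidate
```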

What are the problems?
• False and missed detections, caused by homogeneous objects.

Detection Errors

Given a conjugate pair (p, p’):

False detections:
• p’ is occluded by a foreground object, and
• p is visible in the reference view.

Missed detections:
• p and p’ are occluded by a foreground object.

Eliminating False Detections

Consider a two-camera placement:

• Baseline orthogonal to the ground plane.
• Lower camera used as reference.

With this vertical baseline, the upper camera’s line of sight to any background point lies above the lower camera’s line of sight, so whenever p’ is occluded, p is occluded as well; the false-detection configuration cannot occur.

Reducing Missed Detections

The initial detection is free of false detections:

• The missed detections form a connected component adjacent to the ground plane.

For a detected pixel I_t along each epipolar line in an initial foreground blob:

1. Compute the conjugate pixel I'_t (constrained stereo).
2. Determine the base point I_b.
3. If |I_t – I_b| > thres, increment I_t and repeat from step 1.
4. Otherwise, mark I_t as the lowermost pixel.
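A minimal sketch of this loop, assuming the epipolar lines run vertically in the reference view; `conjugate_pixel` and `base_point` are placeholder names standing in for the constrained-stereo match and the base-point computation described on the following slides, not the paper’s exact routines.

```python
def find_lowermost(y_top, x, conjugate_pixel, base_point, thres=2, y_max=None):
    """Walk along an epipolar line until the current pixel agrees with its
    estimated base point, and return that pixel as the object's lowermost point."""
    y_t = y_top
    while True:
        yp, xp = conjugate_pixel(y_t, x)   # step 1: conjugate pixel I'_t (constrained stereo)
        y_b = base_point(yp, xp)           # step 2: base point I_b
        if abs(y_t - y_b) <= thres:        # step 4: agreement -> lowermost pixel
            return y_t
        y_t += 1                           # step 3: increment I_t along the epipolar line
        if y_max is not None and y_t > y_max:
            return y_b                     # safety stop: fall back to the last base estimate
```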

Base Point

Proposition 1:

In 3D space, the missed proportion of a homogeneous object with negligible front-to-back depth is independent of object position. Equivalently, the proportion that is correctly detected remains constant.

Proof:

The extent of the missed detection is determined by the object height, the height of the reference camera and the length of the baseline, not by the object’s position; hence the proportion of missed detections is constant (a derivation sketch is given below). ∎
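A derivation sketch under the stated placement, with illustrative symbols: the reference (lower) camera at height H_1, baseline length B, and a thin homogeneous object of height h at horizontal distance d; a ground point at distance d_G behind the object is occluded in a view exactly when that view’s ray to it passes below the object top.

```latex
% Occlusion conditions for a ground point at distance d_G (object at distance d, height h):
\begin{align*}
  \text{lower (reference) ray occluded:}\quad
    & H_1\Bigl(1-\tfrac{d}{d_G}\Bigr) \le h
      \;\Longleftrightarrow\; d_G \le \tfrac{H_1\,d}{H_1-h},\\
  \text{upper ray occluded:}\quad
    & (H_1+B)\Bigl(1-\tfrac{d}{d_G}\Bigr) \le h
      \;\Longleftrightarrow\; d_G \le \tfrac{(H_1+B)\,d}{H_1+B-h}.
\end{align*}
% A missed detection needs both rays occluded, i.e. d < d_G <= (H_1+B)d/(H_1+B-h).
% These doubly-occluded ground points project onto the object up to height
\[
  \text{missed extent} \;=\; H_1\Bigl(1-\tfrac{H_1+B-h}{H_1+B}\Bigr) \;=\; \frac{H_1\,h}{H_1+B},
  \qquad
  \text{missed proportion} \;=\; \frac{H_1}{H_1+B},
\]
```

which is independent of the object’s position d (and even of its height h); the correctly detected proportion is B/(H_1+B), i.e., it grows with the baseline.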

Under weak perspective:
• It can be shown that the proportion of correct detection takes a simple closed form, where I_m = H⁻¹ · I'_t and H is the ground-plane homography from the reference to the second view.
• The homogeneity assumption and the assumption that the background pixel lies on the ground plane are not necessary, since I_m can be determined independently from H and I'_t.
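For reference, transferring a pixel through a planar homography is a single homogeneous-coordinate operation; the sketch below assumes a precomputed 3×3 ground-plane homography (the names `H_ground` and `I_prime_t` are illustrative).

```python
import numpy as np

def transfer_point(H, pt):
    """Map an image point (x, y) through a 3x3 planar homography."""
    x, y = pt
    q = H @ np.array([x, y, 1.0])
    return q[0] / q[2], q[1] / q[2]

# Illustrative use: I_m is obtained by transferring the conjugate pixel I'_t back
# into the reference view through the (inverse) ground-plane homography.
# I_m = transfer_point(np.linalg.inv(H_ground), I_prime_t)
```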

Under perspective:
• A. Criminisi, I. Reid, A. Zisserman, “Single View Metrology”, 7th IEEE International Conference on Computer Vision (ICCV), Kerkyra, Greece, September 1999.

• Based on Criminisi et al., we can show that, in the reference view,

• α_ref is an unknown scale factor, h is the height of I_t, l̄ is the normalized vanishing line of the ground plane, and v_ref is the vertical vanishing point.
• The same equation also applies to the second camera; equating the two expressions can be used to determine I_b.
• The base point in the second camera is then just H · I_b.
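The relation referred to here is presumably the standard single-view-metrology height formula of Criminisi et al.; written with the symbols above (I_b and I_t being the images of the base and top points in the reference view), it has the form:

```latex
% Single-view-metrology height relation (Criminisi et al., ICCV 1999), in the reference view:
\[
  \alpha_{\mathrm{ref}}\, h \;=\;
    -\,\frac{\lVert I_b \times I_t \rVert}
            {(\bar{l} \cdot I_b)\,\lVert v_{\mathrm{ref}} \times I_t \rVert}
\]
```

Writing the analogous relation in the second view and equating the two height estimates yields the constraint from which I_b is determined.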

Robustness to Specularities

After morphological operations, there are two possibilities:

1. Specularities in a single blob, or
2. Specularities in a different blob.

Case 1 – Specularities in the same blob:
• Virtual image lies below the ground plane.
• Eliminated by base-finding operations.

Case 2 – Specularities in a different blob:
• Hard to find a good stereo match (the reflection mixes Lambertian and specular components at the point of reflection).
• Even if matched, the match typically places I_m above I_t.

Robustness to Near-Background Objects

Typical disparity-based background subtraction faces problems with near-background objects:

1. Our algorithm only needs to detect the top portion, followed by
2. Base-finding operations.

Experiments

1. Dealing with illumination changes using our sensor placement.
2. Dealing with specularities (rainy daytime scene).
3. Dealing with specularities (night scene).
4. Near-background object detection.
5. Indoor scene (requiring the perspective model).

Comparisons:

• The weak perspective model is much simpler and easier to implement.
• When objects are close to the camera, the weak perspective model can be violated (e.g., in indoor scenes).
• The perspective model is much less stable and is sensitive to calibration errors.

Robustness to Illumination Changes

Geometrically, the algorithm is unaffected by:

• Lighting changes.
• Shadows.

An extension to objects not moving on the ground plane is possible.

Additional Advantages

Very fast; the stereo matches of the background model are established offline, which makes them much more accurate.