“Hybrid Cone-Cylinder” Codebook Model for Foreground Detection with Shadow and Highlight Suppression

Anup Doshi and Mohan Trivedi
University of California, San Diego - CVRR Laboratory

9500 Gilman Drive, San Diego, CA 92093
{andoshi, mtrivedi}@ucsd.edu

Abstract

In the interest of 24-7 long-term surveillance, a truly robust, adaptive, and fast background-foreground segmentation technique is required. This paper deals with the especially difficult but extremely common problems of moving backgrounds, shadows, highlights, and illumination changes. To produce reliable foreground extraction in the face of these problems, the best practical aspects of two algorithms, Codebook Segmentation [6] and HSV Shadow Suppression [2], are combined. The main contribution of this paper is the introduction of the “Hybrid Cone-Cylinder” Codebook (HC3) model. Results show superior speed and quantitatively better performance in many different conditions and environments. Applications include people-tracking with omnidirectional cameras and vehicle-counting with rectilinear cameras.

1. Introduction and Motivation

Outdoor surveillance and security settings are likely one of the most common uses of video cameras. Yet there are still many problems to be solved to make a stable and reliable computer vision system useful for those applications. The primary problem for any system is segmenting foreground objects from the background, in the face of various environmental challenges.

These challenges include global illumination changes, as can be expected over the course of 24 hours or changing weather; moving cast shadows during the daytime; specularities, highlights, and shadows due to artificial light sources at night; and moving backgrounds such as flags or trees. Global illumination changes can occur quickly, as in a cloud suddenly blocking out the sunlight, or relatively slowly, as the sun takes a while to set. Either way, the perceived background will change, so a background model should adaptively update to follow it. Moving cast shadows are those shadows which move along with a foreground object but are not part of the object itself [2]. These shadows can cause objects to merge, distort their shape, or occlude other objects, so they can be very problematic.

At nighttime, several more interesting problems occur. Due to the lack of ambient light, every small light source within range of a scene can cause objects to light up. As those light sources vary, the objects may get brighter or darker; these illumination fluctuations are referred to as highlights and shadows. It would be useful to discard the effects of these highlights and shadows while still detecting foreground objects as they move through the image. As an example, consider a car moving through a parking lot in the evening. When the car turns, various parts of the image will get brighter; others will get darker. These fluctuations do not represent actual changes in the background, and thus those points should remain classified as background.

Finally, an environmental challenge quite common to many outdoor scenes is that of moving backgrounds. A common example is trees or grass swaying in the wind. These movements may occur frequently and with an unknown periodicity (depending on the wind), so an algorithm that adaptively learns such motions would be useful.

The ability to perform object identification and recognition depends on a reliable and accurate object detection scheme. A search for interest points or template matches can easily be narrowed down to very few locations once the foreground has been extracted. Additionally, for surveillance purposes, reliable detection of objects in the presence of such environmental noise would help in tracking and subsequent analysis of tracks.

Motivated by the desire for continuous day-and-night operation, an algorithm that works well in many different illumination conditions is essential. The applications of such an algorithm could range from people-counting to abnormal vehicle track identification, as well as countless other surveillance-related tasks. With such a system online all day, every day, powerful and useful data can be automatically collected.

This paper introduces the Hybrid Cone-Cylinder Codebook (HC3) model as a solution to the preceding problems. Section 2 presents an overview of related work, as well as the original Codebook model [6] and HSV shadow suppression algorithm [2]. In Section 3 the new unified HC3 system is proposed, with several key modifications and updates. Section 4 discusses experimental validation, and Section 5 concludes the discussion.

2. Related Studies in Foreground and Shadow Segmentation

Foreground-background segmentation is a relatively extensive field. In search of algorithms that would perform well in light of all the problems posed above, several main contenders appeared. Commonly considered background models are based on single Gaussians [5, 8, 4] or a Mixture of Gaussians (MOG) [12, 7, 1]. Where single Gaussians fail to model complex backgrounds, MOGs have trouble producing sensitive detections and adapting to fast variations [6, 8]. Also, while MOGs may be able to converge to a complex background given enough components, the computation required may preclude real-time operation [10].

Another method for modeling backgrounds involves non-parametric kernel density estimation [3]. This method can in many cases be prohibitively memory-intensive [6, 10]. The codebook method [6], described in detail below, uses a non-statistical clustering technique to achieve a fast and efficient model of the background while allowing for moving backgrounds. There are many other techniques (Wallflower, Pfinder, W4, etc. [9]) based on texture, color spaces, and pixel-, region-, or frame-based approaches.

Shadow suppression has also been studied quite extensively. A common approach assumes that shadows decrease the luminance of an image while the chrominance stays relatively unchanged [2, 10, 5, 4]. There are many more methods for removing shadows, many of which are reviewed in [11]. Notably, the algorithm based on the HSV space [2] was cited in [11] as the most generally capable and robust shadow detection and suppression algorithm. This algorithm is also described in further detail below.

2.1. Codebook Model for Background-Foreground Segmentation [6]

The codebook model for background segmentation is, quite simply, a non-statistical clustering scheme with several important additional elements that make it robust against moving backgrounds. The motivation for such a model is that it is fast, because it is deterministic; efficient, requiring little memory; adaptive; and able to handle complex backgrounds with sensitivity. A detailed description of the model can be found in [6].

In the codebook model, each pixel can have a variable number of codewords representing the background. Each codeword comprises an RGB vector and several auxiliary components which are used in the test data comparison. A test pixel is classified as a member of a codeword's set if it satisfies two conditions: 1) Brightness constraint: the intensity (the norm of the RGB vector) should be within some range of the lowest and highest intensities previously assigned to that codeword. 2) Color distance: the chromaticity deviation (a function of the angle between the test vector and the codeword's RGB vector) should be within some constant. If both conditions are satisfied, the test pixel is added to the codeword's set.
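For concreteness, the two membership tests can be sketched as follows. This is a minimal Python rendering; the brightness band follows the form used in [6], while the threshold values alpha, beta, and eps are illustrative rather than taken from the paper.

```python
import numpy as np

def matches_codeword(pixel_rgb, cw_rgb, cw_i_min, cw_i_max,
                     alpha=0.6, beta=1.2, eps=10.0):
    """Test whether an RGB pixel falls in a codeword's cluster.
    cw_i_min, cw_i_max: lowest/highest intensities seen for the codeword.
    alpha, beta, eps: illustrative thresholds (assumed values)."""
    intensity = np.linalg.norm(pixel_rgb)

    # 1) Brightness constraint: intensity must lie in a band derived
    #    from the lowest/highest intensities recorded for the codeword.
    i_low = alpha * cw_i_max
    i_high = min(beta * cw_i_max, cw_i_min / alpha)
    if not (i_low <= intensity <= i_high):
        return False

    # 2) Color distance: perpendicular distance of the test vector from
    #    the line through the codeword's RGB vector (equivalently, a
    #    function of the angle between the two vectors).
    dot = float(np.dot(pixel_rgb, cw_rgb))
    denom = float(np.dot(cw_rgb, cw_rgb))
    proj_sq = dot * dot / denom if denom > 0 else 0.0
    color_dist = np.sqrt(max(intensity ** 2 - proj_sq, 0.0))
    return color_dist <= eps
```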

Several other auxiliary components are kept as part of the codeword and assist in training and pruning the codebook. Namely, these values are the times of first and last access of that codeword, along with the frequency of accesses and the maximum length of time between consecutive accesses (the maximum negative run-length, or MNRL). These values are used to determine whether a codeword should remain in the background or not. Generally, a moving background such as a tree will have a smaller MNRL, since the two components of the background (tree and non-tree) each get accessed relatively frequently.
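One possible codeword record with these auxiliary fields is sketched below; the half-training-period filter follows the MNRL criterion of [6], while the field names themselves are our own.

```python
from dataclasses import dataclass

@dataclass
class Codeword:
    rgb: tuple     # representative color vector
    i_min: float   # lowest intensity assigned to this codeword
    i_max: float   # highest intensity assigned to this codeword
    freq: int      # number of accesses
    first: int     # frame index of first access
    last: int      # frame index of most recent access
    mnrl: int      # maximum negative run-length: longest gap (in
                   # frames) between consecutive accesses

def temporal_filter(codebook, n_training_frames):
    """Keep only codewords that recurred at least every half training
    period, so periodically revisited states (tree / non-tree) survive
    while transient foreground codewords are pruned."""
    return [cw for cw in codebook if cw.mnrl <= n_training_frames // 2]
```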

Adaptive updating occurs whenever a test pixel falls into the cluster of a codeword. Additionally, when a pixel is classified as foreground, it is added as a codeword to a cache. In turn, if a pixel is repeatedly accessed from the cache, it moves into a layer of the background. Together, these two mechanisms provide a basis for robustness under global illumination changes; additional measures described in [6] can be taken to gain more adaptability.
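The layered update could be organized along the following lines. This is a hypothetical sketch reusing the Codeword record above; the promotion threshold is assumed, not specified in [6] or this paper.

```python
def update_layers(matched_codeword, cache_count, promote_after=50):
    """One step of the layered update for a single pixel.
    matched_codeword: background codeword the pixel fell into, or None.
    cache_count: accesses so far of this pixel's cache codeword."""
    if matched_codeword is not None:
        matched_codeword.freq += 1          # adaptive background update
        return "background", cache_count
    cache_count += 1                        # foreground pixels feed the cache
    if cache_count >= promote_after:        # repeated cache hits promote the
        return "background-layer", 0        # codeword to a background layer
    return "foreground", cache_count
```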

2.2. Shadow Detection in HSV Space [2]

The algorithm presented in [2] is a deterministic, non-model-based solution for finding moving cast shadows. It is based on the simple idea that shadows change the brightness of the background but do not really affect the color values. For this reason, the HSV space is chosen to distinguish luminance (V) from chrominance (H and S). Using this logic, a shadow classifier for a given pixel boils down to the following expression:

$$
SP_k(x, y) =
\begin{cases}
1 & \text{if } \alpha \le \dfrac{I^V_k(x, y)}{B^V_k(x, y)} \le \beta
    \;\wedge\; \left(I^S_k(x, y) - B^S_k(x, y)\right) \le \tau_S
    \;\wedge\; \left|I^H_k(x, y) - B^H_k(x, y)\right| \le \tau_H \\
0 & \text{otherwise}
\end{cases}
\tag{1}
$$

where $I_k$ and $B_k$ are the input and background images, respectively, and $\alpha$, $\beta$, $\tau_S$, and $\tau_H$ are all parameters to be chosen.
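A vectorized NumPy rendering of Eq. (1) might look as follows, assuming HSV channels scaled to [0, 1]; the default parameter values are placeholders, since good choices are scene-dependent (see [2]).

```python
import numpy as np

def shadow_mask(I_hsv, B_hsv, alpha=0.4, beta=0.9, tau_s=0.1, tau_h=0.1):
    """Per-pixel shadow test of Eq. (1) on (H, W, 3) float arrays
    holding input and background images in HSV order (H, S, V)."""
    ratio = I_hsv[..., 2] / np.maximum(B_hsv[..., 2], 1e-6)
    lum = (alpha <= ratio) & (ratio <= beta)              # darker, but boundedly so
    sat = (I_hsv[..., 1] - B_hsv[..., 1]) <= tau_s        # saturation barely changes
    # Note: hue is circular; a wrap-around difference would be more precise.
    hue = np.abs(I_hsv[..., 0] - B_hsv[..., 0]) <= tau_h  # hue barely changes
    return lum & sat & hue
```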

Parameter selection is one critical issue here and will be discussed further in Section 4.2; [2] does, however, provide an analysis of the parameter choices and their implications. It is also fairly obvious that this algorithm extends to highlight detection. A highlight is simply an increased rather than a decreased luminance, so the expression above can be appropriately modified to find highlights as well.
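For instance, the highlight test could mirror Eq. (1) with the luminance ratio bounded above 1 instead of below it; the bounds used here are assumptions.

```python
import numpy as np

def highlight_mask(I_hsv, B_hsv, lo=1.1, hi=2.5, tau_s=0.1, tau_h=0.1):
    """Highlight analogue of the shadow test: identical chrominance
    checks, but the pixel must now be brighter than the background."""
    ratio = I_hsv[..., 2] / np.maximum(B_hsv[..., 2], 1e-6)
    lum = (lo <= ratio) & (ratio <= hi)                   # brighter, boundedly so
    sat = np.abs(I_hsv[..., 1] - B_hsv[..., 1]) <= tau_s
    hue = np.abs(I_hsv[..., 0] - B_hsv[..., 0]) <= tau_h
    return lum & sat & hue
```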

3. Towards Generalization of the Codebook Model

The two algorithms above were chosen, for their respective properties, to implement a robust foreground detection algorithm. However, two pressing issues arose in the combination stage which suggested a modified approach, presented below. First is a discussion of the choice to use the HSV color space for the entire algorithm, which greatly reduces the number of parameters and shortens the calculation time. Then the codeword cluster volumes and the shadow and highlight volumes are considered and re-engineered to provide a more cohesive framework.

3.1. Color Space Modification

The original codebook model [6] uses the following method, as mentioned above, to distinguish intensity and chrominance. The intensity value is simply the L2-norm of the RGB components, and the chrominance deviation is measured as a function of the angle between the input and reference vectors in RGB space. These calculations are only one way of estimating intensity, since the perceived brightness of each of the primary colors is actually quite different: green (0,255,0) appears much brighter than blue (0,0,255).

In HSV space, which is used in [2], intensity can be approximated by the V (value) component, while the chrominance encompasses the hue and saturation components. It is assumed that this V is roughly equivalent to the total power of the spectrum; hence it is a good candidate for intensity. Upon closer inspection, though, it is apparent that the V component is a function only of the largest of the RGB components, and so it does not encode the entire spectrum.
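A two-line check makes the point concrete:

```python
import numpy as np

rgb = np.array([0.5, 0.5, 0.0])   # a mid-intensity yellow
print(np.linalg.norm(rgb))        # L2-norm intensity: ~0.707
print(rgb.max())                  # HSV V = max(R, G, B) = 0.5; energy in
                                  # channels below the maximum is ignored
```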

However, the main advantage of the HSV space is that the H and S components, which represent chromaticity, are independent of the intensity approximation V. This implies that the chrominance calculations can be done separately from the luminance calculations. Also, according to [3], the HSV space more closely models human vision, with V roughly corresponding to how humans perceive luminance. This implies that it is a more representative approximation of the true illumination intensity than the L2-norm. Finally, according to [2], RGB space has been demonstrated to be less accurate than HSV space for detecting shadows.

Therefore, to combine the codebook model and shadow suppression techniques, and in the interest of simplicity, both methods are converted into HSV space. This representation eases calculations in both cases and allows for a smaller parameter set as well. The transformation into HSV space and the corresponding distance calculations prove effective and extremely quick.
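The paper does not spell out its exact chromaticity metric; one reasonable choice, assumed here, treats saturation as a radius and hue as an angle in the H-S plane.

```python
import numpy as np

def hs_distance(h1, s1, h2, s2):
    """Euclidean distance between two chromaticities, interpreting
    (S, H) as polar coordinates: radius S, angle 2*pi*H with H in [0, 1]."""
    a1, a2 = 2 * np.pi * h1, 2 * np.pi * h2
    return np.hypot(s1 * np.cos(a1) - s2 * np.cos(a2),
                    s1 * np.sin(a1) - s2 * np.sin(a2))
```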

3.2. Introducing the Hybrid Cone-Cylinder Volume

The main modification to the models involves the shape of the test volume around a background pixel, whose space defines the “cluster” associated with that pixel. In other words, the test pixels which fall inside this volume are associated with the corresponding background pixel and labeled as background.

The advantages of having such a fixed volume are discussed in [6], but in essence they boil down to simplicity and effectiveness. Developing a more dynamic model would involve laborious statistical calculations and potentially yield little, if any, improvement.

However, the choice of volume in the original codebook model [6] does not account for certain cases. As shown in Figure 1(a), a cylinder represents the cluster space around the label. Unfortunately, at lower intensities more chromaticity values have the chance of lying within the background pixel's cylinder. This is evident when examining a background pixel of very small intensity: almost every low-intensity test pixel will be assigned to its cylinder. An unintended consequence is that as two similarly grouped pixels increase in intensity, they have less chance of being in the same cluster, even though their respective chrominance remains the same.

A cone is a more effective and intuitive choice of volume: it corrects these problems and covers the color space more precisely. Therefore the codebook model here is modified to use cones instead of cylinders. Tests are under way to verify the hypothesized advantages of cones over cylinders in the codebook model; empirically, however, the cone works extremely well.
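The geometric difference reduces to how the chromaticity tolerance scales with intensity; in the sketch below, eps and theta are illustrative parameters.

```python
def in_cylinder(chroma_dist, eps=0.1):
    """Cylinder (original codebook [6]): fixed tolerance at every
    intensity, so dark codewords admit almost any dark test pixel."""
    return chroma_dist <= eps

def in_cone(chroma_dist, intensity, theta=0.1):
    """Cone: tolerance grows linearly with intensity, keeping the
    admitted angular spread of chromaticities constant at all
    brightness levels."""
    return chroma_dist <= theta * intensity
```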

The cone has already been used in shadow suppression techniques [10]; see Figure 1(b). As mentioned in the description of the shadow detection algorithm, a shadow or highlight simply has the same chromaticity value as the original background pixel, with a lower or higher luminance. Thus the shadow and highlight “cones” would lie beyond the range of the codebook cone, adjacent on either side. After testing, however, the pure conical highlight detector captured too many pixel values within its volume and thus pushed up the false-negative rate.

Figure 1. Various models for the cluster volume around a background pixel: (a) codebook “codeword” model [6]; (b) shadow suppression model [10]; (c) Hybrid Cone-Cylinder model.

With a desire for sensitive detections in mind, the highlight volume was therefore limited to a cylinder. The final Hybrid Cone-Cylinder volume used in the HC3 algorithm is shown in Figure 1(c).
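Putting the pieces together, a per-pixel HC3 decision could be structured as below. This is a hypothetical reading of Figure 1(c): the paper does not publish the exact boundary formulas, so the shadow bound alpha, the highlight ceiling hi_margin, and the cylinder radius eps_hi are all assumptions.

```python
def classify_hc3(chroma_dist, v, cw_v_lo, cw_v_hi,
                 theta=0.1, alpha=0.5, hi_margin=0.3, eps_hi=0.05):
    """chroma_dist: H-S distance between test pixel and codeword
    (e.g. from hs_distance above); v: test pixel luminance (HSV V);
    cw_v_lo..cw_v_hi: luminance range covered by the codeword."""
    if cw_v_lo <= v <= cw_v_hi and chroma_dist <= theta * v:
        return "background"      # cone around the codeword
    if alpha * cw_v_lo <= v < cw_v_lo and chroma_dist <= theta * v:
        return "shadow"          # shadow cone extends below the codeword
    if cw_v_hi < v <= (1.0 + hi_margin) * cw_v_hi and chroma_dist <= eps_hi:
        return "highlight"       # narrow highlight cylinder above, kept
                                 # tight to avoid missing true foreground
    return "foreground"
```

Shadow and highlight labels are then folded back into the background for segmentation purposes.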

4. Experimental Validation of HC3 Model

4.1. Experiments

Several experiments were performed using the methodology above to determine its potential effectiveness on 24-7 data in various environments. First, two scenes were manually marked up as ground truth, and those were used to quantitatively compare the HC3 model and the original codebook model. Additionally, preliminary results are presented below on several other difficult environmental contexts.

The first two experiments were conducted on two sequences, each 100 frames long, selected for their complexity and the presence of cast shadows. These frames were manually marked up to represent ground truth. For each experiment, both the original codebook model [6] and the HC3 model were trained on a series of 500 frames and tested on these 100 frames. For each frame, the errors were counted and a corresponding detection versus false alarm rate was plotted on an ROC graph. It is worth noting that in these sequences some morphology was used in both models to remove noise and fill in holes, slightly reducing the sensitivity.
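For reference, the per-frame rates behind these ROC plots follow directly from boolean foreground masks:

```python
import numpy as np

def frame_rates(pred_fg, gt_fg):
    """Detection (TPR) and false-alarm (FPR) rates for one frame,
    given boolean foreground masks of equal shape."""
    tp = np.sum(pred_fg & gt_fg)
    fp = np.sum(pred_fg & ~gt_fg)
    fn = np.sum(~pred_fg & gt_fg)
    tn = np.sum(~pred_fg & ~gt_fg)
    tpr = tp / max(tp + fn, 1)    # detection rate
    fpr = fp / max(fp + tn, 1)    # false alarm rate
    return tpr, fpr
```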

Figure 2 shows results from a simple scene where two people are walking along the road with very long cast shadows. As is evident, the HC3 model does well in classifying shadows as background. Figure 3 displays a more complex scene, with moving background in the waving grass and dust storms, as well as cast shadows and dust kicked up by the car. The HC3 model correctly classifies most of the shadows as background, and the dust is classified as highlights and thereby as background as well.

Figure 2. Quantitative comparison, walking people with shadows. (a) Input frame (top left), ground truth (top right), HC3 model (bottom left), original model (bottom right). (b) Errors for each of the 100 frames in the sequence.

The ROC plots show the effectiveness of this algorithm. The errors of the HC3 model are generally well separated from those of the original model, mostly because shadows and highlights are classified as background. Table 1 compares the average error rates over all 100 frames of each sequence. While the true-positive detection rate remains similar in both sequences, the false-positive rate drops significantly on average using HC3.

Figure 4 shows an example of a sensitive people-detector using an omnidirectional camera. Even though the background is somewhat stable, the foreground objects are quite small and could easily be confused with noise. However, in each case the people are detected with a good deal of accuracy.

Figure 3. Quantitative comparison, car on dirt road with shadows, dust storms, and moving background. (a) Input frame (top left), ground truth (top right), HC3 model (bottom left), original model (bottom right). (b) Errors for each of the 100 frames in the sequence.

A night scene plays out in Figure 5. This is an intriguing data set for all the problems it poses. There is in fact very little color data, so the chrominance effect is diminished; nevertheless, the algorithm still performs considerably well. In the first image, the highlights due to the car's headlamps are suppressed, as is the person's shadow. After all the processing, both people are detected in the foreground. Note that the car had merged into a layer of the background after sitting still for a long time, so it was not detected. In the second image, the illumination caused by the car's lights is detected and removed, leaving a clear location for the car itself.

In terms of performance, the HC3 algorithm runs at approximately 40 frames per second on videos of size 320x240.

                    People walking        Car on dirt road
                    FPR       TPR         FPR       TPR
Original Model      3.7%      68.7%       2.4%      83.4%
HC3 Model           1.1%      77.9%       0.2%      79.4%

Table 1. Average FPR & TPR values

Figure 4. Sensitive detection of people with an omnidirectional camera. Input with a zone of interest (top), foreground (bottom).

It is thus quite clearly viable for real-time applications. Due to pruning and adaptive updating, the sizes of the codebook and cache stay relatively constant; in fact, for most pixels 1-2 codewords suffice on average. A select few pixels need more codewords to deal with fluctuating background scenes.

4.2. Discussion and Future Work

The algorithm presented in this paper is clearly effective and robust across many different scenes. However, one drawback of the current setup is the parameter tuning: there are a considerable number of variables to adjust in order to find appropriate values for a given environment. [2] provides a numerical evaluation of several parameter choices, and [6] proposes sample values. One way to let the algorithm run for long periods of time would be to automatically adjust the parameters according to environmental changes, using cues such as time of day, weather patterns, and illumination levels. It may also be possible to use machine learning and optimization algorithms to find the optimal parameters.

As a next step, it will also be necessary to comprehensively compare the adjusted HC3 algorithm with the original codebook model, as well as with other background subtraction methods. Such analysis will provide motivation for re-tuning and adjusting the current model to apply in more general situations.

Finally, even though HSV space empirically works quite well, other color spaces may also be appropriate. These include spaces such as CIE L*a*b*, which encode the estimated luminance in one component and the chrominance in two other independent components. Such a space may be slightly more precise; however, careful analysis will be necessary and the parameters will need to be re-tuned. At the same time, the volume of space that defines the cluster can be adjusted as well. A thorough investigation into the effects of these changes should be performed.

Figure 5. Background removal at night. Input (top); segmentation, with shadows as dark gray and highlights as light gray (middle); final foreground (bottom).


5. Conclusions

A demonstrably robust algorithm for foreground extraction has been introduced in this paper. The Hybrid Cone-Cylinder Codebook algorithm is designed to be real-time and robust against shadows, highlights, and moving backgrounds. Additionally, it is an adaptive model and thus capable of handling global illumination changes. These properties make the algorithm an excellent candidate to run on very long video sequences, in applications such as long-term surveillance and vehicle counting.

Illustrative results show the useful features of the algorithm. It clearly performs well on nighttime data and with complex backgrounds, and does so quite sensitively. Quantitatively, it is also shown to remove shadows and highlights well on several data sets. The HC3 model, combining codebook segmentation and shadow/highlight suppression in HSV space, is thus a truly robust foreground extraction method.

Acknowledgements

The authors are grateful for the support of TSWG, and for the constructive comments provided by Dr. Tarak Gandhi and the AVSS reviewers. They are most appreciative of the contributions and assistance of their colleagues from the CVRR laboratory, who made the experimental phase of the research possible.

References

[1] P. Amnuaykanchanasin, T. Thongkamwitoon, N. Srisawaiwilai, S. Aramvith, and T. H. Chalidabhongse. Adaptive parametric statistical background subtraction for video segmentation. In Proceedings of the Third ACM Int'l Workshop on Video Surveillance & Sensor Networks, pages 63–66, New York, NY, USA, 2005. ACM Press.

[2] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, and S. Sirotti. Improving shadow suppression in moving object detection with HSV color information. In Proc. IEEE Int'l Conf. on Intelligent Transportation Systems, pages 334–339, August 2001.

[3] A. M. Elgammal, D. Harwood, and L. S. Davis. Non-parametric model for background subtraction. In European Conference on Computer Vision, 2:751–767, 2000.

[4] H. Han, Z. Wang, J. Liu, Z. Li, B. Li, and Z. Han. Adaptive background modeling with shadow suppression. In Proc. IEEE Intelligent Transportation Systems, volume 1, pages 720–724, 2003.

[5] T. Horprasert, D. Harwood, and L. S. Davis. A statistical approach for real-time robust background subtraction and shadow detection. In Proceedings of the IEEE Frame Rate Workshop, pages 1–19, 1999.

[6] K. Kim, T. H. Chalidabhongse, D. Harwood, and L. Davis. Real-time foreground-background segmentation using codebook model. Real-Time Imaging, 11(3):167–256, June 2005.

[7] D.-S. Lee, J. J. Hull, and B. Erol. A Bayesian framework for Gaussian mixture background modeling. In IEEE International Conference on Image Processing, 3:973–976, 2003.

[8] J. Lluis, X. Miralles, and O. Bastidas. Reliable real-time foreground detection for video surveillance applications. In Proceedings of the Third ACM Int'l Workshop on Video Surveillance & Sensor Networks, pages 59–62, New York, NY, USA, 2005. ACM Press.

[9] M. Piccardi. Background subtraction techniques: a review. In IEEE International Conference on Systems, Man and Cybernetics, 4:3099–3104, October 2004.

[10] F. Porikli and O. Tuzel. Bayesian background modeling for foreground detection. In Proceedings of the Third ACM Int'l Workshop on Video Surveillance & Sensor Networks, pages 55–58, New York, NY, USA, 2005. ACM Press.

[11] A. Prati, I. Mikic, M. Trivedi, and R. Cucchiara. Detecting moving shadows: Algorithms and evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25:918–923, 2003.

[12] C. Stauffer and W. E. L. Grimson. Adaptive background mixture models for real-time tracking. In CVPR, II:246–252, 1999.
