
www.ijatir.org
ISSN 2348-2370, Vol. 09, Issue 02, February 2017, Pages: 0260-0267
Copyright @ 2017 IJATIR. All rights reserved.

Hybrid Model Based on Color, Shape and Motion Detection for Video Surveillance by Raspberry-Pi

AKULA PRASHANTH 1, K. RAMBABU 2
1 PG Scholar, Dept of ECE, Kasireddy Narayan Reddy College of Engineering and Research, Hyderabad, TS, India.
2 Associate Professor, Dept of ECE, Kasireddy Narayan Reddy College of Engineering and Research, Hyderabad, TS, India.

Abstract: Fire causes irreversible damage to fragile natural ecosystems and greatly affects the socio-economic systems of many nations, especially in the tropics where forest fires are more prevalent. Early detection of these fires may help reduce these impacts. Conventional point smoke and fire detectors are widely used in buildings. They typically detect the presence of certain particles generated by smoke and fire by ionization or photometry. An alarm is not issued unless the particles reach the sensors and activate them. Therefore, they cannot be used in open spaces and large covered areas. Video-based fire detection systems can be useful to detect fire in large auditoriums, tunnels, atriums, etc. The strength of using video in fire detection is that it makes it possible to serve large and open spaces. In addition, closed-circuit television (CCTV) surveillance systems are currently installed in various public places, monitoring both indoors and outdoors. Such systems may gain an early fire detection capability with the use of fire detection software processing the outputs of CCTV cameras in real time.

Keywords: Video Surveillance, Fire Detection, Multi Expert System.

I. INTRODUCTION

Image processing is a method to convert an image into digital form and perform some operations on it, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. Usually an image processing system treats images as two-dimensional signals while applying already established signal processing methods to them. It is among the rapidly growing technologies today, with applications in various aspects of business, and image processing also forms a core research area within the engineering and computer science disciplines. The two types of methods used for image processing are analog and digital image processing. Analog or visual techniques of image processing can be used for hard copies such as printouts and photographs. Image analysts use various fundamentals of interpretation while using these visual techniques: the image processing is not confined only to the area that has to be studied but also depends on the knowledge of the analyst. Association is another important tool in image processing through visual techniques, so analysts apply a combination of personal knowledge and collateral data to image processing. Digital processing techniques help in the manipulation of digital images by using computers. Raw data from the imaging sensors of a satellite platform contains deficiencies; to get over such flaws and to retain the originality of the information, it has to undergo various phases of processing. The three general phases that all types of data have to undergo when using the digital technique are pre-processing, enhancement and display, and information extraction.
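As a small illustration of the pre-processing and enhancement phases mentioned above, the sketch below (assuming OpenCV and a hypothetical input file, neither of which is prescribed by this paper) denoises a frame and stretches its contrast before any further analysis.

```python
import cv2

# Illustrative pre-processing/enhancement pipeline (not from the paper):
# read a frame, suppress sensor noise, then equalize the luminance channel.
frame = cv2.imread("frame.jpg")                       # hypothetical input file
denoised = cv2.GaussianBlur(frame, (5, 5), 0)          # pre-processing: noise removal
ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])      # enhancement: contrast stretching
enhanced = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)    # ready for display / information extraction
cv2.imwrite("enhanced.png", enhanced)
```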

In recent years several methods have been proposed with the aim of analyzing the videos acquired by traditional video surveillance cameras and detecting fires or smoke, and the current scientific effort has focused on improving the robustness and performance of the proposed approaches, so as to make commercial exploitation possible. Although a strict classification of the methods is not simple, two main classes can be distinguished depending on the analyzed features: color based and motion based. The methods using the first kind of features are based on the consideration that a flame, under the assumption that it is generated by common combustibles such as wood, plastic, paper or others, can be reliably characterized by its color, so that the evaluation of the color components (in RGB, YUV or any other color space) is adequately robust to identify the presence of flames. This simple idea inspires several recent methods: for instance, fire pixels are recognized by an advanced background subtraction technique and a statistical RGB color model, where a set of images has been used and a region of the color space has been experimentally identified, so that if a pixel belongs to this particular region, then it can be classified as fire. The introduction of the HSI color space significantly simplifies the definition of the rules for the designer, being more suitable for providing a people-oriented way of describing the color.
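To make the colour-rule idea concrete, the snippet below sketches such a test; since OpenCV exposes HSV rather than HSI, the rules are written in HSV, and every threshold is an illustrative placeholder rather than a value taken from the literature.

```python
import cv2
import numpy as np

def flame_colour_mask(frame_bgr):
    """Illustrative flame-colour test in the spirit of the rules above.
    All thresholds are placeholders, not values from the paper."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = hsv[:, :, 0], hsv[:, :, 1], hsv[:, :, 2]
    # Flame-like pixels: hue in the red/orange band, saturated enough, bright.
    rule = ((h <= 25) | (h >= 170)) & (s >= 80) & (v >= 150)
    return rule.astype(np.uint8) * 255

# Usage: mask = flame_colour_mask(cv2.imread("fire_frame.jpg"))
```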

A similar approach has been used in [6], where a cumulative fire matrix has been defined by combining RGB color and HSV saturation: in particular, starting from the assumption that the green component of fire pixels has a wide range of variation compared with the red and blue ones, this method evaluates the spatial color variation in pixel values in order to distinguish non-fire moving objects from uncontrolled fires.
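As an illustrative sketch of the spatial colour variation cue just described (the exact cumulative fire matrix of [6] is not reproduced here), one can compare the variability of the green channel against that of the red and blue channels inside a candidate region:

```python
import numpy as np

def spatial_variation_score(region_bgr):
    """Rough sketch of the cue used in [6]: inside a candidate region the
    green channel of flames varies much more than the red and blue ones,
    so a ratio of standard deviations can help separate flames from rigid,
    uniformly coloured moving objects. The exact formulation here is an
    illustrative stand-in, not the cumulative fire matrix itself."""
    b = region_bgr[:, :, 0].astype(np.float32)
    g = region_bgr[:, :, 1].astype(np.float32)
    r = region_bgr[:, :, 2].astype(np.float32)
    return float(g.std() / (0.5 * (r.std() + b.std()) + 1e-6))

# A score well above 1 suggests flame-like colour turbulence;
# a score close to 1 suggests a uniformly coloured moving object.
```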

In this paper we propose a method able to detect fires by analyzing the videos acquired by surveillance cameras. Two main novelties have been introduced: first, complementary information, based respectively on color, shape variation and motion analysis, is combined by a multi expert system. The main advantage deriving from this approach lies in the fact that the overall performance of the system significantly increases with a relatively small effort made by the designer. Second, a novel descriptor based on a bag-of-words approach has been proposed for representing motion. The proposed method has been tested on a very large dataset of fire videos acquired both in real environments and from the web. The obtained results confirm a consistent reduction in the number of false positives, without paying in terms of accuracy or renouncing the possibility to run the system on embedded platforms.

II. EXISTING AND PROPOSED METHODS

A. Existing Method

In general, the use of flame detectors is restricted to "No Smoking" areas or anywhere highly flammable materials are stored or used. The existing method follows rules for filtering fire pixels in the HSI color space. This simple idea inspires several recent methods: for instance, fire pixels are recognized by an advanced background subtraction technique and a statistical RGB color model, where a set of images has been used and a region of the color space has been experimentally identified, so that if a pixel belongs to this particular region, then it can be classified as fire. The common limitation of the above mentioned approaches is that they are particularly sensitive to changes in brightness, thus causing a high number of false positives due to the presence of shadows or to different tonalities of red.
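A minimal sketch of such a pipeline, assuming OpenCV's MOG2 background subtractor and placeholder colour thresholds rather than the statistical model of the existing method, is given below:

```python
import cv2
import numpy as np

# Illustrative version of the pipeline described above: a background
# subtractor isolates moving pixels, and a simple RGB rule keeps only
# those whose colour falls in an experimentally chosen "fire" region.
# The thresholds are placeholders, not the region identified in the paper.
backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def fire_candidates(frame_bgr):
    moving = backsub.apply(frame_bgr)                        # foreground mask (shadows = 127)
    _, moving = cv2.threshold(moving, 200, 255, cv2.THRESH_BINARY)
    b, g, r = [frame_bgr[:, :, i].astype(np.int16) for i in range(3)]
    colour = ((r > 180) & (r > g) & (g > b)).astype(np.uint8) * 255
    return cv2.bitwise_and(moving, colour)
```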

B. Proposed Method

Flame detectors are generally only used in high hazard areas such as fuel loading platforms, industrial process areas, hyperbaric chambers, high ceiling areas, and any other areas with atmospheres in which explosions or very rapid fires may occur. Flame detectors are "line of sight" devices, as they must be able to "see" the fire, and they are subject to being blocked by objects placed in front of them. However, the infrared type of flame detector has some capability for detecting radiation reflected from walls. In this paper we propose a method able to detect fires by analyzing the videos acquired by surveillance cameras. Two main novelties have been introduced: first, complementary information, based respectively on color, shape variation and motion analysis, is combined by a multi expert system. The main advantage deriving from this approach lies in the fact that the overall performance of the system significantly increases with a relatively small effort made by the designer. Second, a novel descriptor based on a bag-of-words approach has been proposed for representing motion. The existing system uses only a contrast based approach; it does not give efficient results, identification takes a long time, and the result is not accurate. The purpose of the system analysis is to produce a brief analysis of the task and also to establish complete information about the concept, behavior and other constraints such as performance measures and system optimization. The goal of system analysis is to completely specify the technical details of the main concept in a concise and unambiguous manner. MATLAB has been selected as the development package since it offers more advanced features, and a MATLAB platform with a Windows application is therefore preferred.
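As a rough illustration of how the three experts could be fused, the following sketch uses a weighted vote; the weights and the threshold are hypothetical placeholders (in the proposed system they are learned on a training partition), and the binary votes are assumed to come from the colour (CE), shape variation (SV) and motion (ME) experts.

```python
import numpy as np

class MultiExpertSystem:
    """Minimal sketch of a weighted-vote combiner for three experts
    (CE: colour, SV: shape variation, ME: motion). The weights and the
    decision threshold below are placeholders, not learned values."""

    def __init__(self, weights=(0.5, 0.25, 0.25), threshold=0.5):
        self.w = np.asarray(weights, dtype=np.float32)
        self.threshold = threshold

    def decide(self, ce_vote, sv_vote, me_vote):
        votes = np.array([ce_vote, sv_vote, me_vote], dtype=np.float32)
        score = float(np.dot(self.w, votes)) / float(self.w.sum())
        return score >= self.threshold

mes = MultiExpertSystem()
print(mes.decide(1, 0, 1))   # CE and ME agree on fire -> True with these weights
```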

III. LITERATURE SURVEY

Fire and Smoke Detection in Video with Optimal Mass Transport Based Optical Flow and Neural Networks, I. Kolesov, P. Karasev, A. Tannenbaum, E. Haber: Detection of fire and smoke in video is of practical and theoretical interest. In this paper, we propose the use of optimal mass transport (OMT) optical flow as a low-dimensional descriptor of these complex processes. The detection process is posed as a supervised Bayesian classification problem with spatio-temporal neighborhoods of pixels; feature vectors are composed of OMT velocities and R, G, B color channels. The classifier is implemented as a single-hidden-layer neural network. Sample results show the probability of pixels belonging to fire or smoke. In particular, the classifier successfully distinguishes between smoke and a similarly colored white wall, as well as fire from a similarly colored background.
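As a rough sketch of the pixel-level classification just described, the snippet below builds per-pixel feature vectors from optical flow and colour and feeds them to a single-hidden-layer network; Farneback flow and scikit-learn's MLPClassifier are stand-ins chosen here for illustration, not the OMT flow or the network of the cited work.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def pixel_features(prev_gray, curr_gray, curr_bgr):
    """Per-pixel feature vectors [u, v, R, G, B]; Farneback flow is used
    here as a stand-in for the OMT optical flow of the cited work."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    rgb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2RGB).astype(np.float32)
    feats = np.dstack([flow, rgb])               # H x W x 5
    return feats.reshape(-1, 5)

# A single-hidden-layer network, as in the cited approach; training data
# (feature matrix X and per-pixel labels y) must be collected separately.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
# clf.fit(X, y)
# fire_probability = clf.predict_proba(pixel_features(prev, curr, frame))
```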

A Probabilistic Approach for Vision-Based Fire Detection in Videos, Paulo Vinicius Koerich Borges and Ebroul Izquierdo: Automated fire detection is an active research topic in computer vision. In this paper, we propose and analyze a new method for identifying fire in videos. Computer vision-based fire detection algorithms are usually applied in closed-circuit television surveillance scenarios with a controlled background. In contrast, the proposed method can be applied not only to surveillance but also to automatic video classification for retrieval of fire catastrophes in databases of newscast content. In the latter case, there are large variations in fire and background characteristics depending on the video instance. The proposed method analyzes the frame-to-frame changes of specific low-level features describing potential fire regions. These features are color, area size, surface coarseness, boundary roughness, and skewness within estimated fire regions. Because of the flickering and random characteristics of fire, these features are powerful discriminants. The behavioral change of each of these features is evaluated, and the results are then combined according to a Bayes classifier for robust fire recognition. In addition, a priori knowledge of fire events captured in videos is used to significantly improve the classification results. For edited newscast videos, the fire region is usually located in the center of the frames; this fact is used to model the probability of occurrence of fire as a function of the position. Experiments illustrated the applicability of the method.
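As a toy sketch of the kind of Bayesian fusion described above, each feature change can contribute a likelihood ratio under class-conditional models; the Gaussian parameters below are made up for illustration, not the densities fitted in the cited work (SciPy is assumed to be available).

```python
import numpy as np
from scipy.stats import norm

# Toy Bayesian fusion: each feature change contributes a likelihood ratio
# under (hypothetical) Gaussian models for fire and non-fire, and the
# accumulated log-ratio decides the label.
FIRE_MODELS = {"area_change": (0.30, 0.15), "boundary_roughness": (0.60, 0.20)}
NONFIRE_MODELS = {"area_change": (0.05, 0.05), "boundary_roughness": (0.20, 0.10)}

def is_fire(feature_changes, prior_fire=0.5):
    log_ratio = np.log(prior_fire / (1.0 - prior_fire))
    for name, value in feature_changes.items():
        mu_f, sd_f = FIRE_MODELS[name]
        mu_n, sd_n = NONFIRE_MODELS[name]
        log_ratio += norm.logpdf(value, mu_f, sd_f) - norm.logpdf(value, mu_n, sd_n)
    return log_ratio > 0.0

print(is_fire({"area_change": 0.25, "boundary_roughness": 0.55}))   # True with these toy models
```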

Visual-based Smoke Detection using Support Vector Machine, Jing Yang, Feng Chen, Weidong Zhang: Smoke detection is becoming more and more appealing because of its important application in fire protection. In this paper, we suggest some more universal features, such as the changing unevenness of the density distribution and the changing irregularities of the contour of smoke. In order to integrate these features reasonably and gain a low generalization error rate, we propose a support vector machine based smoke detector. The feature set and the classifier can be used in various smoke cases, contrary to the limited applications of other methods. Experimental results on different styles of smoke in different scenes show that the algorithm is reliable and effective.
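The following sketch shows the general shape of such an SVM-based detector using scikit-learn; the feature vectors and training samples are synthetic placeholders, not the features defined in the cited work.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each candidate region is described by a small feature vector (here two
# placeholder cues, density unevenness and contour irregularity, plus a
# mean-intensity term); the training samples are synthetic.
X_train = np.array([[0.8, 0.7, 0.5],   # smoke-like samples
                    [0.7, 0.9, 0.6],
                    [0.1, 0.2, 0.8],   # non-smoke samples
                    [0.2, 0.1, 0.3]])
y_train = np.array([1, 1, 0, 0])

detector = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
detector.fit(X_train, y_train)
print(detector.predict([[0.75, 0.8, 0.55]]))   # likely classified as smoke (1) on this toy data
```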

Face Image Abstraction by Ford-Fulkerson Algorithm and Invariant Feature Descriptor for Human Identification, Dakshina Ranjan Kisku, Debanjan Chatterjee, S. Trivedy, Massimo Tistarelli: This paper discusses a face image abstraction method using SIFT features and the Ford-Fulkerson algorithm. The Ford-Fulkerson algorithm is used to compute the maximum flow in a flow network drawn on SIFT features extracted from a face image. The idea is to obtain an augmenting path, which is a path from the source vertex to the destination vertex with available capacities on all edges along a set of paths, and the flow is calculated along one of these paths. The process is repeated until no more paths with available capacities are obtained. At the initial stage, the face image is characterized by SIFT (Scale Invariant Feature Transform) features and the keypoint descriptor information is taken as the feature set for further processing. The keypoint descriptor is used to generate several face representations through a series of matrix operations, which are further used to determine a Directed Acyclic Graph (DAG). The resultant directed graph contains sparse and distinctive face characteristics of the subject from which the face image is captured. We then apply the Ford-Fulkerson algorithm on the directed graph, maintaining the capacity constraints, skew symmetry and flow conservation, to obtain an augmenting path with available capacities (relation between SIFT points). Finally, we obtain a mathematical representation of a face image, and this representation is further encoded to be used as a set of distinctive features for matching. The time complexity of the proposed face abstraction algorithm is found to be O(VE²), where V is the set of vertices and E is the set of edges in a directed graph.
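For illustration, the max-flow computation at the core of the Ford-Fulkerson step can be reproduced on a toy graph with NetworkX; the graph and capacities below are made up, whereas the cited work derives its flow network from SIFT keypoint relations.

```python
import networkx as nx

# Toy flow network: source "s", sink "t", capacities on each edge.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)
G.add_edge("a", "b", capacity=1)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(flow_value)   # 5 for this toy network
```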

Optical Flow Estimation for Flame Detection in Videos, Martin Mueller, Peter Karasev, Ivan Kolesov, and Allen Tannenbaum: Computational vision-based flame detection has drawn significant attention in the past decade, with camera surveillance systems becoming ubiquitous. Whereas many discriminating features, such as color, shape, texture, etc., have been employed in the literature, this paper proposes a set of motion features based on motion estimators. The key idea consists of exploiting the difference between the turbulent, fast fire motion and the structured, rigid motion of other objects. Since classical optical flow methods do not model the characteristics of fire motion (e.g., non-smoothness of motion, non-constancy of intensity), two optical flow methods are specifically designed for the fire detection task: optimal mass transport models fire with dynamic texture, while a data-driven optical flow scheme models saturated flames. Then, characteristic features related to the flow magnitudes and directions are computed from the flow fields to discriminate between fire and non-fire motion. The proposed features are tested on a large video database to demonstrate their practical usefulness. Moreover, a novel evaluation method is proposed based on fire simulations that allow for a controlled environment to analyze parameter influences, such as flame saturation, spatial resolution, frame rate, and random noise.
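A minimal sketch of flow-based motion features in this spirit is shown below; Farneback flow is used as a stand-in for the specialised estimators of the cited work, and the magnitude threshold and bin count are illustrative choices.

```python
import cv2
import numpy as np

def flow_motion_features(prev_gray, curr_gray):
    """Illustrative frame-level motion features: flow magnitude statistics
    and a direction-histogram entropy that grows when the motion is
    turbulent (fire-like) rather than rigid."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang[mag > 1.0], bins=16, range=(0, 2 * np.pi))
    p = hist / max(hist.sum(), 1)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return float(mag.mean()), float(mag.std()), float(entropy)
```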

Detection of Multiple Dynamic Textures Using Feature Space Mapping, Ashfaqur Rahman and Manzur Murshed: Image sequences of smoke, fire, etc. are known as dynamic textures. Research has mostly been limited to the characterization of single dynamic textures. In this paper we address the problem of detecting the presence of multiple dynamic textures in an image sequence by establishing a correspondence between the feature space of dynamic textures and that of their mixture in an image sequence. The accuracy of our proposed technique is both analytically and empirically established, with detection experiments yielding 92.5% average accuracy on a diverse set of dynamic texture mixtures in synthetically generated as well as real-world image sequences.

Detection of Anomalous Events in Shipboard Video using Moving Object Segmentation and Tracking, Ben Wenger, Shreekanth Mandayam, Patrick J. Violante and Kimberly J. Drake: Anomalous indications in monitoring equipment onboard U.S. Navy vessels must be handled in a timely manner to prevent catastrophic system failure. The development of sensor data analysis techniques to assist a ship's crew in monitoring machinery and summoning required ship-to-shore assistance is of considerable benefit to the Navy. In addition, the Navy has a large interest in the development of distance support technology in its ongoing efforts to reduce manning on ships. In this paper, we present algorithms for the detection of anomalous events that can be identified from the analysis of monochromatic stationary ship surveillance video streams. The specific anomalies that we have focused on are the presence and growth of smoke and fire events inside the frames of the video stream. The algorithm consists of the following steps. First, a foreground segmentation algorithm based on adaptive Gaussian mixture models is employed to detect the presence of motion in a scene; the algorithm is adapted to emphasize gray-level characteristics related to smoke and fire events in the frame. Next, shape discriminant features in the foreground are enhanced using morphological operations. Following this step, the anomalous indication is tracked between frames using Kalman filtering. Finally, gray-level shape and motion features corresponding to the anomaly are subjected to principal component analysis and classified using a multilayer perceptron neural network. The algorithm is exercised on 68 video streams that include the presence of anomalous events (such as fire and smoke) and benign/nuisance events (such as humans walking in the field of view). Initial results show that the algorithm is successful in detecting anomalies in video streams and is suitable for application in shipboard environments. One of the principal advantages of this technique is that the method can be applied to monitor legacy shipboard systems and environments where high-quality color video may not be available.
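As an illustration of the tracking step mentioned above, the sketch below runs a constant-velocity Kalman filter on the centroid of a candidate blob; the noise covariances are illustrative values, not those of the cited work.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter following the centroid of a candidate
# smoke/fire blob between frames. State: [x, y, vx, vy], measurement: [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # illustrative values
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track(centroid_xy):
    """Predict the blob position, then correct with the measured centroid."""
    prediction = kf.predict()
    kf.correct(np.array(centroid_xy, dtype=np.float32).reshape(2, 1))
    return float(prediction[0, 0]), float(prediction[1, 0])

print(track((120.0, 80.0)))   # first call: prediction starts from the zero state
```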

IV. EXPERIMENTAL RESULTS

Most of the methods in the literature (especially the ones based on color evaluation) are tested using still images instead of videos. Furthermore, no standard datasets for benchmarking purposes have been made available up to now. One of the biggest collections of videos for fire and smoke detection has been made available by the research group of Cetin. Starting from this collection, composed of approximately 31,250 frames, we added several long videos acquired in both indoor and outdoor situations, resulting in a new dataset composed of 62,690 frames and more than one hour of recording. More information about the different videos is reported in Table I, while some visual examples are shown in Fig.1. Note that the dataset can be seen as composed of two main parts: the first 14 videos are characterized by the presence of fire, while the last 17 videos do not contain fires; in particular, this second part is characterized by objects or situations which can be wrongly classified as containing fire: a scene containing red objects may be misclassified by color based approaches, while a mountain with smoke, fog or clouds may be misclassified by motion based approaches. Such a composition allows us to stress the system and to test it in several conditions which may happen in real environments. The dataset has been partitioned into two parts: 80% has been used to test the proposed approach, while 20% has been used for training the system by determining the weights of the MES.
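To make the figures reported below concrete, the snippet shows how accuracy and false positive rate are typically computed from binary per-video decisions; the labels and decisions here are made-up placeholders, not the actual experimental data.

```python
import numpy as np

# Accuracy over all test videos and false positive rate over the
# non-fire videos only, from binary decisions and ground-truth labels.
ground_truth = np.array([1, 1, 1, 0, 0, 0, 0])   # 1 = fire video, 0 = non-fire video
decisions    = np.array([1, 1, 0, 0, 1, 0, 0])   # 1 = system raises a fire alarm

accuracy = (decisions == ground_truth).mean()
false_positive_rate = decisions[ground_truth == 0].mean()
print(f"accuracy = {accuracy:.2%}, false positives = {false_positive_rate:.2%}")
```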

An overview of the performance achieved on the test set, both in terms of accuracy and false positives, is summarized in Table II. Among the three experts considered in this paper (CE, ME and SV), the best one is CE, which achieves a very promising performance on the considered dataset (accuracy = 83.87% and false positives = 29.41%). Note that such performance is comparable with the one reached by the authors in [8], where over a different dataset the number of false positives is about 31%. On the other hand, we can also note that the expert ME, introduced for the first time in this paper for identifying the disordered movement of fire, proves to be very effective: we obtain 71.43% accuracy and 53.33% false positives. It is worth pointing out that the considered dataset is very challenging for this expert: in fact, the disordered movement of smoke, as well as of trees moving in the forests, can be easily confused with the disordered movement of fire. This consideration explains the high number of false positives introduced by using only ME. As expected, the best results are achieved by the proposed MES, which outperforms all the other methods, both in terms of accuracy (93.55%) and false positives (11.76%). The very low false positive rate, if compared with state-of-the-art methods, is mainly due to the fact that ME and SV act, in a sense, as a filter with respect to CE.

In other words, ME and SV are able to reduce the number of false positives introduced by CE without paying in terms of accuracy: this consideration is confirmed by the results shown in Fig.3, where the percentage of experts that simultaneously take the correct decision is reported. In particular, Fig.3a details the percentage of experts correctly assigning the class fire: we can note that all the experts correctly recognize the fire in most of the situations (69%), while two experts assign the class fire in the remaining 31%. The advantage of using a MES is much more evident in Fig.3b, which refers to non-fire videos. In this case, only 17% of the videos are correctly classified by all the experts. On the other hand, most of the videos (61%) are assigned to the correct class by two experts, confirming the successful combination obtained thanks to the proposed approach. In order to better appreciate the behavior described above, a few examples are shown in Fig.2. In Fig.2a the fire is correctly recognized by all the experts: the color respects all the rules, the shape variation in consecutive frames is consistent, and the movement of the detected corner points is very disordered. A different situation occurs in Fig.2b, where the only classifier detecting the fire is the one based on color: in this case, the uniform movement of the salient points associated with the ball, as well as its constant shape, allows the MES to avoid a false positive that would be introduced by the use of a single expert. In Fig.2c and 2d two other examples are shown: in the former, a small fire with a variable shape has both a uniform color and a uniform movement of the salient points.

TABLE I: The Dataset Used for the Experimentation

Fig.1. Examples of images extracted from the videos used for testing the method.

The combination of the color and shape variation experts helps the proposed system to correctly detect the fire. The last example shows a very big but settled fire, whose shape is stable and therefore not useful for the detection. In this situation, the combination of the experts based on color and motion allows the MES to take the correct decision about the presence of fire.

Fig.2. The three experts in action; the red box indicates the position of the fire, while the letter on it refers to the expert recognizing the presence of the fire.

Fig.3. Number of experts simultaneously taking the correct decision in fire (a) and non-fire (b) videos. For instance, 31% of situations are correctly assigned to the class fire by two experts out of three, while in the remaining 69% all three experts correctly recognize the fire.

Table II also shows a comparison of the proposed approach with three recent, state-of-the-art methodologies that have been chosen because they too are based on the combined use of color, motion and shape information. For one of them we have used two different versions: the original one, which, as proposed by its authors, analyzes the images in the RGB color space, and a version modified by us working in the YUV space instead; we have chosen this modification on the basis of [8], where it is shown experimentally that color-based methods work better in YUV than in RGB, as confirmed by the results in Table II. The table shows that methods based on the combination of different kinds of information significantly outperform the single experts in terms of False Positives, while the difference in terms of False Negatives is not so strong; thus the combination helps more to improve the specificity than the sensitivity of the system. The proposed approach outperforms all the other considered methodologies in terms of accuracy (93.55% against 89.29%, 90.32% and 87.10%, respectively). On the other hand, the proposed approach is not the best method in terms of False Positives (11.76% against 5.88%).

TABLE II: Comparison of the Proposed Approach with State-of-the-Art Methodologies in Terms of Accuracy, False Positives and False Negatives

The better False Positive Rate is however balanced by an improved False Negative Rate of our method, which shows no False Negatives (i.e. no fires are missed) versus a 14.29% False Negative Rate. While the difference between the two algorithms in terms of accuracy may not seem very large, the differences in the distribution of False Positives and False Negatives can make each of the two methods preferable depending on the requirements of the specific application. A more detailed comparison for each of the considered videos is shown in Table I of the Electronic Annex: we can note that, differently from the other considered approaches, our method achieves a 100% true positive rate, since it is able to retrieve even very small flames (such as the ones in videos fire1, fire2, fire6 or fire13). This is mainly due to the introduction of the MES for taking the final decision about the event, which is able to detect the onset of small fires at an early stage, when the amount of motion is still not very large. It is also evident that the method is impressive for its reduced false positive rate, causing just a single False Positive on the whole dataset.

In order to further confirm the effectiveness of the proposed approach, we also evaluated it over a second freely available dataset (hereinafter D2). It is composed of 149 videos, each lasting approximately 15 minutes, resulting in more than 35 hours of recording; D2 contains very challenging situations, often wrongly recognized as fire by traditional color based approaches: red houses in a wide valley (see Fig.4a and 4d), a mountain at sunset (see Fig.4b) and lens flares (bright spots due to reflections of the sunlight on lens surfaces, see Fig.4a and 4c). Although the situations are very challenging, no false positives are detected by our MES. The result is very encouraging, especially if compared with CE, which achieves 12% false positives on the same dataset. It is worth pointing out that such errors are localized in approximately 7 hours of video, mainly at sunset, and are due to lens flares. Such a typology of errors is completely solved by the proposed approach, which is able to take advantage of the disordered movement of the flames. Finally, we have also evaluated the computational cost of the proposed approach over two very different platforms: the former is a traditional low-cost computer, equipped with an Intel dual core T7300 processor and 4 GB of RAM; the latter is a Raspberry Pi B, a Broadcom BCM2835 System-on-a-Chip (SoC), equipped with an ARM processor running at 700 MHz and 512 MB of RAM. The main advantage of using such a device lies in its affordable cost, around 35 dollars.

Fig.4. Some examples of the dataset D2, showing red houses in the wide valley, the mountain at sunset and some lens flares.

The proposed method is able to work, considering CIF videos, with an average frame rate of 60 fps and 3 fps respectively over the above mentioned platforms. Note that 60 fps is significantly higher than the traditional 25-30 fps that a traditional camera can reach during the acquisition. It implies that the proposed approach can be easily and very effectively used on existing intelligent video surveillance systems without requiring additional costs for the hardware needed for the image processing. In order to better characterize the performance of the proposed approach, we also evaluated the time required by the different modules, namely the three experts (CE, ME and SV) and the module in charge of updating the background, extracting the foreground mask and labeling the connected components (FM). The contribution of each module is highlighted in Fig.5: the average time required to process a single frame has been computed and the percentage of each module with respect to the total time is reported. We can note that SV only marginally impacts the execution time; this is due to the fact that the search for the minimum bounding boxes enclosing the blobs and for their properties (in terms of perimeter and area) is a very low-cost operation. Although the introduction of SV only slightly increases the performance of the MES (from 92.86% to 93.55% in terms of accuracy), the small additional effort strongly justifies its introduction in the proposed MES.

On the other hand, the highest impacts are due to ME and CE: as for the former (85%), it is evident that the computation of the salient points, as well as their matching, is a very onerous operation. As for the latter, the big effort required by CE with respect to FM (CE: 11%, FM: 2%) may appear surprising. It is worth pointing out that FM's operations (such as background updating and connected component labeling) are very common in computer vision, and thus very optimized versions are available in standard libraries such as OpenCV. Finally, it is worth pointing out that the computation time is strongly dependent on the particular image the algorithm is processing. In fact, pixel-based modules (such as FM and CE) need to process the whole image independently of the objects moving inside it, while the more objects are moving inside the scene, the higher is the effort required for detecting and analyzing the salient points. It implies that the variance with respect to the overall time required for the computation is about 51% of the overall time. Note that the final combination of the decisions taken by the three experts has not been considered, since the time required is very small with respect to the other modules. In conclusion, the obtained results, both from a quantitative and a computational point of view, are very encouraging, since they allow the proposed approach to be profitably used in real environments.
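For reference, a per-module timing of the kind summarized in Fig.5 can be obtained as sketched below; the module functions are hypothetical stand-ins for FM, CE, SV and ME, and the sleep times are arbitrary placeholders.

```python
import time

# Accumulate each module's time per frame and turn it into a percentage
# of the total. The module bodies below are placeholders, not the real experts.
def run_fm(frame): time.sleep(0.001)
def run_ce(frame): time.sleep(0.004)
def run_sv(frame): time.sleep(0.0005)
def run_me(frame): time.sleep(0.030)

totals = {"FM": 0.0, "CE": 0.0, "SV": 0.0, "ME": 0.0}
for frame in range(20):                     # placeholder loop over frames
    for name, module in (("FM", run_fm), ("CE", run_ce), ("SV", run_sv), ("ME", run_me)):
        start = time.perf_counter()
        module(frame)
        totals[name] += time.perf_counter() - start

grand_total = sum(totals.values())
for name, t in totals.items():
    print(f"{name}: {100.0 * t / grand_total:.1f}% of the total time")
```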

Fig.5. The average execution time of our algorithm, in terms of percentage of the total time, for each expert (CE, SV and ME) and for the preliminary low-level vision elaborations (FM).

V. RESULT
The results of this paper are shown in Figs.6 to 10 below.

Fig.6. System activated message displaying "welcome" in the robot section.

Fig.7. System activated message displaying "welcome".

Fig.8. Fire sensor activated.

Fig.9. Camera checks the height and motion of the fire.

Fig.10. GSM sends the message to the monitoring section and the buzzer is turned on.

VI. CONCLUSION

In this paper we propose a fire detection system using an ensemble of experts based on information about color, shape and flame movements. The approach has been tested on a wide database with the aim of assessing its performance both in terms of sensitivity and specificity. The experimentation confirmed the effectiveness of the MES approach, which allows the system to achieve a better true positive rate than any of its composing experts. Future work will be devoted to the integration in the same MES framework of a smoke detection algorithm, and to the extension of the approach to operating conditions currently not covered, such as its execution directly on board the camera and its use on Pan-Tilt-Zoom cameras.

VII. REFERENCES

[1] P. Foggia, A. Saggese, and M. Vento, "Real-time fire detection for video surveillance applications using a combination of experts based on color, shape and motion," IEEE Transactions on Circuits and Systems for Video Technology.
[2] A. E. Cetin, K. Dimitropoulos, B. Gouverneur, N. Grammalidis, O. Gunay, Y. H. Habiboglu, B. U. Toreyin, and S. Verstockt, "Video fire detection: a review," Digital Signal Processing, vol. 23, no. 6, pp. 1827-1843, 2013.
[3] Z. Xiong, R. Caballero, H. Wang, A. Finn, and P.-y. Peng, "Video fire detection: Techniques and applications in the fire industry," in Multimedia Content Analysis, ser. Signals and Communication Technology, A. Divakaran, Ed. Springer US, 2009, pp. 1-13.
[4] T. Celik, H. Demirel, H. Ozkaramanli, and M. Uyguroglu, "Fire detection using statistical color model in video sequences," J. Vis. Commun. Image Represent., vol. 18, no. 2, pp. 176-185, Apr. 2007.
[5] Y.-H. Kim, A. Kim, and H.-Y. Jeong, "RGB color model based the fire detection algorithm in video sequences on wireless sensor network," International Journal of Distributed Sensor Networks, 2014.
[6] C. Yu, Z. Mei, and X. Zhang, "A real-time video fire flame and smoke detection algorithm," Procedia Engineering, vol. 62, pp. 891-898, 2013, 9th Asia-Oceania Symposium on Fire Science and Technology.
[7] X. Qi and J. Ebert, "A computer vision-based method for fire detection in color videos," International Journal of Imaging, vol. 2, no. 9 S, pp. 22-34, 2009.
[8] T. Celik and H. Demirel, "Fire detection in video sequences using a generic color model," Fire Safety Journal, vol. 44, no. 2, pp. 147-158, 2009.
[9] T. Celik, H. Ozkaramanli, and H. Demirel, "Fire pixel classification using fuzzy logic and statistical color model," in ICASSP, vol. 1, April 2007, pp. I-1205-I-1208.
[10] B. C. Ko, K.-H. Cheong, and J.-Y. Nam, "Fire detection based on vision sensor and support vector machines," Fire Safety Journal, vol. 44, no. 3, pp. 322-329, 2009.
[11] M. Mueller, P. Karasev, I. Kolesov, and A. Tannenbaum, "Optical flow estimation for flame detection in videos," IEEE Trans. Image Process., vol. 22, no. 7, pp. 2786-2797, July 2013.
[12] A. Rahman and M. Murshed, "Detection of multiple dynamic textures using feature space mapping," IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 5, pp. 766-771, May 2009.

Author’s Profile:

Mr. Akula Prashanth completed his B.Tech in the ECE Department at Netaji Institute of Engineering & Technology, JNTU University, Hyderabad. Presently, he is pursuing his Masters in Embedded Systems at Kasireddy Narayan Reddy College of Engineering and Research, Hyderabad, TS, India.

Mr. K. Rambabu completed his B.Tech (ECE) at Sri Kottam Tulsi Reddy Memorial College of Engineering, Mahaboobnagar Dist., JNTUH University, and his M.Tech (Image Processing) at Aurora College of Engineering, JNTUH University, Hyderabad. He has 6 years of teaching experience. Currently, he is working as an Associate Professor and HOD of the ECE Department at Kasireddy Narayan Reddy College of Engineering and Research, Hyderabad, TS, India.