Decision Making and Reasoning with Uncertain Image and Sensor Data
Pramod K. Varshney, Kishan G. Mehrotra, Chilukuri K. Mohan
Syracuse University


1

Decision Making and Reasoning with Uncertain Image and Sensor Data

Pramod K. Varshney, Kishan G. Mehrotra, Chilukuri K. Mohan

Syracuse University

2

Outline
Introduction
Main overall themes
Results since last review

Scenario Recognition: Audio Visual Sensor Fusion

Path Planning for dynamic military applications

Concluding remarks

3

Information Acquisition and Fusion Model for Visualization

Dynamic network connectivity with varying bandwidths

Mobile agents with heterogeneous resources and capabilities

[Architecture diagram: Mobile Agents 1..N, each performing information processing & fusion, communicate over a Communication Network with a Command & Control Center that provides HCI and visualization.]

4

Our Main Overall Themes
Decentralized inferencing
Data/information fusion
Uncertainty representation and visualization
Planning and decision making in dynamic battlefield environments

5

Outline
Introduction
Main overall themes
Results since last review

Scenario Recognition: Audio Visual Sensor Fusion

Path Planning for dynamic military applications

Concluding remarks

6

Objectives
Develop a proof of concept of a system which:
Classifies activity based on video-based detection and tracking of moving objects
Detects and classifies situational sounds
Fuses information from two different modalities to provide enhanced scene context
Handles issues such as uncertainty in sensor data and coupling between events in different streams

7

System Block Diagram
[Diagram: Video → Video Processing and Audio → Audio Processing; both feed a Fusion Framework that outputs a Scene Descriptor.]

8

Video Processing Pipeline
Image Acquisition → Background Subtraction → Detection → Feature Extraction → Activity Classification
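As a concrete sketch of the background-subtraction step of this pipeline, a minimal thresholded frame-difference detector might look as follows (hypothetical; the slides do not specify the system's actual method, and the threshold value is illustrative):

```python
import numpy as np

def detect_foreground(frame, background, threshold=25):
    """Toy background-subtraction step: pixels whose absolute difference
    from a reference background exceeds a threshold are marked foreground.
    A running-average background model and morphological clean-up would
    be used in practice."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold
```

The returned boolean mask feeds the detection and feature-extraction stages.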

9

Video Processing Pipeline: Video Features
Aspect ratio
Speed
Relative densities of pixels in upper, middle, and lower bands
Activity classes include walking, sitting, bending, etc.
Classifier is a multi-module back-propagation neural network
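The listed features could be computed from a tracked blob's foreground mask along these lines (a sketch under stated assumptions: the function name, the frame rate default, and the split into equal thirds are illustrative, not taken from the slides):

```python
import numpy as np

def video_features(mask, prev_centroid, fps=30):
    """Feature extraction for one tracked blob.

    mask: 2-D boolean array of the blob's pixels after background subtraction.
    prev_centroid: (row, col) centroid of the blob in the previous frame.
    Returns aspect ratio, speed (pixels/second), and the fraction of blob
    pixels in the upper, middle, and lower thirds of the bounding box."""
    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    aspect_ratio = height / width

    centroid = (rows.mean(), cols.mean())
    speed = np.hypot(centroid[0] - prev_centroid[0],
                     centroid[1] - prev_centroid[1]) * fps

    # Split the bounding box into three horizontal bands and measure the
    # fraction of the blob's pixels falling in each.
    edges = np.linspace(rows.min(), rows.max() + 1, 4)
    densities = [((rows >= edges[i]) & (rows < edges[i + 1])).mean()
                 for i in range(3)]
    return aspect_ratio, speed, densities
```

The resulting feature vector is what a classifier such as the multi-module back-propagation network described above would consume.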

10

Example: Multi-Object Tracking
Multi-Object Tracking in the Infrared Modality

11

Example: Tracking with ID Tags

Unique ID assigned to each tracked object.

Tracking using object properties and last known location.

12

Head Tracking for Improved Performance

Head is tracked separately

Maintain tracking of individuals in groups

Locate head in top 1/6 of the object.
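The head-location heuristic above is simple enough to state directly in code (the tuple layout of the bounding box is an assumption for illustration):

```python
def head_region(bbox):
    """Head-search region from the slide: the head is located in the
    top 1/6 of the tracked object's bounding box.
    bbox = (top, left, bottom, right) in pixel coordinates."""
    top, left, bottom, right = bbox
    return (top, left, top + (bottom - top) / 6.0, right)
```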

13

Audio Processing Pipeline
Audio Acquisition → feature extraction (Histogram Features, Spectral Features, Relative Band Energies, LPC/Cepstral Coefficients) → Choice of Features → Audio Event Classification

14

Audio Processing Pipeline
Sound classes: silence/background hum, machine noise, alarm sounds, human speech
Features used for sound classification: amplitude histogram features, spectral centroid and zero-crossing rate, spectrum shape modeling coefficients, relative band energies, linear predictive coding (LPC) coefficients, cepstral coefficients
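Two of the listed features, the spectral centroid and the zero-crossing rate, follow standard definitions and can be computed as follows (framing and windowing are omitted for brevity; this is a sketch, not the system's exact front end):

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose signs differ."""
    return float(np.mean(np.signbit(x[:-1]) != np.signbit(x[1:])))

def spectral_centroid(x, sr):
    """Magnitude-weighted mean frequency (Hz) of the one-sided spectrum."""
    mags = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))
```

High zero-crossing rates and centroids tend to separate alarm-like sounds from low-frequency machine hum, which is why both appear in the feature list above.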

15

Audio-Video Fusion
Having defined the processing pipelines for the two modalities, we develop a framework for information fusion from sensors, apply it to the surveillance domain, and recognize different scenarios.

16

Fusion Approach
Two steps:
1. Decisions regarding certain activities or events are made in each information stream, based on low-level data processing.
2. Fusion of these stream-level decisions takes place, involving three main challenges: asynchronism, linkage, and uncertainty.

17

Asynchronism
Events in different streams are processed asynchronously: video events are detected on a per-frame basis, while audio events are detected over a period of time.
This asynchronism makes it challenging to fuse information from different modalities.
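One simple way to bridge the per-frame/per-window mismatch described above is to group frame-level decisions by each audio window's time span before fusing; the sketch below assumes hypothetical timestamped label lists, which the slides do not specify:

```python
def align_streams(video_frames, audio_windows):
    """Group per-frame video decisions into each audio window's span so
    both streams can be fused at common instants.
    video_frames: list of (timestamp, label) pairs.
    audio_windows: list of (start, end, label) triples."""
    fused = []
    for start, end, a_label in audio_windows:
        v_labels = [lab for t, lab in video_frames if start <= t < end]
        fused.append((start, end, a_label, v_labels))
    return fused
```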

18

Linkage
The information sensed in different streams may not be independent; it may describe the same event.
The fusion framework must accommodate causal coupling between events across streams.

19

Modeling Linkages
Correlation analysis on the training data is used to extract linkage information between features of different sensor streams.
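A minimal form of this correlation analysis, assuming aligned binary event indicators from two streams over the training data (the actual feature representation is not given on the slide), is:

```python
import numpy as np

def linkage_strength(stream_a, stream_b):
    """Pearson correlation between aligned event indicators from two
    sensor streams; returns 0.0 when either stream is constant."""
    a = np.asarray(stream_a, dtype=float)
    b = np.asarray(stream_b, dtype=float)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])
```

The resulting strengths are what the fusion rules later threshold to decide whether two events corroborate one sub-scenario.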

20

Fusion Model

21

Fusion Model
[Diagram: a sound sensor and a video/IR sensor each produce a local inference (e.g., "running"); at time k these are fused into a higher-level inference such as "Theft in progress!"]

22

Stream Model
At fixed intervals δi, decisions oi regarding the presence of events are made for the mth stream by classifiers. We use trained multi-module feed-forward neural networks to make these decisions.
At time instants k, a decision O* is calculated using the decisions oi available in that time interval for a given stream; a fuzzy rule-based approach facilitates computation of O*.
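The per-stream decision O* can be sketched as follows. The actual system uses a fuzzy rule base; here confidence-weighted voting stands in as a simplified illustration:

```python
def interval_decision(decisions):
    """Aggregate the interval's classifier outputs o_i into one decision O*.
    decisions: list of (label, confidence) pairs.
    Returns the label with the highest total confidence and a normalized
    certainty value (simplified stand-in for the fuzzy rule base)."""
    totals = {}
    for label, conf in decisions:
        totals[label] = totals.get(label, 0.0) + conf
    best = max(totals, key=totals.get)
    certainty = totals[best] / sum(totals.values())
    return best, certainty
```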

23

Fusion Framework
Fusion rules are generated using the linkage information learned from the training data.
When events with stronger linkages are detected, we report the sub-scenario corroborated by both events, whereas weakly linked events are reported separately.
Examples: running and alarm sound (linkage 0.1) -> possible suspicious activity; bending and human speech (linkage 0.01) -> two uncorrelated events.
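The rule above reduces to a threshold on linkage strength; in this sketch the 0.05 cut-off and the sub-scenario wording are illustrative assumptions, chosen only to reproduce the slide's two examples:

```python
def fuse_events(video_event, audio_event, linkage, threshold=0.05):
    """Strongly linked event pairs corroborate one sub-scenario;
    weakly linked pairs are reported as two separate events."""
    if linkage >= threshold:
        return ["possible scenario: {} + {}".format(video_event, audio_event)]
    return [video_event, audio_event]
```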

24

Fusion Framework (continued)
Certainty values for sub-scenario observations are modified incrementally based on linkage information.
A time series of sub-scenarios is generated, giving a complete scenario description over a period of time.

25

Illustrative Example 1: Parking Lot Setting

26

Video Sensor Input

Raw video of a staged robbery

Processed video of the robbery

27

Description Generated with only Video Information

28

Audio Sensor Input

29

Description Generated with only Audio Information

30

Description Generated with Audio and Video Information

31

Illustrative Example 2: Conversation

32

Visualizing Other Information
Some scene variables can be visualized in sensor space: decision uncertainty, threat levels, classes of moving objects, classes of activity.

33

Uncertainty Visualization
Modulate bounding-box appearance to indicate object class and certainty:
Brightness of bounding box indicates certainty
Color of bounding box indicates whether single person or group

34

Uncertainty Visualization
Bar indicators to indicate object class:
Height of bar indicator proportional to certainty
Color of the bar indicates whether single person or a group
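The brightness/color mapping on these two slides can be sketched as a small helper; the specific colors (green for a single person, blue for a group) are illustrative assumptions, not taken from the slides:

```python
def box_style(certainty, is_group):
    """Map a classification certainty in [0, 1] and a single/group flag
    to an RGB-like color: brightness encodes certainty, hue encodes class."""
    brightness = int(round(255 * max(0.0, min(1.0, certainty))))
    hue = (0, 0, 255) if is_group else (0, 255, 0)  # blue = group, green = single
    return tuple(int(c * brightness / 255) for c in hue)
```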

35

Summary
Demonstrated a framework for fusion of audio and video information.
Fusion of information from sensors of different modalities provides a richer scene context.
Use of probabilistic models for fusion and feature-level fusion is being considered.
We have shown the feasibility of activity recognition using combined video and audio information.
Next section (path planning): after activity recognition, the battlefield decision-maker must act.

36

Outline
Introduction
Main overall themes
Results since last review

Scenario Recognition: Audio Visual Sensor Fusion

Path Planning for dynamic military applications

Concluding remarks

37

Path Planning in a Battlefield
Goal: determine safe paths for personnel in a battlefield.
The battlefield is represented as a graph whose nodes correspond to geographical locations with risk values.
The quality of a path is measured by the cumulative risk associated with the path.

38

Problem Formulation
Path P: an acyclic sequence (L1, L2, ..., Ln) where L1 is the initial location of personnel, Ln is a target or exit point, and each Li is adjacent to Li+1 in the graph.
Determine safe paths that maximize the path quality Q(P), defined as
Q(P) = Σ_{i=1}^{n} log(1 − risk(Li))
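The quality measure can be evaluated directly. Note that Q(P) ≤ 0, with 0 for a risk-free path, so "maximize" means pushing Q(P) toward zero; a minimal sketch:

```python
import math

def path_quality(risks):
    """Q(P) = sum of log(1 - risk(Li)) over the path's locations.
    Assuming independent location risks, this equals the log of the
    probability of traversing every location safely."""
    return sum(math.log(1.0 - r) for r in risks)
```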

39

Modeling Risks
We define risk as the probability of occurrence of a high level of damage to personnel traversing a path.
Risk values at different locations can be modeled by probability distributions.

40

Optimal path computation for situational visualization (in collaboration with Bill Ribarsky)

Green route: the optimal path

Red semitransparent circle: the range of risks

41

Hierarchical Path Planning in Dynamic Environments

42

Problem Formulation
Compute near-optimal paths from a source to a destination with minimum computational effort.
The battlefield is modeled as a graph where the nodes represent different geographical locations.
Quality measure: the cumulative risk associated with the path.

43

Why Hierarchical Path Planning?
Non-hierarchical approaches such as Dijkstra's algorithm are computationally very expensive for graphs with a large number of nodes.
Hierarchical approaches:
Solve the path planning problem in a hierarchical graph with a smaller number of nodes
Minimize the computational effort, which is critical in real-time applications such as battlefield path planning

44

Our Approach
Partition the original graph into subgraphs and compute representative risks for each subgraph.
Higher-level path: compute a path in the hierarchical graph where each node represents a subgraph.
Lower-level path: compute actual paths within each subgraph. The final path is a concatenation of the lower-level paths in the different subgraphs.
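The two-level idea can be sketched as follows, under stated assumptions: edge costs are taken to be −log(1 − risk) so that the cheapest path maximizes Q(P), and a subgraph's representative cost is the mean of its internal edge costs. These are illustrative choices; the slides do not give HIPLA's exact estimator or its boundary-node handling.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src, dst):
    """Cheapest path in adj = {node: [(neighbor, cost), ...]}."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                       # reconstruct and return the path
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    raise ValueError("no path")

def hierarchical_path(adj, partition, src, dst):
    """Two-level planning sketch. partition maps each node to a subgraph id."""
    # Representative cost of each subgraph: mean of its internal edge costs.
    cost_sum, cost_cnt = defaultdict(float), defaultdict(int)
    super_edges = defaultdict(set)
    for u, nbrs in adj.items():
        for v, w in nbrs:
            if partition[u] == partition[v]:
                cost_sum[partition[u]] += w
                cost_cnt[partition[u]] += 1
            else:
                super_edges[partition[u]].add(partition[v])
    rep = {g: cost_sum[g] / max(cost_cnt[g], 1)
           for g in set(partition.values())}
    # Higher-level path over the subgraph-level graph.
    super_adj = {g: [(h, (rep[g] + rep[h]) / 2) for h in hs]
                 for g, hs in super_edges.items()}
    _, groups = dijkstra(super_adj, partition[src], partition[dst])
    # Lower-level path restricted to the chosen subgraphs.
    allowed = set(groups)
    sub = {u: [(v, w) for v, w in nbrs if partition[v] in allowed]
           for u, nbrs in adj.items() if partition[u] in allowed}
    return dijkstra(sub, src, dst)
```

The lower-level search only ever touches nodes inside the chosen subgraphs, which is where the computational savings over a flat Dijkstra search come from.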

45

Illustration of a Path Computed by HIPLA
[Figure legend: edges connecting boundary nodes, boundary nodes, source/destination nodes, sub-paths.]

46

Dynamic Scenario
Risk values associated with different locations change with time.
Example: a segment of the path may become more risky due to the occurrence of a new event such as an explosion.
Problem: compute a new path from the current location of the personnel to the destination node.

47

Our Solution for the Dynamic Path Planning Problem
Re-estimate the representative risk values only for subgraphs whose risk values have changed.
Refine the current path by recomputing a new path from the current node to the destination, bypassing the subgraphs whose risk values have increased.
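The selective re-estimation step above can be sketched as follows; the mean is an illustrative choice of representative statistic, since the slides do not specify HIPLA's estimator:

```python
def update_representative_risks(rep, node_risk, partition, changed_nodes):
    """Re-estimate the representative (here: mean) risk only for subgraphs
    containing a changed node, leaving every other subgraph untouched.
    rep: {subgraph_id: risk}; node_risk: {node: risk};
    partition: {node: subgraph_id}; changed_nodes: nodes whose risk changed."""
    touched = {partition[n] for n in changed_nodes}
    groups = {}
    for node, group in partition.items():
        if group in touched:
            groups.setdefault(group, []).append(node_risk[node])
    new_rep = dict(rep)
    for group, risks in groups.items():
        new_rep[group] = sum(risks) / len(risks)
    return new_rep
```

Only the touched subgraphs are rescanned, so the update cost scales with the size of the affected region rather than the whole graph.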

48

An Illustration of the Dynamic Scenario
[Figure legend: current location, current path, new path, source/destination nodes.]

49

Simulation Results
We compared HIPLA with two other well-known path planning algorithms: the hierarchical shortest paths algorithm (SPAH) [Jung et al., IEEE Trans. KDE, 2002] and Dijkstra's algorithm with pruning (DP) [Wagner et al., Journal of Experimental Algorithmics, 2005].
HIPLA obtains near-optimal solutions (worst-case quality penalty within 5%) with much less computational effort than DP and SPAH.
* USA road map available at: http://www.cs.princeton.edu/courses/spring05/cos226/assignments/map/

50

Quality Degradation of HIPLA Compared to the Optimal Solution

Number of nodes | Number of edges | Percent quality degradation
4,900           | 34,300          | 2.12
10,000          | 80,000          | 2.34
87,575*         | 121,961         | 1.89

51

Comparison of Computational Times of HIPLA, SPAH and DP
[Chart: computational times (seconds) of HIPLA, SPAH and DP on various random graphs, vs. number of nodes.]
[Chart: path re-computation times (seconds) of HIPLA, SPAH and DP on random graphs of 900 to 19,600 nodes.]
DP has a high preprocessing cost (a few hours).

52

Preprocessing: Performance Comparison of HIPLA and SPAH
[Chart: preprocessing costs (computational time in seconds) of HIPLA and SPAH on random graphs of 900 to 19,600 nodes.]

53

Summary
Efficient path planning algorithms for risk minimization, with near-optimal solutions
Proposed a new hierarchical path planning algorithm for fast computation of near-optimal paths in dynamic environments

54

Outline
Introduction
Main overall themes
Results since last review

Scenario Recognition: Audio Visual Sensor Fusion

Path Planning for dynamic military applications

Concluding remarks

55

Main Contributions
Target Tracking
Decision making with aging observations in temporal Bayesian networks
Outdoor video tracking using multiple cameras
Temporal sensor staggering
Scenario Recognition
Multimodal sensor fusion; event recognition and scenario classification with audio information
Detection of unusual activities from video image sequences
Model for enhanced scene description and inference

56

Main Contributions
Decision Making in Dynamic Environments
New algorithms for path planning in practical military applications
Formulation of path planning as a multi-objective optimization problem and proposal of a new multi-objective evolutionary algorithm
Development of a time-efficient hierarchical dynamic path planning algorithm

57

Tech Transitions
This MURI resulted in several grants, contracts and cooperative agreements, including the following:
Army Research Laboratory (cooperative research agreement in the area of personnel detection based on the work on heterogeneous sensor fusion; POC: Dr. T. Damarla)
Air Force Office of Scientific Research (grant in the area of information fusion and information exploitation; POC: Dr. A. Magnus)
Office of Naval Research / Oak Ridge National Laboratory (cooperative research agreement in the area of multi-domain networks for detection of nuclear radiation and for perimeter surveillance; POC: Dr. N. Rao)

58

Tech Transitions
ANDRO Computational Solutions received several SBIR projects in the areas of image registration and sensor fusion for missile tracking and recognition from AFRL and the Missile Defense Agency; POC: Mr. A. Drozd
Sensis Corp. is incorporating the temporal Bayesian network model in their situational awareness engine; POC: Mr. A. Biss
The Syracuse Center of Excellence on Environment and Energy Systems (sponsored by the State of NY) is incorporating many of the concepts developed in this MURI in the design of its fully instrumented headquarters building

59

Students
Ruixin Niu – Post-Doctoral Research Associate
Long Zuo – Ph.D.
Nojeong Heo – Ph.D.
Ramesh Rajagopalan – MS, Ph.D.
Deepak Devicharan – MS
Qi Cheng – MS
Ersin Elbasi – MS
Jie Yang – MS
A. Hasbun – MS

60

Honors and Awards
Best paper award for R. Niu, P. Varshney, M.H. Moore, and D. Klamer, "Decision Fusion in a Wireless Sensor Network with a Large Number of Sensors," at the Seventh International Conference on Information Fusion, Stockholm, Sweden, June 2004.
IEEE Distinguished Lecturer for the AES Society; has lectured at the Rochester, Long Island, Syracuse, Waterloo and Atlanta Sections.
Colloquium speaker at MIT, UTRC, NIST, UTS (Sydney, Australia), Raytheon, ITT, and CMU, among other places.
Plenary lectures at the National Systems Conference in India and the Passive Covert Radar Conference.

61

Publications: Books
G.L. Foresti, C.S. Regazzoni and P.K. Varshney (Eds.), Multisensor Surveillance Systems: The Fusion Perspective, Kluwer Academic Press, 2003.

62

Publications: Journal Papers
Q. Zhang and P.K. Varshney, "Decentralized M-ary Detection via Hierarchical Binary Decision Fusion," Information Fusion, vol. 2, pp. 3-16, March 2001.
Q. Zhang, P.K. Varshney and R.D. Wesel, "Optimal Bi-level Quantization of i.i.d. Sensor Observations for Binary Hypothesis Testing," IEEE Trans. on Information Theory, vol. 48, pp. 2105-2111, July 2002.
N. Heo and P.K. Varshney, "Energy-Efficient Deployment of Intelligent Mobile Sensor Networks," IEEE Trans. on Systems, Man, and Cybernetics, Part A, vol. 35, no. 1, pp. 78-92, January 2005.
H. Chen, S. Lee, R.M. Rao, M.A. Slamani and P.K. Varshney, "Imaging for Concealed Weapon Detection," IEEE Signal Processing Magazine, vol. 22, no. 2, pp. 52-61, March 2005.

63

Publications: Journal Papers
R. Niu, P. Varshney, K. Mehrotra and C. Mohan, "Temporally Staggered Sensors in Multi-Sensor Target Tracking Systems," IEEE Transactions on Aerospace and Electronic Systems, pp. 794-808, July 2005.
Q. Cheng, P.K. Varshney, K.G. Mehrotra and C.K. Mohan, "Bandwidth Management in Distributed Sequential Detection," IEEE Trans. Inform. Theory, vol. 51, no. 8, pp. 2954-2961, Aug. 2005.
R. Niu and P. Varshney, "Distributed Detection and Fusion in a Large Wireless Sensor Network of Random Size," EURASIP Journal on Wireless Communications and Networking, pp. 462-472, September 2005.
E. Elbasi, L. Zuo, K.G. Mehrotra, C. Mohan and P.K. Varshney, "Control Charts Approach for Scenario Recognition in Video Sequences," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 13, no. 3, pp. 303-310, 2005.

64

Publications: Journal Papers
R. Niu and P. Varshney, "Target Location Estimation in Sensor Networks with Quantized Data," to appear in IEEE Transactions on Signal Processing.
R. Niu, P. Varshney, and Q. Cheng, "Distributed Detection in a Wireless Sensor Network with a Large Number of Sensors," to appear in International Journal on Information Fusion.
C.K. Mohan, K.G. Mehrotra, P.K. Varshney and J. Yang, "Temporal Uncertainty Reasoning Networks for Evidence Fusion with Applications to Object Detection and Tracking," to appear in International Journal of Information Fusion.
R. Niu, B. Chen and P. Varshney, "Fusion of Decisions Transmitted over Rayleigh Fading Channels in Wireless Sensor Networks," to appear in IEEE Transactions on Signal Processing.

65

Publications: Conference Proceedings
C.K. Mohan, K.G. Mehrotra and P.K. Varshney, "Temporal Update Mechanisms for Decision Making with Aging Observations in Probabilistic Networks," Proc. AAAI Fall Symposium, Cape Cod, MA, Nov. 2001.
R. Niu, P.K. Varshney, K.G. Mehrotra and C.K. Mohan, "Temporal Fusion in Multi-Sensor Target Tracking Systems," Proc. Fifth International Conference on Information Fusion, Annapolis, MD, July 2002.
S.K. Lodha, N.M. Faaland, A.P. Charaniya, P. Varshney, K. Mehrotra and C. Mohan, "Uncertainty Visualization of Probabilistic Particle Movement," Proc. IASTED Conference on Computer Graphics and Imaging, August 2002.

66

Publications: Conference Proceedings
Q. Cheng, P.K. Varshney, K.G. Mehrotra and C.K. Mohan, "Optimal Bandwidth Assignment for Distributed Sequential Detection," Proc. Fifth International Conference on Information Fusion, Annapolis, MD, July 2002.
C.K. Mohan, K. Mehrotra and P. Varshney, "Temporal Uncertainty Processing," Fusion'02 Workshop, Utica, NY, July 2002.
R. Rajagopalan, C.K. Mohan, K.G. Mehrotra and P.K. Varshney, "Evolutionary Multi-objective Crowding Algorithm for Path Computations," Proc. Fifth International Conference on Knowledge Based Computer Systems, Hyderabad, India, December 2004.

67

Publications: Conference Proceedings
R. Niu and P.K. Varshney, "Target Location Estimation in Wireless Sensor Networks Using Binary Data," Proc. 38th Annual Conference on Information Sciences and Systems, Princeton, NJ, March 2004.
R. Rajagopalan, P.K. Varshney, C.K. Mohan and K.G. Mehrotra, "Sensor Placement for Energy Efficient Target Detection in Wireless Sensor Networks: A Multi-objective Optimization Approach," Proc. 39th Annual Conference on Information Sciences and Systems, Baltimore, MD, March 2005.
D. Devicharan, K. Mehrotra, P.K. Varshney, C.K. Mohan and L. Zuo, "Scenario Recognition with Audio-Visual Sensor Fusion," Proc. SPIE Defense and Security Symposium, Orlando, FL, March 2005.

68

Publications: Conference Proceedings
R. Rajagopalan, C.K. Mohan, P.K. Varshney and K. Mehrotra, "Multi-objective Mobile Agent Routing in Wireless Sensor Networks," Proc. IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, April 2005.
E. Elbasi, L. Zuo, K. Mehrotra, C.K. Mohan and P. Varshney, "Control Charts Approach for Scenario Recognition," Proc. Turkish Artificial Intelligence and Neural Networks Symp., June 2004.
R. Rajagopalan, C.K. Mohan, K. Mehrotra and P.K. Varshney, "An Evolutionary Multi-objective Crowding Algorithm (EMOCA): Benchmark Test Function Results," 2nd Indian International Conference on Artificial Intelligence, Pune, India, December 2005.

69

Publications: Conference Proceedings
P.K. Varshney and I.L. Coman, "Distributed Multi-Sensor Surveillance: Issues and Recent Advances," Proc. 2nd European Workshop on Advanced Video-Based Surveillance Systems, Kingston, UK, Sept. 2001.
C. Regazzoni and P.K. Varshney, "Multisensor Surveillance Systems Based on Image and Video Data," Proc. IEEE Conf. on Image Processing, Rochester, NY, Sept. 2002.
J. Yang, C. Mohan, K. Mehrotra and P. Varshney, "A Tool for Belief Updating over Time in Bayesian Networks," Proc. 5th Int. Conf. on Tools for A.I., Washington, DC, Nov. 2002, pp. 284-289.

70

Publications: Conference Proceedings
N. Heo and P.K. Varshney, "A Distributed Self Spreading Algorithm for Mobile Wireless Sensor Networks," Proc. IEEE Wireless Communications and Networking Conference (WCNC 2003), March 2003.
R. Niu, P. Varshney, K. Mehrotra and C. Mohan, "Sensor Staggering in Multi-Sensor Target Tracking Systems," Proc. 2003 IEEE Radar Conference, Huntsville, AL, May 2003.
L. Snidaro, R. Niu, P. Varshney and G.L. Foresti, "Automatic Camera Selection and Fusion for Outdoor Surveillance under Changing Weather Conditions," Proc. 2003 IEEE International Conference on Advanced Video and Signal Based Surveillance, Miami, FL, July 2003.
R.M. Rao, H. Chen, M.A. Slamani and P.K. Varshney, "Imaging for Concealed Weapon Detection," International Conference on Advanced Technologies for Homeland Security, University of Connecticut, Storrs, CT, Sept. 25-26, 2003.

71

Publications: Conference Proceedings
N. Heo and P.K. Varshney, "An Intelligent Deployment and Clustering Algorithm for a Distributed Mobile Sensor Network," Proc. 2003 IEEE International Conference on Systems, Man & Cybernetics, Oct. 2003.
R. Niu and P. Varshney, "Sampling Schemes for Sequential Detection in Colored Noise," Proc. 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, Canada, May 2004.
L. Snidaro, R. Niu, P. Varshney and G.L. Foresti, "Sensor Fusion for Video Surveillance," Proc. Seventh International Conference on Information Fusion, Stockholm, Sweden, June 2004.

72

Publications: Conference Proceedings
R. Niu, P. Varshney, M.H. Moore and D. Klamer, "Decision Fusion in a Wireless Sensor Network with a Large Number of Sensors," Proc. Seventh International Conference on Information Fusion, Stockholm, Sweden, June 2004.
M. Xu, R. Niu and P. Varshney, "Detection and Tracking of Moving Objects in Image Sequences with Varying Illumination," Proc. 2004 IEEE International Conference on Image Processing, Singapore, October 2004.
P.K. Varshney, H. Chen and R.M. Rao, "On Signal/Image Processing for Concealed Weapons Detection from Stand-off Range," invited paper, Proc. SPIE Defense & Security Symposium, pp. 93-97, Orlando, FL, March 29-31, 2005.
X. Zhang, T.U. St Julien, R. Rajagopalan, W. Ribarsky, P. Varshney, C.K. Mohan and K. Mehrotra, "An Integrated Path Engine for Mobile Situational Visualization," Applied Vis Conference, Asheville, NC, April 2005.

73

Publications: Conference Proceedings
R. Niu and P. Varshney, "Decision Fusion in a Wireless Sensor Network with a Random Number of Sensors," Proc. 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, PA, March 2005.
R. Rajagopalan, P.K. Varshney, K.G. Mehrotra and C.K. Mohan, "Fault Tolerant Mobile Agent Routing in Sensor Networks: A Multi-objective Optimization Approach," Proc. 2nd IEEE Upstate NY Workshop on Comm. and Networking, Rochester, NY, November 2005.