
    Optimal Placement of Multiple Visual Sensors

    using Simulation of Pedestrian Movement

    Yunyoung Nam and Sangjin Hong

Xeron Healthcare Corp.

Daerung Post Tower III, 410 182-4 Guro-dong, Guro-gu, Seoul, Korea

Email: [email protected]

Department of Electrical and Computer Engineering,

Stony Brook University-SUNY, Stony Brook, NY, USA

Email: [email protected]

Abstract: This paper presents an optimal camera placement method that analyzes a spatial-temporal model and calculates priorities of spaces using simulation of pedestrian movement. In order to cover the space efficiently, we performed an agent-based simulation based on classification of space and pattern analysis of moving people. We have developed an agent-based camera placement method considering camera performance and space utility extracted from a path-finding algorithm. We demonstrate that the method not only determines the optimal number of cameras, but also coordinates the positions and orientations of the cameras while considering the installation costs. To validate the method, we show simulation results in a specific space.

    I. INTRODUCTION

Traditional video surveillance systems using multiple cameras are used to gather data and to identify clues after an incident has taken place, rather than to detect an event requiring attention as it occurs. However, monitoring a large-scale coverage area requires hundreds of cameras and sensors, which can bring the cost of a surveillance system up to millions of dollars. Most surveillance systems require the layout of cameras to assure a minimum level of image quality or image resolution and to have as much coverage as possible within a pre-defined region, with an acceptable level of quality-of-service and with minimum setup cost.

Therefore, camera placement is one of the most important issues in developing an effective surveillance system that maximizes the observability of the motions taking place. This paper proposes a new camera placement method to optimize the views for providing the highest-resolution images of objects under the circumstances of a limited number of mounted cameras. To efficiently calculate the camera placement with certain task-specific constraints and minimal camera setup cost, assumptions are defined according to the capabilities of real-world cameras. In this paper, the method employs several assumptions about these capabilities that make the algorithms suitable for most real-world computer vision applications: limited field of view, finite depth of field, and limited angle of the cameras.

This research is supported by the International Collaborative R&D Program of the Ministry of Knowledge Economy (MKE), the Korean government, as a result of the "Development of Security Threat Control System with Multi-Sensor Integration and Image Analysis" project, 2010-TD-300802-002.

    The proposed method focuses on optimizing the aggregate

    path observability as a whole, based on a probabilistic frame-

    work, and calculates the coverage and cost constraints. This

    framework tries to optimize coverage area and to minimize

    overlapping views. The method used in this paper assumes

that the surveillance systems operate on a set of fixed cameras to monitor moving subjects.

    The remainder of this paper is organized as follows. Section

    II addresses problems and describes related work. Section

III presents models for a space and an agent. Section IV

    presents the automatic camera placement method to determine

    appropriate camera positions and an appropriate number of

    cameras. Section V shows the details of simulation results.

    Finally, Section VI concludes this paper.

II. PROBLEM DESCRIPTION AND RELATED WORK

    Most of the video surveillance camera placement ap-

    proaches that appear in the literature focus on space coverage

and sensor deployment. The sensor deployment problem is closely related to the Art Gallery Problem (AGP), which determines the minimum number of guards required to cover the

    interior of an art gallery [1] and maximizes camera coverage

    of an area, where the camera fields of view do not overlap.

    The AGP has been solved optimally in two dimensions and

    shown to be NP-hard in the three-dimensional case. Several

    variants of AGP have been studied in the literature, including

    mobile guards, exterior visibility, and polygons with holes.

    Yao and Allen [2] formulated the problem of sensor place-

    ment to satisfy feature detectability constraints as an un-

    constrained optimization problem, and applied tree-annealing

    to compute optimal camera viewpoints in the presence of

noise. Erdem and Sclaroff [3] proposed a camera placement algorithm based on a binary optimization technique. Only

polygonal spaces are considered, which are represented as occupancy grids. They converted such a continuous domain into a discrete domain and developed a 0-1 integer programming formulation

    for solving the optimal locations of cameras such that all

    discrete grid points in the specific areas/lanes are monitored.

An effort [4] was made to tackle the problem of task-specific camera placement, in which the authors optimize camera placement to maximize observability of the set of

    camera placement to maximize observability of the set of

    Workshop on Computing, Networking and Communications

978-1-4673-0009-4/12/$26.00 ©2012 IEEE


    actions performed in a defined area. Yao et al. [5] presented a

    camera placement algorithm to optimally preserve overlapped

    FOVs and minimize the installation cost with respect to a

    maximization of the quality of the recorded data.

    In the previous work [6], we presented a camera placement

    approach to minimize occlusion for effective object detection

and tracking. In [7], the authors addressed grid coverage problems and sensor deployment with sensors sensing events that occur within the sensing range of the sensor. In this approach, space was represented as a regular grid. In [8], the authors developed and

    presented a framework that had a novel approach for deter-

    mining the optimal number of sensors, along with locating

    and setting their orientational sensor-specific parameters on

    a synthetically generated 3-D terrain with multiple objectives.

    Their solution approach relies on the rational tradeoff between

    three conflicting objectives that maximize the coverage area

    while maintaining the maximum stealth, and minimize the

    total acquisition cost of deploying the sensors.

    In this paper, we explain several models in agent-based

    pedestrian simulations based on a space model as a two-

dimensional grid map. To determine appropriate camera positions and an appropriate number of cameras to be installed in

    view of installation cost, the automatic camera placement is

accomplished in four steps: space modeling including priority space extraction, agent modeling, trajectory generation with priority area estimation, and placement location selection.

    III. SIMULATION MODEL

    A. Space Model

    Space design is constrained by the performance and applied

    algorithms of a camera in constructing a conventional video

    surveillance system. The space modeling module models a

specified space as a 2D grid map. The space priority extraction module expresses the space priority of each cell of the grid

    map in a numerical value based on an amount of movement of

    an agent in a first area of the grid map and a probability that

    the agent will move from the first area to a second area. The

space model consists of three layers: a structure layer, an area layer, and a priority layer. The structure layer contains

    accessible areas and inaccessible areas in a space in which

    vision sensors are to be placed. This information is applied to

    a path-finding algorithm.

    The area layer contains information about the amount of

    movement of a person in a specified area of a 2D grid

    map and information about a probability that the person will

move from a starting point to a destination. The area layer contains the area attributes needed to extract a priority area. The

    area attribute includes location (x, y), range r, the amount

    of movement of a person, and probability that the person

    will move from a starting point to a destination. A movement

probability refers to the probability that a person arrives at a second area (a destination) from a first area (a starting point) on a 2D grid map. The probability is calculated based on the (n − 1) destinations, excluding the current destination at which the person has arrived, among a total number n of destinations.
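One way such movement probabilities could be derived from observed transitions is sketched below; this is an illustrative sketch only, and the area names, counts, and helper function are invented rather than taken from the paper.

```python
# Hypothetical sketch of the movement-probability attribute: given observed
# (start, destination) transitions between named areas, estimate the
# probability that a person starting in one area arrives at each destination.
from collections import Counter, defaultdict

def movement_probabilities(transitions):
    """transitions: list of (start_area, destination_area) observations."""
    counts = defaultdict(Counter)
    for start, dest in transitions:
        counts[start][dest] += 1
    probs = {}
    for start, dests in counts.items():
        total = sum(dests.values())
        probs[start] = {d: c / total for d, c in dests.items()}
    return probs

# Invented observations: four people leaving the lobby.
observed = [("lobby", "office"), ("lobby", "office"),
            ("lobby", "exit"), ("lobby", "cafeteria")]
print(movement_probabilities(observed)["lobby"])
```

Each probability is simply the fraction of observed departures from a start area that ended at a given destination.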

    The priority layer presents space priorities that are de-

    termined based on a path-finding of the agent. The space

    priorities are updated based on the movement pattern of the

    agent and used to determine locations at which cameras are

    to be placed.

    B. Agent Model

Agent trajectories are simulated based on the assumption that the agents are people. The artificial intelligence-based path-finding algorithm considers neither a local field of view, by assuming that the agent has learned the regional geographical information, nor the cost of direction changes.

    In order to minimize the difference between paths actually

    traveled by people and selected by an agent, the path-finding

    algorithm should be improved by estimating available routes

    around a route that is found using a path-finding algorithm.

    First of all, an agent examines all accessible areas to move to

    all destinations and performs an inference-based path-finding

simulation. In this paper, eight different directions in a 2D grid map are considered to apply the A* algorithm [9] to the improved path-finding algorithm using a heuristic evaluation function as follows:

    F = G + H, (1)

    where G is the total cost of movement from a start node to

    a current node and H is the total cost of movement from the

    current node to a goal node, which ignores obstacles between

    the two nodes.

    Therefore, F is a final criterion used in the A* algorithm

to determine a priority in path-finding. The cost of movement is set to 1 for the up, down, right, and left directions and is set to 2 for a diagonal direction. In addition, the cost of movement

    between the start node and the current node is calculated using

    the Manhattan method [10].
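The search described above can be sketched as a standard A* on an 8-connected grid. This is a minimal sketch under the stated costs (1 for axis-aligned moves, 2 for diagonal moves) with a Manhattan-distance estimate; the grid encoding and helper names are illustrative, not the authors' implementation.

```python
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = accessible, 1 = obstacle. Returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    # (dr, dc, step cost): 1 for straight moves, 2 for diagonal moves.
    moves = [(-1, 0, 1), (1, 0, 1), (0, -1, 1), (0, 1, 1),
             (-1, -1, 2), (-1, 1, 2), (1, -1, 2), (1, 1, 2)]
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan H
    open_heap = [(h(start), 0, start)]   # entries are (F = G + H, G, node)
    g_best = {start: 0}
    parent = {}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in parent:       # walk parents back to the start
                node = parent[node]
                path.append(node)
            return path[::-1]
        if g > g_best.get(node, float("inf")):
            continue                    # stale heap entry
        for dr, dc, step in moves:
            nr, nc = node[0] + dr, node[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + step
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    parent[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

The heap is ordered by F = G + H, matching equation (1)'s priority criterion.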

    To reduce the difference between trajectories actually trav-

    eled by people and found using the A* algorithm, a path

    expansion algorithm is applied to the A* algorithm. The path

expansion algorithm infers areas where people can move within an accessible space and updates the priorities in the priority area layer.
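The path expansion step might be sketched as a neighborhood update around each path cell, as below. The paper does not specify this procedure; the expansion radius and decay weight here are invented assumptions for illustration.

```python
# Hedged sketch of path expansion: around each cell of an A* path, raise the
# priority of nearby accessible cells to approximate the corridor people
# actually use. radius and weight are illustrative choices, not from the paper.
def expand_path(path, grid, priority, radius=1, weight=0.5):
    """path: list of (row, col); grid: 0 = accessible; priority: dict to update."""
    rows, cols = len(grid), len(grid[0])
    for r, c in path:
        priority[(r, c)] = priority.get((r, c), 0.0) + 1.0   # on-path cell
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0:
                    priority[(nr, nc)] = priority.get((nr, nc), 0.0) + weight
    return priority

grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
prio = expand_path([(0, 0), (0, 1), (0, 2)], grid, {})
print(prio[(0, 1)])
```

Cells on the path accumulate full weight; accessible neighbors accumulate the smaller expansion weight, while obstacle cells are skipped.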

    IV. CAMERA PLACEMENT

    A. Camera Coverage Model

    The position of one camera is represented by X and Y

    integer coordinates on a 2D grid map, and the camera has

    a look direction angle of maximum 360 degrees at a fixed

    position. Figure 1 shows the FOV model of a camera that has

    a triangular structure which is defined by a viewing distance

d, an angular range a, and a look direction angle θ of the camera at a camera position P = (x_c, y_c). Thus, the FOV is denoted by

FOV = {(x, y)}, (2)

where x_c − tan(a/2)·d ≤ x ≤ x_c + tan(a/2)·d and y_c − tan(a/2)·d ≤ y ≤ y_c + tan(a/2)·d. After the camera position is translated to the origin (0, 0), the new FOV′ is denoted by


    Fig. 1. FOV model of a camera

FOV′ = {(x′, y′)} (3a)

= {(x − x_c, y − y_c)}. (3b)

After the triangle is rotated counterclockwise by θ degrees about the coordinate origin, the new FOV″ is as follows:

FOV″ = {(x″, y″)} (4a)

= {(x′ cos θ − y′ sin θ, y′ cos θ + x′ sin θ)}. (4b)

Thus,

x″ ≤ d, (5)

−tan(a/2)·x″ ≤ y″ ≤ tan(a/2)·x″. (6)

Equation (7) is calculated by using equations (2), (3), (4), and (5):

(x − x_c) cos θ − (y − y_c) sin θ ≤ d. (7)

Equation (8) is calculated by using equations (2), (3), (4), and (6):

−tan(a/2)·{(x − x_c) cos θ − (y − y_c) sin θ} ≤ (y − y_c) cos θ + (x − x_c) sin θ ≤ tan(a/2)·{(x − x_c) cos θ − (y − y_c) sin θ}. (8)
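Equations (7) and (8) amount to a point-in-FOV test: translate a point into the camera frame, rotate by the look direction, and check the depth and angular bounds. A minimal sketch, assuming the paper's rotation sign convention and illustrative parameter values:

```python
from math import cos, sin, tan, radians

def in_fov(x, y, xc, yc, theta_deg, a_deg, d):
    """True if point (x, y) lies inside the triangular FOV of a camera at
    (xc, yc) with look direction theta_deg, angular range a_deg, depth d."""
    theta = radians(theta_deg)
    half = radians(a_deg) / 2.0
    # Rotated coordinates (equation (4) applied to the translated point).
    xr = (x - xc) * cos(theta) - (y - yc) * sin(theta)
    yr = (y - yc) * cos(theta) + (x - xc) * sin(theta)
    # Equation (7) gives the depth bound; equation (8) the angular bounds.
    # xr >= 0 is implied by (8) and made explicit here for clarity.
    return 0 <= xr <= d and -tan(half) * xr <= yr <= tan(half) * xr

# Camera at the origin, look direction 0 degrees, 60-degree range, depth 10.
print(in_fov(5, 0, 0, 0, 0, 60, 10))   # point straight ahead: inside
print(in_fov(5, 4, 0, 0, 0, 60, 10))   # beyond the 30-degree half-angle
```

In a placement loop this test would be evaluated for every grid cell against every candidate camera pose.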

    In order to calculate the coverage of a camera considering

    the detection ratio of moving objects, the space is divided

    into accessible areas and inaccessible areas such as walls and

    obstacles in the FOV of the camera. The utility of an area

    is measured by the coverage ratio of FOVs. Thus, the utility

of an accessible area Uacc is calculated as the ratio of the FOV to the

    space as follows:

Uacc(P) = Racc(V) / Racc(S), (9)

    where P is a camera position, V is a FOV, S is a space, and

Racc is the ratio of accessible areas to the overall space.

    The utility of paths is calculated by the number of paths

    that are passed by agents. Let Rpath be the path ratio of grid

    cells of the area to overall grid cells on paths. Thus, the utility

of paths extracted by the A* algorithm, Upath, is calculated by

Upath(P) = Ppath · Rpath(V) / Rpath(S), (10)

    where Ppath is the path probability and is determined by the

    probability extracted from area layers.

    Similarly, the utility of expansion paths Uexp is calculated

    by Upath and the number of expansion paths as follows:

Uexp(P) = Upath(P) / N, (11)

    where N is the number of expansion paths.

The utility of inaccessible areas Uinacc should be considered to calculate the coverage of a camera. After determining the

    inaccessible areas such as walls and obstacles in the FOV of

    the camera, Uinacc is calculated by

Uinacc(P) = Rinacc(V) / Rinacc(S). (12)

Finally, the utility U(P) of each camera position is calculated from the four priorities as follows:

U(P) = Uacc(P) + Upath(P) + Uexp(P) − Uinacc(P). (13)

B. Camera Placement Method

The utility of visibility at each location on the 2D grid map varies depending upon the space utility values of cells in an FOV

    of the camera with the performance as well as the cost of the

    camera. The space utility of each cell on the 2D grid map

    is expressed in a numerical value based on the amount of

    movement of an agent in a specified area of the 2D grid map

and a probability that the agent moves from a starting point

    to a destination.
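The utility combination of equations (9) through (13) can be sketched numerically. This is a toy sketch: the function name and the ratio values below are illustrative placeholders, not measured data from the paper.

```python
def utility(r_acc_v, r_acc_s, p_path, r_path_v, r_path_s,
            n_exp, r_inacc_v, r_inacc_s):
    """Combine the four priorities of a candidate camera position."""
    u_acc = r_acc_v / r_acc_s                   # equation (9)
    u_path = p_path * r_path_v / r_path_s       # equation (10)
    u_exp = u_path / n_exp                      # equation (11)
    u_inacc = r_inacc_v / r_inacc_s             # equation (12)
    return u_acc + u_path + u_exp - u_inacc     # equation (13)

# Example: a candidate position whose FOV covers 10% of the accessible area,
# 20% of the path cells (path probability 0.6, 4 expansion paths), and 5% of
# the inaccessible area, with each space total normalized to 1.0.
print(utility(0.10, 1.0, 0.6, 0.20, 1.0, 4, 0.05, 1.0))
```

Note that inaccessible coverage is subtracted, so positions whose FOVs are blocked by walls score lower.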

    In this paper, a greedy algorithm is used to select locations

    at which cameras are to be placed. Using the greedy algorithm,

    a priority area is extracted from the space priority contained in

    the area attribute of space modeling and path obtained using

    an agent path-finding algorithm. Then camera locations are

selected in the extracted priority area. Through the greedy strategy, all points on the space model are examined to find the point that has the highest utility in the FOV of a camera.

The utility of visibility is set to the maximum value that can be obtained at a specified point on the grid map among the sums of the space utility values of cells within a virtual FOV of a camera over all directions and angles. That is, the

    utility of visibility is calculated based on the maximum space

    utility coverage that is obtained at the position of a specified

    cell on a grid map.

    From all cells on the utility map of visibility, cells in which

    a camera is to be placed are selected in order of highest to

lowest utility of visibility of the camera. Then, a predetermined range around the position of a selected cell on the utility map of visibility is reconfigured upon the installation of the camera.

If the cameras' FOVs cover all space priorities updated on the priority layer of the input space, the placement of cameras is completed. However, the method based on space

    utility coverage does not ensure the optimal placement of the

    camera. Thus, we considered (1) a placement cost limit and

    (2) minimum coverage of the observable space to determine

    the optimal number of cameras to be placed. The placement


    Fig. 2. Experimental environments and a simulation application

    cost limit is the maximum cost when a maximum number of

    cameras is placed which does not exceed the placement cost

limit. The minimum space coverage is the ratio of the observable

    space to the overall space when a minimum number of cameras

    is placed which satisfies the minimum space utility coverage.

    The number of cameras to be placed may be calculated based

    on resources or the range of an area that can be monitored by

    a camera as summarized in Algorithm 1.

    Algorithm 1 Camera placement algorithm

    1) Create the utility map of visibility based on the greedy

    strategy by the cost of a camera and space coverage of

    the FOV of a camera in the entire map space.

    2) Select a point having a highest utility of visibility value

    on the utility map of visibility.

    3) Place the camera at the selected point and update a

    camera placement list.

    4) Terminate camera placement when it is determined that

    an optimal number of cameras has been placed.

5) Recalculate the utility of visibility around the position of the camera on the utility map of visibility.

    6) Return to step 2.
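The steps of Algorithm 1 can be sketched as a simple greedy loop. This is a simplified sketch: the utility map, camera cost, termination by cost limit, and the suppression radius used to "recalculate" utility around a placed camera are illustrative stand-ins for the quantities computed in Section IV-A.

```python
def greedy_placement(utility_map, camera_cost, cost_limit, suppress_radius=1):
    """utility_map: {(x, y): utility of visibility}. Greedily place cameras
    until the next camera would exceed the placement cost limit."""
    placed = []
    util = dict(utility_map)            # working copy of the utility map
    spent = 0
    while spent + camera_cost <= cost_limit and util:
        # Step 2: pick the cell with the highest remaining utility.
        best = max(util, key=util.get)
        if util[best] <= 0:
            break                       # nothing useful left to cover
        # Step 3: place the camera and update the placement list.
        placed.append(best)
        spent += camera_cost
        # Step 5 (simplified): zero out utility around the placed camera.
        bx, by = best
        for (x, y) in list(util):
            if abs(x - bx) <= suppress_radius and abs(y - by) <= suppress_radius:
                util[(x, y)] = 0
        # Step 6: loop back to step 2.
    return placed

umap = {(0, 0): 5.0, (0, 3): 4.0, (3, 0): 3.0, (1, 1): 4.5}
print(greedy_placement(umap, camera_cost=100, cost_limit=250))
# → [(0, 0), (0, 3)]: the (1, 1) cell is suppressed by the first camera,
#   and a third camera would exceed the cost limit.
```

Termination by minimum space coverage, the paper's other stopping criterion, would replace the cost check with a coverage check.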

    V. SIMULATION RESULTS

    A. Simulation Setup

In order to compare simulation results with experimental results on level terrain (2-D), the experiments were conducted with one top-down camera installed on the fourth floor. We used a layout of the building with a base length of 63 meters and a height of 40 meters. To input areas and regions, a graphical user interface was developed, as shown in Figure 2.

The utility of visibility varies depending upon the performance as well as the cost of the camera. In this paper, three types

    of cameras are used to evaluate the camera placement method.

    The viewing distance of the camera B is 1.5 times longer than

    that of the camera A. The available angle of the camera A is

    about 1.33 times greater than that of the camera B. Though

    the viewing distance of the camera C is 2 times longer than

    that of the camera A, the installation cost of camera C is 2.5

    times larger than that of camera A.

    (a) Trajectories extracted for 3 hours (b) Trajectories extracted for 6 hours

    Fig. 3. Priority layer with trajectories

    (a) Cost limit is 500 (b) Cost limit is 1000

    (c) Cost limit is 1200 (d) Cost limit is 1500

    Fig. 4. Camera positions that meet a camera installation cost limit set

Figures 3(a) and 3(b) show the priority layers with trajectories that are extracted from the structure layer and the

    area layer of the building for 3 hours and 6 hours, respectively.

    As time goes on, it is obvious that the extracted trajectories

    are represented by target motion paths taken through an area

    of interest as shown in Figure 3.

    B. Results

    The camera placement algorithm using the greedy algorithm

    calculates the utility of visibility as well as the utility of cost of

    each type of camera in each cell on a grid map. The utility map

of visibility is continuously updated until the optimal number of cameras to be placed is finally set.

Figure 4 shows camera placement positions and FOVs depending on each camera installation cost limit. In the figure,

    it is optimal to install 6 A-type cameras, 5 C-type cameras,

    15 A-type cameras, and 18 A-type cameras, when cost limits

    are 500, 1000, 1200, 1500, respectively.

    Figure 5 shows the space coverage until 20 cameras are

installed. As shown in Figure 5(a), as the number of installed C-type cameras increases, the coverage rate per camera becomes lower than that of the other types. The camera C is the most appropriate for installation when the number of cameras is less than 5.


Fig. 5. Space coverage for cameras A, B, and C versus the number of installed cameras (0-20): (a) space coverage, (b) space coverage rate per cost, and (c) accumulated space coverage rate.

When the number of cameras is more than 6 and less than 8, the camera C is the most appropriate for installation. The camera A is the most appropriate for installation when the number of cameras is more than 9. As shown in Figure 5(b), the camera A has the highest coverage rate per cost. Figure 5(c) shows the accumulated space utility coverage rate. When 20 cameras are installed, the coverage rates of cameras A, B, and C are 85.74%, 96.69%, and 99.87%, respectively. Although the coverage rate is 98.98% when 14 C-type cameras are installed, the cost of the camera C is the highest. Because cameras are selected in order of highest to lowest utility of visibility, Figure 5(c) shows the fastest rate of increase of coverage at first. However, the rate of increase of the coverage rate continues to decline. For example, the coverage gain of the camera C falls below 0.08% once more than 15 cameras are installed.

    Figure 6 shows the space utility coverage and the cost when

    each type of camera is installed with the cost limit or the

    minimum space coverage rate. When the cost limit is below

    810 or over 1200, the camera A is the most appropriate for

    installation as shown in Figure 6(a). The camera C is the most

    appropriate for installation, when the cost limit is over 810

    and below 1200. As shown in Figure 6(b), the camera C is

the most appropriate when the coverage rate is less than 54%. The camera A is the most appropriate when the coverage rate is more than 54%.

VI. CONCLUSIONS

We have presented a camera placement method with certain task-specific constraints and minimal camera setup cost, with assumptions defined according to the performance of real-world cameras.

    The camera placement method uses an agent which is modeled

    and implemented using the A* algorithm to estimate the

trajectories of moving people.

Fig. 6. Space utility coverage and cost for cameras A, B, and C: (a) space utility coverage versus the cost limit (500-1500) and (b) cost versus the coverage rate (20%-80%).

The camera placement method

    determines appropriate camera positions and an appropriate

    number of cameras to be installed in view of installation cost.

    The number of cameras to be placed has been calculated

    based on minimum camera placement cost and maximum

space coverage. We considered three types of cameras and the sum of the installation costs of the cameras. It is essential to deal with a combination of various types of cameras to determine appropriate camera positions and an appropriate number of cameras. In the future, we will improve our method to calculate the overall utility of visibility considering mixtures of various camera types.

    REFERENCES

[1] J. O'Rourke, "Art Gallery Theorems and Algorithms." New York, NY, USA: Oxford University Press, Inc., 1987.

[2] Y. Yao and P. Allen, "Computing robust viewpoints with multi-constraints using tree annealing," in IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, vol. 2, Oct. 1995, pp. 993-998.

[3] U. M. Erdem and S. Sclaroff, "Optimal placement of cameras in floorplans to satisfy task requirements and cost constraints," in Proc. of OMNIVIS Workshop, 2004.

[4] R. Bodor, A. Drenner, P. Schrater, and N. Papanikolopoulos, "Optimal camera placement for automated surveillance tasks," Journal of Intelligent & Robotic Systems, vol. 50, pp. 257-295, 2007.

[5] Y. Yao, C.-H. Chen, B. Abidi, D. Page, A. Koschan, and M. Abidi, "Sensor planning for automated and persistent object tracking with multiple cameras," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1-8.

[6] J. Ryu, Y. Nam, W.-D. Cho, and M. Stanacevic, "Camera placement for minimizing occlusion in object tracking systems," Journal of Ubiquitous Convergence Technology, vol. 3, no. 1, pp. 13-19, 2009.

[7] J. Wang and N. Zhong, "Efficient point coverage in wireless sensor networks," Journal of Combinatorial Optimization, vol. 11, pp. 291-304, 2006. [Online]. Available: http://dx.doi.org/10.1007/s10878-006-7909-z

[8] H. Topcuoglu, M. Ermis, and M. Sifyan, "Positioning and utilizing sensors on a 3-D terrain, part I - theory and modeling," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 41, no. 3, pp. 376-382, May 2011.

[9] P. Hart, N. Nilsson, and B. Raphael, "A formal basis for the heuristic determination of minimum cost paths," IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100-107, 1968.

[10] P. Ballard and F. Vacherand, "The Manhattan method: a fast Cartesian elevation map reconstruction from range data," in Proceedings of the IEEE International Conference on Robotics and Automation, May 1993, pp. 143-148.
