Now showing 1 - 5 of 5
  • Publication
    Constraint integration for efficient multiview pose estimation with self-occlusions
    (01-03-2008)
    Gupta, Abhinav; ; Davis, Larry S.
    Automatic initialization and tracking of human pose is an important task in visual surveillance. We present a part-based approach that incorporates a variety of constraints in a unified framework. These constraints include the kinematic constraints between parts that are physically connected to each other, the occlusion of one part by another and the high correlation between the appearance of certain parts, such as the arms. The location probability distribution of each part is determined by evaluating appropriate likelihood measures. The graphical (non-tree) structure representing the interdependencies between parts is utilized to "connect" such part distributions via nonparametric belief propagation. Methods are also developed to perform this optimization efficiently in the large space of pose configurations. © 2008 IEEE.
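    A minimal, hypothetical sketch of the message-passing step this abstract describes. The paper propagates continuous part distributions with nonparametric belief propagation; the sketch below instead discretizes each part's location onto a small set of candidate states, and the parts, edges (including the loop-inducing arm-to-arm edge) and potentials are placeholder values, not the paper's models.

      import numpy as np

      N_STATES = 64          # candidate locations per part (discretization is an assumption)
      PARTS = ["torso", "head", "upper_arm_l", "upper_arm_r"]
      # Kinematic links plus an appearance-correlation edge between the two
      # arms, which makes the graph loopy (non-tree).
      EDGES = [("torso", "head"), ("torso", "upper_arm_l"),
               ("torso", "upper_arm_r"), ("upper_arm_l", "upper_arm_r")]

      rng = np.random.default_rng(0)
      likelihood = {p: rng.random(N_STATES) for p in PARTS}            # stand-in image evidence
      pairwise = {e: rng.random((N_STATES, N_STATES)) for e in EDGES}  # stand-in constraints

      def neighbors(part):
          return [b if a == part else a for a, b in EDGES if part in (a, b)]

      # All directed messages start out uniform.
      messages = {(a, b): np.ones(N_STATES) for a, b in EDGES}
      messages.update({(b, a): np.ones(N_STATES) for a, b in EDGES})

      for _ in range(20):                            # fixed number of sweeps
          new_messages = {}
          for src, dst in messages:
              # Local evidence times all incoming messages except the one from dst.
              belief = likelihood[src].copy()
              for n in neighbors(src):
                  if n != dst:
                      belief *= messages[(n, src)]
              psi = pairwise[(src, dst)] if (src, dst) in pairwise else pairwise[(dst, src)].T
              msg = psi.T @ belief                   # marginalize over src states
              new_messages[(src, dst)] = msg / msg.sum()
          messages = new_messages

      # Approximate marginal (belief) for each part's location.
      for p in PARTS:
          belief = likelihood[p].copy()
          for n in neighbors(p):
              belief *= messages[(n, p)]
          print(p, "-> most likely state:", int(belief.argmax()))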
  • Publication
    A general method for sensor planning in multi-sensor systems: Extension to random occlusion
    (01-01-2008)
    ; Davis, Larry S.
    Systems utilizing multiple sensors are required in many domains. In this paper, we specifically concern ourselves with applications where dynamic objects appear randomly and the system is employed to obtain some user-specified characteristics of such objects. For such systems, we deal with the tasks of determining measures for evaluating their performance and of determining good sensor configurations that would maximize such measures for better system performance. We introduce a constraint in sensor planning that has not been addressed earlier: visibility in the presence of random occluding objects. Such occlusion causes random loss of object capture from certain sensors and necessitates the use of other sensors that have visibility of this object. Two techniques are developed to analyze such visibility constraints: a probabilistic approach to determine "average" visibility rates and a deterministic approach to address worst-case scenarios. Apart from this constraint, other important constraints to be considered include image resolution, field of view, capture orientation, and algorithmic constraints such as stereo matching and background appearance. Integration of such constraints is performed via the development of a probabilistic framework that allows one to reason about different occlusion events and integrates different multi-view capture and visibility constraints in a natural way. Integrating the resulting capture quality measure across the region of interest yields a measure of the effectiveness of a sensor configuration, and maximizing this measure yields sensor configurations that are best suited for a given scenario. The approach can be customized for use in many multi-sensor applications, and our contribution is especially significant for those that involve randomly occurring objects capable of occluding each other. These include security systems for surveillance in public places, industrial automation and traffic monitoring. Several examples illustrate such versatility by applying our approach to a diverse set of different and sometimes multiple system objectives. © 2007 Springer Science+Business Media, LLC.
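    A rough sketch of the "average visibility" idea under a simple random-occluder model: score a camera configuration by the probability that enough cameras have an unoccluded view of each point in the region of interest. The exponential line-of-sight blocking model, the occluder density and width values, and the two-camera (stereo) requirement are assumptions for illustration, not the formulation used in the paper.

      import itertools
      import math

      OCCLUDER_DENSITY = 0.05   # expected occluders per unit area (assumption)
      OCCLUDER_WIDTH = 0.5      # effective occluder width in metres (assumption)
      MIN_CAMERAS = 2           # e.g. stereo capture needs two unoccluded views

      def p_visible(camera, point):
          """Probability that the line of sight from camera to point is clear,
          modelling occluder centres as a 2-D Poisson process over a strip of
          length d and width OCCLUDER_WIDTH."""
          d = math.dist(camera, point)
          return math.exp(-OCCLUDER_DENSITY * OCCLUDER_WIDTH * d)

      def capture_quality(cameras, point):
          """Probability that at least MIN_CAMERAS cameras see the point."""
          probs = [p_visible(c, point) for c in cameras]
          quality = 0.0
          # Enumerate visible/blocked outcomes over the cameras.
          for outcome in itertools.product([True, False], repeat=len(probs)):
              if sum(outcome) >= MIN_CAMERAS:
                  term = 1.0
                  for p, seen in zip(probs, outcome):
                      term *= p if seen else (1.0 - p)
                  quality += term
          return quality

      def configuration_score(cameras, roi_points):
          """Average capture quality over the ROI: the configuration's measure."""
          return sum(capture_quality(cameras, pt) for pt in roi_points) / len(roi_points)

      # Compare two hypothetical two-camera placements over a 10 m x 10 m ROI grid.
      roi = [(x, y) for x in range(10) for y in range(10)]
      print("opposite corners:", round(configuration_score([(0, 0), (10, 10)], roi), 3))
      print("same side       :", round(configuration_score([(0, 0), (0, 10)], roi), 3))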
  • Publication
    Constructing task visibility intervals for video surveillance
    (01-12-2006)
    Lim, Ser Nam; Davis, Larry S.;
    Vision systems are increasingly being deployed to perform complex surveillance tasks. While improved algorithms are being developed to perform these tasks, it is also important that data suitable for these algorithms be acquired - a non-trivial task in a dynamic and crowded scene viewed by multiple PTZ cameras. In this paper, we describe a real-time multi-camera system that collects images and videos of moving objects in such scenes, subject to task constraints. The system constructs "task visibility intervals" that contain information about what can be sensed in future time intervals. Constructing these intervals requires prediction of future object motion and consideration of several factors such as object occlusion and camera control parameters. Such intervals can also be combined to form multi-task intervals, during which a single camera can collect videos suitable for multiple tasks simultaneously. Experimental results are provided to illustrate the system's capabilities in constructing such task visibility intervals, followed by scheduling them using a greedy algorithm. © Springer-Verlag 2006.
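    One simple way to realize the greedy scheduling step over task visibility intervals: each interval ties a task to a camera and a usable time window, and each camera greedily accepts non-overlapping intervals in order of increasing end time. The interval fields and the earliest-deadline-first criterion are assumptions and may differ from the scheduler used in the paper.

      from collections import defaultdict
      from dataclasses import dataclass

      @dataclass
      class TaskVisibilityInterval:
          task: str        # surveillance task this capture would serve
          camera: str      # camera able to perform the capture
          start: float     # earliest usable capture time (seconds)
          end: float       # latest usable capture time (seconds)

      def greedy_schedule(intervals):
          """Assign each camera a non-overlapping subset of its intervals."""
          schedule = defaultdict(list)
          busy_until = defaultdict(float)
          # Classic greedy choice: consider intervals by increasing end time.
          for tvi in sorted(intervals, key=lambda i: i.end):
              if tvi.start >= busy_until[tvi.camera]:
                  schedule[tvi.camera].append(tvi)
                  busy_until[tvi.camera] = tvi.end
          return dict(schedule)

      tvis = [
          TaskVisibilityInterval("face_capture_obj1", "ptz1", 0.0, 3.0),
          TaskVisibilityInterval("gait_capture_obj2", "ptz1", 2.0, 5.0),
          TaskVisibilityInterval("face_capture_obj2", "ptz2", 1.0, 4.0),
          TaskVisibilityInterval("face_capture_obj3", "ptz1", 4.0, 6.0),
      ]
      for cam, picked in greedy_schedule(tvis).items():
          print(cam, "->", [t.task for t in picked])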
  • Publication
    COST: An approach for camera selection and multi-object inference ordering in dynamic scenes
    (01-12-2007)
    Gupta, Abhinav; ; Davis, Larry S.
    Development of multiple-camera vision systems for the analysis of dynamic objects such as humans is challenging due to occlusions and to the similarity in appearance between a person and the background or other people, which we term visual "confusion". Since occlusion and confusion depend on the presence of other people in the scene, they lead to a dependency structure in which the resulting Bayesian network often contains loops. While approaches such as loopy belief propagation can be used for inference, they are computationally expensive and convergence is not guaranteed in many situations. We present a unified approach, COST, that reasons about such dependencies and yields an inference order for the people in a group along with a set of cameras to be used for inference on each person. Using the probabilistic distribution of the positions and appearances of people, COST performs visibility and confusion analysis for each part of each person and computes the amount of information that can be obtained with and without more accurate estimation of the positions of other people. We formulate an optimization problem that selects the set of cameras and inference dependencies for each person so as to minimize computational cost under given performance constraints. Results show the efficiency of COST in improving the performance of such systems and reducing the computational resources required. ©2007 IEEE.
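    A toy sketch of the COST-style trade-off between information gain and computational cost when choosing an inference order over people and a camera set per person. The information and cost numbers are invented, and the greedy "cheapest person first" rule is a simplification of the optimization described above; in particular, occlusion is reduced here to whether any earlier person has already been resolved.

      INFO_THRESHOLD = 1.0                              # required information per person (assumption)
      CAMERA_COST = {"cam1": 1.0, "cam2": 1.0, "cam3": 1.5}

      # info[person][camera] = (info if occluders are already resolved,
      #                         info if they are not)
      info = {
          "p1": {"cam1": (0.9, 0.4), "cam2": (0.6, 0.6), "cam3": (0.8, 0.3)},
          "p2": {"cam1": (0.7, 0.7), "cam2": (0.9, 0.5), "cam3": (0.4, 0.4)},
          "p3": {"cam1": (0.5, 0.2), "cam2": (0.5, 0.2), "cam3": (0.9, 0.9)},
      }

      def cheapest_camera_set(person, occluders_resolved):
          """Pick cameras by information-per-cost until the threshold is met."""
          idx = 0 if occluders_resolved else 1
          ranked = sorted(info[person].items(),
                          key=lambda kv: kv[1][idx] / CAMERA_COST[kv[0]],
                          reverse=True)
          chosen, total_info, total_cost = [], 0.0, 0.0
          for cam, vals in ranked:
              if total_info >= INFO_THRESHOLD:
                  break
              chosen.append(cam)
              total_info += vals[idx]
              total_cost += CAMERA_COST[cam]
          return chosen, total_cost

      def greedy_inference_order(people):
          order, resolved, remaining = [], set(), set(people)
          while remaining:
              # Infer next the person who is currently cheapest to resolve.
              best = min(remaining,
                         key=lambda p: cheapest_camera_set(p, bool(resolved))[1])
              cams, cost = cheapest_camera_set(best, bool(resolved))
              order.append((best, cams, cost))
              resolved.add(best)
              remaining.remove(best)
          return order

      for person, cams, cost in greedy_inference_order(["p1", "p2", "p3"]):
          print(person, "<- cameras", cams, "cost", cost)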
  • Publication
    Constraint integration for multiview pose estimation of humans with self-occlusions
    (01-01-2006)
    Gupta, Abhinav; ; Davis, Larry S.
    Detection of articulated objects such as humans is an important task in computer vision. We present a system that incorporates a variety of constraints in a unified multiview framework to automatically detect humans in possibly crowded scenes. These constraints include kinematic constraints between connected parts, the occlusion of one part by another, and the high correlation between the appearance of parts such as the two arms. The resulting (non-tree) graphical structure is optimized in a nonparametric belief propagation framework using prior-based search. © 2006 IEEE.
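    A small illustration of two of the pairwise constraints such a framework combines: kinematic compatibility between physically connected parts and appearance correlation between the two arms. The Gaussian form, the rest-length and sigma values, and the patch representation are assumptions chosen only to make the idea concrete.

      import numpy as np

      def kinematic_compatibility(joint_a, joint_b, rest_length=0.3, sigma=0.05):
          """High when two connected parts stay near their expected separation."""
          d = np.linalg.norm(np.asarray(joint_a) - np.asarray(joint_b))
          return float(np.exp(-((d - rest_length) ** 2) / (2 * sigma ** 2)))

      def appearance_correlation(patch_a, patch_b):
          """High when two symmetric parts (e.g. the arms) look alike."""
          a = (patch_a - patch_a.mean()) / (patch_a.std() + 1e-8)
          b = (patch_b - patch_b.mean()) / (patch_b.std() + 1e-8)
          return float(np.clip((a * b).mean(), 0.0, 1.0))

      # Example: shoulder and elbow positions, plus two similar arm patches.
      print(kinematic_compatibility((0.00, 1.40), (0.05, 1.12)))
      rng = np.random.default_rng(1)
      arm = rng.random((16, 16, 3))
      print(appearance_correlation(arm, arm + 0.02 * rng.standard_normal(arm.shape)))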