Catch Me if You Can: A Novel Task for Detection of Covert Geo-Locations (CGL)
Date Issued
01-01-2022
Author(s)
Saha, Binoy
Indian Institute of Technology, Madras
Abstract
Most visual scene understanding tasks in computer vision involve identifying the objects present in a scene. However, image regions such as hideouts, turns, and other obscured parts of the scene also contain information crucial to specific surveillance tasks. In this work, we propose an intelligent visual aid for identifying such locations in an image: regions that either pose a potential imminent threat from an adversary or appear as target zones needing further investigation for concealed objects. Covert places (CGLs) for hiding behind an occluding object are concealed 3D locations, usually not visible from the camera viewpoint. Detecting them therefore involves delineating specific image regions around the outer boundary of the projections of occluding objects, as places to be accessed around the potential hideouts. CGL detection finds applications in military counter-insurgency operations and in surveillance with path planning for an exploratory robot. Given an RGB image, the goal is to identify all CGLs in the 2D scene. Identifying such regions requires knowledge of the 3D boundaries of occluding items (e.g., pillars, furniture), their spatial location relative to the background, and other neighboring regions of the scene. We propose this as a novel task, termed Covert Geo-Location (CGL) Detection. Classifying any image region as a CGL (a boundary sub-segment of an occluding object concealing a hideout) requires examining its relation to its surroundings; CGL detection thus demands an understanding of the 3D spatial relationships between the boundaries of occluding objects and their neighborhoods. Our method successfully extracts relevant depth features from a single RGB image and quantitatively yields significant improvement over existing object detection and segmentation models adapted and trained for CGL detection.
We also introduce a novel hand-annotated CGL detection dataset containing 1.5K real-world images for experimentation.
Acknowledgment: IMPRINT (MHRD/DRDO), Government of India, for support.
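Since the abstract frames CGL detection as classifying image regions against adapted segmentation models, the comparison implies a standard pixel-mask evaluation. The sketch below is not from the paper; it is a minimal, self-contained illustration of how a predicted binary CGL mask could be scored against a ground-truth mask using intersection-over-union (IoU), a common segmentation metric. The toy masks and function name are assumptions for illustration only.

```python
# Hedged sketch: scoring a predicted CGL mask against ground truth with IoU.
# Masks are binary grids (lists of 0/1 rows); 1 marks a pixel labeled as
# belonging to a covert geo-location region. Toy data, not the paper's dataset.

def mask_iou(pred, gt):
    """Intersection-over-union between two same-shaped binary masks."""
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    union = sum(p | g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    # Two empty masks agree perfectly; avoid division by zero.
    return inter / union if union else 1.0

pred = [[0, 1, 1],
        [0, 1, 0],
        [0, 0, 0]]
gt   = [[0, 1, 1],
        [0, 0, 0],
        [0, 1, 0]]

print(mask_iou(pred, gt))  # intersection = 2, union = 4 -> 0.5
```

A per-image IoU like this can be averaged over a test set to compare a CGL detector against baseline segmentation models, which is the kind of quantitative comparison the abstract alludes to.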
Volume
924