A N Rajagopalan
Preferred name
A N Rajagopalan
Official Name
A N Rajagopalan
Alternative Name
Rajagopalan, Ambasamudram Narayanan
Rajagopalan, Ambasamudram N.
Rajagopalan, Rajagopalan A.N.
Rajagopalan, A. N.
Rajagopalan, Aswin
Rajagopalan, A.
Now showing 1 - 10 of 11
- Publication: Face Recognition Across Non-Uniform Motion Blur, Illumination, and Pose (01-07-2015)
  Punnappurath, Abhijith; Taheri, Sima; Chellappa, Rama; Seetharaman, Guna
  Existing methods for performing face recognition in the presence of blur are based on the convolution model and cannot handle non-uniform blurring situations that frequently arise from tilts and rotations in hand-held cameras. In this paper, we propose a methodology for face recognition in the presence of space-varying motion blur comprising arbitrarily shaped kernels. We model the blurred face as a convex combination of geometrically transformed instances of the focused gallery face, and show that the set of all images obtained by non-uniformly blurring a given image forms a convex set. We first propose a non-uniform blur-robust algorithm by making use of the assumption of a sparse camera trajectory in the camera motion space to build an energy function with an l1-norm constraint on the camera motion. The framework is then extended to handle illumination variations by exploiting the fact that the set of all images obtained from a face image by non-uniform blurring and changing the illumination forms a bi-convex set. Finally, we propose an elegant extension to also account for variations in pose.
- Publication: Cueing motion blur for registration of inclined planar scenes (01-01-2015)
  Nair, Arun Asokan; Rao, M. Purnachandra; Seetharaman, Guna
  Existing image registration methods that work in the presence of motion blur assume the scene to be fronto-parallel. In this work, we extend the state of the art by cueing motion blur itself to infer plane inclination. This is achieved by matching extremities of blur kernels computed at different locations in the image. Because these extremities correspond to the same homography, we show that it is possible to find the orientation. Following this, we propose a registration method that reorients the source image to the target plane within a reblur-difference framework to detect the actual changes.
- Publication: Efficient change detection for very large motion blurred images (24-09-2014)
  Rengarajan, Vijay; Punnappurath, Abhijith; Seetharaman, Guna
  In this paper, we address the challenging problem of registration and change detection in very large motion blurred images. The unreasonable demand that this task puts on computational and memory resources precludes the possibility of any direct attempt at solving this problem. We address this issue by observing that the camera motion experienced by a sufficiently large sub-image is approximately the same as that of the entire image itself. We devise an algorithm for judicious sub-image selection so that the camera motion can be deciphered correctly, irrespective of the presence or absence of an occluder. We follow a reblur-difference framework to detect changes as this is an artifact-free pipeline, unlike the traditional deblur-difference approach. We demonstrate the results of our algorithm on both synthetic and real data.
- Publication: Registration and occlusion detection in motion blur (01-01-2013)
  Punnappurath, Abhijith; Seetharaman, Guna
  We address the problem of automatically detecting occluded regions given a blurred/unblurred image pair of a scene taken from different viewpoints. The occlusion can be due to single or multiple objects. We present a unified framework for detecting occluder(s) that is reasonably robust to non-uniform motion blur as well as variations in camera pose (without the need for deblurring). We assume that the occluded pixels occupy only a relatively small area and that the camera motion trajectory is sparse in the camera motion space. We validate the performance of our algorithm with experiments on synthetic and real data. © 2013 IEEE.
- Publication: Camera Shutter-Independent Registration and Rectification (01-04-2018)
  Vasu, Subeesh; Seetharaman, Guna
  Inevitable camera motion during exposure does not augur well for free-hand photography. Distortions introduced in images can be of different types and depend mainly on the structure of the scene, the nature of the camera motion, and the shutter mechanism of the camera. In this paper, we address the problem of registering images taken from global shutter and rolling shutter (RS) cameras and reveal the constraints on camera motion that admit registration, change detection, and rectification. Our analysis encompasses degradations arising from camera motion during exposure and differences in shutter mechanisms. We also investigate conditions under which the camera motions causing distortions in the reference and target images can be decoupled to yield the underlying latent image through RS rectification. We validate our approach using several synthetic and real examples.
- Publication: Restoration of foggy and motion-blurred road scenes (01-01-2013)
  Veeramani, Thangamani; Seetharaman, Guna
  Existing single image defogging techniques can restore contrast loss and yield a rough estimate of the depth map of a scene. The ubiquity of hand-held imaging devices has attracted considerable attention to motion blur, but this has not been addressed in the context of images captured under foggy conditions. In this paper, we show how to restore foggy motion-blurred images using depth cues derived from the fog itself. Initially, we address restoration of images blurred primarily due to in-plane translational camera motion. This is followed by a scheme for handling general camera motion blur with a projective blur model. We demonstrate that foggy road scene images can be segmented into road, left, right, and sky planes, and that each of these planes can be deblurred individually. © 2013 IEEE.
- Publication: Blind restoration of aerial imagery degraded by spatially varying motion blur (01-01-2014)
  Punnappurath, Abhijith; Seetharaman, Guna
  This paper deals with deblurring of aerial imagery and develops a methodology for blind restoration of spatially varying blur induced by camera motion caused by instabilities of the moving platform. This is a topic of significant relevance with a potential impact on image analysis, characterization, and exploitation. A sharp image is beneficial not only from the perspective of visual appeal but also because it forms the basis for applications such as moving object tracking, change detection, and robust feature extraction. In the presence of general camera motion, the apparent motion of scene points in the image varies across locations, resulting in space-variant blurring. However, due to the large distances involved in aerial imaging, we show that the blurred image of the ground plane can be expressed as a weighted average of geometrically warped instances of the original focused but unknown image. The weight corresponding to each warp denotes the fraction of the total exposure duration the camera spent in that pose. Given a single motion blurred aerial observation, we propose a scheme to estimate the original focused image affected by arbitrarily shaped blur kernels. The latent image and its associated warps are estimated by optimizing suitably derived cost functions with judiciously chosen priors within an alternating minimization framework. Several results are given on the challenging VIRAT aerial dataset for validation. © 2014 SPIE.
- Publication: Inferring plane orientation from a single motion blurred image (04-12-2014)
  Rao, M. Purnachandra; Seetharaman, Guna
  We present a scheme for recovering the orientation of a planar scene from a single translationally motion-blurred image. By leveraging the homography relationship among image coordinates of 3D points lying on a plane, and by exploiting natural correspondences among the extremities of the blur kernels derived from the motion blurred observation, the proposed method can accurately infer the normal of the planar surface. We validate our approach on synthetic as well as real planar scenes.
- Publication: Harnessing motion blur to unveil splicing (01-04-2014)
  Rao, Makkena Purnachandra; Seetharaman, Guna
  The extensive availability of sophisticated image editing tools has rendered it relatively easy to produce fake images. Image splicing is a form of tampering in which an original image is altered by copying a portion from a different source. Because motion blur is a common occurrence in hand-held cameras, we propose a passive method to automatically detect image splicing using blur as a cue. Specifically, we address the scenario of a static scene in which the blur is caused by hand shake. Existing methods for dealing with this problem work only in the presence of uniform space-invariant blur. In contrast, our method can expose the presence of splicing by evaluating inconsistencies in motion blur even under space-variant blurring situations. We validate our method on several examples for different scene situations and camera motions of interest. © 2014 IEEE.
- Publication: Illumination robust change detection with CMOS imaging sensors (01-01-2015)
  Rengarajan, Vijay; Gupta, Sheetal B.; Seetharaman, Guna
  Change detection between two images in the presence of degradations is an important problem in the computer vision community, more so for the aerial scenario, which is particularly challenging. Cameras mounted on moving platforms such as aircraft or drones are subject to general six-dimensional motion, as the motion is not restricted to a single plane. With CMOS cameras increasingly in vogue due to their low power consumption, the inevitability of the rolling-shutter (RS) effect adds to the challenge. This effect is caused by the sequential exposure of rows in CMOS cameras, unlike conventional global shutter cameras in which all pixels are exposed simultaneously. The RS effect is particularly pronounced in aerial imaging since each row of the imaging sensor is likely to experience a different motion. For fast-moving platforms, the problem is further compounded since the rows are also affected by motion blur. Moreover, since the two images are shot at different times, illumination differences are common. In this paper, we propose a unified computational framework that exploits the sparsity constraint to deal with the problem of change detection in images degraded by the RS effect, motion blur, and non-global illumination differences. We formulate an optimization problem in which each row of the distorted image is approximated as a weighted sum of the corresponding rows in warped versions of the reference image due to camera motion within the exposure period, accounting for geometric as well as photometric differences. The method has been validated on both synthetic and real data.
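Several of the entries above (the face recognition, occlusion detection, and aerial deblurring papers) rest on the same generative model: a non-uniformly motion-blurred image is a convex combination of geometrically warped copies of the latent image, where each weight is the fraction of the exposure the camera spent in that pose. A minimal numerical sketch of that model, using integer-pixel translations as a toy stand-in for the papers' full homography warps (all function and variable names here are illustrative, not from the authors' code):

```python
import numpy as np

def shift(img, dy, dx):
    """Zero-padded integer translation: a toy stand-in for the homography
    warp induced by one camera pose."""
    h, w = img.shape
    out = np.zeros_like(img)
    y0, y1 = max(dy, 0), min(h + dy, h)
    x0, x1 = max(dx, 0), min(w + dx, w)
    out[y0:y1, x0:x1] = img[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    return out

def blur_from_camera_motion(latent, poses, weights):
    """Blurred image as a convex combination of warped copies of the latent
    image; the non-negative weights sum to 1 (fractions of the exposure
    spent in each pose) and are sparse in the papers' l1 formulation."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and abs(weights.sum() - 1.0) < 1e-9
    return sum(w * shift(latent, dy, dx)
               for (dy, dx), w in zip(poses, weights))

# An impulse spreads into a copy of the (space-variant) blur kernel.
latent = np.zeros((5, 5))
latent[2, 2] = 1.0
blurred = blur_from_camera_motion(latent, [(0, 0), (0, 1)], [0.5, 0.5])
```

Because the weights form a convex combination, the set of all images obtainable this way from one latent image is convex, which is the property the recognition and registration formulations above exploit.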
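The change detection entries above repeatedly prefer a reblur-difference pipeline over the traditional deblur-difference one: instead of deblurring the observation (which introduces deconvolution artifacts), the sharp reference is reblurred with the estimated camera motion and the two blurred images are compared directly. A hedged sketch under the same toy translation-warp model (names and the threshold are my own, for illustration only):

```python
import numpy as np

def shift(img, dy, dx):
    """Zero-padded integer translation (toy warp for one camera pose)."""
    h, w = img.shape
    out = np.zeros_like(img)
    y0, y1 = max(dy, 0), min(h + dy, h)
    x0, x1 = max(dx, 0), min(w + dx, w)
    out[y0:y1, x0:x1] = img[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    return out

def reblur(img, poses, weights):
    """Apply the estimated camera motion to the sharp reference."""
    return sum(w * shift(img, dy, dx) for (dy, dx), w in zip(poses, weights))

def detect_changes(reference, observed, poses, weights, thresh=0.25):
    """Flag pixels whose residual between the reblurred reference and the
    blurred observation is large; no deblurring (hence no deblurring
    artifacts) is ever performed."""
    residual = np.abs(reblur(reference, poses, weights) - observed)
    return residual > thresh

reference = np.ones((6, 6))
poses, weights = [(0, 0), (1, 0)], [0.5, 0.5]
observed = reblur(reference, poses, weights)
observed[3, 3] += 1.0            # simulate an occluder in the observation
mask = detect_changes(reference, observed, poses, weights)
```

Only the simulated occluder survives the differencing, since everywhere else the reblurred reference matches the observation exactly.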
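The final entry models rolling-shutter degradation row-wise: each row of the distorted image is approximated as a weighted sum of the corresponding rows of warped versions of the reference, because each sensor row is exposed over a different time window and may see a different camera motion (the paper additionally folds photometric differences into the weights and makes them sparse). A toy sketch of that row-wise composition, with horizontal shifts as the warps (names are illustrative):

```python
import numpy as np

def shift_cols(img, dx):
    """Zero-padded horizontal shift: toy warp for one camera pose."""
    h, w = img.shape
    out = np.zeros_like(img)
    x0, x1 = max(dx, 0), min(w + dx, w)
    out[:, x0:x1] = img[:, x0 - dx:x1 - dx]
    return out

def rs_compose(reference, shifts, row_weights):
    """Row-wise rolling-shutter model: row r of the output is a weighted
    sum of row r of each warped reference, with its own weight vector,
    since each row is exposed at a different time."""
    warped = [shift_cols(reference, dx) for dx in shifts]
    out = np.zeros_like(reference)
    for r, w in enumerate(row_weights):
        out[r] = sum(wk * warped[k][r] for k, wk in enumerate(w))
    return out

reference = np.zeros((2, 4))
reference[:, 1] = 1.0
# Row 0 sees the unshifted pose; row 1 sees the pose shifted right by one.
distorted = rs_compose(reference, shifts=[0, 1],
                       row_weights=[[1.0, 0.0], [0.0, 1.0]])
```

A vertical edge in the reference comes out staircased in the composed image, which is exactly the skew that rolling-shutter cameras produce under lateral motion.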