A N Rajagopalan
Preferred name: A N Rajagopalan
Official name: A N Rajagopalan
Alternative names:
- Rajagopalan, Ambasamudram Narayanan
- Rajagopalan, Ambasamudram N.
- Rajagopalan, Rajagopalan A.N.
- Rajagopalan, A. N.
- Rajagopalan, Aswin
- Rajagopalan, A.
Publications (9 results)
- Publication: Face Recognition Across Non-Uniform Motion Blur, Illumination, and Pose (01-07-2015)
  Punnappurath, Abhijith; Taheri, Sima; Chellappa, Rama; Seetharaman, Guna
  Existing methods for performing face recognition in the presence of blur are based on the convolution model and cannot handle non-uniform blurring situations that frequently arise from tilts and rotations in hand-held cameras. In this paper, we propose a methodology for face recognition in the presence of space-varying motion blur comprising arbitrarily-shaped kernels. We model the blurred face as a convex combination of geometrically transformed instances of the focused gallery face, and show that the set of all images obtained by non-uniformly blurring a given image forms a convex set. We first propose a non-uniform blur-robust algorithm by making use of the assumption of a sparse camera trajectory in the camera motion space to build an energy function with an l1-norm constraint on the camera motion. The framework is then extended to handle illumination variations by exploiting the fact that the set of all images obtained from a face image by non-uniform blurring and changing the illumination forms a bi-convex set. Finally, we propose an elegant extension to also account for variations in pose.
- Publication: Rolling shutter super-resolution in burst mode (03-08-2016)
  Rengarajan, Vijay; Punnappurath, Abhijith; Seetharaman, Gunasekaran
  Capturing multiple images using the burst mode of handheld cameras can be a boon for obtaining a high-resolution (HR) image by exploiting the subpixel motion among the captured images arising from handshake. However, the caveat with mobile phone cameras is that they produce rolling shutter (RS) distortions that must be accounted for in the super-resolution process. We propose a method in which we obtain an RS-free HR image using the HR camera trajectory estimated by leveraging the intra- and inter-frame continuity of the camera motion. Experimental evaluations demonstrate that our approach can effectively recover a super-resolved image free from RS artifacts.
- Publication: Efficient change detection for very large motion blurred images (24-09-2014)
  Rengarajan, Vijay; Punnappurath, Abhijith; Seetharaman, Guna
  In this paper, we address the challenging problem of registration and change detection in very large motion blurred images. The unreasonable demand that this task puts on computational and memory resources precludes any direct attempt at solving the problem. We address this issue by observing that the camera motion experienced by a sufficiently large sub-image is approximately the same as that of the entire image itself. We devise an algorithm for judicious sub-image selection so that the camera motion can be deciphered correctly, irrespective of the presence or absence of an occluder. We follow a reblur-difference framework to detect changes, as this pipeline is artifact-free unlike the traditional deblur-difference approach. We demonstrate the results of our algorithm on both synthetic and real data.
- Publication: Registration and occlusion detection in motion blur (01-01-2013)
  Punnappurath, Abhijith; Seetharaman, Guna
  We address the problem of automatically detecting occluded regions given a blurred/unblurred image pair of a scene taken from different viewpoints. The occlusion can be due to single or multiple objects. We present a unified framework for detecting occluder(s) that is reasonably robust to non-uniform motion blur as well as variations in camera pose (without the need for deblurring). We assume that the occluded pixels occupy only a relatively small area and that the camera motion trajectory is sparse in the camera motion space. We validate the performance of our algorithm with experiments on synthetic and real data. © 2013 IEEE.
- Publication: Recognizing blurred, nonfrontal, illumination, and expression variant partially occluded faces (01-09-2016)
  Punnappurath, Abhijith
  The focus of this paper is on the problem of recognizing faces across space-varying motion blur, changes in pose, illumination, and expression, as well as partial occlusion, when only a single image per subject is available in the gallery. We show how the blur, incurred due to relative motion between the camera and the subject during exposure, can be estimated from the alpha matte of pixels that straddle the boundary between the face and the background. We also devise a strategy to automatically generate the trimap required for matte estimation. Having computed the motion via the matte of the probe, we account for pose variations by synthesizing from the intensity image of the frontal gallery a face image that matches the pose of the probe. To handle illumination, expression variations, and partial occlusion, we model the probe as a linear combination of nine blurred illumination basis images in the synthesized nonfrontal pose, plus a sparse occlusion. We also advocate a recognition metric that capitalizes on the sparsity of the occluded pixels. The performance of our method is extensively validated on synthetic as well as real face data.
- Publication: Blind restoration of aerial imagery degraded by spatially varying motion blur (01-01-2014)
  Punnappurath, Abhijith; Seetharaman, Guna
  This paper deals with deblurring of aerial imagery and develops a methodology for blind restoration of spatially varying blur induced by camera motion caused by instabilities of the moving platform. This is a topic of significant relevance with a potential impact on image analysis, characterization and exploitation. A sharp image is beneficial not only from the perspective of visual appeal but also because it forms the basis for applications such as moving object tracking, change detection, and robust feature extraction. In the presence of general camera motion, the apparent motion of scene points in the image will vary at different locations, resulting in space-variant blurring. However, due to the large distances involved in aerial imaging, we show that the blurred image of the ground plane can be expressed as a weighted average of geometrically warped instances of the original focused but unknown image. The weight corresponding to each warp denotes the fraction of the total exposure duration the camera spent in that pose. Given a single motion blurred aerial observation, we propose a scheme to estimate the original focused image affected by arbitrarily-shaped blur kernels. The latent image and its associated warps are estimated by optimizing suitably derived cost functions with judiciously chosen priors within an alternating minimization framework. Several results are given on the challenging VIRAT aerial dataset for validation. © 2014 SPIE.
- Publication: Multi-image blind super-resolution of 3D scenes (01-11-2017)
  Punnappurath, Abhijith; Nimisha, Thekke Madam
  We address the problem of estimating the latent high-resolution (HR) image of a 3D scene from a set of non-uniformly motion blurred low-resolution (LR) images captured in the burst mode using a hand-held camera. Existing blind super-resolution (SR) techniques that account for motion blur are restricted to fronto-parallel planar scenes. We initially develop an SR motion blur model to explain the image formation process in 3D scenes. We then use this model to solve for the three unknowns: the camera trajectories, the depth map of the scene, and the latent HR image. We first compute the global HR camera motion corresponding to each LR observation from patches lying on a reference depth layer in the input images. Using the estimated trajectories, we compute the latent HR image and the underlying depth map iteratively using an alternating minimization framework. Experiments on synthetic and real data reveal that our proposed method outperforms the state-of-the-art techniques by a significant margin.
- Publication: Deep decoupling of defocus and motion blur for dynamic segmentation (01-01-2016)
  Punnappurath, Abhijith; Balaji, Yogesh; Mohan, Mahesh
  We address the challenging problem of segmenting dynamic objects given a single space-variantly blurred image of a 3D scene captured using a hand-held camera. The blur induced at a particular pixel on a moving object is due to the combined effects of camera motion, the object’s own independent motion during exposure, its relative depth in the scene, and defocusing due to lens settings. We develop a deep convolutional neural network (CNN) to predict the probabilistic distribution of the composite kernel, which is the convolution of the motion blur and defocus kernels at each pixel. Based on the defocus component, we segment the image into different depth layers. We then judiciously exploit the motion component present in the composite kernels to automatically segment dynamic objects at each depth layer. Jointly handling defocus and motion blur enables us to resolve the depth-motion ambiguity that has been a major limitation of existing segmentation algorithms. Experimental evaluations on synthetic and real data reveal that our method significantly outperforms contemporary techniques.
- Publication: Rolling shutter super-resolution (17-02-2015)
  Punnappurath, Abhijith; Rengarajan, Vijay
  Classical multi-image super-resolution (SR) algorithms, designed for CCD cameras, assume that the motion among the images is global. But CMOS sensors, which have increasingly started to replace their more expensive CCD counterparts in many applications, do not respect this assumption when the camera moves relative to the scene during the exposure of an image, because of their row-wise acquisition mechanism. In this paper, we study the hitherto unexplored topic of multi-image SR in CMOS cameras. We initially develop an SR observation model that accounts for the row-wise distortions, called the "rolling shutter" (RS) effect, observed in images captured using non-stationary CMOS cameras. We then propose a unified RS-SR framework to obtain an RS-free high-resolution image (and the row-wise motion) from distorted low-resolution images. We demonstrate the efficacy of the proposed scheme using synthetic data as well as real images captured using a hand-held CMOS camera. Quantitative and qualitative assessments reveal that our method significantly advances the state-of-the-art.
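Several of the abstracts above share one forward model: the space-variantly blurred image is a convex combination (an exposure-time-weighted average) of geometrically warped instances of the latent sharp image, with one warp per camera pose. A minimal NumPy sketch of that model for grayscale images and homography poses follows; the function names and the nearest-neighbor warp are illustrative simplifications, not code from the papers:

```python
import numpy as np

def warp(img, H):
    """Warp a grayscale image by a 3x3 homography H (nearest-neighbor, zero fill)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ coords            # back-map each output pixel
    src = (src[:2] / src[2]).round().astype(int)
    out = np.zeros_like(img)
    valid = (0 <= src[0]) & (src[0] < w) & (0 <= src[1]) & (src[1] < h)
    out.ravel()[valid] = img[src[1, valid], src[0, valid]]
    return out

def blur_as_convex_combination(img, poses, weights):
    """Space-variant motion blur: weighted average of warped instances.

    Each weight is the fraction of the exposure spent in that pose, so the
    weights are non-negative and sum to 1 (the convex-combination property
    the papers exploit).
    """
    assert np.isclose(sum(weights), 1.0) and all(w >= 0 for w in weights)
    return sum(w * warp(img, H) for w, H in zip(weights, poses))
```

For example, a camera that spends half the exposure at the identity pose and half shifted one pixel sideways yields an image in which every pixel averages the two correspondingly warped copies; a sparse set of such pose weights is exactly the l1-constrained camera-motion representation referred to above.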