A N Rajagopalan
Preferred Name
A N Rajagopalan
Official Name
A N Rajagopalan
Alternative Name
Rajagopalan, Ambasamudram Narayanan
Rajagopalan, Ambasamudram N.
Rajagopalan, Rajagopalan A.N.
Rajagopalan, A. N.
Rajagopalan, Aswin
Rajagopalan, A.
7 results
- Publication: Rolling shutter super-resolution in burst mode (03-08-2016)
  Co-authors: Rengarajan, Vijay; Punnappurath, Abhijith; Seetharaman, Gunasekaran
  Capturing multiple images using the burst mode of handheld cameras can be a boon for obtaining a high-resolution (HR) image by exploiting the subpixel motion among the captured images arising from handshake. However, the caveat with mobile phone cameras is that they produce rolling shutter (RS) distortions that must be accounted for in the super-resolution process. We propose a method in which we obtain an RS-free HR image using an HR camera trajectory estimated by leveraging the intra- and inter-frame continuity of the camera motion. Experimental evaluations demonstrate that our approach can effectively recover a super-resolved image free from RS artifacts.
- Publication: Mixed-dense connection networks for image and video super-resolution (20-07-2020)
  Co-authors: Purohit, Kuldeep; Mandal, Srimanta
  Efficiency of gradient propagation in the intermediate layers of convolutional neural networks is of key importance for the super-resolution task. To this end, we propose a deep architecture for single image super-resolution (SISR), which is built using efficient convolutional units we refer to as mixed-dense connection blocks (MDCBs). The design of the MDCB combines the strengths of both residual and dense connection strategies while overcoming their limitations. To enable super-resolution for multiple factors, we propose a scale-recurrent framework which reutilizes the filters learnt for lower scale factors recursively for higher factors. This leads to improved performance and promotes parametric efficiency for higher factors. We train two versions of our network to enhance complementary image qualities using different loss configurations. We further employ our network for the video super-resolution task, where it learns to aggregate information from multiple frames and maintain spatio-temporal consistency. The proposed networks lead to qualitative and quantitative improvements over state-of-the-art techniques on image and video super-resolution benchmarks.
- Publication: Super-resolution using motion and defocus cues (01-01-2007)
  Co-author: Suresh, K. V.
  Reconstruction-based super-resolution algorithms use either sub-pixel shifts or relative blur among low-resolution observations as a cue to obtain a high-resolution image. In this paper, we propose a super-resolution algorithm that exploits the information available in the low-resolution observations due to both sub-pixel shifts and relative blur to yield a better quality image. Performance analysis is carried out based on the Cramér-Rao lower bound. Several experimental results on synthetic and real images are given for validation. © 2007 IEEE.
- Publication: High resolution image reconstruction in shape from focus (01-01-2007)
  Co-author: Sahay, R. R.
  In the Shape from Focus (SFF) method, a sequence of images of a 3D object is captured for computing its depth profile. However, in several applications it is also useful to derive a high-resolution focused image of the 3D object. Given the space-variantly blurred frames and the depth map, we propose a method to optimally estimate a high-resolution image of the object within the SFF framework. © 2007 IEEE.
- Publication: Range map superresolution-inpainting and reconstruction from sparse data (01-04-2012)
  Co-author: Bhavsar, Arnav V.
  Range images often suffer from issues such as low resolution (LR) (for low-cost scanners) and the presence of missing regions due to poor reflectivity and occlusions. Another common problem (with high-quality scanners) is that of long acquisition times. In this work, we propose two approaches to counter these shortcomings. Our first proposal, which addresses the issues of low resolution as well as missing regions, is an integrated super-resolution (SR) and inpainting approach. We use multiple relatively shifted LR range images, where the motion between the LR images serves as a cue for super-resolution. Our imaging model also accounts for missing regions to enable inpainting. Our framework models the high-resolution (HR) range as a Markov random field (MRF) and uses inhomogeneous MRF priors to constrain the solution differently for inpainting and super-resolution. Our super-resolved and inpainted outputs show significant improvements over their LR/interpolated counterparts. Our second proposal addresses the issue of long acquisition times by facilitating reconstruction of range data from very sparse measurements. Our technique exploits a cue from segmentation of an optical image of the same scene, which constrains pixels in the same color segment to have similar range values. Our approach is able to reconstruct range images with as little as 10% of the data. We also study the performance of both proposed approaches in a noisy scenario as well as in the presence of alignment errors. © 2011 Elsevier Inc. All rights reserved.
- Publication: Scale-recurrent multi-residual dense network for image super-resolution (01-01-2019)
  Co-authors: Purohit, Kuldeep; Mandal, Srimanta
  Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR). The boost in performance can be attributed to the presence of residual or dense connections within the intermediate layers of these networks. The efficient combination of such connections can reduce the number of parameters drastically while maintaining the restoration quality. In this paper, we propose a scale-recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks (RDBs)) that allow extraction of abundant local features from the image. Our scale-recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient as compared to current state-of-the-art approaches. To further improve the performance of our network, we employ multiple residual connections in intermediate layers (referred to as Multi-Residual Dense Blocks), which improves gradient propagation in the existing layers. Recent works have discovered that conventional loss functions can guide a network to produce results which have high PSNRs but are perceptually inferior. We mitigate this issue by utilizing a Generative Adversarial Network (GAN) based framework and deep feature (VGG) losses to train our network. We experimentally demonstrate that different weighted combinations of the VGG loss and the adversarial loss enable our network outputs to traverse along the perception-distortion curve. The proposed networks perform favorably against existing methods, both perceptually and objectively (PSNR-based), with fewer parameters.
- Publication: Super-resolution of face images using kernel PCA-based prior (01-06-2007)
  Co-authors: Chakrabarti, Ayan; Chellappa, Rama
  We present a learning-based method to super-resolve face images using a kernel principal component analysis-based prior model. A prior probability is formulated based on the energy lying outside the span of principal components identified in a higher-dimensional feature space. This is used to regularize the reconstruction of the high-resolution image. We demonstrate with experiments that including higher-order correlations results in significant improvements. © 2007 IEEE.
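Several of the entries above (e.g. the burst-mode and motion-and-defocus papers) build on the classical reconstruction-based SR principle: sub-pixel shifts among low-resolution observations jointly constrain a high-resolution estimate. The following is a minimal 1-D sketch of that principle only, with our own illustrative helper names; it models no blur, noise, or rolling-shutter effects and is not any paper's implementation.

```python
import numpy as np

def frac_shift_matrix(n, s):
    """Circular shift of an n-vector by a fractional s pixels (linear interpolation)."""
    W = np.zeros((n, n))
    i0 = int(np.floor(s))
    frac = s - i0
    for i in range(n):
        W[i, (i + i0) % n] = 1.0 - frac
        W[i, (i + i0 + 1) % n] += frac
    return W

def downsample_matrix(n, r):
    """Keep every r-th sample of an n-vector (decimation by factor r)."""
    D = np.zeros((n // r, n))
    for i in range(n // r):
        D[i, i * r] = 1.0
    return D

def super_resolve(observations, shifts, n, r):
    """Least-squares HR estimate from relatively shifted, downsampled LR observations."""
    A = np.vstack([downsample_matrix(n, r) @ frac_shift_matrix(n, s) for s in shifts])
    b = np.concatenate(observations)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With enough distinct fractional shifts, the stacked system becomes well-posed and plain least squares recovers the HR signal exactly in this noise-free toy setting; real methods add a blur model, noise handling, and a regularizing prior.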
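The face super-resolution entry above regularizes reconstruction using the energy of a candidate that lies outside the span of kernel principal components in feature space. Below is a rough numerical illustration of such a reconstruction-error energy with an RBF kernel; all function names, the kernel choice, and parameters are our own assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Pairwise RBF kernel values between the rows of X and Y."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def kpca_fit(X, n_components, gamma):
    """Fit kernel PCA: centered kernel eigendecomposition, top components kept."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Scale so projections are onto unit-norm feature-space principal directions.
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return X, K, alphas, gamma

def kpca_prior_energy(x, model):
    """Energy of phi(x) lying outside the span of the leading kernel PCs."""
    X, K, alphas, gamma = model
    kx = rbf_kernel(x[None, :], X, gamma).ravel()
    # Center the test kernel vector consistently with training-time centering.
    kx_c = kx - kx.mean() - K.mean(axis=0) + K.mean()
    proj = alphas.T @ kx_c
    # ||phi(x) - mean||^2 for the RBF kernel, where k(x, x) = 1.
    total = 1.0 - 2.0 * kx.mean() + K.mean()
    return float(max(total - np.sum(proj**2), 0.0))
```

In a super-resolution setting this energy would serve as the prior term: candidates resembling the training faces (small out-of-span energy) are favored over those far from the learned manifold.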