A N Rajagopalan
Preferred name: A N Rajagopalan
Official Name: A N Rajagopalan
Alternative Names: Rajagopalan, Ambasamudram Narayanan; Rajagopalan, Ambasamudram N.; Rajagopalan, Rajagopalan A.N.; Rajagopalan, A. N.; Rajagopalan, Aswin; Rajagopalan, A.
14 results (showing 1 - 10 of 14)
- Publication: Efficient Motion Deblurring with Feature Transformation and Spatial Attention (01-09-2019)
  Purohit, Kuldeep
  Convolutional Neural Networks (CNN) have recently advanced the state of the art in generalized motion deblurring. Literature suggests that restoration of high-resolution blurred images requires a design with a large receptive field, which existing networks achieve by increasing the number of generic convolution layers, the kernel size, or the scales at which the image is processed. However, increasing the network capacity in this form comes with the burden of increased model size and lower speed. To resolve this, we propose a novel architecture composed of dynamic convolutional modules, namely feature transformation (FT) and spatial attention (SA). An FT module addresses the camera shifts responsible for the global blur in the input image, while an SA module addresses spatially varying blur due to dynamic objects and depth changes. Qualitative and quantitative comparisons on deblurring benchmarks demonstrate that our network outperforms prior art in accuracy, compactness, and speed, enabling real-time deblurring.
- Publication: Multilevel weighted enhancement for underwater image dehazing (01-06-2019)
  Purohit, Kuldeep; Mandal, Srimanta
  Attenuation and scattering of light are responsible for haziness in images of underwater scenes. To reduce this effect, we propose an approach for single-image dehazing by multilevel weighted enhancement of the image. The underlying principle is that enhancement at different levels of detail can undo the degradation caused by underwater haze. The depth information is captured implicitly while going through different levels of detail, due to the depth-variant nature of haze. Hence, we judiciously assign weights to different levels of image details and reveal that their linear combination, along with the coarsest information, can successfully restore the image. Results demonstrate the efficacy of our approach as compared to state-of-the-art underwater dehazing methods.
- Publication: AIM 2019 challenge on bokeh effect synthesis: Methods and results (01-10-2019)
  Ignatov, Andrey; Patel, Jagruti; Timofte, Radu; Zheng, Bolun; Ye, Xin; Huang, Li; Tian, Xiang; Dutta, Saikat; Purohit, Kuldeep; Kandula, Praveen; Suin, Maitreya; Xiong, Zhiwei; Huang, Jie; Dong, Guanting; Yao, Mingde; Liu, Dong; Yang, Wenjin; Hong, Ming; Lin, Wenying; Qu, Yanyun; Choi, Jae Seok; Park, Woonsung; Kim, Munchurl; Liu, Rui; Mao, Xiangyu; Yang, Chengxi; Yan, Qiong; Sun, Wenxiu; Fang, Junkai; Shang, Meimei; Gao, Fei; Ghosh, Sujoy; Sharma, Prasen Kumar; Sur, Arijit
  This paper reviews the first AIM challenge on bokeh effect synthesis, with a focus on the proposed solutions and results. The participating teams were solving a real-world image-to-image mapping problem, where the goal was to map standard narrow-aperture photos to the same photos captured with a shallow depth of field by the Canon 70D DSLR camera. In this task, the participants had to restore the bokeh effect from a single frame, without any additional data from other cameras or sensors. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with the solutions' perceptual results measured in a user study. The proposed solutions significantly improved baseline results, defining the state of the art for practical bokeh effect simulation.
- Publication: NTIRE 2019 challenge on video deblurring: Methods and results (01-06-2019)
  Nah, Seungjun; Timofte, Radu; Baik, Sungyong; Hong, Seokil; Moon, Gyeongsik; Son, Sanghyun; Lee, Kyoung Mu; Wang, Xintao; Chan, Kelvin C.K.; Yu, Ke; Dong, Chao; Loy, Chen Change; Fan, Yuchen; Yu, Jiahui; Liu, Ding; Huang, Thomas S.; Sim, Hyeonjun; Kim, Munchurl; Park, Dongwon; Kim, Jisoo; Chun, Se Young; Haris, Muhammad; Shakhnarovich, Greg; Ukita, Norimichi; Zamir, Syed Waqas; Arora, Aditya; Khan, Salman; Khan, Fahad Shahbaz; Shao, Ling; Gupta, Rahul Kumar; Chudasama, Vishal; Patel, Heena; Upla, Kishor; Fan, Hongfei; Li, Guo; Zhang, Yumei; Li, Xiang; Zhang, Wenjie; He, Qingwen; Purohit, Kuldeep; Kim, Jeonghun; Tofighi, Mohammad; Guo, Tiantong; Monga, Vishal
  This paper reviews the first NTIRE challenge on video deblurring (restoration of rich details and high-frequency components from blurred video frames), with a focus on the proposed solutions and results. A new REalistic and Diverse Scenes dataset (REDS) was employed. The challenge was divided into two tracks: Track 1 employed dynamic motion blurs, while Track 2 had additional MPEG video compression artifacts. The tracks had 109 and 93 registered participants, respectively, and a total of 13 teams competed in the final testing phase, gauging the state of the art in video deblurring.
- Publication: AIM 2019 challenge on real-world image super-resolution: Methods and results (01-10-2019)
  Lugmayr, Andreas; Danelljan, Martin; Timofte, Radu; Fritsche, Manuel; Gu, Shuhang; Purohit, Kuldeep; Kandula, Praveen; Suin, Maitreya; Joon, Nam Hyung; Won, Yu Seung; Kim, Guisik; Kwon, Dokyeong; Hsu, Chih Chung; Lin, Chia Hsiang; Huang, Yuanfei; Sun, Xiaopeng; Lu, Wen; Li, Jie; Gao, Xinbo; Bell-Kligler, Sefi
  This paper reviews the AIM 2019 challenge on real-world super-resolution, focusing on the participating methods and final results. The challenge addresses the real-world setting, where paired true high- and low-resolution images are unavailable. For training, only one set of source input images is therefore provided in the challenge. In Track 1: Source Domain, the aim is to super-resolve such images while preserving the low-level image characteristics of the source input domain. In Track 2: Target Domain, a set of high-quality images is also provided for training, which defines the output domain and desired quality of the super-resolved images. To allow for quantitative evaluation, the source input images in both tracks are constructed using artificial, but realistic, image degradations. The challenge is the first of its kind, aiming to advance the state of the art and provide a standard benchmark for this newly emerging task. In total, 7 teams competed in the final testing phase, demonstrating new and innovative solutions to the problem.
- Publication: NTIRE 2019 challenge on image colorization: Report (01-06-2019)
  Nah, Seungjun; Timofte, Radu; Zhang, Richard; Suin, Maitreya; Purohit, Kuldeep; Athi Narayanan, S.; Pinjari, Jameer Babu; Xiong, Zhiwei; Shi, Zhan; Chen, Chang; Liu, Dong; Sharma, Manoj; Makwana, Megh; Badhwar, Anuj; Singh, Ajay Pratap; Upadhyay, Avinash; Trivedi, Akkshita; Saini, Anil; Chaudhury, Santanu; Sharma, Prasen Kumar; Jain, Priyankar; Sur, Arijit; Ozbulak, Gokhan
  This paper reviews the NTIRE challenge on image colorization (estimating color information from the corresponding gray image), with a focus on the proposed solutions and results. It is the first challenge of its kind. The challenge had two tracks. Track 1 takes a single gray image as input. In Track 2, in addition to the gray input image, some color seeds (randomly sampled from the latent color image) are also provided for guiding the colorization process. The colorization operators were learned from provided pairs of gray and color training images. The tracks had 188 registered participants, and 8 teams competed in the final testing phase.
- Publication: Splicing localization in motion blurred 3D scenes (03-08-2016)
  Purohit, Kuldeep
  We propose a passive forgery detection technique for locating spliced regions in motion blurred images of 3D scenes. We consider general camera motion in hand-held cameras and utilize discrepancies in local motion blur patterns as a cue for splicing detection. We first devise an automatic and computationally efficient scheme to estimate the camera motion using only the blur kernels from the authentic region. Next, we utilize the relationship among blur kernels, camera trajectory, and local depth to predict a set of authentic blur kernels for any depth map and directly flag spliced regions.
- Publication: NTIRE 2019 challenge on video super-resolution: Methods and results (01-06-2019)
  Nah, Seungjun; Timofte, Radu; Gu, Shuhang; Baik, Sungyong; Hong, Seokil; Moon, Gyeongsik; Son, Sanghyun; Lee, Kyoung Mu; Wang, Xintao; Chan, Kelvin C.K.; Yu, Ke; Dong, Chao; Loy, Chen Change; Fan, Yuchen; Yu, Jiahui; Liu, Ding; Huang, Thomas S.; Liu, Xiao; Li, Chao; He, Dongliang; Ding, Yukang; Wen, Shilei; Porikli, Fatih; Kalarot, Ratheesh; Haris, Muhammad; Shakhnarovich, Greg; Ukita, Norimichi; Yi, Peng; Wang, Zhongyuan; Jiang, Kui; Jiang, Junjun; Ma, Jiayi; Dong, Hang; Zhang, Xinyi; Hu, Zhe; Kim, Kwanyoung; Kang, Dong Un; Chun, Se Young; Purohit, Kuldeep; Tian, Yapeng; Zhang, Yulun; Fu, Yun; Xu, Chenliang; Tekalp, A. Murat; Yilmaz, M. Akin; Korkmaz, Cansu; Sharma, Manoj; Makwana, Megh; Badhwar, Anuj; Singh, Ajay Pratap; Upadhyay, Avinash; Mukhopadhyay, Rudrabha; Shukla, Ankit; Khanna, Dheeraj; Mandal, A. S.; Chaudhury, Santanu; Miao, Si; Zhu, Yongxin; Huo, Xiao
  This paper reviews the first NTIRE challenge on video super-resolution (restoration of rich details in low-resolution video frames), with a focus on the proposed solutions and results. A new REalistic and Diverse Scenes dataset (REDS) was employed. The challenge was divided into two tracks: Track 1 employed the standard bicubic downscaling setup, while Track 2 had realistic dynamic motion blurs. The tracks had 124 and 104 registered participants, respectively, and a total of 14 teams competed in the final testing phase, gauging the state of the art in video super-resolution.
- Publication: Learning based single image blur detection and segmentation (29-08-2018)
  Purohit, Kuldeep; Shah, Anshul B.
  This paper addresses the problem of obtaining a blur-based segmentation map from a single image affected by motion or defocus blur. Since traditional hand-designed priors have fundamental limitations, we utilise deep neural networks to learn features related to blur and enable pixel-level blur classification. Our approach mitigates the ambiguities present in the blur detection task by introducing joint learning of global context and local features into the framework. Specifically, we train two sub-networks to perform the task at the global (image) and local (patch) levels. We aggregate the pixel-level probabilities estimated by the two networks and feed them to an MRF-based framework, which returns a refined and dense segmentation map of the image with respect to blur. We also demonstrate, via both qualitative and quantitative evaluation, that our approach performs favorably against state-of-the-art blur detection and segmentation methods, and show its utility in applications such as automatic image matting and blur magnification.
- Publication: Mosaicing deep underwater imagery (18-12-2016)
  Purohit, Kuldeep; Vasu, Subeesh; Jyothi, V. Bala Naga; Raju, Ramesh
  Numerous sources of distortion render mosaicing of underwater (UW) images an immensely challenging task. Methods that can process conventional photographs (terrestrial/aerial) fail to deliver the desired results on UW images. Taking the sources of underwater degradation into account is central to ensuring quality performance. The work described in this paper specifically deals with the problem of mosaicing deep UW images captured by Remotely Operated Vehicles (ROVs). These images are mainly degraded by haze, color changes, and non-uniform illumination. We propose a framework that restores these images in accordance with a suitably derived degradation model. Furthermore, our scheme harnesses the scene geometry information present in each image to aid in constructing a mosaic that is free from artifacts such as local blurring, ghosting, double contouring, and visible seams. Several experiments on real underwater image sequences have been carried out to demonstrate the performance of our mosaicing pipeline, along with comparisons.
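The multilevel weighted enhancement idea in the underwater dehazing abstract above — a linear combination of detail layers plus the coarsest information — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the decomposition here uses naive box blurs at increasing kernel sizes (the paper does not specify its filters on this page), and the function names, kernel sizes, and weights are all hypothetical choices for demonstration.

```python
import numpy as np

def box_blur(img, k):
    """Naive separable-free box blur with edge padding; k must be odd."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def multilevel_weighted_enhance(img, ksizes=(3, 7, 15), weights=(0.9, 0.6, 0.3)):
    """Recombine the coarsest layer with weighted detail layers.

    details[0] = img - blur_3, details[i] = blur_prev - blur_cur, so with all
    weights equal to 1 the sum telescopes back to the original image.
    """
    img = img.astype(np.float64)
    blurred = [box_blur(img, k) for k in ksizes]
    details = [img - blurred[0]] + [
        blurred[i - 1] - blurred[i] for i in range(1, len(blurred))
    ]
    base = blurred[-1]  # coarsest information
    out = base + sum(w * d for w, d in zip(weights, details))
    return np.clip(out, 0.0, 255.0)
```

Depth-dependent weighting (as the paper suggests) would replace the scalar weights with per-pixel maps; the scalar version above only shows the recombination structure.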