A N Rajagopalan
Preferred name: A N Rajagopalan
Official name: A N Rajagopalan
Alternative names:
Rajagopalan, Ambasamudram Narayanan
Rajagopalan, Ambasamudram N.
Rajagopalan, Rajagopalan A.N.
Rajagopalan, A. N.
Rajagopalan, Aswin
Rajagopalan, A.
Publications: 6 results
- Publication: Mixed-dense connection networks for image and video super-resolution (20-07-2020)
  Co-authors: Purohit, Kuldeep; Mandal, Srimanta
  Abstract: Efficiency of gradient propagation in the intermediate layers of convolutional neural networks is of key importance for the super-resolution task. To this end, we propose a deep architecture for single image super-resolution (SISR), built from efficient convolutional units we refer to as mixed-dense connection blocks (MDCB). The design of MDCB combines the strengths of both residual and dense connection strategies while overcoming their limitations. To enable super-resolution for multiple factors, we propose a scale-recurrent framework which reuses the filters learnt for lower scale factors recursively for higher factors. This leads to improved performance and promotes parametric efficiency for higher factors. We train two versions of our network to enhance complementary image qualities using different loss configurations. We further employ our network for the video super-resolution task, where it learns to aggregate information from multiple frames and maintain spatio-temporal consistency. The proposed networks deliver qualitative and quantitative improvements over state-of-the-art techniques on image and video super-resolution benchmarks.
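As a rough illustration of the mixed residual-and-dense connection idea described in the abstract above, here is a minimal PyTorch sketch of a convolutional block that combines dense feature concatenation with a residual skip connection. The class name, layer count, and channel widths are assumptions made purely for illustration; this is not the authors' MDCB implementation.

```python
# Minimal sketch of a block mixing dense and residual connections,
# in the spirit of the mixed-dense connection blocks (MDCB) described above.
# Layer count, channel widths, and the class name are illustrative assumptions.
import torch
import torch.nn as nn

class MixedDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth  # dense connections: each layer sees all previous features
        # 1x1 convolution fuses the densely concatenated features back to `channels`
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # residual connection: fused dense features are added back to the input
        return x + self.fuse(torch.cat(features, dim=1))

if __name__ == "__main__":
    block = MixedDenseBlock()
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```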
- Publication: AIM 2019 challenge on bokeh effect synthesis: Methods and results (01-10-2019)
  Co-authors: Ignatov, Andrey; Patel, Jagruti; Timofte, Radu; Zheng, Bolun; Ye, Xin; Huang, Li; Tian, Xiang; Dutta, Saikat; Purohit, Kuldeep; Kandula, Praveen; Suin, Maitreya; Xiong, Zhiwei; Huang, Jie; Dong, Guanting; Yao, Mingde; Liu, Dong; Yang, Wenjin; Hong, Ming; Lin, Wenying; Qu, Yanyun; Choi, Jae Seok; Park, Woonsung; Kim, Munchurl; Liu, Rui; Mao, Xiangyu; Yang, Chengxi; Yan, Qiong; Sun, Wenxiu; Fang, Junkai; Shang, Meimei; Gao, Fei; Ghosh, Sujoy; Sharma, Prasen Kumar; Sur, Arijit
  Abstract: This paper reviews the first AIM challenge on bokeh effect synthesis, with a focus on the proposed solutions and results. The participating teams solved a real-world image-to-image mapping problem, where the goal was to map standard narrow-aperture photos to the same photos captured with a shallow depth-of-field by a Canon 70D DSLR camera. In this task, the participants had to restore the bokeh effect from a single frame, without any additional data from other cameras or sensors. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with the solutions' perceptual results measured in a user study. The proposed solutions significantly improved upon the baseline results, defining the state of the art for practical bokeh effect simulation.
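The fidelity part of the challenge metric mentioned above can be illustrated with a short NumPy computation of PSNR; the full challenge score also used SSIM and a user study, which are not reproduced here, and the 8-bit peak value below is an assumption.

```python
# Illustrative PSNR computation, the pixel-fidelity part of the metric
# described above. A peak value of 255 assumes 8-bit images; this is the
# generic formula, not the challenge's scoring code.
import numpy as np

def psnr(reference, estimate, peak=255.0):
    reference = reference.astype(np.float64)
    estimate = estimate.astype(np.float64)
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    noisy = np.clip(gt + rng.normal(0, 5, gt.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(gt, noisy):.2f} dB")
```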
- Publication: AIM 2019 challenge on real-world image super-resolution: Methods and results (01-10-2019)
  Co-authors: Lugmayr, Andreas; Danelljan, Martin; Timofte, Radu; Fritsche, Manuel; Gu, Shuhang; Purohit, Kuldeep; Kandula, Praveen; Suin, Maitreya; Joon, Nam Hyung; Won, Yu Seung; Kim, Guisik; Kwon, Dokyeong; Hsu, Chih Chung; Lin, Chia Hsiang; Huang, Yuanfei; Sun, Xiaopeng; Lu, Wen; Li, Jie; Gao, Xinbo; Bell-Kligler, Sefi
  Abstract: This paper reviews the AIM 2019 challenge on real-world super-resolution, focusing on the participating methods and final results. The challenge addresses the real-world setting, where paired true high- and low-resolution images are unavailable. For training, only one set of source input images is therefore provided. In Track 1: Source Domain, the aim is to super-resolve such images while preserving the low-level image characteristics of the source input domain. In Track 2: Target Domain, a set of high-quality images is also provided for training, which defines the output domain and the desired quality of the super-resolved images. To allow for quantitative evaluation, the source input images in both tracks are constructed using artificial, but realistic, image degradations. The challenge is the first of its kind and aims to advance the state of the art and provide a standard benchmark for this newly emerging task. In total, 7 teams competed in the final testing phase, demonstrating new and innovative solutions to the problem.
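The abstract notes that the source input images were constructed using artificial but realistic degradations. The sketch below shows, purely as an assumed example, one way such a degraded low-resolution input could be synthesized from a clean image (slight blur, bicubic downscaling, additive noise); it is not the challenge organisers' actual degradation pipeline, and the kernel and noise parameters are arbitrary.

```python
# Rough sketch of synthesizing a degraded low-resolution training input
# from a clean image. The degradation choices and parameters below are
# illustrative assumptions, not the challenge's pipeline.
import numpy as np
from PIL import Image, ImageFilter

def degrade(img: Image.Image, scale: int = 4, noise_sigma: float = 8.0) -> Image.Image:
    # slight blur followed by bicubic downscaling
    blurred = img.filter(ImageFilter.GaussianBlur(radius=1.0))
    low_res = blurred.resize((img.width // scale, img.height // scale), Image.BICUBIC)
    # additive Gaussian noise as a crude stand-in for sensor noise
    arr = np.asarray(low_res).astype(np.float32)
    arr += np.random.normal(0.0, noise_sigma, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

if __name__ == "__main__":
    clean = Image.new("RGB", (256, 256), color=(120, 180, 90))
    degraded = degrade(clean)
    print(degraded.size)  # (64, 64)
```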
- Publication: PIRM challenge on perceptual image enhancement on smartphones: Report (01-01-2019)
  Co-authors: Ignatov, Andrey; Timofte, Radu; Van Vu, Thang; Luu, Tung Minh; Pham, Trung X.; Van Nguyen, Cao; Kim, Yongwoo; Choi, Jae Seok; Kim, Munchurl; Huang, Jie; Ran, Jiewen; Xing, Chen; Zhou, Xingguang; Zhu, Pengfei; Geng, Mingrui; Li, Yawei; Agustsson, Eirikur; Gu, Shuhang; Van Gool, Luc; de Stoutz, Etienne; Kobyshev, Nikolay; Nie, Kehui; Zhao, Yan; Li, Gen; Tong, Tong; Gao, Qinquan; Hanwen, Liu; Michelini, Pablo Navarrete; Dan, Zhu; Fengshuo, Hu; Hui, Zheng; Wang, Xiumei; Deng, Lirui; Meng, Rang; Qin, Jinghui; Shi, Yukai; Wen, Wushao; Lin, Liang; Feng, Ruicheng; Wu, Shixiang; Dong, Chao; Qiao, Yu; Vasu, Subeesh; Thekke Madam, Nimisha; Kandula, Praveen; Liu, Jie; Jung, Cheolkon
  Abstract: This paper reviews the first challenge on efficient perceptual image enhancement, with a focus on deploying deep learning models on smartphones. The challenge consisted of two tracks. In the first one, participants solved the classical image super-resolution problem with a bicubic downscaling factor of 4. The second track was aimed at real-world photo enhancement, and the goal was to map low-quality photos from the iPhone 3GS device to the same photos captured with a DSLR camera. The target metric used in this challenge combined the runtime, PSNR scores and the solutions' perceptual results measured in a user study. To ensure the efficiency of the submitted models, we additionally measured their runtime and memory requirements on Android smartphones. The proposed solutions significantly improved the baseline results, defining the state of the art for image enhancement on smartphones.
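Since the challenge score above incorporates runtime, a simple way to estimate a model's average forward-pass latency is sketched below. This generic CPU timing loop is only an illustration; it is not the challenge's Android-based measurement procedure, and the toy model and input size are arbitrary assumptions.

```python
# Minimal sketch of timing a model's forward pass, in the spirit of the
# runtime measurements mentioned above. Generic CPU timing only.
import time
import torch
import torch.nn as nn

def average_runtime(model: nn.Module, inp: torch.Tensor, runs: int = 20) -> float:
    model.eval()
    with torch.no_grad():
        model(inp)  # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            model(inp)
        return (time.perf_counter() - start) / runs

if __name__ == "__main__":
    toy = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
    secs = average_runtime(toy, torch.randn(1, 3, 128, 128))
    print(f"average forward pass: {secs * 1000:.1f} ms")
```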
- Publication: Scale-recurrent multi-residual dense network for image super-resolution (01-01-2019)
  Co-authors: Purohit, Kuldeep; Mandal, Srimanta
  Abstract: Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR). The boost in performance can be attributed to the presence of residual or dense connections within the intermediate layers of these networks. The efficient combination of such connections can reduce the number of parameters drastically while maintaining the restoration quality. In this paper, we propose a scale-recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks, RDBs) that allow extraction of abundant local features from the image. Our scale-recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient than current state-of-the-art approaches. To further improve the performance of our network, we employ multiple residual connections in the intermediate layers (referred to as Multi-Residual Dense Blocks), which improves gradient propagation through these layers. Recent works have found that conventional loss functions can guide a network to produce results which have high PSNR but are perceptually inferior. We mitigate this issue by utilizing a Generative Adversarial Network (GAN) based framework and deep feature (VGG) losses to train our network. We experimentally demonstrate that different weighted combinations of the VGG loss and the adversarial loss enable our network outputs to traverse the perception-distortion curve. The proposed networks perform favorably against existing methods, both perceptually and objectively (in terms of PSNR), with fewer parameters.
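As a hedged illustration of the weighted VGG-plus-adversarial training objective described above, the following PyTorch sketch combines a deep-feature loss with an adversarial term; the feature extractor, discriminator, and weights here are placeholders, not the paper's training configuration.

```python
# Illustrative weighted combination of a deep-feature (VGG-style) loss and an
# adversarial loss, as described in the abstract above. Different
# (w_feat, w_adv) settings trade off distortion against perceptual quality.
import torch
import torch.nn as nn
import torch.nn.functional as F

def combined_loss(sr, hr, feature_extractor, discriminator,
                  w_feat=1.0, w_adv=1e-3):
    # deep-feature ("perceptual") loss between super-resolved and ground-truth images
    feat_loss = F.l1_loss(feature_extractor(sr), feature_extractor(hr))
    # adversarial term: the generator is rewarded when the discriminator
    # classifies its output as real
    logits = discriminator(sr)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return w_feat * feat_loss + w_adv * adv_loss

if __name__ == "__main__":
    sr, hr = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
    feat = nn.Conv2d(3, 8, 3, padding=1)                            # stand-in for a VGG feature extractor
    disc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))   # toy discriminator
    print(combined_loss(sr, hr, feat, disc).item())
```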
- Publication: Deep Dynamic Scene Deblurring for Unconstrained Dual-lens Cameras (01-01-2021)
  Co-authors: R., Mahesh Mohan M.; K., Nithin G.
  Abstract: Dual-lens (DL) cameras capture depth information and hence enable several important vision applications. Most present-day DL cameras employ unconstrained settings in the two views in order to support extended functionalities. A natural hindrance to their working, however, is the ubiquitous motion blur encountered due to camera motion, object motion, or both. Yet no prior work addresses this problem (so-called dynamic scene deblurring) for unconstrained DL cameras. Due to the unconstrained settings, the degradations in the two views need not be the same; consequently, naive deblurring approaches produce inconsistent left-right views and disrupt scene-consistent disparities. In this paper, we address this problem using deep learning and make three important contributions. First, we address the root cause of view inconsistency in standard deblurring architectures using a Coherent Fusion Module. Second, we address an inherent problem in unconstrained DL deblurring that disrupts scene-consistent disparities by introducing a memory-efficient Adaptive Scale-space Approach. This signal-processing formulation allows different image scales to be accommodated in the same network without increasing the number of parameters. Finally, we propose a module to address the space-variant and image-dependent nature of dynamic scene blur. We experimentally show that our proposed techniques have substantial practical merit.
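To give a flavour of cross-view feature fusion for keeping the two deblurred views consistent, here is a very rough PyTorch sketch using an assumed concatenate-and-convolve scheme with residual updates; it does not reproduce the paper's Coherent Fusion Module or its Adaptive Scale-space Approach.

```python
# Rough sketch of fusing features from the two views of a dual-lens pair so
# that per-view predictions can stay consistent. The concatenate-and-convolve
# scheme, channel count, and class name are illustrative assumptions.
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # each view's update is conditioned on the concatenated pair of feature maps
        self.fuse_left = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.fuse_right = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feat_left, feat_right):
        pair = torch.cat([feat_left, feat_right], dim=1)
        # residual updates keep each view's own content while injecting the other view
        return feat_left + self.fuse_left(pair), feat_right + self.fuse_right(pair)

if __name__ == "__main__":
    fusion = CrossViewFusion()
    l, r = torch.randn(1, 64, 48, 48), torch.randn(1, 64, 48, 48)
    out_l, out_r = fusion(l, r)
    print(out_l.shape, out_r.shape)
```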