A N Rajagopalan
Preferred name: A N Rajagopalan
Official name: A N Rajagopalan
Alternative names:
- Rajagopalan, Ambasamudram Narayanan
- Rajagopalan, Ambasamudram N.
- Rajagopalan, Rajagopalan A.N.
- Rajagopalan, A. N.
- Rajagopalan, Aswin
- Rajagopalan, A.
Publications (4 results)
- Publication: AIM 2019 challenge on bokeh effect synthesis: Methods and results (01-10-2019)
  Authors: Ignatov, Andrey; Patel, Jagruti; Timofte, Radu; Zheng, Bolun; Ye, Xin; Huang, Li; Tian, Xiang; Dutta, Saikat; Purohit, Kuldeep; Kandula, Praveen; Suin, Maitreya; Rajagopalan, A. N.; Xiong, Zhiwei; Huang, Jie; Dong, Guanting; Yao, Mingde; Liu, Dong; Yang, Wenjin; Hong, Ming; Lin, Wenying; Qu, Yanyun; Choi, Jae Seok; Park, Woonsung; Kim, Munchurl; Liu, Rui; Mao, Xiangyu; Yang, Chengxi; Yan, Qiong; Sun, Wenxiu; Fang, Junkai; Shang, Meimei; Gao, Fei; Ghosh, Sujoy; Sharma, Prasen Kumar; Sur, Arijit
  Abstract: This paper reviews the first AIM challenge on bokeh effect synthesis, with a focus on the proposed solutions and results. The participating teams solved a real-world image-to-image mapping problem, where the goal was to map standard narrow-aperture photos to the same photos captured with a shallow depth of field by a Canon 70D DSLR camera. Participants had to synthesize the bokeh effect from a single frame, without any additional data from other cameras or sensors. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with the solutions' perceptual results measured in a user study. The proposed solutions significantly improved on the baseline results, defining the state of the art for practical bokeh effect simulation.
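The fidelity half of the target metric above rests on PSNR and SSIM. A minimal sketch of how these scores are typically computed for an image-to-image task like this one, assuming scikit-image and placeholder file names (this is not the challenge's official evaluation script):

```python
# Illustrative only: computing PSNR and SSIM between a predicted bokeh
# image and its shallow depth-of-field ground truth. File names are
# placeholders; both images must have the same shape (H x W x 3, uint8).
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred = imread("predicted_bokeh.png")   # model output
target = imread("canon_70d_gt.png")    # DSLR ground truth

psnr = peak_signal_noise_ratio(target, pred, data_range=255)
ssim = structural_similarity(target, pred, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```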
- Publication: AIM 2019 challenge on real-world image super-resolution: Methods and results (01-10-2019)
  Authors: Lugmayr, Andreas; Danelljan, Martin; Timofte, Radu; Fritsche, Manuel; Gu, Shuhang; Purohit, Kuldeep; Kandula, Praveen; Suin, Maitreya; Rajagopalan, A. N.; Joon, Nam Hyung; Won, Yu Seung; Kim, Guisik; Kwon, Dokyeong; Hsu, Chih Chung; Lin, Chia Hsiang; Huang, Yuanfei; Sun, Xiaopeng; Lu, Wen; Li, Jie; Gao, Xinbo; Bell-Kligler, Sefi
  Abstract: This paper reviews the AIM 2019 challenge on real-world super-resolution, focusing on the participating methods and final results. The challenge addresses the real-world setting, where paired true high- and low-resolution images are unavailable; for training, only one set of source input images is therefore provided. In Track 1 (Source Domain), the aim is to super-resolve such images while preserving the low-level image characteristics of the source input domain. In Track 2 (Target Domain), a set of high-quality images is also provided for training, defining the output domain and the desired quality of the super-resolved images. To allow quantitative evaluation, the source input images in both tracks are constructed using artificial, but realistic, image degradations. The challenge is the first of its kind, aiming to advance the state of the art and provide a standard benchmark for this newly emerging task. In total, 7 teams competed in the final testing phase, demonstrating new and innovative solutions to the problem.
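The abstract notes that source inputs are built with artificial but realistic degradations. The actual degradation operator used by the organizers is not given here, so the following is only a generic sketch of such a pipeline (blur, then decimation, then additive noise), with every parameter an assumption for illustration:

```python
# A generic degradation sketch: clean image -> realistic low-quality input.
# Kernel width, scale factor, and noise level are illustrative assumptions,
# not the challenge's actual (undisclosed) degradation.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(img: np.ndarray, scale: int = 4, blur_sigma: float = 1.2,
            noise_sigma: float = 5.0) -> np.ndarray:
    """img: float32 array in [0, 255], shape (H, W, 3)."""
    blurred = gaussian_filter(img, sigma=(blur_sigma, blur_sigma, 0))
    lowres = blurred[::scale, ::scale]  # naive decimation after anti-alias blur
    noisy = lowres + np.random.normal(0.0, noise_sigma, lowres.shape)
    return np.clip(noisy, 0.0, 255.0).astype(np.float32)
```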
- Publication: PIRM challenge on perceptual image enhancement on smartphones: Report (01-01-2019)
  Authors: Ignatov, Andrey; Timofte, Radu; Van Vu, Thang; Luu, Tung Minh; Pham, Trung X.; Van Nguyen, Cao; Kim, Yongwoo; Choi, Jae Seok; Kim, Munchurl; Huang, Jie; Ran, Jiewen; Xing, Chen; Zhou, Xingguang; Zhu, Pengfei; Geng, Mingrui; Li, Yawei; Agustsson, Eirikur; Gu, Shuhang; Van Gool, Luc; de Stoutz, Etienne; Kobyshev, Nikolay; Nie, Kehui; Zhao, Yan; Li, Gen; Tong, Tong; Gao, Qinquan; Hanwen, Liu; Michelini, Pablo Navarrete; Dan, Zhu; Fengshuo, Hu; Hui, Zheng; Wang, Xiumei; Deng, Lirui; Meng, Rang; Qin, Jinghui; Shi, Yukai; Wen, Wushao; Lin, Liang; Feng, Ruicheng; Wu, Shixiang; Dong, Chao; Qiao, Yu; Vasu, Subeesh; Thekke Madam, Nimisha; Kandula, Praveen; Rajagopalan, A. N.; Liu, Jie; Jung, Cheolkon
  Abstract: This paper reviews the first challenge on efficient perceptual image enhancement, with a focus on deploying deep learning models on smartphones. The challenge consisted of two tracks. In the first, participants solved the classical image super-resolution problem with a bicubic downscaling factor of 4. The second track was aimed at real-world photo enhancement: the goal was to map low-quality photos from the iPhone 3GS to the same photos captured with a DSLR camera. The target metric used in this challenge combined the runtime, PSNR scores, and the solutions’ perceptual results measured in a user study. To ensure the efficiency of the submitted models, we additionally measured their runtime and memory requirements on Android smartphones. The proposed solutions significantly improved on the baseline results, defining the state of the art for image enhancement on smartphones.
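What distinguishes this metric from the previous entries is that runtime enters the score alongside fidelity and perceptual quality. The organizers' exact formula is not reproduced in the abstract, so the weighted combination below is purely hypothetical; it shows only the shape of such a fidelity/perception/speed trade-off:

```python
# Hypothetical scoring sketch: every coefficient and the functional form
# are made up for illustration; only the idea of trading off PSNR,
# perceptual quality, and runtime comes from the report's abstract.
def combined_score(psnr: float, mos: float, runtime_ms: float,
                   w_fidelity: float = 1.0, w_perceptual: float = 1.0,
                   w_speed: float = 1.0) -> float:
    speed_bonus = 1000.0 / max(runtime_ms, 1.0)  # faster models score higher
    return w_fidelity * psnr + w_perceptual * mos + w_speed * speed_bonus
```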
- Publication: Scale-recurrent multi-residual dense network for image super-resolution (01-01-2019)
  Authors: Purohit, Kuldeep; Mandal, Srimanta; Rajagopalan, A. N.
  Abstract: Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in image super-resolution (SR) performance. The boost can be attributed to the residual and dense connections within the intermediate layers of these networks; an efficient combination of such connections can drastically reduce the number of parameters while maintaining restoration quality. In this paper, we propose a scale-recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks, RDBs), which allow the extraction of abundant local features from the image. Our scale-recurrent design delivers competitive performance at higher scale factors while being more parameter-efficient than current state-of-the-art approaches. To further improve performance, we employ multiple residual connections in the intermediate layers (referred to as Multi-Residual Dense Blocks), which improves gradient propagation through the existing layers. Recent works have found that conventional loss functions can guide a network to produce results with high PSNR that are nonetheless perceptually inferior. We mitigate this issue by training our network with a Generative Adversarial Network (GAN) based framework and deep-feature (VGG) losses. We experimentally demonstrate that different weighted combinations of the VGG loss and the adversarial loss enable our network's outputs to traverse the perception-distortion curve. The proposed networks perform favorably against existing methods, both perceptually and objectively (in PSNR), with fewer parameters.
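A minimal PyTorch sketch of a Residual Dense Block of the kind this abstract describes: densely connected convolutions inside a residual block, with a 1x1 local feature fusion. The channel width, growth rate, and layer count here are assumptions for illustration, not the paper's exact configuration:

```python
# Sketch of a Residual Dense Block (RDB): each conv layer receives the
# block input concatenated with all previous layer outputs (dense
# connectivity); a 1x1 conv fuses the features, and a residual skip
# adds the block input back. Hyperparameters are assumed, not the paper's.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels: int = 64, growth: int = 32, layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(layers)
        )
        # 1x1 conv fuses all concatenated features back to `channels`
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            features.append(self.act(conv(torch.cat(features, dim=1))))
        return x + self.fuse(torch.cat(features, dim=1))  # local residual
```

Per the abstract, training such a network with a weighted sum of the VGG feature loss and the adversarial loss, and sweeping that weight, is what moves the outputs along the perception-distortion curve.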