A N Rajagopalan
Preferred name
A N Rajagopalan
Official Name
A N Rajagopalan
Alternative Names
Rajagopalan, Ambasamudram Narayanan
Rajagopalan, Ambasamudram N.
Rajagopalan, Rajagopalan A.N.
Rajagopalan, A. N.
Rajagopalan, Aswin
Rajagopalan, A.
Publications (showing 1-10 of 20 results)
- Publication: NTIRE 2021 depth guided image relighting challenge (01-06-2021)
  Authors: El Helou, Majed; Zhou, Ruofan; Susstrunk, Sabine; Timofte, Radu; Suin, Maitreya; Rajagopalan, A. N.; Wang, Yuanzhi; Lu, Tao; Zhang, Yanduo; Wu, Yuntao; Yang, Hao Hsiang; Chen, Wei Ting; Kuo, Sy Yen; Luo, Hao Lun; Zhang, Zhiguang; Luo, Zhipeng; He, Jianye; Zhu, Zuo Liang; Li, Zhen; Qiu, Jia Xiong; Kuang, Zeng Sheng; Lu, Cheng Ze; Cheng, Ming Ming; Shao, Xiu Li; Li, Chenghua; Ding, Bosong; Qian, Wanli; Li, Fangya; Li, Fu; Deng, Ruifeng; Lin, Tianwei; Liu, Songhua; Li, Xin; He, Dongliang; Yazdani, Amirsaeed; Guo, Tiantong; Monga, Vishal; Nsampi, Ntumba Elie; Hu, Zhongyun; Wang, Qing; Nathan, Sabari; Kansal, Priya; Zhao, Tongtong; Zhao, Shanshan
  Abstract: Image relighting is attracting increasing interest due to its various applications. From a research perspective, image relighting can be exploited for both image normalization for domain adaptation and data augmentation. It also has multiple direct uses for photo montage and aesthetic enhancement. In this paper, we review the NTIRE 2021 depth guided image relighting challenge. We rely on the VIDIT dataset, including depth information, for each of our two challenge tracks. The first track is one-to-one relighting, where the goal is to transform the illumination setup of an input image (color temperature and light source position) to a target illumination setup. In the second track, the any-to-any relighting challenge, the objective is to transform the illumination settings of the input image to match those of another guide image, similar to style transfer. In both tracks, participants were given depth information about the captured scenes. We had nearly 250 registered participants, leading to 18 confirmed team submissions in the final competition stage. The competitions, methods, and final results are presented in this paper.
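Both tracks supply a depth map alongside the RGB input. As a rough illustration of how a relighting model might consume such depth-guided input, the sketch below simply concatenates the two along the channel axis; the `RelightNet` module, its layer sizes, and the fusion scheme are our own assumptions, not any challenge entry's architecture.

```python
import torch
import torch.nn as nn

class RelightNet(nn.Module):
    """Toy depth-guided relighting model: RGB (3ch) + depth (1ch) in, relit RGB out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, rgb, depth):
        # Depth is available as guidance in both challenge tracks;
        # the simplest fusion is channel-wise concatenation.
        return self.net(torch.cat([rgb, depth], dim=1))

model = RelightNet()
rgb = torch.rand(1, 3, 256, 256)    # image under the input illumination setup
depth = torch.rand(1, 1, 256, 256)  # scene depth map
relit = model(rgb, depth)           # prediction under the target illumination
```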
- Publication: Gated Spatio-Temporal Attention-Guided Video Deblurring (01-01-2021)
  Authors: Suin, Maitreya; Rajagopalan, A. N.
  Abstract: Video deblurring remains a challenging task due to the complexity of spatially and temporally varying blur. Most existing works depend on implicit or explicit alignment for temporal information fusion, which either increases the computational cost or results in suboptimal performance due to misalignment. In this work, we investigate two key factors responsible for deblurring quality: how to fuse spatio-temporal information and from where to collect it. We propose a factorized gated spatio-temporal attention module that performs non-local operations across space and time to fully utilize the available information without depending on alignment. First, we perform spatial aggregation, followed by a temporal aggregation step. Next, we adaptively distribute the global spatio-temporal information to each pixel. The module shows superior performance compared to existing non-local fusion techniques while being considerably more efficient. To complement the attention module, we propose a reinforcement learning-based framework for selecting keyframes from the neighborhood with the most complementary and useful information. Moreover, our adaptive approach can increase or decrease the frame usage at inference time, depending on the user's need. Extensive experiments on multiple datasets demonstrate the superiority of our method.
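A minimal sketch of the factorization described above: spatial aggregation within each frame, temporal aggregation across frames, then a gated redistribution of the pooled context to every pixel. This is an illustrative reconstruction under stated assumptions, not the paper's released module; all names and layer choices are hypothetical.

```python
import torch
import torch.nn as nn

class FactorizedSTAttention(nn.Module):
    """Illustrative factorized spatio-temporal aggregation: spatial attention
    per frame, then attention across frames, then gated per-pixel fusion."""
    def __init__(self, channels):
        super().__init__()
        self.spatial_score = nn.Conv2d(channels, 1, 1)   # per-pixel score
        self.temporal_score = nn.Linear(channels, 1)     # per-frame score
        self.gate = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feats):                # feats: (B, T, C, H, W)
        B, T, C, H, W = feats.shape
        x = feats.view(B * T, C, H, W)
        # 1) spatial aggregation: attention-weighted pooling inside each frame
        w = torch.softmax(self.spatial_score(x).view(B * T, -1), dim=-1)
        frame_ctx = (x.view(B * T, C, -1) * w.unsqueeze(1)).sum(-1)
        frame_ctx = frame_ctx.view(B, T, C)
        # 2) temporal aggregation: attention-weighted pooling across frames
        tw = torch.softmax(self.temporal_score(frame_ctx), dim=1)  # (B, T, 1)
        global_ctx = (frame_ctx * tw).sum(1)                       # (B, C)
        # 3) gated distribution of the global context to every pixel
        ctx_map = global_ctx.view(B, C, 1, 1).expand(B, C, H, W)
        center = feats[:, T // 2]                                  # reference frame
        g = torch.sigmoid(self.gate(torch.cat([center, ctx_map], 1)))
        return center + g * ctx_map
```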
- Publication: Exploring the Effectiveness of Mask-Guided Feature Modulation as a Mechanism for Localized Style Editing of Real Images (27-06-2023)
  Authors: Tomar, Snehal Singh; Suin, Maitreya; Rajagopalan, A. N.
  Abstract: The success of Deep Generative Models at high-resolution image generation has led to their extensive utilization for style editing of real images. Most existing methods work on the principle of inverting real images onto their latent space, followed by determining controllable directions. Both the inversion of real images and the determination of controllable latent directions are computationally expensive operations. Moreover, determining controllable latent directions requires additional human supervision. This work explores the efficacy of mask-guided feature modulation in the latent space of a Deep Generative Model as a solution to these bottlenecks. To this end, we present the SemanticStyle Autoencoder (SSAE), a deep Generative Autoencoder model that leverages semantic mask-guided latent space manipulation for highly localized photorealistic style editing of real images. We present qualitative and quantitative results along with their analysis. This work shall serve as a guiding primer for future work.
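One plausible form of the semantic mask-guided latent modulation the abstract describes is an affine edit applied only where the mask is active, leaving the rest of the latent untouched. The function below, its signature, and the nearest-neighbor mask resizing are illustrative assumptions, not the SSAE implementation.

```python
import torch.nn.functional as F

def mask_guided_modulation(latent, mask, gamma, beta):
    """Illustrative localized style edit: affine-modulate latent features
    only where the (resized) semantic mask is active.
    latent: (B, C, H, W); mask: (B, 1, H0, W0); gamma, beta: (B, C, 1, 1)."""
    m = F.interpolate(mask, size=latent.shape[-2:], mode="nearest")
    edited = gamma * latent + beta          # style-modulated features
    return m * edited + (1 - m) * latent    # original features outside the mask
```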
- Publication: Distillation-guided Image Inpainting (01-01-2021)
  Authors: Suin, Maitreya; Purohit, Kuldeep; Rajagopalan, A. N.
  Abstract: Image inpainting methods have recently shown significant improvements by using deep neural networks. However, many of these techniques often create distorted structures or blurry, inconsistent textures. The problem is rooted in the encoder layers' ineffectiveness in building a complete and faithful embedding of the missing regions from scratch. Existing solutions such as coarse-to-fine processing, progressive refinement, and structural guidance suffer from huge computational overheads owing to multiple generator networks, the limited ability of handcrafted features, and sub-optimal utilization of the information present in the ground truth. We propose a distillation-based approach for inpainting, where we provide direct feature-level supervision while training. We deploy cross- and self-distillation techniques and design a dedicated completion block in the encoder to produce a more accurate encoding of the holes. Next, we demonstrate how an inpainting network's attention module can be improved by leveraging a distillation-based attention transfer technique, and we further enhance coherence using a pixel-adaptive global-local feature fusion. We conduct extensive evaluations on multiple datasets to validate our method. Along with achieving significant improvements over previous SOTA methods, the proposed approach's effectiveness is also demonstrated through its ability to improve existing inpainting works.
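The direct feature-level supervision mentioned above can be pictured as matching the encoder features of the masked input against features computed from the complete ground-truth image. A minimal sketch under that assumption follows; the function name and the choice of an L1 penalty are ours, not the paper's exact formulation.

```python
import torch.nn.functional as F

def feature_distillation_loss(student_feats, teacher_feats):
    """Illustrative feature-level supervision for inpainting: the student
    encoder sees the masked image, the teacher sees the complete
    ground-truth image; both lists hold (B, C, H, W) tensors from
    corresponding layers."""
    loss = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        loss = loss + F.l1_loss(fs, ft.detach())  # teacher is not updated
    return loss
```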
- Publication: AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results (01-01-2020)
  Authors: Zhang, Kai; Danelljan, Martin; Li, Yawei; Timofte, Radu; Liu, Jie; Tang, Jie; Wu, Gangshan; Zhu, Yu; He, Xiangyu; Xu, Wenjie; Li, Chenghua; Leng, Cong; Cheng, Jian; Wu, Guangyang; Wang, Wenyi; Liu, Xiaohong; Zhao, Hengyuan; Kong, Xiangtao; He, Jingwen; Qiao, Yu; Dong, Chao; Luo, Xiaotong; Chen, Liang; Zhang, Jiangtao; Suin, Maitreya; Purohit, Kuldeep; Rajagopalan, A. N.; Li, Xiaochuan; Lang, Zhiqiang; Nie, Jiangtao; Wei, Wei; Zhang, Lei; Muqeet, Abdul; Hwang, Jiwon; Yang, Subin; Kang, Jung Heum; Bae, Sung Ho; Kim, Yongwoo; Qu, Yanyun; Jeon, Geun Woo; Choi, Jun Ho; Kim, Jun Hyuk; Lee, Jong Seok; Marty, Steven; Marty, Eric; Xiong, Dongliang; Chen, Siang; Zha, Lin; Jiang, Jiande; Gao, Xinbo; Lu, Wen; Wang, Haicheng; Bhaskara, Vineeth; Levinshtein, Alex; Tsogkas, Stavros; Jepson, Allan; Kong, Xiangzhen; Zhao, Tongtong; Zhao, Shanshan; Hrishikesh, P. S.; Puthussery, Densen; Jiji, C. V.; Nan, Nan; Liu, Shuai; Cai, Jie; Meng, Zibo; Ding, Jiaming; Ho, Chiu Man; Wang, Xuehui; Yan, Qiong; Zhao, Yuzhi; Chen, Long; Sun, Long; Wang, Wenhao; Liu, Zhenbing; Lan, Rushi; Umer, Rao Muhammad; Micheloni, Christian
  Abstract: This paper reviews the AIM 2020 challenge on efficient single-image super-resolution with a focus on the proposed solutions and results. The challenge task was to super-resolve an input image with a magnification factor of ×4 based on a set of prior examples of low- and corresponding high-resolution images. The goal was to devise a network that reduces one or several aspects such as runtime, parameter count, FLOPs, activations, and memory consumption while at least maintaining the PSNR of MSRResNet. The track had 150 registered participants, and 25 teams submitted final results, which gauge the state of the art in efficient single-image super-resolution.
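The challenge scores entries on efficiency aspects (runtime, parameters, FLOPs, and so on) subject to maintaining PSNR. A minimal sketch of two of those measurements is below; the function names are ours, and the official scoring scripts may differ.

```python
import torch

def count_parameters(model):
    """Parameter count, one of the efficiency aspects scored in the challenge."""
    return sum(p.numel() for p in model.parameters())

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between a super-resolved image and ground truth."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```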
- Publication: AIM 2019 challenge on bokeh effect synthesis: Methods and results (01-10-2019)
  Authors: Ignatov, Andrey; Patel, Jagruti; Timofte, Radu; Zheng, Bolun; Ye, Xin; Huang, Li; Tian, Xiang; Dutta, Saikat; Purohit, Kuldeep; Kandula, Praveen; Suin, Maitreya; Rajagopalan, A. N.; Xiong, Zhiwei; Huang, Jie; Dong, Guanting; Yao, Mingde; Liu, Dong; Yang, Wenjin; Hong, Ming; Lin, Wenying; Qu, Yanyun; Choi, Jae Seok; Park, Woonsung; Kim, Munchurl; Liu, Rui; Mao, Xiangyu; Yang, Chengxi; Yan, Qiong; Sun, Wenxiu; Fang, Junkai; Shang, Meimei; Gao, Fei; Ghosh, Sujoy; Sharma, Prasen Kumar; Sur, Arijit
  Abstract: This paper reviews the first AIM challenge on bokeh effect synthesis with a focus on the proposed solutions and results. The participating teams were solving a real-world image-to-image mapping problem, where the goal was to map standard narrow-aperture photos to the same photos captured with a shallow depth of field by the Canon 70D DSLR camera. In this task, the participants had to synthesize the bokeh effect from only a single frame, without any additional data from other cameras or sensors. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with the solutions' perceptual results measured in a user study. The proposed solutions significantly improved on baseline results, defining the state of the art for practical bokeh effect simulation.
- Publication: Spatially-attentive patch-hierarchical network for adaptive motion deblurring (01-01-2020)
  Authors: Suin, Maitreya; Purohit, Kuldeep; Rajagopalan, A. N.
  Abstract: This paper tackles the problem of motion deblurring of dynamic scenes. Although end-to-end fully convolutional designs have recently advanced the state of the art in non-uniform motion deblurring, their performance-complexity trade-off is still sub-optimal. Existing approaches achieve a large receptive field by increasing the number of generic convolution layers and the kernel size, but this comes at the expense of model size and inference speed. In this work, we propose an efficient pixel-adaptive and feature-attentive design for handling large blur variations across different spatial locations, processing each test image adaptively. We also propose an effective content-aware global-local filtering module that significantly improves performance by considering not only global dependencies but also dynamically exploiting neighboring pixel information. We use a patch-hierarchical attentive architecture composed of the above modules that implicitly discovers the spatial variations in the blur present in the input image and, in turn, performs local and global modulation of intermediate features. Extensive qualitative and quantitative comparisons with prior art on deblurring benchmarks demonstrate that our design offers significant improvements over the state of the art in accuracy as well as speed.
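To picture what pixel-adaptive filtering means here, the sketch below predicts a separate k x k kernel at every spatial location and applies it to the local neighborhood, so the effective filter varies with the local blur instead of being shared globally. The module, its names, and its sizes are our own illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAdaptiveFilter(nn.Module):
    """Illustrative pixel-adaptive filtering: a small head predicts one
    k*k kernel per spatial location, applied to the local patch."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        self.kernel_head = nn.Conv2d(channels, k * k, 3, padding=1)

    def forward(self, x):                     # x: (B, C, H, W)
        B, C, H, W = x.shape
        k = self.k
        kernels = torch.softmax(self.kernel_head(x), dim=1)  # (B, k*k, H, W)
        patches = F.unfold(x, k, padding=k // 2)             # (B, C*k*k, H*W)
        patches = patches.view(B, C, k * k, H, W)
        # Weight each pixel's neighborhood by its own predicted kernel.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)   # (B, C, H, W)
```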
- Publication: AIM 2019 challenge on real-world image super-resolution: Methods and results (01-10-2019)
  Authors: Lugmayr, Andreas; Danelljan, Martin; Timofte, Radu; Fritsche, Manuel; Gu, Shuhang; Purohit, Kuldeep; Kandula, Praveen; Suin, Maitreya; Rajagopalan, A. N.; Joon, Nam Hyung; Won, Yu Seung; Kim, Guisik; Kwon, Dokyeong; Hsu, Chih Chung; Lin, Chia Hsiang; Huang, Yuanfei; Sun, Xiaopeng; Lu, Wen; Li, Jie; Gao, Xinbo; Bell-Kligler, Sefi
  Abstract: This paper reviews the AIM 2019 challenge on real-world super-resolution, focusing on the participating methods and final results. The challenge addresses the real-world setting, where paired true high- and low-resolution images are unavailable. For training, only one set of source input images is therefore provided in the challenge. In Track 1: Source Domain, the aim is to super-resolve such images while preserving the low-level image characteristics of the source input domain. In Track 2: Target Domain, a set of high-quality images is also provided for training, which defines the output domain and the desired quality of the super-resolved images. To allow for quantitative evaluation, the source input images in both tracks are constructed using artificial, but realistic, image degradations. The challenge is the first of its kind, aiming to advance the state of the art and provide a standard benchmark for this newly emerging task. In total, 7 teams competed in the final testing phase, demonstrating new and innovative solutions to the problem.
- Publication: NTIRE 2019 challenge on image colorization: Report (01-06-2019)
  Authors: Nah, Seungjun; Timofte, Radu; Zhang, Richard; Suin, Maitreya; Purohit, Kuldeep; Rajagopalan, A. N.; Athi Narayanan, S.; Pinjari, Jameer Babu; Xiong, Zhiwei; Shi, Zhan; Chen, Chang; Liu, Dong; Sharma, Manoj; Makwana, Megh; Badhwar, Anuj; Singh, Ajay Pratap; Upadhyay, Avinash; Trivedi, Akkshita; Saini, Anil; Chaudhury, Santanu; Sharma, Prasen Kumar; Jain, Priyankar; Sur, Arijit; Ozbulak, Gokhan
  Abstract: This paper reviews the NTIRE challenge on image colorization (estimating color information from a corresponding gray image) with a focus on the proposed solutions and results. It is the first challenge of its kind. The challenge had 2 tracks. Track 1 takes a single gray image as input. In Track 2, in addition to the gray input image, some color seeds (randomly sampled from the latent color image) are also provided for guiding the colorization process. The operators could be learned from the provided pairs of gray and color training images. The tracks had 188 registered participants, and 8 teams competed in the final testing phase.
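Track 2's guidance can be pictured as sparse color hints scattered over the gray input. A minimal sketch of how such seeds might be constructed is below; the sampling protocol, function name, and tensor layout are our assumptions, not the challenge's official pipeline.

```python
import torch

def sample_color_seeds(color_image, num_seeds=32):
    """Illustrative Track 2 guidance: sample sparse color hints from the
    latent (ground-truth) color image at random pixel locations.
    color_image: (3, H, W); returns a (3, H, W) seed map and a (1, H, W) mask."""
    _, H, W = color_image.shape
    seeds = torch.zeros_like(color_image)
    mask = torch.zeros(1, H, W)
    ys = torch.randint(0, H, (num_seeds,))
    xs = torch.randint(0, W, (num_seeds,))
    seeds[:, ys, xs] = color_image[:, ys, xs]  # copy color at seed pixels
    mask[0, ys, xs] = 1.0                      # mark where hints are valid
    return seeds, mask
```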