Mansi Sharma
Preferred name: Mansi Sharma
Official Name: Mansi Sharma
Alternative Name: Sharma, Mansi
Publications: showing 3 of 3 results
- Publication: A Novel Image Fusion Scheme for FTV View Synthesis Based on Layered Depth Scene Representation & Scale Periodic Transform (01-12-2019)
  Co-authors: Ragavan, Gowtham
  This paper presents a novel image fusion scheme for view synthesis based on a layered depth profile of the scene and the scale periodic transform. To create the layered depth profile, we utilize the unique properties of the scale transform, treating depth map computation from reference images as a shift-variant problem. Depth is computed without deterministic stereo correspondences: rather than representing image signals in terms of shifts, we pose them as scale-periodic functions and compute appropriate depth estimates by determining the scalings of a basis function. The rendering process is formulated as a novel image fusion in which the textures of all probable matching points are adaptively weighted, implicitly leveraging the geometric information. The results demonstrate the superiority of the proposed approach in suppressing geometric, blurring, and flicker artifacts in rendered wide-baseline virtual videos.
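  As a rough illustration of the adaptive texture-fusion step described in this abstract, the sketch below combines candidate textures warped from several reference views using normalised per-pixel confidence weights. It is a minimal NumPy example; the function name and the choice of confidence inputs are assumptions for illustration, not the paper's formulation of the layered-depth weighting.

  ```python
  import numpy as np

  def fuse_candidate_textures(candidates, confidences, eps=1e-8):
      """Fuse texture samples gathered from several reference views.

      candidates  : (K, H, W, 3) array of warped candidate textures per pixel
      confidences : (K, H, W) array of non-negative weights per candidate,
                    e.g. derived from agreement with a layered depth profile
                    (an illustrative assumption, not the paper's exact weighting)
      Returns the fused (H, W, 3) image.
      """
      weights = confidences / (confidences.sum(axis=0, keepdims=True) + eps)
      return (weights[..., None] * candidates).sum(axis=0)

  # Toy usage: two warped reference views of a 4x4 image patch.
  views = np.random.rand(2, 4, 4, 3)
  conf = np.random.rand(2, 4, 4)
  fused = fuse_candidate_textures(views, conf)
  print(fused.shape)  # (4, 4, 3)
  ```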
- Publication: A Rich Stereoscopic 3D High Dynamic Range Image & Video Database of Natural Scenes (01-12-2019)
  Co-authors: Wadaskar, Aditya; Lal, Rohan
  The consumer market for High Dynamic Range (HDR) displays and cameras is growing rapidly with the advent of 3D video and display technologies, and standardisation bodies such as the Moving Picture Experts Group and the International Telecommunication Union are calling for the standardization of the latest display advancements. The lack of sufficient experimental data is a major bottleneck for preliminary research efforts in 3D HDR video technology. We make publicly available to the research community a diversified database of stereoscopic 3D HDR images and videos, captured within the campus of the Indian Institute of Technology Madras, which is blessed with rich flora and fauna and is home to several rare wildlife species. Further, we describe the procedure for capturing, aligning, calibrating, and post-processing the 3D images and videos, and discuss research opportunities, challenges, and potential use cases of stereo 3D HDR applications and depth-from-HDR aspects.
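  The capturing and post-processing workflow mentioned in this abstract can be pictured with a generic multi-exposure HDR merge. The sketch below uses OpenCV's Debevec calibration and merging; the file names and exposure times are placeholders, and the database's actual processing chain for the stereo rig may differ.

  ```python
  import cv2
  import numpy as np

  # Hypothetical bracketed captures from one camera of a stereo rig (placeholder paths).
  paths = ["left_exp_short.jpg", "left_exp_mid.jpg", "left_exp_long.jpg"]
  exposure_times = np.array([1 / 500.0, 1 / 60.0, 1 / 8.0], dtype=np.float32)

  images = [cv2.imread(p) for p in paths]

  # Recover the camera response curve, then merge the exposures into an HDR radiance map.
  calibrate = cv2.createCalibrateDebevec()
  response = calibrate.process(images, exposure_times)

  merge = cv2.createMergeDebevec()
  hdr = merge.process(images, exposure_times, response)

  # Tone-map for preview on a standard display; the radiance map itself is stored as .hdr.
  ldr_preview = cv2.createTonemap(gamma=2.2).process(hdr)
  cv2.imwrite("left_fused.hdr", hdr)
  cv2.imwrite("left_preview.png", np.clip(ldr_preview * 255, 0, 255).astype(np.uint8))
  ```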
- Publication: MEStereo-Du2CNN: a dual-channel CNN for learning robust depth estimates from multi-exposure stereo images for HDR 3D applications (01-01-2023)
  Co-authors: Choudhary, Rohit; Uma, T. V.; Anil, Rithvik
  Display technologies have evolved over the years, and it is critical to develop practical HDR capturing, processing, and display solutions to bring 3D technologies to the next level. Depth estimation of multi-exposure stereo image sequences is an essential task in the development of cost-effective 3D HDR video content. In this paper, we develop a deep architecture for multi-exposure stereo depth estimation with two novel components. First, the stereo matching technique used in traditional stereo depth estimation is revamped: a mono-to-stereo transfer learning approach is deployed for the stereo depth estimation component of the architecture. The proposed formulation circumvents the cost-volume construction requirement, which is replaced by a dual-encoder single-decoder CNN with different weights for feature fusion; EfficientNet-based blocks are used to learn the disparity. Second, disparity maps obtained from the stereo images at different exposure levels are combined using a robust disparity feature fusion approach, in which the per-exposure disparity maps are merged using weight maps calculated from different quality measures. The final predicted disparity map is more robust and retains the best features that preserve depth discontinuities. The proposed CNN offers the flexibility to train on standard dynamic range stereo data or on multi-exposure low dynamic range stereo sequences. In terms of performance, the proposed model surpasses state-of-the-art monocular and stereo depth estimation methods, both quantitatively and qualitatively, on the challenging Scene Flow and differently exposed Middlebury stereo datasets. The architecture also performs exceedingly well on complex natural scenes, demonstrating its usefulness for diverse 3D HDR applications.
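  The second component, fusing per-exposure disparity maps with quality-based weight maps, can be illustrated with a small NumPy sketch. The well-exposedness measure and the normalised weighted sum here are illustrative assumptions standing in for the quality measures used in the paper.

  ```python
  import numpy as np

  def fuse_disparities(disparities, quality_maps, eps=1e-8):
      """Merge per-exposure disparity maps with normalised per-pixel weight maps.

      disparities  : (E, H, W) disparity predicted at each of E exposure levels
      quality_maps : (E, H, W) non-negative per-pixel quality scores
      """
      weights = quality_maps / (quality_maps.sum(axis=0, keepdims=True) + eps)
      return (weights * disparities).sum(axis=0)

  def well_exposedness(lum, sigma=0.2):
      """One possible quality measure: prefer pixels whose luminance is near mid-grey."""
      return np.exp(-((lum - 0.5) ** 2) / (2 * sigma ** 2))

  # Toy usage: three exposure levels of a 6x8 scene.
  disp = np.random.rand(3, 6, 8) * 64    # per-exposure disparity maps, in pixels
  lum = np.random.rand(3, 6, 8)          # normalised luminance at each exposure
  fused_disp = fuse_disparities(disp, well_exposedness(lum))
  print(fused_disp.shape)  # (6, 8)
  ```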