  • Publication
    A Novel Image Fusion Scheme for FTV View Synthesis Based on Layered Depth Scene Representation & Scale Periodic Transform
    (01-12-2019); Ragavan, Gowtham
    This paper presents a novel image fusion scheme for view synthesis based on a layered depth profile of the scene and the scale periodic transform. To create the layered depth profile, we exploit the unique properties of the scale transform, treating depth map computation from reference images as a shift-variant problem. Depth is computed without deterministic stereo correspondences and without representing image signals in terms of shifts; instead, we model the image signals as scale-periodic functions and obtain depth estimates by determining the scalings of a basis function. The rendering process is formulated as a novel image fusion in which the textures of all probable matching points are adaptively combined, implicitly leveraging the geometric information. The results demonstrate the superiority of the proposed approach in suppressing geometric, blurring, and flicker artifacts in rendered wide-baseline virtual videos.
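    The abstract leaves the exact fusion rule unspecified, so the following Python/NumPy sketch is only an illustration under assumed names (fuse_views) and an assumed two-view setup: candidate depths are quantized into layers, the warped reference textures are blended where the layers agree, and the nearer layer is kept where they disagree.

      # Minimal sketch, not the authors' fusion rule: blend two warped reference
      # views using a quantized (layered) depth profile.
      import numpy as np

      def fuse_views(tex_l, tex_r, depth_l, depth_r, n_layers=8, w_l=0.5):
          """tex_*: HxWx3 warped textures; depth_*: HxW depth maps."""
          d_min = min(depth_l.min(), depth_r.min())
          d_max = max(depth_l.max(), depth_r.max())
          edges = np.linspace(d_min, d_max, n_layers + 1)
          layer_l = np.digitize(depth_l, edges[1:-1])   # layer index per pixel
          layer_r = np.digitize(depth_r, edges[1:-1])

          agree = (layer_l == layer_r)[..., None]       # same layer -> blend
          left_nearer = (layer_l < layer_r)[..., None]  # smaller depth = closer

          blended = w_l * tex_l + (1.0 - w_l) * tex_r
          fallback = np.where(left_nearer, tex_l, tex_r)
          return np.where(agree, blended, fallback)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          tex_l, tex_r = rng.random((4, 5, 3)), rng.random((4, 5, 3))
          d_l, d_r = rng.random((4, 5)) * 10, rng.random((4, 5)) * 10
          print(fuse_views(tex_l, tex_r, d_l, d_r).shape)  # (4, 5, 3)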
  • Publication
    A Novel Randomize Hierarchical Extension of MV-HEVC for Improved Light Field Compression
    (01-12-2019); Ragavan, Gowtham
    This paper presents a novel scheme for light field compression based on a randomized hierarchical multi-view extension of High Efficiency Video Coding (dubbed RH-MVHEVC). Specifically, the light field data are arranged as multiple pseudo-temporal video sequences, which are efficiently compressed with an MV-HEVC encoder using an integrated random coding technique and hierarchical prediction scheme. The critical advantage of the proposed RH-MVHEVC scheme is that it not only uses temporal and inter-view prediction but also efficiently exploits the strong intrinsic similarities within each sub-aperture image and among neighboring sub-aperture images in both the horizontal and vertical directions. Experimental results show that the scheme consistently outperforms state-of-the-art compression methods on the ICME 2016 and ICIP 2017 grand challenge benchmark data sets: it achieves an average BD-rate reduction of up to 33.803% and a BD-PSNR improvement of 1.7978 dB compared with an advanced JEM video encoder, and an average 20.4156% BD-rate reduction and 2.0644 dB BD-PSNR improvement compared with the latest image-based JEM-anchor coding scheme.
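    As a rough illustration of the arrangement described above (not the paper's actual RH-MVHEVC configuration), the sketch below groups a grid of sub-aperture images into pseudo-temporal sequences, one per row, and derives a randomized hierarchical coding order for each sequence; the row-wise grouping and the function names are assumptions.

      # Illustrative sketch only: pseudo-temporal arrangement of sub-aperture
      # images plus a randomized hierarchical (key frames first) coding order.
      import random

      def pseudo_temporal_views(sub_apertures):
          """sub_apertures: dict {(u, v): image}; each row u becomes one sequence."""
          rows = {}
          for (u, v), img in sorted(sub_apertures.items()):
              rows.setdefault(u, []).append(img)        # already sorted by v
          return rows

      def randomized_hierarchical_order(n_frames, seed=0):
          """Code the two key frames first, then recursively the midpoints."""
          rng = random.Random(seed)
          order, level = [0, n_frames - 1], [(0, n_frames - 1)]
          while level:
              rng.shuffle(level)                        # randomize within a level
              nxt = []
              for lo, hi in level:
                  if hi - lo < 2:
                      continue
                  mid = (lo + hi) // 2
                  order.append(mid)
                  nxt += [(lo, mid), (mid, hi)]
              level = nxt
          return order

      if __name__ == "__main__":
          print(randomized_hierarchical_order(9))       # e.g. [0, 8, 4, 6, 2, ...]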
  • Publication
    A Novel Approach for Multi-View 3D HDR Content Generation via Depth Adaptive Cross Trilateral Tone Mapping
    (01-12-2019); Venkatesh, M. S.; Ragavan, Gowtham; Lal, Rohan
    In this work, we propose a novel depth-adaptive tone mapping scheme for stereo HDR imaging and 3D display. We are interested in the case where different exposures are taken from different viewpoints. The scheme employs a new depth-adaptive cross-trilateral filter (DA-CTF) for recovering High Dynamic Range (HDR) images from multiple Low Dynamic Range (LDR) images captured at different exposure levels. Explicitly leveraging the additional depth information in the tone mapping operation correctly identifies global contrast changes and detail visibility changes, preserving edges and reducing halo artifacts in the 3D views synthesized by the depth-image-based rendering (DIBR) procedure. Experiments show that the proposed DA-CTF and DIBR scheme outperforms state-of-the-art operators in producing enhanced depictions of tone-mapped HDR stereo images on LDR displays.
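    The exact DA-CTF formulation is not given in the abstract; the sketch below shows one plausible depth-adaptive cross-trilateral filter in which the kernel weights combine spatial distance, guidance-image similarity, and depth similarity, so that smoothing does not cross depth discontinuities. The parameter names and defaults are assumptions.

      # Assumed form of a depth-adaptive cross-trilateral filter (DA-CTF-like),
      # not the paper's exact operator.
      import numpy as np

      def depth_adaptive_cross_trilateral(src, guide, depth, radius=3,
                                          sigma_s=2.0, sigma_g=0.1, sigma_d=0.05):
          """src, guide, depth: HxW float arrays; returns the filtered src."""
          h, w = src.shape
          pad = radius
          src_p = np.pad(src, pad, mode="edge")
          guide_p = np.pad(guide, pad, mode="edge")
          depth_p = np.pad(depth, pad, mode="edge")
          acc = np.zeros_like(src)
          norm = np.zeros_like(src)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  s = src_p[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
                  g = guide_p[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
                  d = depth_p[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
                  w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))   # spatial
                  w_g = np.exp(-((g - guide) ** 2) / (2 * sigma_g ** 2))    # guidance
                  w_d = np.exp(-((d - depth) ** 2) / (2 * sigma_d ** 2))    # depth
                  wgt = w_s * w_g * w_d
                  acc += wgt * s
                  norm += wgt
          return acc / np.maximum(norm, 1e-8)

    In the stereo HDR setting described above, the guide would presumably be a well-exposed LDR view and the depth map the one used by the DIBR step, but those pairings are an assumption, not something stated in the abstract.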
  • Publication
    A Novel Algebraic Variety Based Model for High Quality Free-Viewpoint View Synthesis on a Krylov Subspace
    (01-12-2019); Ragavan, Gowtham; Arathi, B.
    This paper presents a new depth-image-based rendering algorithm for free-viewpoint 3DTV applications. The cracks, holes, and ghost contours caused by the visibility, disocclusion, and resampling problems associated with 3D warping lead to serious rendering artifacts in synthesized virtual views. This challenging hole-filling problem is formulated as an algebraic matrix completion problem in a higher-dimensional space of monomial features described by a novel variety model. The high-level idea of this work is to exploit the linear or nonlinear structure of the data and interpolate missing values by solving for algebraic varieties associated with Hankel matrices as members of a Krylov subspace. The proposed model effectively handles artifacts that appear under wide-baseline spatial view interpolation and arbitrary camera movements. Our model has a low runtime, and its results compare favorably with state-of-the-art methods in quantitative and qualitative evaluation.
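    As a toy analogue of the structured matrix-completion idea (not the paper's variety-based, Krylov-subspace algorithm), the sketch below fills missing samples of a 1-D signal by alternating a low-rank truncation of its Hankel matrix with re-imposition of the known samples; all names and parameters are illustrative.

      # Toy Hankel matrix completion by alternating rank truncation and
      # data re-imposition (Cadzow-style iteration); illustrative only.
      import numpy as np

      def hankel(x, L):
          n = len(x)
          return np.array([x[i:i + n - L + 1] for i in range(L)])

      def dehankel(H, n):
          L, K = H.shape
          x, cnt = np.zeros(n), np.zeros(n)
          for i in range(L):
              for j in range(K):
                  x[i + j] += H[i, j]
                  cnt[i + j] += 1
          return x / cnt                                 # average anti-diagonals

      def hankel_complete(y, mask, rank=2, L=8, iters=200):
          """y: observed signal (zeros where missing); mask: True where observed."""
          x = y.astype(float).copy()
          for _ in range(iters):
              U, s, Vt = np.linalg.svd(hankel(x, L), full_matrices=False)
              H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r projection
              x = dehankel(H_low, len(y))
              x[mask] = y[mask]                          # keep the known samples
          return x

      if __name__ == "__main__":
          t = np.arange(32)
          clean = np.cos(0.4 * t)
          mask = np.ones(32, bool)
          mask[[5, 6, 17, 25]] = False
          rec = hankel_complete(np.where(mask, clean, 0.0), mask)
          print(float(np.max(np.abs(rec - clean))))      # should be small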