Now showing 1 - 10 of 59
  • Publication
    Depth from motion and optical blur with an unscented Kalman filter
    (01-05-2012)
    Paramanand, C.
    Space-variantly blurred images of a scene contain valuable depth information. In this paper, our objective is to recover the 3-D structure of a scene from motion blur/optical defocus. In the proposed approach, the difference of blur between two observations is used as a cue for recovering depth, within a recursive state estimation framework. For motion blur, we use an unblurred-blurred image pair. Since the relationship between the observation and the scale factor of the point spread function associated with the depth at a point is nonlinear, we develop an unscented Kalman filter formulation for depth estimation. There are no restrictions on the shape of the blur kernel. Furthermore, within the same formulation, we address the special and challenging scenario of depth from defocus with translational jitter. The effectiveness of our approach is evaluated on synthetic as well as real data, and its performance is also compared with contemporary techniques. © 1992-2012 IEEE.
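The recursive estimation step described above can be sketched as a scalar unscented Kalman filter update. This is a toy version under stated assumptions: the observation function `h` below is an illustrative stand-in for the paper's nonlinear mapping from PSF scale (depth) to the blur cue, and the sigma-point scaling is the simplest symmetric choice, not necessarily the authors'.

```python
import numpy as np

def ukf_update_1d(x, P, z, h, R, kappa=2.0):
    """One UKF measurement update for a scalar state.

    x, P : prior mean and variance of the state (e.g. PSF scale ~ depth)
    z    : observed blur cue at a pixel
    h    : nonlinear observation function (hypothetical stand-in)
    R    : observation noise variance
    """
    n = 1
    spread = np.sqrt((n + kappa) * P)
    sigmas = np.array([x, x + spread, x - spread])   # symmetric sigma points
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = np.array([w0, wi, wi])

    zs = h(sigmas)                                   # propagate sigma points
    z_pred = np.dot(weights, zs)                     # predicted observation
    S = np.dot(weights, (zs - z_pred) ** 2) + R      # innovation variance
    C = np.dot(weights, (sigmas - x) * (zs - z_pred))  # cross-covariance
    K = C / S                                        # Kalman gain
    return x + K * (z - z_pred), P - K * S * K
```

Because the nonlinearity is handled by propagating sigma points rather than by linearisation, no derivative of `h` (and hence no restriction on the blur kernel's shape) is needed.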
  • Publication
    Face Recognition Across Non-Uniform Motion Blur, Illumination, and Pose
    (01-07-2015)
    Punnappurath, Abhijith; Taheri, Sima; Chellappa, Rama; Seetharaman, Guna
    Existing methods for performing face recognition in the presence of blur are based on the convolution model and cannot handle non-uniform blurring situations that frequently arise from tilts and rotations in hand-held cameras. In this paper, we propose a methodology for face recognition in the presence of space-varying motion blur comprising arbitrarily shaped kernels. We model the blurred face as a convex combination of geometrically transformed instances of the focused gallery face, and show that the set of all images obtained by non-uniformly blurring a given image forms a convex set. We first propose a non-uniform blur-robust algorithm by assuming a sparse camera trajectory in the camera motion space and building an energy function with an l1-norm constraint on the camera motion. The framework is then extended to handle illumination variations by exploiting the fact that the set of all images obtained from a face image by non-uniform blurring and changing the illumination forms a bi-convex set. Finally, we propose an elegant extension to also account for variations in pose.
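The sparse convex-combination idea can be sketched as a small l1-regularised least-squares problem: columns of a matrix hold transformed gallery instances, and a sparse non-negative weight vector is sought over camera poses. The plain ISTA loop below is an illustrative solver under assumed names (`T`, `b`), not the authors' optimisation.

```python
import numpy as np

def sparse_motion_weights(T, b, lam=0.01, step=None, iters=200):
    """Minimise ||T w - b||^2 + lam * ||w||_1 with w >= 0 via ISTA.

    T : (pixels, poses) matrix; each column is a geometrically
        transformed copy of the focused gallery face (illustrative).
    b : blurred probe image, flattened.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(T, 2) ** 2   # 1/L gradient step size
    w = np.zeros(T.shape[1])
    for _ in range(iters):
        grad = T.T @ (T @ w - b)                 # least-squares gradient
        w = np.maximum(w - step * grad - step * lam, 0.0)  # soft-threshold, w >= 0
    return w
```

The recovered weights identify the few camera poses that explain the non-uniform blur, mirroring the sparse-trajectory assumption in the abstract.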
  • Publication
    Restoration of scanned photographic images
    (01-05-2006)
    Ibrahim Sadhar, S.
    In this paper, we address the problem of restoring photographic images degraded by motion blur and film-grain noise. Based on the one-dimensional particle filter, a new approach is proposed for restoration under space-invariant as well as space-variant blurring conditions. The method works by propagating the samples of the probability distribution through an appropriate state model. The weights of the samples are computed using the observation model and the degraded image. The samples and their corresponding weights are used to estimate the original image. In order to verify and validate the proposed approach, the method is tested on several images, both synthetic and real. © 2005 Elsevier B.V. All rights reserved.
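The core of the particle-filter step above — weight prior samples by the observation likelihood and take the conditional mean — can be sketched per pixel. This is a schematic scalar version with a Gaussian likelihood as a placeholder for the paper's observation model, not the authors' code.

```python
import numpy as np

def particle_estimate(prior_samples, observed, noise_var):
    """Weighted conditional-mean estimate of one pixel's true intensity.

    prior_samples : candidate values propagated through the state model
    observed      : the degraded pixel value
    noise_var     : variance of the (assumed Gaussian) observation noise
    """
    # Likelihood of the degraded observation under each candidate value.
    w = np.exp(-0.5 * (observed - prior_samples) ** 2 / noise_var)
    w /= w.sum()                     # normalise to a probability vector
    return np.dot(w, prior_samples)  # conditional mean = restored value
```

In the full method this weighting runs within a one-dimensional scan of the image, with samples propagated through the blur/noise state model between pixels.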
  • Publication
    Joint Image and depth completion in shape-from-focus: Taking a cue from parallax
    (01-05-2010)
    Sahay, Rajiv R.
    Shape-from-focus (SFF) uses a sequence of space-variantly defocused observations captured with relative motion between camera and scene. It assumes that there is no motion parallax in the frames. This is a restriction and constrains the working environment. Moreover, SFF cannot recover the structure information when there are missing data in the frames due to CCD sensor damage or unavoidable occlusions. The capability of filling in plausible information in regions devoid of data is of critical importance in many applications. Images of 3D scenes captured by off-the-shelf cameras with relative motion commonly exhibit parallax-induced pixel motion. We demonstrate the interesting possibility of exploiting the motion parallax cue in the images captured in SFF with a practical camera to jointly inpaint the focused image and depth map. © 2010 Optical Society of America.
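The classical SFF step that this work builds on — per pixel, pick the frame that maximises a local focus measure — can be sketched as follows. The squared-Laplacian measure here is an illustrative choice, not necessarily the one used in the paper.

```python
import numpy as np

def sff_depth(stack):
    """Per-pixel best-focus frame index from a defocused image stack.

    stack : (n_frames, H, W) space-variantly defocused observations.
    Returns the argmax frame index per pixel, which serves as a
    depth label in classical shape-from-focus.
    """
    focus = np.empty(stack.shape, dtype=float)
    for i, img in enumerate(stack):
        # Discrete Laplacian (periodic boundary via roll, for brevity).
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
        focus[i] = lap ** 2          # high response where the frame is sharp
    return focus.argmax(axis=0)
```

The paper's contribution starts where this sketch stops: when frames also contain parallax-induced pixel motion and missing data, that motion becomes a cue for jointly inpainting the focused image and the depth map.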
  • Publication
    Extension of the shape from focus method for reconstruction of high-resolution images
    (01-01-2007)
    Sahay, R. R.
    Shape from focus (SFF) estimates the depth profile of a 3D object using a sequence of observations. Due to the finite depth of field of real aperture cameras and the 3D nature of the object, none of the observations is completely in focus. However, in many applications, it is important to examine finer image details of the underlying object in conjunction with the depth map. We propose an extension to the traditional SFF method to optimally estimate a high-resolution image of the 3D object, given the low-resolution observations and the depth map derived from traditional SFF. Using the observation stack, we show that it is possible to achieve significant improvement in resolution. We also analyze the special case of region-of-interest super-resolution and show analytically that an optimal interframe separation exists for which the quality of the estimated high-resolution image is the best. © 2007 Optical Society of America.
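A much-simplified illustration of using the SFF depth map together with the observation stack: gather each pixel from the frame in which it is best focused. This is a toy stand-in for the paper's estimator, which solves for a genuinely higher-resolution image rather than compositing existing samples.

```python
import numpy as np

def composite_from_stack(stack, depth_idx):
    """Per-pixel gather from the best-focused frame.

    stack     : (n_frames, H, W) low-resolution observation stack
    depth_idx : (H, W) integer map of each pixel's best-focus frame
    """
    _, h, w = stack.shape
    rows, cols = np.indices((h, w))
    # Fancy indexing: out[i, j] = stack[depth_idx[i, j], i, j]
    return stack[depth_idx, rows, cols]
```

The actual method goes further by modelling how each low-resolution frame is generated from an unknown high-resolution image and the depth map, then inverting that model.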
  • Publication
    Depth inpainting by tensor voting
    (01-01-2013)
    Kulkarni, Mandar
    Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data. © 2013 Optical Society of America.
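The "local planes" part of the pipeline can be sketched without the voting machinery: fit a least-squares plane z = a*x + b*y + c to the known depth pixels and evaluate it over the hole. A single global plane is used here for brevity; the paper fits planes locally and uses 3D tensor voting to select among candidates.

```python
import numpy as np

def fill_by_plane(depth, mask):
    """Fill missing depth values from a least-squares plane fit.

    depth : (H, W) depth map; mask : boolean, True where depth is known.
    """
    ys, xs = np.nonzero(mask)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    # Solve for plane coefficients (a, b, c) from the known pixels.
    coeff, *_ = np.linalg.lstsq(A, depth[mask], rcond=None)
    out = depth.copy()
    hy, hx = np.nonzero(~mask)
    out[hy, hx] = coeff[0] * hx + coeff[1] * hy + coeff[2]
    return out
```

For large or complex holes, the abstract's second strategy replaces the plane model with depth candidates gathered from self-similar training depth maps.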
  • Publication
    Deskewing of underwater images
    (01-03-2015)
    Seemakurthy, Karthik
    We address the problem of restoring a static planar scene degraded by skewing effect when imaged through a dynamic water surface. In particular, we investigate geometric distortions due to unidirectional cyclic waves and circular ripples, phenomena that are most prevalent in fluid flow. Although the camera and scene are stationary, light rays emanating from a scene undergo refraction at the fluid-air interface. This refraction effect is time varying for dynamic fluids and results in nonrigid distortions (skew) in the captured image. These distortions can be associated with motion blur depending on the exposure time of the camera. In the first part of this paper, we establish the condition under which the blur induced due to unidirectional cyclic waves can be treated as space invariant. We proceed to derive a mathematical model for blur formation and propose a restoration scheme using a single degraded observation. In the second part, we reveal how the blur induced by circular ripples (though space variant) can be modeled as uniform in the polar domain and develop a method for deskewing. The proposed methods are tested on synthetic as well as real examples.
  • Publication
    Distortion Disentanglement and Knowledge Distillation for Satellite Image Restoration
    (01-01-2022)
    Kandula, Praveen
    Satellite images are typically subject to multiple distortions. Different factors affect the quality of satellite images, including changes in atmosphere, surface reflectance, sun illumination, and viewing geometries, limiting their application to downstream tasks. In supervised networks, the availability of paired datasets is a strong assumption. Consequently, many unsupervised algorithms have been proposed to address this problem. These methods synthetically generate a large dataset of degraded images using image formation models. A neural network is then trained with an adversarial loss to discriminate between images from distorted and clean domains. However, these methods yield suboptimal performance when tested on real images that do not necessarily conform to the generation mechanism. Also, they require a large amount of training data and are rendered unsuitable when only a few images are available. We propose a distortion disentanglement and knowledge distillation (KD) framework for satellite image restoration to address these important issues. Our algorithm requires only two images: the distorted satellite image to be restored and a reference image with similar semantics. Specifically, we first propose a mechanism to disentangle distortion. This enables us to generate images with varying degrees of distortion using the disentangled distortion and the reference image. We then propose the use of KD to train a restoration network using the generated image pairs. As a final step, the distorted image is passed through the restoration network to get the final output. Ablation studies show that our proposed mechanism successfully disentangles distortion. Exhaustive experiments on different time stamps of Google-Earth images and publicly available datasets, LEVIR-CD and SZTAKI, show that our proposed mechanism can tackle a variety of distortions and outperforms existing state-of-the-art restoration methods visually as well as on quantitative metrics.
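The distillation step can be sketched as a composite training objective: the restoration (student) network is pulled toward the clean target on the generated pairs while staying close to the teacher's output. The plain L2 terms and the `alpha` weighting are illustrative assumptions; the abstract does not spell out the actual losses.

```python
import numpy as np

def kd_restoration_loss(student_out, teacher_out, clean_ref, alpha=0.5):
    """Knowledge-distillation style restoration objective (schematic).

    student_out : restoration network output on a generated distorted image
    teacher_out : teacher network's restoration of the same image
    clean_ref   : the clean target from the generated pair
    """
    sup = np.mean((student_out - clean_ref) ** 2)     # supervised term
    dist = np.mean((student_out - teacher_out) ** 2)  # distillation term
    return alpha * sup + (1.0 - alpha) * dist
```

Because the pairs are generated from a single reference image plus the disentangled distortion, this objective lets the restoration network train in the few-image regime the abstract targets.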
  • Publication
    Mixed-dense connection networks for image and video super-resolution
    (20-07-2020)
    Purohit, Kuldeep; Mandal, Srimanta
    Efficiency of gradient propagation in intermediate layers of convolutional neural networks is of key importance for the super-resolution task. To this end, we propose a deep architecture for single image super-resolution (SISR), which is built using efficient convolutional units we refer to as mixed-dense connection blocks (MDCB). The design of MDCB combines the strengths of both residual and dense connection strategies, while overcoming their limitations. To enable super-resolution for multiple factors, we propose a scale-recurrent framework which reutilizes the filters learnt for lower scale factors recursively for higher factors. This leads to improved performance and promotes parametric efficiency for higher factors. We train two versions of our network to enhance complementary image qualities using different loss configurations. We further employ our network for the video super-resolution task, where our network learns to aggregate information from multiple frames and maintain spatio-temporal consistency. The proposed networks lead to qualitative and quantitative improvements over state-of-the-art techniques on image and video super-resolution benchmarks.
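The mixed-dense idea can be sketched structurally: each layer sees the concatenation of all earlier features (dense part), and the block's fused output is added back to its input (residual part). The toy block below uses random 1x1 channel-mixing weights purely to show the wiring; the real MDCB uses learned 3x3 convolutions with carefully chosen widths.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in) -- a 1x1 conv is a channel mix.
    return np.tensordot(w, x, axes=([1], [0]))

def mixed_dense_block(x, growth=4):
    """Schematic mixed-dense connection block (wiring only).

    Dense part: each layer consumes all earlier features concatenated.
    Residual part: the fused output is added to the block input.
    """
    c = x.shape[0]
    feats = [x]
    for _ in range(2):                        # two dense layers
        cat = np.concatenate(feats, axis=0)   # dense connectivity
        w = rng.standard_normal((growth, cat.shape[0])) * 0.1
        feats.append(np.maximum(conv1x1(cat, w), 0.0))   # ReLU
    cat = np.concatenate(feats, axis=0)
    w_fuse = rng.standard_normal((c, cat.shape[0])) * 0.1
    return x + conv1x1(cat, w_fuse)           # residual connection
```

The residual path keeps gradients flowing to early layers while the dense paths reuse features, which is the combination of strengths the abstract refers to.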
  • Publication
    Image recovery under nonlinear and non-Gaussian degradations
    (01-01-2005)
    Sadhar, S. I.
    A new two-dimensional recursive filter for recovering degraded images is proposed that is based on particle-filter theory. The main contribution of this work lies in evolving a framework that has the potential to recover images suffering from a general class of degradations such as system nonlinearity and non-Gaussian observation noise. Samples of the prior probability distribution of the original image are obtained by propagating the samples through an appropriate state model. Given the measurement model and the degraded image, the weights of the samples are computed. The samples and their corresponding weights are used to calculate the conditional mean that yields an estimate of the original image. The proposed method is validated by demonstrating its effectiveness in recovering images degraded by film-grain noise. Synthetic as well as real examples are considered for this purpose. Performance is also compared with that of an existing scheme. © 2005 Optical Society of America.
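What makes the particle-filter framework handle non-Gaussian noise is that sample weights can come from any likelihood, not just a Gaussian. The heavy-tailed Laplacian likelihood below is an illustrative stand-in for an actual film-grain noise model.

```python
import numpy as np

def nongaussian_weights(samples, observed, scale=5.0):
    """Normalised sample weights under a Laplacian observation likelihood.

    samples  : candidate values for the original pixel intensity
    observed : the degraded pixel value
    scale    : Laplacian scale parameter (illustrative noise model)
    """
    logw = -np.abs(observed - samples) / scale   # log-likelihood per sample
    w = np.exp(logw - logw.max())                # numerically stable exp
    return w / w.sum()
```

Plugging these weights into the conditional-mean estimate gives the restored pixel value; a Kalman filter, by contrast, is tied to linear-Gaussian assumptions and cannot express such a likelihood.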