Now showing 1 - 10 of 163
  • Publication
    Depth from motion and optical blur with an unscented Kalman filter
    (01-05-2012)
    Paramanand, C.
    Space-variantly blurred images of a scene contain valuable depth information. In this paper, our objective is to recover the 3-D structure of a scene from motion blur/optical defocus. In the proposed approach, the difference of blur between two observations is used as a cue for recovering depth, within a recursive state estimation framework. For motion blur, we use an unblurred-blurred image pair. Since the relationship between the observation and the scale factor of the point spread function associated with the depth at a point is nonlinear, we develop an unscented Kalman filter formulation for depth estimation. There are no restrictions on the shape of the blur kernel. Furthermore, within the same formulation, we address a special and challenging scenario of depth from defocus with translational jitter. The effectiveness of our approach is evaluated on synthetic as well as real data, and its performance is also compared with contemporary techniques. © 1992-2012 IEEE.
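The recursive estimation step can be illustrated with a one-dimensional unscented Kalman filter. The blur-scale observation model h(d) = c/d below is an assumed thin-lens-style nonlinearity standing in for the paper's actual PSF scale relation, and all parameters are illustrative:

```python
import numpy as np

def sigma_points(mean, var, kappa=2.0):
    # Symmetric sigma points and weights for a scalar state
    s = np.sqrt((1.0 + kappa) * var)
    pts = np.array([mean, mean + s, mean - s])
    w = np.array([kappa / (1.0 + kappa), 0.5 / (1.0 + kappa), 0.5 / (1.0 + kappa)])
    return pts, w

def ukf_update(mean, var, z, h, obs_var):
    # Propagate sigma points through the nonlinear observation model
    pts, w = sigma_points(mean, var)
    zpts = h(pts)
    z_mean = np.dot(w, zpts)
    s = np.dot(w, (zpts - z_mean) ** 2) + obs_var     # innovation variance
    cxz = np.dot(w, (pts - mean) * (zpts - z_mean))   # state-observation cross-covariance
    k = cxz / s                                       # Kalman gain
    return mean + k * (z - z_mean), var - k * s * k

c = 50.0                      # hypothetical lens constant
h = lambda d: c / d           # assumed nonlinear blur-scale model
true_depth = 10.0
rng = np.random.default_rng(0)
mean, var = 8.0, 4.0          # rough initial depth belief
for _ in range(30):
    z = h(true_depth) + rng.normal(0, 0.05)   # noisy blur-scale observation
    mean, var = ukf_update(mean, var, z, h, 0.05 ** 2)
```

The unscented transform avoids linearizing h, which is the point of using a UKF rather than an extended Kalman filter here.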
  • Publication
    Face Recognition Across Non-Uniform Motion Blur, Illumination, and Pose
    (01-07-2015)
    Punnappurath, Abhijith; Taheri, Sima; Chellappa, Rama; Seetharaman, Guna
    Existing methods for performing face recognition in the presence of blur are based on the convolution model and cannot handle non-uniform blurring situations that frequently arise from tilts and rotations in hand-held cameras. In this paper, we propose a methodology for face recognition in the presence of space-varying motion blur comprising arbitrarily shaped kernels. We model the blurred face as a convex combination of geometrically transformed instances of the focused gallery face, and show that the set of all images obtained by non-uniformly blurring a given image forms a convex set. We first propose a non-uniform blur-robust algorithm by assuming a sparse camera trajectory in the camera motion space to build an energy function with an l1-norm constraint on the camera motion. The framework is then extended to handle illumination variations by exploiting the fact that the set of all images obtained from a face image by non-uniform blurring and changing the illumination forms a bi-convex set. Finally, we propose an elegant extension to also account for variations in pose.
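The convex-combination model can be sketched numerically: a non-uniformly blurred image is a simplex-weighted sum of transformed copies of the sharp image, so a convex combination of two blurred images is again a member of the set, generated by the averaged weights. Integer shifts below stand in for the paper's full camera-motion transformations:

```python
import numpy as np

rng = np.random.default_rng(1)
sharp = rng.random((8, 8))
shifts = [(0, 0), (0, 1), (1, 0), (-1, 0)]   # toy camera-pose samples

def blur(image, weights):
    # Weighted sum of shifted copies; weights must lie on the simplex
    assert np.isclose(weights.sum(), 1.0) and (weights >= 0).all()
    return sum(w * np.roll(image, s, axis=(0, 1)) for w, s in zip(weights, shifts))

w1 = np.array([0.7, 0.1, 0.1, 0.1])
w2 = np.array([0.4, 0.3, 0.2, 0.1])
b1, b2 = blur(sharp, w1), blur(sharp, w2)

# Convexity: the midpoint of two blurred images is itself in the blurred set,
# generated by the averaged weight vector.
midpoint = 0.5 * b1 + 0.5 * b2
```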
  • Publication
    Restoration of scanned photographic images
    (01-05-2006)
    Ibrahim Sadhar, S.
    In this paper, we address the problem of restoring photographic images degraded by motion blur and film-grain noise. Based on the one-dimensional particle filter, a new approach is proposed for restoration under space-invariant as well as space-variant blurring conditions. The method works by propagating the samples of the probability distribution through an appropriate state model. The weights of the samples are computed using the observation model and the degraded image. The samples and their corresponding weights are used to estimate the original image. In order to verify and validate the proposed approach, the method is tested on several images, both synthetic and real. © 2005 Elsevier B.V. All rights reserved.
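The propagate-weight-estimate loop described above can be sketched with a minimal one-dimensional particle filter. A random-walk intensity model and Gaussian observation noise stand in for the paper's image and degradation models:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200                                         # scan-line length
truth = np.cumsum(rng.normal(0, 0.1, n))        # slowly varying true intensity
obs = truth + rng.normal(0, 0.5, n)             # noisy observation (grain stand-in)

n_particles = 500
particles = rng.normal(obs[0], 1.0, n_particles)
estimate = np.empty(n)
for t in range(n):
    # State model: propagate the samples with random-walk dynamics
    particles = particles + rng.normal(0, 0.1, n_particles)
    # Observation model: Gaussian likelihood of each sample given the data
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.5) ** 2)
    w /= w.sum()
    # Estimate: weighted mean of the samples, then resample
    estimate[t] = np.dot(w, particles)
    particles = particles[rng.choice(n_particles, n_particles, p=w)]
```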
  • Publication
    Robust Face Recognition in the Presence of Clutter
    (01-01-2003)
    Chellappa, Rama; Koterba, Nathan
    We propose a new method within the framework of principal component analysis to robustly recognize faces in the presence of clutter. The traditional eigenface recognition method performs poorly when confronted with the more general task of recognizing faces appearing against a background. It misses faces completely or throws up many false alarms. We argue in favor of learning the distribution of background patterns and show how this can be done for a given test image. An eigenbackground space is constructed and this space in conjunction with the eigenface space is used to impart robustness in the presence of background. A suitable classifier is derived to distinguish non-face patterns from faces. When tested on real images, the performance of the proposed method is found to be quite good. © Springer-Verlag 2003.
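The two-subspace idea can be illustrated on synthetic data: reconstruct a patch in both an eigenface space and an eigenbackground space and classify by the smaller reconstruction error. Random low-rank ensembles stand in for real face and background patterns, and this PCA-residual classifier is only a sketch of the paper's derived classifier:

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 64, 5   # patch dimension and subspace rank (illustrative)

def subspace(samples, k):
    # Mean and top-k principal axes of a sample set
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]

def recon_error(x, mean, basis):
    # Norm of the residual after projecting onto the subspace
    coeffs = basis @ (x - mean)
    return np.linalg.norm((x - mean) - basis.T @ coeffs)

face_axes = rng.normal(size=(k, d))   # synthetic "face" generators
bg_axes = rng.normal(size=(k, d))     # synthetic "background" generators
faces = rng.normal(size=(100, k)) @ face_axes + 0.05 * rng.normal(size=(100, d))
bgs = rng.normal(size=(100, k)) @ bg_axes + 0.05 * rng.normal(size=(100, d))

face_space = subspace(faces, k)       # eigenface space
bg_space = subspace(bgs, k)           # eigenbackground space

def is_face(x):
    return recon_error(x, *face_space) < recon_error(x, *bg_space)

test_face = rng.normal(size=k) @ face_axes
test_bg = rng.normal(size=k) @ bg_axes
```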
  • Publication
    Efficient Motion Deblurring with Feature Transformation and Spatial Attention
    (01-09-2019)
    Purohit, Kuldeep
    Convolutional Neural Networks (CNN) have recently advanced the state-of-the-art in generalized motion deblurring. Literature suggests that restoration of high-resolution blurred images requires a design with a large receptive field, which existing networks achieve by increasing the number of generic convolution layers, the kernel size, or the scales at which the image is processed. However, increasing network capacity in this way increases model size and lowers speed. To resolve this, we propose a novel architecture composed of dynamic convolutional modules, namely feature transformation (FT) and spatial attention (SA). An FT module addresses the camera shifts responsible for the global blur in the input image, while an SA module addresses spatially varying blur due to dynamic objects and depth changes. Qualitative and quantitative comparisons on deblurring benchmarks demonstrate that our network outperforms prior art in accuracy, compactness, and speed, enabling real-time deblurring.
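For flavor, here is a bare numpy sketch of a spatial-attention gate: pool features across channels, derive a per-pixel map, and reweight the features. The pooling-plus-sigmoid form follows common practice for spatial attention and is only a guess at the SA module's spirit, not the paper's actual design:

```python
import numpy as np

def spatial_attention(feat, w_avg=1.0, w_max=1.0, bias=0.0):
    # feat: (channels, height, width); weights here are fixed placeholders
    # for what a trained convolution would learn.
    avg_pool = feat.mean(axis=0)              # (H, W) channel-average
    max_pool = feat.max(axis=0)               # (H, W) channel-max
    logits = w_avg * avg_pool + w_max * max_pool + bias
    attn = 1.0 / (1.0 + np.exp(-logits))      # sigmoid map in (0, 1)
    return feat * attn[None, :, :], attn      # per-pixel reweighted features

rng = np.random.default_rng(4)
feat = rng.normal(size=(16, 8, 8))
out, attn = spatial_attention(feat)
```

Because the map is per-pixel, the gate can suppress or amplify responses differently across the image, which is what makes it suited to spatially varying blur.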
  • Publication
    Joint Image and depth completion in shape-from-focus: Taking a cue from parallax
    (01-05-2010)
    Sahay, Rajiv R.
    Shape-from-focus (SFF) uses a sequence of space-variantly defocused observations captured with relative motion between camera and scene. It assumes that there is no motion parallax in the frames. This is a restriction and constrains the working environment. Moreover, SFF cannot recover the structure information when there are missing data in the frames due to CCD sensor damage or unavoidable occlusions. The capability of filling in plausible information in regions devoid of data is of critical importance in many applications. Images of 3D scenes captured by off-the-shelf cameras with relative motion commonly exhibit parallax-induced pixel motion. We demonstrate the interesting possibility of exploiting the motion parallax cue in the images captured in SFF with a practical camera to jointly inpaint the focused image and depth map. © 2010 Optical Society of America.
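The core SFF step, before any inpainting, can be sketched as scoring per-pixel sharpness in each frame of the focus stack and taking the argmax as the depth index. A simple Laplacian-magnitude focus measure is used here; real SFF systems use windowed measures and interpolation:

```python
import numpy as np

def laplacian_focus(img):
    # 4-neighbour Laplacian magnitude as a per-pixel focus measure
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def depth_from_stack(stack):
    scores = np.stack([laplacian_focus(f) for f in stack])  # (frames, H, W)
    return scores.argmax(axis=0)    # per-pixel index of the sharpest frame

# Synthetic two-plane scene: frame k is textured (in focus) on plane k only.
rng = np.random.default_rng(5)
texture = rng.random((16, 16))
stack = np.full((2, 16, 16), 0.5)
stack[0, :8] = texture[:8]      # near plane sharp in frame 0
stack[1, 8:] = texture[8:]      # far plane sharp in frame 1
depth = depth_from_stack(stack)
```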
  • Publication
    Rolling shutter super-resolution in burst mode
    (03-08-2016)
    Rengarajan, Vijay; Punnappurath, Abhijith; Seetharaman, Gunasekaran
    Capturing multiple images using the burst mode of handheld cameras can be a boon to obtain a high resolution (HR) image by exploiting the subpixel motion among the captured images arising from handshake. However, the caveat with mobile phone cameras is that they produce rolling shutter (RS) distortions that must be accounted for in the super-resolution process. We propose a method in which we obtain an RS-free HR image using HR camera trajectory estimated by leveraging the intra- and inter-frame continuity of the camera motion. Experimental evaluations demonstrate that our approach can effectively recover a super-resolved image free from RS artifacts.
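The subpixel-motion idea can be sketched with a toy shift-and-add reconstruction: each burst frame samples the high-resolution grid at a different offset, and interleaving the frames recovers the HR image. Known integer HR-grid offsets stand in for the estimated camera trajectory, and the rolling-shutter correction of the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(6)
hr = rng.random((16, 16))   # ground-truth high-resolution image
s = 2                       # downsampling factor
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Simulate the burst: each LR frame samples the HR grid at a distinct offset.
frames = [hr[dy::s, dx::s] for dy, dx in offsets]

# Shift-and-add reconstruction: place each frame back on the HR grid.
sr = np.empty_like(hr)
for (dy, dx), f in zip(offsets, frames):
    sr[dy::s, dx::s] = f
```

With noise, unknown subpixel shifts, and rolling-shutter distortion, this interleaving becomes a registration-and-inversion problem, which is what the paper addresses.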
  • Publication
    Extension of the shape from focus method for reconstruction of high-resolution images
    (01-01-2007)
    Sahay, R. R.
    Shape from focus (SFF) estimates the depth profile of a 3D object using a sequence of observations. Due to the finite depth of field of real aperture cameras and the 3D nature of the object, none of the observations is completely in focus. However, in many applications, it is important to examine finer image details of the underlying object in conjunction with the depth map. We propose an extension to the traditional SFF method to optimally estimate a high-resolution image of the 3D object, given the low-resolution observations and the depth map derived from traditional SFF. Using the observation stack, we show that it is possible to achieve significant improvement in resolution. We also analyze the special case of region-of-interest superresolution and show analytically that an optimal interframe separation exists for which the quality of the estimated high-resolution image is the best. © 2007 Optical Society of America.
  • Publication
    Wiretap Polar Codes in Encryption Schemes Based on Learning with Errors Problem
    The Learning with Errors (LWE) problem has been extensively studied in cryptography due to its strong hardness guarantees, efficiency, and expressiveness in constructing advanced cryptographic primitives. In this work, we show that using polar codes in conjunction with LWE-based encryption yields several advantages. To begin, we demonstrate the obvious improvements in the efficiency or rate of information transmission in the LWE-based scheme by leveraging polar coding (with no change in the cryptographic security guarantee). Next, we integrate wiretap polar coding with LWE-based encryption to ensure provable semantic security over a wiretap channel in addition to cryptographic security based on the hardness of LWE. To the best of our knowledge, this is the first wiretap code to also have cryptographic security guarantees. Finally, we study the security of the private key used in LWE-based encryption with wiretap polar coding, and propose a key refresh method using random bits used in wiretap coding. Under a known-plaintext attack, we show that non-vanishing information-theoretic secrecy can be achieved for the key. We believe our approach is at least as interesting as our final results: our work combines cryptography and coding theory in a novel 'non-blackbox' way which may be relevant to other scenarios as well.
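To fix notation, here is a toy Regev-style LWE encryption of a single bit: a ciphertext is (a, ⟨a, s⟩ + e + bit·⌊q/2⌋) mod q. The polar-coding layers discussed above would replace this naive one-bit-per-ciphertext encoding; the parameters below are illustrative and far too small for real security:

```python
import numpy as np

rng = np.random.default_rng(7)
q, n = 3329, 16                      # toy modulus and dimension
secret = rng.integers(0, q, n)       # LWE secret key s

def encrypt(bit):
    a = rng.integers(0, q, n)                        # fresh random vector
    e = int(rng.integers(-2, 3))                     # small noise term
    b = (int(a @ secret) + e + bit * (q // 2)) % q   # mask bit at q/2
    return a, b

def decrypt(a, b):
    v = (b - int(a @ secret)) % q
    # Decide whether v is closer to 0 or to q/2
    return int(min(v, q - v) > q // 4)

bits = [0, 1, 1, 0, 1]
decoded = [decrypt(*encrypt(b)) for b in bits]
```

Correct decryption requires the accumulated noise to stay below q/4, which is exactly the error budget that a good code (such as a polar code) lets the scheme spend on carrying more information per ciphertext.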
  • Publication
    Depth inpainting by tensor voting
    (01-01-2013)
    Kulkarni, Mandar
    Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data. © 2013 Optical Society of America.
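The "model depth variations by local planes" step can be sketched as follows: fit a plane z = ax + by + c to the valid depths by least squares and fill the hole from the plane equation. The tensor-voting machinery that selects and scores planes in the paper is not reproduced here:

```python
import numpy as np

def fill_with_plane(depth, mask):
    # Fit z = a*x + b*y + c to the valid (mask=True) pixels by least squares
    ys, xs = np.nonzero(mask)
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    coeffs, *_ = np.linalg.lstsq(A, depth[ys, xs], rcond=None)
    # Fill the hole pixels from the fitted plane equation
    hy, hx = np.nonzero(~mask)
    out = depth.copy()
    out[hy, hx] = coeffs[0] * hx + coeffs[1] * hy + coeffs[2]
    return out

# Planar toy depth map with a square hole of missing values.
yy, xx = np.mgrid[0:10, 0:10]
truth = 0.3 * xx + 0.1 * yy + 2.0
mask = np.ones((10, 10), dtype=bool)
mask[3:6, 3:6] = False          # missing region
observed = truth * mask          # hole pixels zeroed out
filled = fill_with_plane(observed, mask)
```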