  • Publication
    Depth from motion and optical blur with an unscented Kalman filter
    (01-05-2012)
    Paramanand, C.
    Space-variantly blurred images of a scene contain valuable depth information. In this paper, our objective is to recover the 3-D structure of a scene from motion blur/optical defocus. In the proposed approach, the difference of blur between two observations is used as a cue for recovering depth within a recursive state-estimation framework. For motion blur, we use an unblurred-blurred image pair. Since the relationship between the observation and the scale factor of the point spread function associated with the depth at a point is nonlinear, we propose an unscented Kalman filter formulation for depth estimation. There are no restrictions on the shape of the blur kernel. Furthermore, within the same formulation, we address a special and challenging scenario of depth from defocus with translational jitter. The effectiveness of our approach is evaluated on synthetic as well as real data, and its performance is also compared with contemporary techniques. © 1992-2012 IEEE.
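The recursive estimation step described above can be illustrated in miniature. The sketch below runs a scalar unscented Kalman filter on a per-pixel depth state; the measurement function `h` (blur scale inversely proportional to depth) and all parameter values are illustrative assumptions, not the paper's PSF model:

```python
import numpy as np

def ukf_depth_update(d_mean, d_var, z, r_var, h, q_var=1e-4, kappa=2.0):
    """One unscented Kalman filter update for a scalar depth state.

    d_mean, d_var: prior mean/variance of depth at a pixel.
    z, r_var: observed blur scale and its noise variance.
    h: nonlinear measurement function mapping depth to blur scale.
    """
    d_var = d_var + q_var                               # predict (random-walk model)
    s = np.sqrt((1.0 + kappa) * d_var)
    sigma = np.array([d_mean, d_mean + s, d_mean - s])  # sigma points
    w = np.array([kappa, 0.5, 0.5]) / (1.0 + kappa)     # unscented-transform weights
    zs = h(sigma)                                       # propagate through h
    z_mean = w @ zs
    p_zz = w @ (zs - z_mean) ** 2 + r_var               # innovation variance
    p_dz = w @ ((sigma - d_mean) * (zs - z_mean))       # cross-covariance
    k = p_dz / p_zz                                     # Kalman gain
    return d_mean + k * (z - z_mean), d_var - k * p_zz * k

# toy model: blur scale = 10/depth; feed repeated observations of depth 5
h = lambda d: 10.0 / d
d, v = 2.0, 1.0
for _ in range(40):
    d, v = ukf_depth_update(d, v, z=h(5.0), r_var=1e-3, h=h)
```

Because `h` is nonlinear in depth, the unscented transform (sigma points pushed through `h`) replaces the Jacobian linearization an extended Kalman filter would use.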
  • Publication
    Face Recognition Across Non-Uniform Motion Blur, Illumination, and Pose
    (01-07-2015)
    Punnappurath, Abhijith ; Taheri, Sima ; Chellappa, Rama ; Seetharaman, Guna
    Existing methods for performing face recognition in the presence of blur are based on the convolution model and cannot handle non-uniform blurring situations that frequently arise from tilts and rotations in hand-held cameras. In this paper, we propose a methodology for face recognition in the presence of space-varying motion blur comprising arbitrarily shaped kernels. We model the blurred face as a convex combination of geometrically transformed instances of the focused gallery face, and show that the set of all images obtained by non-uniformly blurring a given image forms a convex set. We first propose a non-uniform blur-robust algorithm by assuming a sparse camera trajectory in the camera motion space to build an energy function with an l1-norm constraint on the camera motion. The framework is then extended to handle illumination variations by exploiting the fact that the set of all images obtained from a face image by non-uniform blurring and changing the illumination forms a bi-convex set. Finally, we propose an elegant extension to also account for variations in pose.
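The sparse-camera-trajectory step above amounts to a non-negative l1-regularised least-squares problem over the weights of the transformed gallery images. A minimal ISTA-style solver, an illustrative stand-in for the authors' optimiser (the matrix `A`, sizes, and regularisation weight are made up):

```python
import numpy as np

def ista_nonneg(A, b, lam=0.01, iters=500):
    """Minimise ||A w - b||^2 + lam * ||w||_1 subject to w >= 0.

    Columns of A play the role of vectorised, geometrically transformed
    gallery faces; b is the vectorised blurred probe image.
    """
    step = 0.5 / np.linalg.norm(A, 2) ** 2       # safe step for this objective
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ w - b)           # gradient of the data term
        w = np.maximum(w - step * grad - step * lam, 0.0)  # prox of lam*||.||_1 on w>=0
    return w

# toy demo: recover a 1-sparse weight vector
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[2] = 1.0
w = ista_nonneg(A, A @ w_true)
```

The non-negativity and l1 penalty together encourage the recovered camera-motion weights to be sparse, mirroring the assumption that the camera visits only a few poses during exposure.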
  • Publication
    Efficient Motion Deblurring with Feature Transformation and Spatial Attention
    (01-09-2019)
    Purohit, Kuldeep
    Convolutional Neural Networks (CNNs) have recently advanced the state of the art in generalized motion deblurring. The literature suggests that restoration of high-resolution blurred images requires a design with a large receptive field, which existing networks achieve by increasing the number of generic convolution layers, the kernel size, or the scales at which the image is processed. However, increasing the network capacity in this form comes with the burden of increased model size and lower speed. To resolve this, we propose a novel architecture composed of dynamic convolutional modules, namely feature transformation (FT) and spatial attention (SA). An FT module addresses the camera shifts responsible for the global blur in the input image, while an SA module addresses spatially varying blur due to dynamic objects and depth changes. Qualitative and quantitative comparisons on deblurring benchmarks demonstrate that our network outperforms prior art across factors of accuracy, compactness, and speed, enabling real-time deblurring.
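To make the SA idea concrete, here is a toy per-pixel gating function in NumPy. It mixes channel-wise average- and max-pooled maps and applies a sigmoid mask; this is only a schematic of spatial attention in general, not the paper's trained module:

```python
import numpy as np

def spatial_attention(feat, w, b=0.0):
    """Gate a (C, H, W) feature map by a per-pixel attention mask in (0, 1).

    The mask is a sigmoid of a weighted mix of the channel-wise
    average-pooled and max-pooled descriptors.
    """
    avg = feat.mean(axis=0)                      # (H, W) average over channels
    mx = feat.max(axis=0)                        # (H, W) max over channels
    att = 1.0 / (1.0 + np.exp(-(w[0] * avg + w[1] * mx + b)))
    return att[None] * feat                      # broadcast mask over channels

rng = np.random.default_rng(1)
feat = rng.standard_normal((4, 5, 6))
out = spatial_attention(feat, np.array([0.5, 0.5]))
```

Because the mask varies per pixel, such a module can suppress or emphasise features differently in differently blurred regions, which is what makes it suited to spatially varying blur.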
  • Publication
    Joint Image and depth completion in shape-from-focus: Taking a cue from parallax
    (01-05-2010)
    Sahay, Rajiv R.
    Shape-from-focus (SFF) uses a sequence of space-variantly defocused observations captured with relative motion between the camera and the scene. It assumes that there is no motion parallax in the frames. This is a restriction and constrains the working environment. Moreover, SFF cannot recover the structure information when there are missing data in the frames due to CCD sensor damage or unavoidable occlusions. The capability of filling in plausible information in regions devoid of data is of critical importance in many applications. Images of 3D scenes captured by off-the-shelf cameras with relative motion commonly exhibit parallax-induced pixel motion. We demonstrate the interesting possibility of exploiting the motion parallax cue in the images captured in SFF with a practical camera to jointly inpaint the focused image and depth map. © 2010 Optical Society of America.
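For reference, the classical SFF pipeline that this work builds on picks, at each pixel, the frame of the focal stack with the highest local focus measure. A simplified sketch (squared Laplacian summed over a small window; the window size and test stack are arbitrary choices):

```python
import numpy as np

def sff_depth(stack, win=3):
    """Per-pixel depth index from a focal stack (classical SFF, simplified).

    stack: (N, H, W) images taken at N focus settings. Returns, for each
    pixel, the index of the frame maximising a local focus measure
    (squared Laplacian box-summed over a win x win window).
    """
    n, h, wd = stack.shape
    fm = np.zeros_like(stack)
    pad = win // 2
    for k in range(n):
        img = stack[k]
        lap = np.zeros((h, wd))
        lap[1:-1, 1:-1] = (4 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
                           - img[1:-1, :-2] - img[1:-1, 2:]) ** 2
        p = np.pad(lap, pad)
        for dy in range(win):                    # box-sum the focus energy
            for dx in range(win):
                fm[k] += p[dy:dy + h, dx:dx + wd]
    return fm.argmax(axis=0)

# frame 1 holds a sharp checkerboard, frames 0 and 2 are flat
stack = np.zeros((3, 8, 8))
stack[1] = np.indices((8, 8)).sum(axis=0) % 2
depth = sff_depth(stack)
```

The paper's contribution lies in what this baseline cannot do: handling parallax between frames and filling in regions where the stack has no data at all.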
  • Publication
    Rolling shutter super-resolution in burst mode
    (03-08-2016)
    Rengarajan, Vijay ; Punnappurath, Abhijith ; Seetharaman, Gunasekaran
    Capturing multiple images using the burst mode of handheld cameras makes it possible to obtain a high-resolution (HR) image by exploiting the subpixel motion among the captured images arising from handshake. However, the caveat with mobile phone cameras is that they produce rolling shutter (RS) distortions that must be accounted for in the super-resolution process. We propose a method in which we obtain an RS-free HR image using an HR camera trajectory estimated by leveraging the intra- and inter-frame continuity of the camera motion. Experimental evaluations demonstrate that our approach can effectively recover a super-resolved image free from RS artifacts.
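A bare-bones version of burst super-resolution, ignoring the rolling-shutter correction that is this paper's actual contribution, is subpixel shift-and-add onto a finer grid. The shifts here are assumed known (the paper instead estimates an HR camera trajectory):

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Fuse low-res frames with known subpixel shifts onto an HR grid.

    frames: list of (h, w) images; shifts: per-frame (dy, dx) in LR pixels.
    HR pixels never hit by any sample are left at zero.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for img, (dy, dx) in zip(frames, shifts):
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), img)            # accumulate samples
        np.add.at(cnt, (hy, hx), 1.0)            # and their counts
    cnt[cnt == 0] = 1.0
    return acc / cnt

frames = [np.full((4, 4), 3.0), np.full((4, 4), 3.0)]
hr = shift_and_add(frames, shifts=[(0.0, 0.0), (0.5, 0.5)])
```

With a rolling shutter, a single (dy, dx) per frame is no longer valid because each row is exposed at a different time; that per-row motion is what the paper's trajectory estimation recovers.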
  • Publication
    Wiretap Polar Codes in Encryption Schemes Based on Learning with Errors Problem
    The Learning with Errors (LWE) problem has been extensively studied in cryptography due to its strong hardness guarantees, efficiency, and expressiveness in constructing advanced cryptographic primitives. In this work, we show that using polar codes in conjunction with LWE-based encryption yields several advantages. To begin, we demonstrate improvements in the efficiency, or rate of information transmission, of the LWE-based scheme by leveraging polar coding (with no change in the cryptographic security guarantee). Next, we integrate wiretap polar coding with LWE-based encryption to ensure provable semantic security over a wiretap channel in addition to cryptographic security based on the hardness of LWE. To the best of our knowledge, this is the first wiretap code to have cryptographic security guarantees as well. Finally, we study the security of the private key used in LWE-based encryption with wiretap polar coding, and propose a key-refresh method using the random bits used in wiretap coding. Under a known-plaintext attack, we show that non-vanishing information-theoretic secrecy can be achieved for the key. We believe our approach is at least as interesting as our final results: our work combines cryptography and coding theory in a novel 'non-blackbox' way which may be relevant to other scenarios as well.
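For readers unfamiliar with the base scheme, a Regev-style LWE bit encryption can be sketched in a few lines (toy parameters, no polar or wiretap coding layer; this is not the scheme analysed in the paper):

```python
import numpy as np

def lwe_keygen(n=32, m=128, q=3329, rng=None):
    """Toy Regev-style key generation: public (A, b = A s + e mod q), secret s."""
    rng = rng or np.random.default_rng(0)
    s = rng.integers(0, q, n)                    # secret key
    A = rng.integers(0, q, (m, n))
    e = rng.integers(-2, 3, m)                   # small noise
    return (A, (A @ s + e) % q, q), s

def lwe_encrypt_bit(pk, bit, rng=None):
    """Encrypt one bit as a random subset-sum of LWE samples."""
    A, b, q = pk
    rng = rng or np.random.default_rng(1)
    r = rng.integers(0, 2, A.shape[0])           # random 0/1 row selection
    return (r @ A) % q, (r @ b + bit * (q // 2)) % q

def lwe_decrypt_bit(sk, ct, q=3329):
    """Recover the bit: the residual is near 0 for bit 0, near q/2 for bit 1."""
    u, v = ct
    d = (v - u @ sk) % q
    return int(min(d, q - d) > q // 4)

pk, sk = lwe_keygen()
c0 = lwe_encrypt_bit(pk, 0)
c1 = lwe_encrypt_bit(pk, 1)
```

Decryption succeeds because the accumulated noise r·e stays well below q/4 for these parameters; the paper's point is that an error-correcting layer such as polar coding lets one push the rate (and, with wiretap coding, add information-theoretic secrecy) on top of exactly this kind of noisy decryption channel.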
  • Publication
    Depth inpainting by tensor voting
    (01-01-2013)
    Kulkarni, Mandar
    Depth maps captured by range-scanning devices or optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of candidate depth estimates using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data. © 2013 Optical Society of America.
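The local-plane idea can be demonstrated with ordinary least squares in place of tensor voting: fit z = a·x + b·y + c to the valid pixels and evaluate the plane at the holes. This single-plane sketch is far simpler than the paper's 3D TV machinery:

```python
import numpy as np

def inpaint_depth_plane(depth, mask):
    """Fill holes in a depth map by fitting one least-squares plane
    z = a*x + b*y + c to the valid pixels and evaluating it at the holes.

    depth: (H, W) depth map; mask: boolean array, True where depth is valid.
    """
    ys, xs = np.nonzero(mask)
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    coef, *_ = np.linalg.lstsq(A, depth[mask], rcond=None)
    out = depth.copy()
    hy, hx = np.nonzero(~mask)
    out[hy, hx] = coef[0] * hx + coef[1] * hy + coef[2]
    return out

# a planar depth map with a square hole is recovered exactly
yy, xx = np.mgrid[0:10, 0:10]
true = 2.0 * xx + 3.0 * yy + 1.0
mask = np.ones((10, 10), dtype=bool)
mask[4:7, 4:7] = False
out = inpaint_depth_plane(np.where(mask, true, 0.0), mask)
```

Tensor voting generalises this by letting each valid pixel vote for locally consistent planes, so the fit adapts across depth discontinuities instead of assuming a single global plane.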
  • Publication
    Preface
    (01-01-2013)
    Chellappa, Rama
    The computer vision community is witnessing a major resurgence in the area of motion deblurring, spurred by the emerging ubiquity of portable imaging devices. Rapid strides are being made in handling motion blur both algorithmically and through tailor-made hardware-assisted technologies. The main goal of this book is to ensure a timely dissemination of recent findings in this very active research area. Given the flurry of activity in the last few years in tackling uniform as well as non-uniform motion blur resulting from incidental shake in hand-held consumer cameras as well as object motion, we felt that a compilation of recent and concerted efforts for restoring images degraded by motion blur was long overdue. Since no single compendium of the kind envisaged here exists, we believe that this is an opportune time for publishing a comprehensive collection of contributed chapters by leading researchers providing in-depth coverage of recently developed methodologies with excellent supporting experiments, encompassing both algorithms and architectures. As is well known, motion blur results from the averaging of intensities caused by relative motion between a camera and a scene during exposure time. Motion blur is normally considered a nuisance, although one must not overlook the fact that some works have used blur for creating aesthetic appeal or exploited it as a valuable cue in depth recovery and image forensics. Early works were non-blind in the sense that the motion blur kernel (i.e., the point spread function (PSF)) was assumed to be of a simple form, such as those arising from uniform camera motion, and efforts were primarily directed at designing a stable estimate of the original image.
  • Publication
    Deskewing of underwater images
    (01-03-2015)
    Seemakurthy, Karthik
    We address the problem of restoring a static planar scene degraded by the skewing effect when imaged through a dynamic water surface. In particular, we investigate geometric distortions due to unidirectional cyclic waves and circular ripples, phenomena that are most prevalent in fluid flow. Although the camera and scene are stationary, light rays emanating from the scene undergo refraction at the fluid-air interface. This refraction effect is time varying for dynamic fluids and results in nonrigid distortions (skew) in the captured image. These distortions can be associated with motion blur depending on the exposure time of the camera. In the first part of this paper, we establish the condition under which the blur induced by unidirectional cyclic waves can be treated as space invariant. We proceed to derive a mathematical model for blur formation and propose a restoration scheme using a single degraded observation. In the second part, we reveal how the blur induced by circular ripples (though space variant) can be modeled as uniform in the polar domain and develop a method for deskewing. The proposed methods are tested on synthetic as well as real examples.
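The geometric core of the skew model is Snell's law at the fluid-air interface. The helper below computes the lateral displacement of a scene point seen along a vertical ray through a tilted surface element (a 2D, small-patch sketch; the slope and depth values are illustrative, not from the paper):

```python
import numpy as np

def refraction_shift(slope, depth, n_water=1.33):
    """Lateral displacement of a scene point at `depth` below the surface,
    viewed along a vertical ray through a surface element of given `slope`.

    The refracted ray deviates from the vertical by (theta_i - theta_t),
    with sin(theta_i) = n_water * sin(theta_t) by Snell's law.
    """
    theta_i = np.arctan(slope)                       # incidence angle at the tilted normal
    theta_t = np.arcsin(np.sin(theta_i) / n_water)   # refraction angle below the surface
    return depth * np.tan(theta_i - theta_t)
```

A flat surface (zero slope) produces no shift, and the shift grows with the surface slope; a wave propagating across the surface therefore drags scene points back and forth over time, which is the skew being modeled.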
  • Publication
    Cueing motion blur for registration of inclined planar scenes
    (01-01-2015)
    Nair, Arun Asokan ; Rao, M. Purnachandra ; Seetharaman, Guna
    Existing image registration methods that work in the presence of motion blur assume the scene to be fronto-parallel. In this work, we extend the state of the art by using motion blur itself as a cue to infer plane inclination. This is achieved by matching extremities of blur kernels computed at different locations in the image. Because these extremities correspond to the same homography, we show that it is possible to recover the plane orientation. Following this, we propose a registration method that reorients the source image to the target plane within a reblur-difference framework to detect the actual changes.
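The reblur-difference step can be sketched directly: convolve the (reoriented) source with the blur kernel and flag pixels that disagree with the observed target. The kernel, threshold, and images below are placeholder assumptions:

```python
import numpy as np

def reblur_difference(sharp, blurred, kernel, thresh=0.1):
    """Flag changed pixels by reblurring `sharp` with `kernel` and
    thresholding its absolute difference from the observed `blurred` image.
    """
    kh, kw = kernel.shape
    p = np.pad(sharp, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    reblurred = np.zeros_like(sharp)
    for dy in range(kh):                         # slide the kernel over the image
        for dx in range(kw):
            reblurred += kernel[dy, dx] * p[dy:dy + sharp.shape[0],
                                            dx:dx + sharp.shape[1]]
    return np.abs(reblurred - blurred) > thresh

# with an identity kernel, only genuinely changed pixels are flagged
rng = np.random.default_rng(2)
sharp = rng.standard_normal((10, 10))
kernel = np.zeros((3, 3))
kernel[1, 1] = 1.0
changed = sharp.copy()
changed[2, 2] += 1.0
mask = reblur_difference(sharp, changed, kernel)
```

Reblurring the sharp image, rather than deblurring the observed one, avoids amplifying noise, which is why the comparison is done in the blurred domain.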