Now showing 1 - 4 of 4
  • Publication
    Preface
    (01-01-2013) ;
    Chellappa, Rama
    The computer vision community is witnessing a major resurgence in the area of motion deblurring, spurred by the growing ubiquity of portable imaging devices. Rapid strides are being made in handling motion blur, both algorithmically and through tailor-made hardware-assisted technologies. The main goal of this book is to ensure timely dissemination of recent findings in this very active research area. Given the flurry of activity in the last few years in tackling uniform as well as non-uniform motion blur, arising from incidental shake in hand-held consumer cameras as well as from object motion, we felt that a compilation of recent and concerted efforts for restoring images degraded by motion blur was long overdue. Since no single compendium of the kind envisaged here exists, we believe this is an opportune time to publish a comprehensive collection of contributed chapters by leading researchers, providing in-depth coverage of recently developed methodologies with excellent supporting experiments and encompassing both algorithms and architectures. As is well known, motion blur results from the averaging of intensities caused by relative motion between the camera and the scene during the exposure time. Motion blur is normally considered a nuisance, although one must not overlook that some works have used blur to create aesthetic appeal or exploited it as a valuable cue for depth recovery and image forensics. Early works were non-blind in the sense that the motion blur kernel (i.e., the point spread function (PSF)) was assumed to take a simple form, such as that arising from uniform camera motion, and efforts were primarily directed at designing a stable estimate of the original image.
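The averaging model described in the preface can be made concrete with a small sketch: under uniform camera motion, the blurred image is (approximately) the sharp image convolved with a line-shaped PSF. This is an illustrative NumPy sketch under that simplifying assumption, not code from the book; the function names are ours.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def motion_blur_psf(length, angle_deg=0.0):
    """Build a simple linear-motion PSF: a normalized line segment.

    Models uniform camera translation along one direction during the
    exposure; real hand-shake kernels are far more irregular.
    """
    size = length if length % 2 == 1 else length + 1
    psf = np.zeros((size, size))
    center = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-center, center, 4 * size):
        if abs(t) <= length / 2:
            r = int(round(center + t * np.sin(theta)))
            c = int(round(center + t * np.cos(theta)))
            psf[r, c] = 1.0
    return psf / psf.sum()

def blur(image, psf):
    """Blur = averaging of intensities: 2D correlation with the PSF."""
    pad = psf.shape[0] // 2
    padded = np.pad(image, pad, mode="edge")
    windows = sliding_window_view(padded, psf.shape)
    return np.einsum("ijkl,kl->ij", windows, psf)
```

Because the PSF sums to one, blurring only redistributes intensity: a constant image is unchanged, while a point source is smeared along the motion direction.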
  • Publication
    A methodology to reconstruct large damaged regions in heritage structures
    (01-01-2018) ;
    Sahay, Pratyush
    ;
    Vasu, Subeesh
    While it is important to digitize heritage sites “as is”, 3D models of damaged archaeological structures can be visually unpleasant due to the presence of large missing regions. In this chapter, we discuss geometric reconstruction of such large damaged regions (or holes) in 3D digital models. Without constraining the size or complexity of the damaged region, the missing 3D geometry is inferred using a geometric prior drawn from self-similar structures, which provide a salient cue about the missing surface characteristics that may be unique to an object class. The underlying surface is then recovered by adaptively propagating 3D surface smoothness from local geometric information around the boundary of the hole and by appropriately using the cue provided by the available self-similar examples. We employ two methodologies to harness the geometric prior effectively: (i) a non-iterative framework based on tensor voting when multiple self-similar examples are available, and (ii) a dictionary learning-based method when only a single self-similar example is available. We showcase the relevance of our method in the archaeological domain, which warrants “filling in” missing information in damaged heritage sites, and present several examples from Hampi, a UNESCO World Heritage Site in northern Karnataka, India.
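The smoothness-propagation step can be illustrated, in highly simplified form, on a 2.5D height map: unknown samples inside a hole are filled by repeatedly averaging their four neighbours (discrete harmonic interpolation), with the hole boundary held fixed. This NumPy sketch is our own baseline illustration of smoothness propagation, not the chapter's tensor-voting or dictionary-learning pipeline; the function name is hypothetical.

```python
import numpy as np

def fill_hole_smooth(height, mask, iters=500):
    """Fill missing samples in a 2.5D height map by iteratively
    averaging the four neighbours (discrete harmonic interpolation).

    height : 2D array of surface heights (values under mask ignored)
    mask   : boolean array, True where data is missing (the "hole")
    """
    h = height.copy()
    h[mask] = height[~mask].mean()  # neutral initial guess
    for _ in range(iters):
        up    = np.roll(h,  1, axis=0)
        down  = np.roll(h, -1, axis=0)
        left  = np.roll(h,  1, axis=1)
        right = np.roll(h, -1, axis=1)
        avg = (up + down + left + right) / 4.0
        h[mask] = avg[mask]  # update only the hole; boundary stays fixed
    return h
```

On a planar surface this recovers the missing region exactly, since linear functions are harmonic; for the non-trivial geometry of heritage structures a smoothness prior alone over-flattens, which is precisely why the chapter adds the self-similarity cue.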
  • Publication
    HDR imaging in the presence of motion blur
    (01-01-2013)
    Vijay, C. S.
    ;
    Paramanand, C.
    ;
    Introduction: Digital cameras convert incident light energy into electrical signals and present it as an image after passing the signals through several processes, including sensor correction, noise reduction, scaling, gamma correction, image enhancement, color space conversion, frame-rate change, compression, and storage/transmission (Nakamura 2005). Although today's camera sensors have high quantum efficiency and high signal-to-noise ratios, they inherently have an upper limit (the full-well capacity) on the accumulation of light energy. Likewise, the smallest signal a sensor can acquire depends on its preset sensitivity. The total variation in the magnitude of irradiance incident at a camera is called the dynamic range (DR), defined as DR = (maximum signal value)/(minimum signal value). Most digital cameras available in the market today cannot accommodate the entire DR of a scene due to hardware limitations. Scenes with high dynamic range (HDR) therefore either appear dark or become saturated. Overcoming this limitation and estimating the original scene data is referred to as high dynamic range imaging (HDRI) (Debevec & Malik 1997, Mertens, Kautz & Van Reeth 2007, Nayar & Mitsunaga 2000). Over the years, several algorithmic approaches have been investigated for estimating scene irradiance (see, for example, Debevec & Malik (1997), Mann & Picard (1995), Mitsunaga & Nayar (1999)). The basic idea in these approaches is to capture multiple images of a scene with different exposure settings and algorithmically extract HDR information from these observations. By varying the exposure settings, one can control the amount of energy received by the sensor and thereby overcome its limits.
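The DR definition and the multi-exposure idea can be sketched as follows, assuming an idealised linear camera response with pixel values normalised to [0, 1] (real pipelines must first recover the response curve, as in Debevec & Malik 1997); the function names and the hat-shaped weighting are illustrative, not the chapter's exact algorithm.

```python
import numpy as np

def dynamic_range_stops(max_signal, min_signal):
    """DR = max/min; photographers quote it in stops (doublings), i.e. log2."""
    return np.log2(max_signal / min_signal)

def merge_exposures(images, exposure_times, sat=0.95, dark=0.05):
    """Estimate scene irradiance from differently exposed shots.

    Each pixel in shot k is modelled as clip(E * t_k, 0, 1); dividing
    by t_k gives an irradiance estimate, and a hat-shaped weight
    discounts saturated and underexposed (noise-dominated) pixels.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = np.clip(np.minimum(img - dark, sat - img), 0.0, None)
        num += w * (img / t)   # per-shot irradiance estimate
        den += w
    return num / np.maximum(den, 1e-12)
```

A bright pixel saturated in the long exposure is recovered from the short one, and vice versa, which is how the merge extends the effective DR beyond any single shot.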
  • Publication
    Motion deblurring: Algorithms and systems
    (01-01-2013) ;
    Chellappa, Rama
    A comprehensive guide to restoring images degraded by motion blur, bridging traditional approaches and emerging computational-photography techniques, and bringing together a wide range of methods from basic theory as well as cutting-edge research. It encompasses both algorithms and architectures, providing detailed coverage of practical techniques by leading researchers. From an algorithms perspective, blind and non-blind approaches are discussed, including the use of single or multiple images; the projective motion blur model; image priors and parametric models; high dynamic range imaging in the irradiance domain; and image recognition in the presence of blur. Performance limits for motion-deblurring cameras are also presented. From a systems perspective, hybrid frameworks combining low-resolution high-speed and high-resolution low-speed cameras are described, along with the use of inertial sensors and coded exposure cameras. Also covered is an architecture exploiting compressive sensing for video recovery. This is a valuable resource for researchers and practitioners in computer vision, image processing, and related fields.