Now showing 1 - 10 of 55
  • Publication
    Scalable pseudo-exhaustive methodology for testing and diagnosis in flow-based microfluidic biochips
    (01-05-2020)
    Vadakkeveedu, Gokulkrishnan; Potluri, Seetal
    Microfluidics is an emerging field that is expected to be used widely in many safety-critical applications, including healthcare, medical research and defence. Hence, technologies for fault testing and fault diagnosis of these chips are of extreme importance. In this study, the authors propose a scalable pseudo-exhaustive testing and diagnosis methodology for flow-based microfluidic biochips. The proposed approach employs a divide-and-conquer technique wherein large architectures are split into smaller sub-architectures, each of which is tested and diagnosed independently.
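    The divide-and-conquer split described in this abstract can be illustrated with a short sketch. This is a minimal illustration only: the grid dimensions, the 2x2 block size, and the test_block/apply_pattern interface are assumptions made here for exposition, not the authors' actual biochip architecture or test procedure.

      from itertools import product

      def partition(n_rows, n_cols, block):
          """Split an n_rows x n_cols valve grid into block x block sub-architectures."""
          for r in range(0, n_rows, block):
              for c in range(0, n_cols, block):
                  yield [(i, j)
                         for i in range(r, min(r + block, n_rows))
                         for j in range(c, min(c + block, n_cols))]

      def test_block(valves, apply_pattern):
          """Pseudo-exhaustive step: drive every open/close pattern of one block and
          return the first failing pattern (None if the block looks fault-free)."""
          for pattern in product([0, 1], repeat=len(valves)):
              if not apply_pattern(dict(zip(valves, pattern))):  # hypothetical chip interface
                  return pattern
          return None

      # Each sub-architecture is tested and diagnosed independently, so the pattern
      # count grows with 2**(block*block) per block, not 2**(n_rows*n_cols) overall.
      results = {tuple(blk): test_block(blk, lambda p: True)    # stub: chip always passes
                 for blk in partition(8, 8, block=2)}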
  • Publication
    FadingBF: A Bloom Filter with Consistent Guarantees for Online Applications
    (01-01-2022)
    Vairam, Prasanna Karthik; Kumar, Pratyush
    A Bloom filter (BF), when used by an online application, experiences monotonically increasing false-positive errors. Decaying stale elements can control false positives, but existing mechanisms for decay require unreasonable storage and computation. Inexpensive methods reset the BF periodically, resulting in inconsistent guarantees and performance issues in the underlying computing system. In this article, we propose the Fading Bloom filter (FadingBF), which can provide inexpensive yet safe decay of elements. FadingBF requires neither additional storage nor computation to achieve this but instead exploits the underlying storage medium's intrinsic properties, i.e., DRAM capacitor characteristics. We realize FadingBF by implementing the BF on a DRAM memory module with its periodic refresh disabled. Consequently, the capacitors holding data elements that are not accessed frequently will predictably lose charge and naturally decay. The retention time of the capacitors guards against premature deletion. However, some capacitors may retain information longer than required due to FadingBF's software and hardware variables. Using an analytical model of FadingBF, we show that carefully tuning its parameters can minimize such cases. For a surveillance application, we demonstrate that FadingBF achieves better guarantees through graceful decay, consumes 57 percent less energy, and imposes a lower system load than the standard BF.
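    The decay mechanism can be emulated in software to see its effect on queries. The sketch below is an analogue only, assuming per-bit insertion timestamps stand in for DRAM capacitor charge; the SHA-256 hashing, filter size, and 60-second retention constant are choices made here, not the paper's implementation, which relies on the DRAM module itself with refresh disabled.

      import time, hashlib

      class FadingBloomSketch:
          """Software analogue of FadingBF: a set bit is read as 0 ("decayed")
          once it is older than the emulated retention time."""
          def __init__(self, m=1024, k=4, retention_s=60.0):
              self.m, self.k, self.retention = m, k, retention_s
              self.set_at = [None] * m            # stands in for capacitor charge age

          def _positions(self, item):
              for i in range(self.k):
                  digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                  yield int.from_bytes(digest[:8], "big") % self.m

          def add(self, item):
              now = time.monotonic()
              for p in self._positions(item):
                  self.set_at[p] = now            # "recharges" the bit

          def query(self, item):
              now = time.monotonic()
              return all(self.set_at[p] is not None and
                         now - self.set_at[p] <= self.retention
                         for p in self._positions(item))

    Because stale bits fade individually instead of the whole filter being reset at once, guarantees degrade gracefully, which mirrors the abstract's contrast with periodic-reset schemes.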
  • Publication
    A novel power-managed scan architecture for test power and test time reduction
    (01-01-2008)
    Devanathan, V. R.; Ravikumar, C. P.; Mehrotra, Rajat
    In sub-70 nm technologies, leakage power becomes a significant component of the total power. Designers address this concern by extensive use of adaptive voltage scaling techniques to reduce dynamic as well as leakage power. Low-power scan test schemes that have evolved in the past primarily address dynamic power reduction and are less effective in reducing the total power. This paper proposes a Power-Managed Scan (PMScan) scheme that exploits the presence of adaptive voltage scaling logic to reduce test power. Some practical implementation challenges that arise when the proposed scheme is employed on industrial designs are also discussed. Experimental results on benchmark circuits and industrial designs show that employing the proposed technique leads to a significant reduction in dynamic and leakage power. The proposed method can also be used as a vehicle to trade off test application time against test power by suitably adjusting the scan shift frequency and scan-mode power supplies. Copyright © 2008 American Scientific Publishers. All rights reserved.
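    A back-of-the-envelope sketch of the trade-off mentioned in the last sentence, using the textbook dynamic-power relation P_dyn ≈ α·C·V²·f; the capacitance, voltage, frequency and leakage numbers below are illustrative assumptions, not figures from the paper.

      def scan_power_time(alpha_c, v_scan, f_shift, i_leak, n_shift_cycles):
          """Estimate (total scan power, shift time) for one scan-mode operating point.
          Lowering V and/or f reduces power; lowering f stretches test time."""
          p_dyn = alpha_c * v_scan ** 2 * f_shift     # switching power during shift
          p_leak = v_scan * i_leak                    # crude leakage estimate at this supply
          return p_dyn + p_leak, n_shift_cycles / f_shift

      # Nominal scan supply/frequency versus a power-managed operating point:
      nominal = scan_power_time(1e-9, 1.2, 50e6, 2.0e-3, 1_000_000)
      managed = scan_power_time(1e-9, 1.0, 25e6, 1.4e-3, 1_000_000)
      print("nominal (W, s):", nominal)
      print("managed (W, s):", managed)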
  • Publication
    Constructing online testable circuits using reversible logic
    (01-01-2010)
    Mahammad, Sk Noor
    With the advent of nanometer technology, circuits are more prone to transient faults that can occur during operation. Of the different types of transient faults reported in the literature, the single-event upset (SEU) is prominent. Traditional techniques such as triple-modular redundancy (TMR) consume large area and power. Reversible logic has been gaining interest in the recent past due to its low heat dissipation characteristics. This paper proposes the following: (1) a novel universal reversible logic gate (URG) and a set of basic sequential elements that can be used for building reversible sequential circuits, with 25% less garbage than the best reported in the literature; (2) a reversible gate that can mimic the functionality of a lookup table (LUT) and can be used to construct a reversible field-programmable gate array (FPGA); and (3) automatic conversion of any given reversible circuit into an online testable circuit that can detect any single-bit error online, including soft errors in the logic blocks, using a theoretically proven minimum amount of garbage, which is significantly less than the best reported in the literature. © 2009 IEEE.
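    The paper's URG itself is not reproduced here. As a stand-in, the sketch below uses the classic Fredkin (controlled-swap) gate to show what "reversible" means operationally: the gate's truth table is a bijection on the input space, so inputs are always recoverable from outputs. The parity remark in the final comment is a general observation about online-testable reversible designs, not the paper's specific construction.

      from itertools import product

      def fredkin(c, a, b):
          """Fredkin (controlled-swap) gate: swaps a and b when the control c is 1."""
          return (c, b, a) if c else (c, a, b)

      # Reversibility check: the mapping over all 3-bit inputs is a permutation,
      # so no logical information is lost by the gate.
      table = {bits: fredkin(*bits) for bits in product([0, 1], repeat=3)}
      assert sorted(table.values()) == sorted(table.keys())

      # Online-testable constructions typically add parity/garbage lines so that
      # any single-bit upset flips an observable check bit.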
  • Publication
    A review of algorithms for border length minimization problem
    (03-11-2014)
    Srinivasan, S.; Bhattacharya, A.
    Genomic analysis is gaining prominence, specifically in the areas of forensics and drug discovery. DNA microarrays are the devices employed for performing genomic analysis. The border minimization problem (BMP) is a well-known optimization problem in the automated design of DNA microarrays. BMP can be considered from two perspectives, namely placement and embedding. This paper presents a comparative study of the different techniques reported in the literature for BMP and discusses current open challenges.
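    To make the objective concrete, the sketch below scores the border length of a given placement, assuming each probe has already been embedded as a binary deposition vector (1 = unmasked in that synthesis step); the 2x2 grid and its vectors are made-up examples, not data from any of the surveyed papers.

      def hamming(u, v):
          return sum(a != b for a, b in zip(u, v))

      def border_length(grid):
          """Sum, over all horizontally and vertically adjacent cells, of the number
          of deposition steps in which exactly one of the two cells is unmasked."""
          rows, cols = len(grid), len(grid[0])
          total = 0
          for r in range(rows):
              for c in range(cols):
                  if c + 1 < cols:
                      total += hamming(grid[r][c], grid[r][c + 1])
                  if r + 1 < rows:
                      total += hamming(grid[r][c], grid[r + 1][c])
          return total

      # Placement moves probes between cells; embedding changes the vectors themselves.
      # Both perspectives therefore change this score.
      example = [[(1, 0, 1, 0), (1, 1, 0, 0)],
                 [(0, 0, 1, 1), (0, 1, 1, 0)]]
      print(border_length(example))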
  • Publication
    Thermal-safe dynamic test scheduling method using on-chip temperature sensors for 3D MPSoCs
    (01-01-2012)
    Pasumarthi, Rama Kumar; Devanathan, V. R.; Visvanathan, V.; Potluri, Seetal
    System test and online test techniques are aggressively being used in today's SoCs for improved test quality and reliability (e.g., aging/soft-error robustness). With the growing popularity of vertical integration, such as 2.5D and 3D, in the semiconductor industry, ensuring thermal safety of SoCs during these test modes poses a challenge. In this paper, we propose a dynamic test scheduling mechanism for system tests and/or online tests that uses dynamic feedback from on-chip thermal sensors to control temperature during shift (or scan) and capture, thereby ensuring thermal-safe conditions while applying the test patterns. The proposed technique is a closed-loop test application scheme that eliminates the need for separate thermal simulation of test patterns at the design stage. The technique also enables granular field-level configuration of thermal limits, so that different units across multiple cores are subjected to customized thermal profiles. Results from an implementation of the proposed schemes on a 4-layer, 16-core, 12.8-million-gate OpenSparc S1 processor subsystem are presented. Copyright © 2012 American Scientific Publishers. All rights reserved.
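    A minimal sketch of the closed-loop idea, assuming a hypothetical read_sensor/apply_burst interface to the on-chip sensors and DFT hardware; the 85 °C limit and the coolest-core-first policy are illustrative choices, not the paper's exact scheduling algorithm.

      import time

      def thermal_safe_schedule(cores, read_sensor, apply_burst, limit_c=85.0):
          """Closed-loop scheduler sketch: shift/capture bursts are applied only to
          cores whose on-chip sensors currently read below the thermal limit."""
          pending = set(cores)
          while pending:
              temps = {c: read_sensor(c) for c in pending}
              cool = [c for c in pending if temps[c] < limit_c]
              if not cool:
                  time.sleep(1e-3)                 # every core is hot: idle until they cool
                  continue
              core = min(cool, key=temps.get)      # pick the coolest eligible core
              if apply_burst(core):                # hypothetical: True when core's patterns finish
                  pending.remove(core)

    Because temperature is read back from the sensors at run time, no separate pre-silicon thermal simulation of the pattern set is needed, which is the point the abstract makes about the closed-loop scheme.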
  • Publication
    Thread synchronization: From mutual exclusion to transactional memory
    (01-07-2011)
    Korgaonkar, Kunal
    Transactional memory (TM) is viewed by researchers as a suitable mechanism for shared-memory synchronization on upcoming many-core systems. This paper provides introductory material on TM, followed by a background on important historical work on synchronization leading up to current TM research. The paper reviews recent pure-hardware and hardware-software TM design proposals, and finally brings out a list of interesting open problems in this field.
  • Publication
    Theoretical Lower Bound for Border Length Minimization Problem
    (01-05-2017)
    Srinivasan, S.; Bhattacharya, A.
    Biochemical analysis procedures, including those used in genomics and drug discovery, have been formalized to the extent that they can be automated. Large microarrays housing DNA probes are used for this purpose. Manufacturing these microarrays involves depositing the respective DNA probes in each of their cells. The deposition is carried out iteratively by masking and unmasking cells in each step. A masked cell of the microarray that is adjacent to (shares a border with) an unmasked one is at high risk of being exposed in a deposition step. Thus, minimizing the number of such borders (border length minimization) is crucial for reliable manufacturing of these microarrays. Given a microarray and a set of DNA probes, computing a lower bound on the border length is crucial for studying the effectiveness of any algorithm that solves the border length minimization problem. A numerical method for computing this lower bound has been proposed in the literature, but it takes prohibitively long. In practice, the DNA probes are random sequences of nucleotides. Based on this realistic assumption, this paper estimates the lower bound for the border length analytically using a probability-theoretic approach, by reducing the problem to computing the probability distribution functions (PDFs) of the Hamming distance and the length of the longest common subsequence (LCS) between two random strings. To the best of our knowledge, no PDF has been reported earlier for the length of the LCS between two random strings.
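    A small numerical companion to the probability-theoretic reduction described above, under the stated assumption of uniform i.i.d. nucleotides: the Hamming distance between two random length-n probes over a 4-letter alphabet is exactly Binomial(n, 3/4), while the LCS-length distribution (for which the paper derives a PDF) is only estimated here by Monte Carlo, since the analytical form is not reproduced.

      import random
      from collections import Counter
      from math import comb

      def hamming_pdf(n, p_mismatch=0.75):
          """Exact PDF of the Hamming distance between two i.i.d. uniform
          length-n strings over a 4-letter alphabet: Binomial(n, 3/4)."""
          return [comb(n, d) * p_mismatch**d * (1 - p_mismatch)**(n - d)
                  for d in range(n + 1)]

      def lcs_len(a, b):
          """Standard O(len(a)*len(b)) dynamic program for the LCS length."""
          dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          for i, x in enumerate(a, 1):
              for j, y in enumerate(b, 1):
                  dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
          return dp[-1][-1]

      def lcs_pdf_mc(n, trials=5000, alphabet="ACGT"):
          """Monte Carlo estimate of the PDF of the LCS length between two random probes."""
          counts = Counter(lcs_len([random.choice(alphabet) for _ in range(n)],
                                   [random.choice(alphabet) for _ in range(n)])
                           for _ in range(trials))
          return {length: c / trials for length, c in sorted(counts.items())}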
  • Publication
    System-on-programmable-chip implementation for on-line face recognition
    (01-02-2007)
    Pavan Kumar, A.
    In this paper, the design of a parallel architecture for on-line face recognition using weighted modular principal component analysis (WMPCA) and its system-on-programmable-chip (SoPC) implementation are discussed. The WMPCA methodology, proposed by us earlier, is based on the assumption that the rates of variation of the different regions of a face differ due to variations in expression and illumination. Given a database of sample faces for training and a query face to recognize, the WMPCA methodology divides the face into horizontal regions. Each of these regions is analyzed independently by computing its eigenfeatures and comparing them with the corresponding eigenfeatures of the faces stored in the sample database to calculate the corresponding error. The final decision of the face recognizer is based on the weighted sum of the errors computed from each of the regions. These weights are calculated based on the extent to which the various samples of the subject are spread in the eigenspace. The WMPCA methodology has a better recognition rate than the modular PCA approach developed by Rajkiran and Vijayan [Rajkiran, G., Vijayan, K., 2004. An improved face recognition technique based on modular PCA approach. Pattern Recognition Letters, 25(4), 429-436]. The methodology also has wide scope for parallelism. We present an architecture that exploits this parallelism and implement it as a system-on-programmable-chip on an ALTERA-based field-programmable gate array (FPGA) platform. The implementation achieves a processing speed of about 26 frames per second at an operating frequency of 33.33 MHz. © 2006 Elsevier B.V. All rights reserved.
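    The region-wise flow described above can be sketched in a few lines of NumPy. The four-strip split, the inverse-spread weighting rule, and the nearest-neighbour matching below are simplifying assumptions for illustration, not the exact WMPCA formulation or the SoPC architecture.

      import numpy as np

      def train_regions(faces, n_regions=4, n_eig=20):
          """faces: array of shape (n_samples, height, width). Split each face into
          horizontal strips and learn, per strip, a mean, an eigenbasis, the training
          projections, and a weight inversely related to the samples' spread."""
          models = []
          for strip in np.array_split(faces, n_regions, axis=1):
              X = strip.reshape(len(strip), -1).astype(float)
              mean = X.mean(axis=0)
              _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
              basis = vt[:n_eig]
              proj = (X - mean) @ basis.T
              weight = 1.0 / (proj.var(axis=0).sum() + 1e-9)   # less spread -> more weight
              models.append((mean, basis, proj, weight))
          return models

      def weighted_error(query_face, models, n_regions=4):
          """Weighted sum over regions of the distance from the query's eigenfeatures
          to the nearest training sample's eigenfeatures in that region."""
          total = 0.0
          strips = np.array_split(query_face, n_regions, axis=0)
          for (mean, basis, train_proj, weight), strip in zip(models, strips):
              q = basis @ (strip.reshape(-1).astype(float) - mean)
              total += weight * np.min(np.linalg.norm(train_proj - q, axis=1))
          return total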