  • Publication
    Scalable pseudo-exhaustive methodology for testing and diagnosis in flow-based microfluidic biochips
    (01-05-2020)
    Vadakkeveedu, Gokulkrishnan ; Potluri, Seetal
    Microfluidics is an emerging field of science that will be used widely in many safety-critical applications, including healthcare, medical research and defence. Hence, technologies for fault testing and fault diagnosis of these chips are of extreme importance. In this study, the authors propose a scalable pseudo-exhaustive testing and diagnosis methodology for flow-based microfluidic biochips. The proposed approach employs a divide-and-conquer technique wherein large architectures are split into smaller sub-architectures, each of which is tested and diagnosed independently.
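The divide-and-conquer idea can be sketched as follows; the valve names, the block size, and the `pseudo_exhaustive_tests` helper are illustrative assumptions, not the authors' implementation. Testing blocks independently is what makes the method scale: n valves need 2**n exhaustive patterns monolithically, but only ceil(n/k) * 2**k when split into blocks of k.

```python
from itertools import product

def pseudo_exhaustive_tests(valves, block_size):
    """Enumerate per-block exhaustive test patterns for a biochip's valves.

    Exhaustively testing n valves needs 2**n open/closed patterns;
    testing each block of k valves independently needs only
    ceil(n/k) * 2**k patterns in total.
    """
    blocks = [valves[i:i + block_size] for i in range(0, len(valves), block_size)]
    plans = []
    for block in blocks:
        # Every open(1)/closed(0) combination for this sub-architecture only.
        patterns = [dict(zip(block, bits)) for bits in product((0, 1), repeat=len(block))]
        plans.append((block, patterns))
    return plans

plans = pseudo_exhaustive_tests([f"v{i}" for i in range(8)], block_size=4)
total_patterns = sum(len(p) for _, p in plans)
# 2 blocks * 2**4 = 32 patterns, versus 2**8 = 256 for a monolithic test
```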
  • Publication
    FadingBF: A Bloom Filter with Consistent Guarantees for Online Applications
    (01-01-2022)
    Vairam, Prasanna Karthik ; Kumar, Pratyush
    A Bloom filter (BF), when used by an online application, experiences monotonically increasing false-positive errors. Decaying stale elements can control false positives, but existing mechanisms for decay require unreasonable storage and computation, while inexpensive methods reset the BF periodically, resulting in inconsistent guarantees and performance issues in the underlying computing system. In this article, we propose the Fading Bloom filter (FadingBF), which provides inexpensive yet safe decay of elements. FadingBF requires neither additional storage nor computation to achieve this; instead, it exploits an intrinsic property of the underlying storage medium, namely DRAM capacitor characteristics. We realize FadingBF by implementing the BF on a DRAM memory module with its periodic refresh disabled. Consequently, the capacitors holding data elements that are not accessed frequently predictably lose charge and naturally decay. The retention time of the capacitors guards against premature deletion. However, some capacitors may store information longer than required due to FadingBF's software and hardware variables; using an analytical model of FadingBF, we show that carefully tuning its parameters can minimize such cases. For a surveillance application, we demonstrate that FadingBF achieves better guarantees through graceful decay, consumes 57 percent less energy, and imposes a lower system load than the standard BF.
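The decay semantics can be emulated in software as a sketch. Everything below is hypothetical illustration, not the paper's design: the `FadingBloomFilter` class, its parameters, and a logical clock standing in for DRAM retention time. The point is that per-bit ageing removes stale elements gracefully, without the disruptive wholesale reset of a periodic BF flush.

```python
import hashlib

class FadingBloomFilter:
    """Software emulation of FadingBF-style decay (hypothetical sketch).

    Each set bit remembers the logical time at which it was set and stops
    counting after `retention` ticks -- analogous to a cell in an
    unrefreshed DRAM module predictably losing its charge.
    """

    def __init__(self, size=1024, hashes=3, retention=10):
        self.size, self.hashes, self.retention = size, hashes, retention
        self.set_at = {}   # bit index -> logical time the bit was last set
        self.clock = 0

    def _bits(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def tick(self, n=1):
        self.clock += n    # stands in for elapsed DRAM retention time

    def add(self, item):
        for b in self._bits(item):
            self.set_at[b] = self.clock

    def __contains__(self, item):
        # A bit counts only while it is inside the retention window.
        return all(
            b in self.set_at and self.clock - self.set_at[b] < self.retention
            for b in self._bits(item)
        )

bf = FadingBloomFilter(retention=10)
bf.add("10.0.0.1")
assert "10.0.0.1" in bf      # fresh element: all of its bits still "charged"
bf.tick(10)
assert "10.0.0.1" not in bf  # element faded away; no global reset was needed
```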
  • Publication
    Depending on HTTP/2 for Privacy? Good Luck!
    (01-06-2020)
    Mitra, Gargi ; Vairam, Prasanna Karthik ; Patanjali, S. L.P.S.K.
    HTTP/2 introduced multi-threaded server operation for performance improvement over HTTP/1.1. Recent works have discovered that multi-threaded operation results in multiplexed object transmission, which can also have an unanticipated positive effect on TLS/SSL privacy. In fact, these works go on to design privacy schemes that rely heavily on multiplexing to obfuscate the sizes of the objects from which attackers inferred sensitive information. Orthogonal to these works, we examine whether the privacy offered by such schemes holds in practice. In this work, we show that it is possible for a network adversary with modest capabilities to completely break the privacy offered by schemes that leverage HTTP/2 multiplexing. Our adversary works on the following intuition: restricting the server queue to only one HTTP/2 object at any point in time eliminates multiplexing of that object and any privacy benefit thereof. We begin by studying whether (1) packet delays, (2) network jitter, (3) bandwidth limitation, and (4) targeted packet drops affect the number of HTTP/2 objects processed by the server at an instant of time. Based on these insights, we design an adversary that forces the server to serialize object transmissions, thereby completing the attack. Our adversary was able to break the privacy of a real-world HTTP/2 website 90% of the time; the code will be released. To the best of our knowledge, this is the first privacy attack on HTTP/2.
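The adversary's intuition, that serialization re-exposes per-object sizes, can be illustrated with a toy traffic model (all object names and sizes below are hypothetical): when responses are multiplexed, an on-path size-counting observer sees only one merged burst, but once transmissions are serialized, each object's size becomes individually visible again.

```python
def observed_sizes(objects, serialized):
    """What a size-counting on-path observer learns (toy model).

    objects: list of (name, size) pairs. With HTTP/2 multiplexing, frames
    of all responses interleave into one indistinguishable burst, so only
    the total size leaks. Once an adversary forces the server to hold a
    single object in its queue, each object arrives as its own burst and
    the individual sizes -- the fingerprinting signal -- reappear.
    """
    if serialized:
        return [size for _, size in objects]    # one burst per object
    return [sum(size for _, size in objects)]   # one interleaved burst

pages = [("login.html", 4096), ("avatar.png", 13312), ("token.js", 512)]
multiplexed_view = observed_sizes(pages, serialized=False)  # [17920]: sizes hidden
serialized_view = observed_sizes(pages, serialized=True)    # [4096, 13312, 512]
```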
  • Publication
    Sparsity-Aware Caches to Accelerate Deep Neural Networks
    (01-03-2020)
    Ganesan, Vinod ; Sen, Sanchari ; Kumar, Pratyush ; Gala, Neel ; Raghunathan, Anand
    Deep Neural Networks (DNNs) have transformed the field of artificial intelligence and represent the state-of-the-art in many machine learning tasks. There is considerable interest in using DNNs to realize edge intelligence in highly resource-constrained devices such as wearables and IoT sensors. Unfortunately, the high computational requirements of DNNs pose a serious challenge to their deployment in these systems. Moreover, due to tight cost (and hence, area) constraints, these devices are often unable to accommodate hardware accelerators, requiring DNNs to execute on the General Purpose Processor (GPP) cores that they contain. We address this challenge through lightweight micro-architectural extensions to the memory hierarchy of GPPs that exploit a key attribute of DNNs, viz. sparsity, or the prevalence of zero values. We propose SparseCache, an enhanced cache architecture that utilizes a null cache based on a Ternary Content Addressable Memory (TCAM) to compactly store zero-valued cache lines, while storing non-zero lines in a conventional data cache. By storing only addresses rather than values for zero-valued cache lines, SparseCache increases the effective cache capacity, thereby reducing the overall miss rate and execution time. SparseCache utilizes a Zero Detector and Approximator (ZDA) and Address Merger (AM) to perform reads and writes to the null cache. We evaluate SparseCache on four state-of-the-art DNNs programmed with the Caffe framework. SparseCache achieves a 5-28% reduction in miss rate, which translates to a 5-21% reduction in execution time, with only 0.1% area and 3.8% power overheads compared to a low-end Intel Atom Z-series processor.
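A behavioral sketch of the null-cache idea, with a plain set standing in for the TCAM; the class, its capacity, and the naive eviction policy are hypothetical illustrations, not the paper's hardware. Because an all-zero line costs only an address entry, many zero lines can "fit" alongside a small data cache, which is where the effective-capacity gain comes from.

```python
class SparseCacheModel:
    """Behavioral sketch of a SparseCache-style split lookup (hypothetical).

    All-zero lines are recorded by address alone in a 'null cache' (a TCAM
    in hardware), leaving the data cache's capacity for non-zero lines.
    """

    def __init__(self, data_capacity=4):
        self.null_cache = set()   # addresses of all-zero lines (address-only)
        self.data_cache = {}      # address -> non-zero line payload
        self.capacity = data_capacity
        self.hits = self.misses = 0

    def write(self, addr, line):
        if not any(line):                      # the Zero Detector's role
            self.null_cache.add(addr)
            self.data_cache.pop(addr, None)
            return
        self.null_cache.discard(addr)
        if addr not in self.data_cache and len(self.data_cache) >= self.capacity:
            self.data_cache.pop(next(iter(self.data_cache)))   # naive eviction
        self.data_cache[addr] = line

    def read(self, addr, line_size=4):
        if addr in self.null_cache:            # reconstruct zeros on a hit
            self.hits += 1
            return [0] * line_size
        if addr in self.data_cache:
            self.hits += 1
            return self.data_cache[addr]
        self.misses += 1
        return None

cache = SparseCacheModel(data_capacity=2)
for addr in range(8):
    cache.write(addr, [0, 0, 0, 0])    # eight zero lines, two-line data cache
cache.write(100, [1, 2, 3, 4])
assert cache.read(3) == [0, 0, 0, 0]   # still a hit: served from the null cache
```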
  • Publication
    Clinical Thermography for Breast Cancer Screening: A Systematic Review on Image Acquisition, Segmentation, and Classification
    (01-01-2023)
    Kaushik, R. ; Sivaselvan, B.
    There is life after breast cancer; the prerequisite is early detection. Breast cancer is curable when detected early, while the tumour is still small and has not spread, and regular screening aids early detection. Clinical thermography and artificial intelligence are potentially a good fit for early breast cancer screening. This survey paper presents a systematic review of artificial intelligence-based breast cancer screening using thermal infrared cameras. We first present a qualitative analysis of the existing literature regarding its trends and distribution. The review then explores the literature on infrared thermal image acquisition and storage techniques, and highlights various segmentation techniques used for processing infrared thermal images. We present experimental results of the traditional image processing and deep learning-based segmentation techniques available in the literature on infrared breast thermal images, and summarize the works that have used artificial intelligence to segment and classify such images. The existing literature shows opportunities to explore explainable artificial intelligence (AI), which could turn clinical thermography into an assistive technology for the medical community.
  • Publication
    JUGAAD: Comprehensive Malware Behavior-as-a-Service
    (08-08-2022)
    Karapoola, Sareena ; Singh, Nikhilesh
    An in-depth analysis of the impact of malware across multiple layers of cyber-connected systems is crucial for confronting evolving cyber-attacks. Gleaning such insights requires executing malware samples in analysis frameworks and observing their run-time characteristics. However, the evasive nature of malware, its dependence on real-world conditions, Internet connectivity, and short-lived remote servers to reveal its behavior, and the catastrophic consequences of its execution, pose significant challenges in collecting its real-world run-time behavior in analysis environments. In this context, we propose JUGAAD, a malware behavior-as-a-service to meet the demands for the safe execution of malware. Such a service enables the users to submit malware hashes or programs and retrieve their precise and comprehensive real-world run-time characteristics. Unlike prior services that analyze malware and present verdicts on maliciousness and analysis reports, JUGAAD provides raw run-time characteristics to foster unbounded research while alleviating the unpredictable risks involved in executing them. JUGAAD facilitates such a service with a back-end that executes a regular supply of malware samples on a real-world testbed to feed a growing data-corpus that is used to serve the users. With heterogeneous compute and Internet connectivity, the testbed ensures real-world conditions for malware to operate while containing its ramifications. The simultaneous capture of multiple execution artifacts across the system stack, including network, operating system, and hardware, presents a comprehensive view of malware activity to foster multi-dimensional research. Finally, the automated mechanisms in JUGAAD ensure that the data-corpus is continually growing and is up to date with the changing malware landscape.
  • Publication
    VNF-DOC: A Dynamic Overload Controller for Virtualized Network Functions in Cloud
    (01-01-2020)
    Murugasen, Sudhakar ; Raman, Shankar
    Network Function Virtualization (NFV) enables enterprises and service providers to build reliable network services in a cost-effective way. Such network services are created by combining one or more Virtual Network Functions (VNFs) hosted in private or public cloud infrastructure. However, uncontrolled VNF overload is a major cause of network service failure in NFV. Overload conditions negatively impact throughput, and hence the resiliency requirements of NFV. The ability to detect and mitigate an overload quickly while ensuring high throughput under varying overload conditions is critical, and existing solutions are unable to meet these combined objectives in VNFs. In this paper, we propose a Dynamic Overload Controller for VNFs (VNF-DOC), which uses a VNF's current and predicted load in every sampling interval to decide on a mitigation action. It mitigates both transient and sustained overload by dynamically using cloud auto-scaling, a Virtual Machine buffer pool, and traffic throttling. We evaluate our solution on an NFV-based IP multimedia system hosted in the AWS cloud environment. The results show that VNF-DOC mitigates high-capacity overload without adverse side effects and achieves at least 94% throughput. VNF-DOC is robust in handling varying overload with negligible performance overhead.
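A minimal sketch of a per-interval decision rule in the spirit of the abstract; the function name, the 1.5x overshoot cutoff, and the action labels are hypothetical assumptions, not VNF-DOC's actual policy. The idea it illustrates is that comparing current against predicted load separates transient spikes (absorbed from a warm VM buffer pool) from sustained growth (auto-scaling, with throttling only for extreme overshoot).

```python
def choose_mitigation(current_load, predicted_load, capacity):
    """Pick a mitigation action from current and predicted load.

    Transient overload (load already receding) is absorbed by a warm
    Virtual Machine buffer pool; sustained growth triggers cloud
    auto-scaling; extreme growth additionally throttles traffic so the
    VNF keeps serving at high throughput while new capacity spins up.
    """
    if max(current_load, predicted_load) <= capacity:
        return "none"
    if predicted_load <= current_load:          # transient spike, receding
        return "vm_buffer_pool"
    if predicted_load <= 1.5 * capacity:        # sustained, moderate growth
        return "auto_scale"
    return "auto_scale+throttle"                # sustained, extreme growth

assert choose_mitigation(50, 60, 100) == "none"
assert choose_mitigation(120, 90, 100) == "vm_buffer_pool"
```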
  • Publication
    Net-Police: A network patrolling service for effective mitigation of volumetric DDoS attacks
    (15-01-2020)
    Karapoola, Sareena ; Vairam, Prasanna Karthik ; Raman, Shankar
    Volumetric Distributed Denial of Service (DDoS) attacks are a significant concern for information technology-based organizations. These attacks result in significant revenue losses through wasted resources and unavailability of services at the victim (e.g., business websites, DNS servers, etc.) as well as at the Internet Service Providers (ISPs) along the path of the attack. State-of-the-art DDoS mitigation mechanisms attempt to alleviate the losses at either the victim or the ISPs, but not both. In this paper, we present Net-Police, a traffic patrolling system for DDoS mitigation. Net-Police identifies the sources of an attack so that filters can be deployed at these sources to quickly mitigate the attack. Such a solution effectively prevents the flow of malicious traffic across the ISP networks, thereby also benefiting the ISPs. Net-Police patrols the network by designating a small number of routers as dynamic packet taggers to prune benign regions of the network and localize the search to the Autonomous Systems (AS) from which the attack originates. We evaluate the proposed solution on 257 real-world topologies from the Internet Topology Zoo library and the Internet AS-level topology. The paper also presents details of our hardware testbed platform consisting of 30 routers, on which network services such as Net-Police can be implemented and studied for on-field feasibility. Our experiments reveal that Net-Police outperforms state-of-the-art cloud-based and traceback-based solutions in terms of ISP bandwidth savings and availability of the victim to legitimate clients.
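The tagger-driven pruning can be sketched as a search over a routing tree rooted at the victim, descending only into neighbors that carry attack traffic; the topology, the helper names, and the probe accounting below are hypothetical illustrations of the pruning idea, not Net-Police's protocol.

```python
def localize_attack_sources(topology, root, carries_attack):
    """Prune benign regions while walking toward attack sources.

    topology: adjacency dict rooted at the victim's upstream router.
    carries_attack(node): what a tagger placed at `node` would report,
    i.e. whether attack packets traverse it. Only subtrees that carry
    attack traffic are descended into, so benign regions are never probed
    beyond their first link.
    """
    sources, frontier, probes = [], [root], 0
    while frontier:
        node = frontier.pop()
        children = topology.get(node, [])
        if not children:                 # leaf AS still on an attack path
            sources.append(node)
            continue
        for child in children:
            probes += 1                  # one tagging round per link
            if carries_attack(child):
                frontier.append(child)
    return sorted(sources), probes

topology = {"victim": ["A", "B"], "A": ["A1", "A2"], "B": ["B1", "B2"]}
on_attack_path = {"victim", "A", "A1"}.__contains__
sources, probes = localize_attack_sources(topology, "victim", on_attack_path)
# sources == ["A1"]; the benign B subtree cost one probe instead of three
```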
  • Publication
    Brutus: Refuting the Security Claims of the Cache Timing Randomization Countermeasure Proposed in CEASER
    (01-01-2020)
    Bodduna, Rahul ; Ganesan, Vinod ; Slpsk, Patanjali
    Cache timing attacks are a serious threat to the security of computing systems: they permit sensitive information, such as cryptographic keys, to leak across virtual machines and even to remote servers. The encrypted-address cache proposed by CEASER, a best paper candidate at MICRO 2018, is a promising countermeasure that stymies the timing channel by employing cryptography to randomize the cache address space. The author claims strong security guarantees by providing randomization both spatially (randomizing every address) and temporally (changing the encryption key periodically). In this letter, we point out a serious flaw in the encryption approach that undermines the proposed security guarantees. Specifically, we show that the proposed Low-Latency Block Cipher (LLBC), used for encryption in CEASER, is composed of only linear functions, which neutralizes both the spatial and the temporal randomization. Thus, the complexity of a cache timing attack remains unaltered even in the presence of CEASER. Further, we compare the encryption overheads if CEASER is implemented with a stronger encryption algorithm.
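The flaw hinges on linearity: any cipher built solely from XOR-based layers satisfies E(a ^ b) = E(a) ^ E(b), so XOR relations between physical addresses, and hence set-index collisions, survive the "randomization" for every key. The toy GF(2) matrix cipher below is an illustration of that algebraic property, not the actual LLBC.

```python
import random

def linear_encrypt(key_rows, x):
    """Toy linear 'cipher' over GF(2): output bit i is the parity of the
    key-selected bits of x (a binary matrix-vector product). Any stack of
    such layers remains linear -- the structural weakness at issue."""
    y = 0
    for i, row in enumerate(key_rows):
        y |= (bin(row & x).count("1") & 1) << i   # dot product mod 2
    return y

random.seed(0)
key = [random.getrandbits(16) for _ in range(16)]   # one "encryption key"
a, b = 0x1234, 0xBEEF
# Linearity holds for ANY key, so periodic key changes do not help:
# an attacker's eviction-set relations are preserved across remaps.
assert linear_encrypt(key, a ^ b) == linear_encrypt(key, a) ^ linear_encrypt(key, b)
```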