  • Publication
    A generalized deep learning framework for whole-slide image segmentation and analysis
    (01-12-2021)
    Khened, Mahendra; Kori, Avinash; Rajkumar, Haran; et al.
    Histopathology tissue analysis is considered the gold standard in cancer diagnosis and prognosis. Whole-slide imaging (WSI), i.e., the scanning and digitization of entire histology slides, is now being adopted across the world in pathology labs. Trained histopathologists can provide an accurate diagnosis of biopsy specimens based on WSI data. Given the dimensionality of WSIs and the increase in the number of potential cancer cases, analyzing these images is a time-consuming process. Automated segmentation of tumorous tissue helps improve the precision, speed, and reproducibility of the analysis. In the recent past, deep learning-based techniques have provided state-of-the-art results in a wide variety of image analysis tasks, including the analysis of digitized slides. However, deep learning-based solutions pose many technical challenges, including the large size of WSI data, heterogeneity in images, and complexity of features. In this study, we propose a generalized deep learning-based framework for histopathology tissue analysis to address these challenges. Our framework is, in essence, a sequence of individual techniques in the preprocessing-training-inference pipeline which, in conjunction, improve the efficiency and the generalizability of the analysis. The combination of techniques we have introduced includes an ensemble segmentation model, division of the WSI into smaller overlapping patches while addressing class imbalances, efficient techniques for inference, and an efficient, patch-based uncertainty estimation framework. Our ensemble consists of DenseNet-121, Inception-ResNet-V2, and DeeplabV3Plus, where all the networks were trained end to end for every task. We demonstrate the efficacy and improved generalizability of our framework by evaluating it on a variety of histopathology tasks, including breast cancer metastases (CAMELYON), colon cancer (DigestPath), and liver cancer (PAIP). Our proposed framework has state-of-the-art performance across all these tasks and is currently ranked within the top 5 in the challenges based on these datasets. The entire framework, along with the trained models and the related documentation, is made freely available on GitHub and PyPI. Our framework is expected to aid histopathologists in accurate and efficient initial diagnosis. Moreover, the estimated uncertainty maps will help clinicians make informed decisions about further treatment planning or analysis.
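    As a rough Python sketch of two ingredients of such a pipeline (overlapping patch tiling and ensemble probability averaging), with assumed patch size and stride, and a generic models list standing in for the DenseNet-121 / Inception-ResNet-V2 / DeeplabV3Plus ensemble:

      # Minimal sketch: tile a WSI into overlapping patches and average the
      # per-pixel softmax maps of an ensemble. Patch size, stride, and the
      # `models` callables are illustrative assumptions, not the authors'
      # exact settings.
      import numpy as np

      def patch_grid(height, width, patch=512, stride=256):
          """Top-left corners of overlapping patches covering the slide."""
          ys = range(0, max(height - patch, 0) + 1, stride)
          xs = range(0, max(width - patch, 0) + 1, stride)
          return [(y, x) for y in ys for x in xs]

      def ensemble_segment(wsi, models, patch=512, stride=256, n_classes=2):
          """Accumulate ensemble-averaged probabilities; overlaps are re-normalized."""
          h, w = wsi.shape[:2]
          probs = np.zeros((n_classes, h, w), dtype=np.float32)
          counts = np.zeros((h, w), dtype=np.float32)
          for y, x in patch_grid(h, w, patch, stride):
              tile = wsi[y:y + patch, x:x + patch]
              # each model returns a (n_classes, patch, patch) softmax map
              p = np.mean([m(tile) for m in models], axis=0)
              probs[:, y:y + patch, x:x + patch] += p
              counts[y:y + patch, x:x + patch] += 1.0
          return probs / np.maximum(counts, 1.0)  # normalize overlapped pixels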
  • Publication
    Interpreting Deep Neural Networks for Medical Imaging Using Concept Graphs
    (01-01-2022)
    Kori, Avinash; Natekar, Parth; et al.
    The black-box nature of deep learning models prevents them from being completely trusted in domains like biomedicine. Most explainability techniques do not capture the concept-based reasoning that human beings follow. In this work, we attempt to understand the behavior of trained models that perform image processing tasks in the medical domain by building a graphical representation of the concepts they learn. Extracting such a graphical representation of the model’s behavior on an abstract, higher conceptual level would help us to unravel the steps taken by the model for predictions. We show the application of our proposed implementation on two biomedical problems: brain tumor segmentation and fundus image classification. We provide an alternative graphical representation of the model by formulating a concept-level graph as discussed above, and find active inference trails in the model. We work with radiologists and ophthalmologists to understand the obtained inference trails from a medical perspective and show that medically relevant concept trails are obtained which highlight the hierarchy of the decision-making process followed by the model. Our framework is available at https://github.com/koriavinash1/BioExp.
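    A hypothetical, minimal illustration of what a concept-level graph and an inference trail could look like; the node names and edge weights below are invented for illustration (the actual implementation lives in the linked BioExp repository):

      # Nodes are (layer, concept) pairs, weighted edges link concepts in
      # consecutive layers, and an inference trail is a high-weight path
      # from an input-level concept to the prediction node.
      import networkx as nx

      G = nx.DiGraph()
      edges = [
          (("conv1", "edges"), ("conv2", "tumor boundary"), 0.8),
          (("conv1", "texture"), ("conv2", "edema region"), 0.6),
          (("conv2", "tumor boundary"), ("out", "whole tumor"), 0.9),
          (("conv2", "edema region"), ("out", "whole tumor"), 0.5),
      ]
      for u, v, w in edges:
          G.add_edge(u, v, weight=w)

      # strongest trail, scored here by the sum of edge weights along the path
      trails = nx.all_simple_paths(G, ("conv1", "edges"), ("out", "whole tumor"))
      best = max(trails, key=lambda p: nx.path_weight(G, p, weight="weight"))
      print(best)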
  • Publication
    Asteroseismic determination of fundamental parameters of Sun-like stars using multilayered neural networks
    (01-10-2016)
    Verma, Kuldeep; Hanasoge, Shravan; Bhattacharya, Jishnu; Antia, H. M.; et al.
    The advent of space-based observatories such as Convection, Rotation and planetary Transits (CoRoT) and Kepler has enabled the testing of our understanding of stellar evolution on thousands of stars. Evolutionary models typically require five input parameters: the mass, initial helium abundance, initial metallicity, mixing length (assumed to be constant over time), and the age to which the star must be evolved. Some of these parameters are also very useful in characterizing the associated planets and in studying Galactic archaeology. How to obtain these parameters from observations rapidly and accurately, specifically in the context of surveys of thousands of stars, is an outstanding question, one that has eluded straightforward resolution. For a given star, we typically measure the effective temperature and surface metallicity spectroscopically, and low-degree oscillation frequencies through space observatories. Here we demonstrate that statistical learning, using artificial neural networks, is successful in determining the evolutionary parameters based on spectroscopic and seismic measurements. Our trained networks show robustness over a broad range of parameter space and, critically, are computationally inexpensive and fully automated. We analyse the observations of a few stars using this method and the results compare well to inferences obtained using other techniques. This method is both computationally cheap and inferentially accurate, paving the way for analysing the vast quantities of stellar observations from past, current, and future missions.
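    A minimal sketch of the underlying idea, regressing the five evolutionary parameters from spectroscopic and seismic observables with a multilayer network; the synthetic arrays below stand in for a real training grid of stellar models, and the architecture is an assumption:

      # Regress (mass, initial helium, initial metallicity, mixing length, age)
      # from observables with a small multilayer network. Random data is used
      # here purely as a placeholder for a grid of evolved stellar models.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      # inputs: e.g. T_eff, [Fe/H], and a few low-degree frequency combinations
      X = rng.normal(size=(5000, 6))
      # targets: the five evolutionary parameters
      y = rng.normal(size=(5000, 5))

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
      net.fit(X, y)                 # in practice: train on the stellar-model grid
      params = net.predict(X[:1])   # fast, fully automated inference for one star
      print(params.shape)           # (1, 5)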
  • Publication
    Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis
    (07-02-2020)
    Natekar, Parth; Kori, Avinash; et al.
    The accurate automatic segmentation of gliomas and their intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D Deep Neural Networks (DNNs) have been developed to segment brain tumors and to classify different categories of tumors from different MRI modalities. However, these networks are often black-box models and do not provide any evidence regarding the process they take to perform this task. Increasing the transparency and interpretability of such deep learning techniques is necessary for their complete integration into medical practice. In this paper, we explore various techniques to explain the functional organization of brain tumor segmentation models and to extract visualizations of internal concepts to understand how these networks achieve highly accurate tumor segmentations. We use the BraTS 2018 dataset to train three different networks with standard architectures and outline similarities and differences in the process that these networks take to segment brain tumors. We show that brain tumor segmentation networks learn certain human-understandable disentangled concepts at a filter level. We also show that they take a top-down or hierarchical approach to localizing the different parts of the tumor. We then extract visualizations of some internal feature maps and also provide a measure of uncertainty with regard to the outputs of the models to give additional qualitative evidence about the predictions of these networks. We believe that the emergence of such human-understandable organization and concepts might aid in the acceptance and integration of such methods in medical diagnosis.
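    As one generic recipe for the kind of uncertainty measure mentioned above (not necessarily the exact estimator used in the paper), test-time Monte Carlo dropout can produce a per-pixel predictive-entropy map from any PyTorch segmentation network that contains dropout layers:

      # Generic test-time Monte Carlo dropout sketch: sample several stochastic
      # forward passes, average the softmax maps, and report predictive entropy
      # as a per-pixel uncertainty map.
      import torch

      def mc_dropout_uncertainty(model, image, n_samples=20):
          model.eval()
          # re-enable dropout layers only, keeping batch norm in eval mode
          for m in model.modules():
              if isinstance(m, torch.nn.Dropout):
                  m.train()
          with torch.no_grad():
              probs = torch.stack([
                  torch.softmax(model(image), dim=1) for _ in range(n_samples)
              ])                                  # (n_samples, B, C, H, W)
          mean = probs.mean(dim=0)                # averaged prediction
          # predictive entropy over classes, one value per pixel
          entropy = -(mean * torch.log(mean + 1e-8)).sum(dim=1)
          return mean, entropy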
  • Publication
    GAN-based End-to-End Unsupervised Image Registration for RGB-Infrared Image
    (01-02-2020)
    Kumari, Kanti; et al.
    Image registration is a pre-processing step used in various computer vision applications. This paper presents an unsupervised image registration method for a given pair of RGB and infrared (IR) images, with the RGB image used as the reference for the infrared. The method exploits a GAN architecture with a spatial transformer module to synthesize the transformed image using an unsupervised loss criterion. The loss used for error backpropagation is a linear combination of four terms: an adversarial loss; a Mean Squared Error (MSE) loss between the input RGB image and the image synthesized by the generator; a KL divergence loss between the IR image and the synthesized image; and another MSE loss estimated using feature maps extracted from a pretrained VGG-16. The adversarial loss forces the generator to output an IR-like image, with the input IR image and the generated IR image labelled as real and fake, respectively. The other three losses are backpropagated through the generator network to learn the transformation as well as to preserve the structure and resolution of the generated image. This unsupervised learning process is stopped after a specified number of iterations based on a validation set. A supervised method has also been developed for comparison with the presented method. The SSIM and PSNR values estimated between the predicted registered image and its ground truth were used as evaluation criteria. The unsupervised method scored 0.8351±0.06 (SSIM) and 35.2723±0.68 (PSNR), while the supervised method scored 0.7620±0.08 and 15.8978±2.21.
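    A sketch of this composite loss in PyTorch; the term weights, the normalization of images into distributions for the KL term, and the choice of VGG-16 layers are assumptions for illustration:

      # Composite generator loss: adversarial + MSE to the RGB reference +
      # KL to the IR image + VGG-16 feature (perceptual) MSE.
      import torch
      import torch.nn.functional as F
      from torchvision.models import vgg16

      vgg_features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
      for p in vgg_features.parameters():
          p.requires_grad_(False)

      def as_distribution(img):
          """Flatten a non-negative image batch into per-sample probability vectors."""
          flat = img.flatten(1).clamp_min(0)
          return flat / flat.sum(dim=1, keepdim=True).clamp_min(1e-8)

      def generator_loss(fake_logits, generated, rgb, ir, w=(1.0, 1.0, 1.0, 1.0)):
          adv = F.binary_cross_entropy_with_logits(
              fake_logits, torch.ones_like(fake_logits))   # fool the discriminator
          mse_rgb = F.mse_loss(generated, rgb)             # structure from RGB input
          kl_ir = F.kl_div((as_distribution(generated) + 1e-8).log(),
                           as_distribution(ir), reduction="batchmean")
          # perceptual term on mid-level VGG-16 maps (3-channel inputs assumed)
          perc = F.mse_loss(vgg_features(generated), vgg_features(rgb))
          return w[0] * adv + w[1] * mse_rgb + w[2] * kl_ir + w[3] * perc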
  • Publication
    Single Molecule Imaging Using State-of-the-Art Microscopy Techniques
    (01-01-2023)
    Bhupathi, Arun; Hema Brindha, M.; Ashwin Kumar, N.; et al.
    Biomolecule imaging within the cell enables us to study molecular mechanisms and cellular responses. Single-molecule imaging is a technique that investigates the properties of individual molecular responses of a biological system. Conventional imaging techniques like light, electron, and fluorescence microscopy are used in cell biology to observe biological systems. These imaging techniques require exogenous contrast agents like fluorophores and nanoprobes to improve the imaging parameters. Techniques combining fluorescence light microscopy and electron microscopy for imaging the structures of biomolecules enable good single-molecule imaging. The use of fluorescent tags for imaging provides high labeling specificity to detect and analyze individual single molecules. This chapter begins with the principle of, and need for, single-molecule imaging. It then focuses on the design of single-molecule detection techniques aided by imaging modalities, fluorescent probes, and labeling methods. The chapter also emphasizes the quantitative aspects of the imaging modality, image formation, processing, and different analytical estimation techniques for visualization. Single-molecule imaging has seen various developments by different biophysics research groups. Overall, these imaging techniques enable single-molecule detection, with their associated instrumentation and biomolecular applications, in the field of biomedical engineering.
  • Publication
    X-ray scintillator lens-coupled with CMOS camera for pre-clinical cardiac vascular imaging: A feasibility study
    (01-02-2022)
    Balasubramanian, Swathi Lakshmi; et al.
    We present the design and characterization of an X-ray imaging system consisting of an off-the-shelf CMOS sensor optically coupled to a CsI scintillator. The camera can perform both high-resolution and functional cardiac imaging. High-resolution 3D imaging requires microfocus X-ray tubes and expensive detectors, while pre-clinical functional cardiac imaging requires high-flux pulsed (clinical) X-ray tubes and high-end cameras. Our work describes an X-ray camera, namely an “optically coupled X-ray (OCX) detector,” used for both of the aforementioned applications with no change in specifications. We constructed the imaging detector with two different CMOS optical imaging cameras: (1) a monochrome CMOS sensor coupled with an f/1.4 lens, and (2) an RGB CMOS sensor coupled with an f/0.95 prime lens. The imaging system consisted of our X-ray camera, a micro-focus X-ray source (50 kVp, 1 mA), and a rotary stage controlled from a personal computer (PC) through a LabVIEW interface. The detective quantum efficiency (DQE) of the imaging system (monochrome), estimated using a cascaded linear model, was 17% at 10 lp/mm. The system modulation transfer function (MTF) and the noise power spectrum (NPS) were inputs to the DQE estimation. Because of the RGB camera's low quantum efficiency (QE), the OCX detector DQE was 19% at 5 lp/mm. The contrast-to-noise ratio (CNR) at different frame rates was studied using capillary tubes filled with various dilutions of iodinated contrast agents. In-vivo cardiac angiography demonstrated that blood vessels of the order of 100 microns or above were visible at 40 frames per second despite the low X-ray flux. For high-resolution 3D imaging, the system was characterized by imaging a cylindrical micro-CT contrast phantom and comparing it against images from a commercial scanner.
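    For context, a minimal numpy sketch of the standard frequency-domain DQE relation that connects MTF and NPS measurements, DQE(f) = MTF(f)^2 / (q * NNPS(f)), where NNPS is the NPS normalized by the squared mean signal and q is the incident photon fluence; the curves and numbers below are placeholders, not the paper's cascaded-linear-model calculation:

      # Placeholder DQE computation from MTF and NPS measurements.
      import numpy as np

      def dqe(mtf, nps, mean_signal, fluence_q):
          nnps = nps / mean_signal**2          # normalized noise power spectrum
          return mtf**2 / (fluence_q * nnps)

      f = np.linspace(0.5, 10.0, 20)           # spatial frequency, lp/mm
      mtf = np.exp(-0.2 * f)                   # placeholder MTF curve
      nps = np.full_like(f, 2e-7)              # placeholder NPS values
      print(dqe(mtf, nps, mean_signal=1000.0, fluence_q=2.5e4))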
  • Publication
    Deep Learning Techniques for Automatic MRI Cardiac Multi-Structures Segmentation and Diagnosis: Is the Problem Solved?
    (01-11-2018)
    Bernard, Olivier; Lalande, Alain; Zotti, Clement; Cervenansky, Frederick; Yang, Xin; Heng, Pheng Ann; Cetin, Irem; Lekadir, Karim; Camara, Oscar; Gonzalez Ballester, Miguel Angel; Sanroma, Gerard; Napel, Sandy; Petersen, Steffen; Tziritas, Georgios; Grinias, Elias; Khened, Mahendra; Kollerathu, Varghese Alex; Rohe, Marc Michel; Pennec, Xavier; Sermesant, Maxime; Isensee, Fabian; Jager, Paul; Maier-Hein, Klaus H.; Full, Peter M.; Wolf, Ivo; Engelhardt, Sandy; Baumgartner, Christian F.; Koch, Lisa M.; Wolterink, Jelmer M.; Isgum, Ivana; Jang, Yeonggul; Hong, Yoonmi; Patravali, Jay; Jain, Shubham; Humbert, Olivier; Jodoin, Pierre Marc; et al.
    Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the 'Automatic Cardiac Diagnosis Challenge' dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 CMR recordings acquired on multiple types of equipment, with reference measurements and classifications from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMR, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MR images. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.
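    For readers unfamiliar with how such segmentation challenges are scored, a minimal sketch of the per-structure Dice overlap; the label encoding below is a hypothetical example, not the ACDC convention:

      # Dice coefficient per structure on integer-labeled segmentation maps.
      import numpy as np

      def dice(pred, gt, label):
          """Dice overlap for one structure encoded as an integer label."""
          p, g = (pred == label), (gt == label)
          denom = p.sum() + g.sum()
          return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

      # hypothetical encoding: 0 = background, 1 = RV, 2 = myocardium, 3 = LV cavity
      pred = np.random.randint(0, 4, size=(10, 224, 224))
      gt = np.random.randint(0, 4, size=(10, 224, 224))
      for name, lab in [("RV", 1), ("myocardium", 2), ("LV cavity", 3)]:
          print(name, round(dice(pred, gt, lab), 3))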
  • Publication
    Estimation of myocardial deformation using correlation image velocimetry
    (05-04-2017)
    Jacob, Athira; et al.
    Background: Tagged Magnetic Resonance (tMR) imaging is a powerful technique for determining cardiovascular abnormalities. One of the reasons tMR is not used in routine clinical practice is the lack of easy-to-use tools for image analysis and strain mapping. In this paper, we introduce a novel interdisciplinary method based on correlation image velocimetry (CIV) to estimate cardiac deformation and strain maps from tMR images. Methods: CIV, a cross-correlation based pattern matching algorithm, analyses a pair of images to obtain the displacement field at sub-pixel accuracy with any desired spatial resolution. This first-time application of CIV to tMR image analysis is implemented using an existing open-source Matlab-based software called UVMAT. The method, which requires two main input parameters, namely the correlation box size (C_B) and the search box size (S_B), is first validated using a synthetic grid image with grid sizes representative of typical tMR images. Phantom and patient images obtained from a Medical Imaging grand challenge dataset ( http://stacom.cardiacatlas.org/motion-tracking-challenge/ ) were then analysed to obtain cardiac displacement fields and strain maps. The results were then compared with estimates from the Harmonic Phase analysis (HARP) technique. Results: For a known displacement field imposed on both the synthetic grid image and the phantom image, CIV is accurate for 3-pixel and larger displacements on a 512 × 512 image with (C_B, S_B) = (25, 55) pixels. Further validation of our method is achieved by showing that our estimated landmark positions on patient images fall within the inter-observer variability in the ground truth. The effectiveness of our approach to analyse patient images is then established by calculating dense displacement fields throughout a cardiac cycle, which were found to be physiologically consistent. Circumferential strains were estimated at the apical, mid, and basal slices of the heart and were shown to compare favorably with those of HARP over the entire cardiac cycle, except in a few (~4) of the segments in the 17-segment AHA model. The radial strains, however, are underestimated by our method in most segments when compared with HARP. Conclusions: In summary, we have demonstrated the capability of CIV to accurately and efficiently quantify cardiac deformation from tMR images. Furthermore, physiologically consistent displacement fields and circumferential strain curves in most regions of the heart indicate that our approach, upon automating some pre-processing steps and testing in clinical trials, can potentially be implemented in a clinical setting.
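    A bare-bones sketch of the cross-correlation search at the heart of CIV, using integer-pixel normalized cross-correlation with the (C_B, S_B) = (25, 55) box sizes mentioned above; the UVMAT implementation additionally provides sub-pixel refinement, which is omitted here:

      # For a C_B-sized box in frame 1, search an S_B-sized window in frame 2
      # and return the displacement maximizing normalized cross-correlation.
      import numpy as np

      def ncc(a, b):
          a = a - a.mean(); b = b - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return (a * b).sum() / denom if denom > 0 else 0.0

      def displacement(frame1, frame2, y, x, cb=25, sb=55):
          """Best integer-pixel shift of the box centered at (y, x);
          assumes the search window stays inside both frames."""
          h = cb // 2
          tmpl = frame1[y - h:y + h + 1, x - h:x + h + 1]
          r = (sb - cb) // 2
          best, best_dy, best_dx = -np.inf, 0, 0
          for dy in range(-r, r + 1):
              for dx in range(-r, r + 1):
                  cand = frame2[y - h + dy:y + h + 1 + dy,
                                x - h + dx:x + h + 1 + dx]
                  score = ncc(tmpl, cand)
                  if score > best:
                      best, best_dy, best_dx = score, dy, dx
          return best_dy, best_dx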
  • Publication
    Semisupervised learning using denoising autoencoders for brain lesion detection and segmentation
    (01-10-2017)
    Alex, Varghese; Vaidhya, Kiran; Thirunavukkarasu, Subramaniam; Kesavadas, Chandrasekharan; et al.
    The work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients (n = 20, 40, 65). The results show negligible loss in performance even when the SDAE was fine-tuned using 20 labeled patients. Low-grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high-grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also includes the use of a single-layer DAE, referred to as a novelty detector (ND). The ND was trained to accurately reconstruct non-lesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign unique error distributions to various constituents of the glioma, enabling localization. The ND learns the non-lesion brain accurately, as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database.
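    A minimal PyTorch sketch of the novelty-detector idea: a single-hidden-layer DAE trained only on non-lesion patches, whose per-patch reconstruction error then serves as a lesion saliency score; the patch size, hidden width, and noise level are illustrative, not the paper's exact configuration:

      # Single-hidden-layer denoising autoencoder as a novelty detector.
      import torch
      import torch.nn as nn

      patch = 32
      dae = nn.Sequential(
          nn.Flatten(),
          nn.Linear(patch * patch, 256), nn.ReLU(),
          nn.Linear(256, patch * patch), nn.Sigmoid(),
      )
      opt = torch.optim.Adam(dae.parameters(), lr=1e-3)

      def train_step(clean_patches, noise_std=0.1):
          """One denoising step on non-lesion patches of shape (N, patch, patch)."""
          noisy = clean_patches + noise_std * torch.randn_like(clean_patches)
          recon = dae(noisy).view_as(clean_patches)
          loss = nn.functional.mse_loss(recon, clean_patches)
          opt.zero_grad(); loss.backward(); opt.step()
          return loss.item()

      def error_scores(test_patches):
          """Per-patch reconstruction error; high values suggest lesion tissue."""
          with torch.no_grad():
              recon = dae(test_patches).view_as(test_patches)
          return ((recon - test_patches) ** 2).mean(dim=(1, 2))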