Publication: Interpreting Deep Neural Networks for Medical Imaging Using Concept Graphs

Date
01-01-2022
Abstract
The black-box nature of deep learning models prevents them from being fully trusted in domains like biomedicine. Most explainability techniques do not capture the concept-based reasoning that human beings follow. In this work, we attempt to understand the behavior of trained models that perform image processing tasks in the medical domain by building a graphical representation of the concepts they learn. Extracting such a graphical representation of the model's behavior at an abstract, higher conceptual level helps unravel the steps the model takes to reach a prediction. We show the application of our proposed implementation on two biomedical problems: brain tumor segmentation and fundus image classification. We provide an alternative graphical representation of the model by formulating a concept-level graph as described above and find active inference trails in the model. We work with radiologists and ophthalmologists to understand the obtained inference trails from a medical perspective and show that the obtained concept trails are medically relevant and highlight the hierarchy of the model's decision-making process. Our framework is available at https://github.com/koriavinash1/BioExp.
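The concept-level graph and the notion of active inference trails described in the abstract can be pictured with a small toy sketch. The snippet below is illustrative only and is not the BioExp implementation: the concept names, edge scores, and the thresholding rule for what counts as an "active" trail are assumptions made for demonstration.

```python
# Illustrative sketch (not the BioExp implementation): concepts identified in
# successive layers form nodes of a directed graph; edges carry an assumed
# association score between a concept and one in a deeper layer.
import networkx as nx

concept_graph = nx.DiGraph()
concept_graph.add_weighted_edges_from([
    ("edge_texture",   "tumor_boundary", 0.72),   # hypothetical concepts/scores
    ("edge_texture",   "vessel_pattern", 0.31),
    ("tumor_boundary", "tumor_core",     0.85),
    ("vessel_pattern", "tumor_core",     0.22),
    ("tumor_core",     "segmentation",   0.91),
])

# Read off an "active trail" as a path whose every edge score exceeds a threshold.
def active_trails(graph, source, target, threshold=0.5):
    for path in nx.all_simple_paths(graph, source, target):
        weights = [graph[u][v]["weight"] for u, v in zip(path, path[1:])]
        if all(w >= threshold for w in weights):
            yield path, weights

for trail, scores in active_trails(concept_graph, "edge_texture", "segmentation"):
    print(" -> ".join(trail), scores)
```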
Keywords
Active-trails, Causality, Concept-identification, Concept-level-graph, Concepts, Interpretation