Generalizing supervised deep learning MRI reconstruction to multiple and unseen contrasts using meta-learning hypernetworks
Date Issued
01-10-2023
Author(s)
Ramanarayanan, Sriprabha
Palla, Arun
Ram, Keerthi
Indian Institute of Technology, Madras
Abstract
Meta-learning has recently emerged as a data-efficient learning technique for various medical imaging operations and has helped advance contemporary deep learning models. Meta-learning also enhances knowledge generalization across imaging tasks by learning both shared and discriminative weights for various configurations of imaging tasks during training. However, existing meta-learning models attempt to learn a single set of weight initializations for a neural network, which can be fundamentally restrictive under the heterogeneous (multimodal) data scenario. This work aims to develop a multimodal meta-learning model for image reconstruction that augments meta-learning with evolutionary capabilities to encompass diverse acquisition settings of heterogeneous data. The proposed model, called KM-MAML (Kernel Modulation-based Multimodal Meta-Learning), has hypernetworks (auxiliary learners) that evolve to generate mode-specific (or context-specific) weights. These weights provide a mode-specific inductive bias for multiple modes by re-calibrating each kernel of the base reconstruction network via a low-rank kernel modulation operation. Furthermore, we incorporate gradient-based meta-learning (GBML) in the contextual space to update the weights of the hypernetworks for different modes. In this GBML setting, the hypernetworks provide discriminative mode-specific features, while the base reconstruction network provides low-level image features. We extensively evaluate our model on multi-contrast magnetic resonance image reconstruction, addressing the research directions highlighted in fastMRI toward multimodal learning and rich transfer learning capabilities across MRI contrasts. Our comparative studies show that the proposed model (i) exhibits superior reconstruction performance over joint training, other meta-learning methods, and various context-specific MRI reconstruction architectures, and (ii) adapts better to unseen multi-contrast data contexts in 80% of cases for PSNR (with improvement margins of 0.1 to 0.5 dB) and in 92% of cases for SSIM (with margins of around 0.01). In addition, a representation analysis with U-Net as the base network shows that kernel modulation infuses 80% of the mode-specific representation changes in the high-resolution layers. Our source code is available at https://github.com/sriprabhar/KM-MAML/.
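To make the kernel modulation operation described above concrete, the following is a minimal PyTorch sketch, not the authors' code: it assumes a rank-1 low-rank factorization, and the names ContextHypernet, modulated_conv2d, and the context dimensions are illustrative rather than taken from the KM-MAML repository. A small hypernetwork maps a context (mode) vector to per-channel factors whose outer product re-calibrates each convolution kernel of the base network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextHypernet(nn.Module):
    """Maps a context (mode) vector to low-rank kernel-modulation factors."""
    def __init__(self, context_dim, out_channels, in_channels):
        super().__init__()
        self.to_row = nn.Linear(context_dim, out_channels)  # rank-1 factor u
        self.to_col = nn.Linear(context_dim, in_channels)   # rank-1 factor v

    def forward(self, context):
        u = torch.sigmoid(self.to_row(context))  # shape: (out_channels,)
        v = torch.sigmoid(self.to_col(context))  # shape: (in_channels,)
        return u, v

def modulated_conv2d(x, weight, u, v, bias=None, padding=1):
    """Re-calibrate each kernel by the rank-1 matrix u v^T, then convolve."""
    # weight: (out_ch, in_ch, k, k); kernel (i, j) is scaled by u[i] * v[j]
    mod = u[:, None, None, None] * v[None, :, None, None]
    return F.conv2d(x, weight * mod, bias=bias, padding=padding)

# Usage: one conv layer of a U-Net-like base network, modulated per mode.
base_weight = nn.Parameter(torch.randn(64, 32, 3, 3))
hyper = ContextHypernet(context_dim=8, out_channels=64, in_channels=32)
context = torch.randn(8)  # encodes, e.g., the MRI contrast/acquisition setting
u, v = hyper(context)
y = modulated_conv2d(torch.randn(1, 32, 128, 128), base_weight, u, v)

In the GBML setting described in the abstract, the hypernetwork parameters would be the ones updated across modes in the meta-learning loop, while the base weights capture shared low-level image features; the sketch shows only the forward modulation path.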
Volume
146