PADA: Pruning Assisted Domain Adaptation for Self-Supervised Speech Representations
Date Issued
01-01-2023
Author(s)
Abstract
While self-supervised speech representation learning (SSL) models serve a variety of downstream tasks, these models have been observed to overfit to the domain from which the unlabeled data originates. To alleviate this issue, we propose PADA (Pruning Assisted Domain Adaptation). Before performing the target-domain ASR fine-tuning, we discover the redundant weights from pre-trained wav2vec 2.0 models through various pruning strategies. We investigate the effect of Task-Agnostic and Task-Aware pruning and propose a new pruning paradigm called Cross-Domain Task-Aware Pruning (CD-TAW). CD-TAW obtains the initial pruning mask from a well fine-tuned out-of-domain (OOD) model, thereby making use of the readily available fine-tuned models on the web. The proposed CD-TAW method achieves up to 20.6% relative WER improvement over our baseline when fine-tuned on a 2-hour subset of Switchboard data without language model (LM) decoding.
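A minimal sketch of the cross-domain mask-transfer idea described in the abstract is given below. It is not the authors' implementation: the helper names (`magnitude_mask`, `apply_cross_domain_mask`), the chosen sparsity level, and the tiny stand-in modules are illustrative assumptions; a real setup would instead load pre-trained and OOD fine-tuned wav2vec 2.0 checkpoints.

```python
# Sketch (not the paper's code): derive a magnitude-based pruning mask from an
# out-of-domain (OOD) fine-tuned model and impose it on the pre-trained model
# before target-domain fine-tuning. Models here are placeholders for wav2vec 2.0.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask keeping the largest-magnitude weights (1 = keep, 0 = prune)."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()


def apply_cross_domain_mask(pretrained: nn.Module, ood_finetuned: nn.Module,
                            sparsity: float = 0.5) -> None:
    """Compute masks from the OOD fine-tuned weights and apply them to the
    matching Linear layers of the pre-trained model."""
    ood_layers = dict(ood_finetuned.named_modules())
    for name, module in pretrained.named_modules():
        if isinstance(module, nn.Linear) and name in ood_layers:
            mask = magnitude_mask(ood_layers[name].weight.data, sparsity)
            prune.custom_from_mask(module, name="weight", mask=mask)


if __name__ == "__main__":
    # Tiny stand-ins for the pre-trained and OOD fine-tuned encoders.
    pretrained = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 8))
    ood_finetuned = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 8))
    apply_cross_domain_mask(pretrained, ood_finetuned, sparsity=0.5)
    # The masked pre-trained model would then be fine-tuned on the target domain.
    print(pretrained[0].weight_mask.mean())  # fraction of weights kept
```

The point of the sketch is only the ordering the abstract describes: the mask is computed from an already fine-tuned out-of-domain model, then fixed on the pre-trained model before the target-domain ASR fine-tuning begins.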