Modified self-training based statistical models for image classification and speaker identification
Date Issued
01-12-2021
Author(s)
Bodapati, Jyostna Devi
Abstract
Building a high-precision statistical model requires ample supervised (labeled) data for training. In certain domains it is difficult to acquire large amounts of labeled data, especially in applications involving image, speech and video data; at the same time, large amounts of unlabeled data are readily available in such applications. Self-training is a semi-supervised approach that exploits vast unlabeled data, alongside minimal labeled data, to boost model performance. In this work, we propose a variant of self-training that uses soft labeling of unlabeled examples rather than the hard labeling of conventional self-training. As our work focuses on image and speaker recognition tasks, a Gaussian Mixture Model (GMM)-based Bayesian classifier is used as the wrapper in the self-training loop. Our experimental studies on the STL10, CIFAR10 and MIT benchmark datasets (image recognition) and the NIST dataset (speaker recognition) indicate that the proposed modified self-training approach outperforms conventional self-training.
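To illustrate the idea of soft-label self-training described in the abstract, the sketch below implements a simplified variant in NumPy: each class is modeled by a single diagonal-covariance Gaussian (a reduced stand-in for the paper's GMM-based Bayesian classifier), and unlabeled points contribute their posterior class probabilities (soft labels) to the next round of parameter estimation instead of a single hard label. All function names and the single-Gaussian simplification are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_class_gaussians(X, soft, eps=1e-6):
    # Weighted per-class mean/variance (diagonal covariance), where `soft`
    # holds per-sample class responsibilities (rows sum to 1).
    w = soft.sum(axis=0)                              # effective count per class
    mu = (soft.T @ X) / w[:, None]
    var = (soft.T @ X**2) / w[:, None] - mu**2 + eps  # weighted E[x^2] - mu^2
    prior = w / w.sum()
    return mu, var, prior

def predict_soft(X, mu, var, prior):
    # Bayes rule with Gaussian class-conditional likelihoods;
    # returns soft labels (posterior class probabilities).
    log_lik = -0.5 * (np.log(2 * np.pi * var)[None]
                      + (X[:, None, :] - mu[None])**2 / var[None]).sum(-1)
    log_post = log_lik + np.log(prior)
    log_post -= log_post.max(axis=1, keepdims=True)   # numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)

def soft_self_train(X_lab, y_lab, X_unlab, n_classes, n_iters=5):
    # Initialize from the labeled data only (one-hot "soft" labels).
    hard = np.eye(n_classes)[y_lab]
    mu, var, prior = fit_class_gaussians(X_lab, hard)
    for _ in range(n_iters):
        # Soft-label the unlabeled pool instead of committing to hard labels,
        # then refit on labeled + soft-labeled data combined.
        soft_u = predict_soft(X_unlab, mu, var, prior)
        X_all = np.vstack([X_lab, X_unlab])
        soft_all = np.vstack([hard, soft_u])
        mu, var, prior = fit_class_gaussians(X_all, soft_all)
    return mu, var, prior
```

Keeping the full posterior for each unlabeled point lets low-confidence examples contribute proportionally less to the refit, which is the intuition behind preferring soft over hard labels in self-training.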
Volume
24