Linear and nonlinear compression of feature vectors for speech recognition
Date Issued
01-01-2002
Author(s)
Gangashetty, Suryakanth V.
Prasanna, S. R. Mahadeva
Yegnanarayana, Bayya
Abstract
In this paper, we consider approaches for linear and nonlinear compression of feature vectors for recognition of utterances of syllable-like units in Indian languages. The distribution-capturing ability of an autoassociative neural network model is exploited to derive the components for compressing the feature vectors. The nonlinear compression is accomplished by a five-layer autoassociative neural network model. Linear compression is realized by principal component analysis. Both linear and nonlinear compression are performed on each subgroup of the sound units separately. The results show that it is indeed possible to compress the feature vectors from 50 to 19 dimensions without affecting the performance of the classifier.
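The linear compression described in the abstract, principal component analysis reducing 50-dimensional feature vectors to 19 dimensions, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data here are random stand-ins for real speech feature vectors, and the dimensions (50 in, 19 out) are taken from the abstract.

```python
import numpy as np

def pca_compress(features, n_components=19):
    """Compress feature vectors with principal component analysis.

    features: (n_samples, n_dims) array.
    Returns an (n_samples, n_components) array of projections onto
    the top principal components.
    """
    # Center the data so the covariance matrix is well defined
    mean = features.mean(axis=0)
    centered = features - mean
    # Eigendecomposition of the covariance matrix
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; keep the
    # n_components directions with the largest variance
    top = eigvecs[:, ::-1][:, :n_components]
    return centered @ top

# Hypothetical data: 200 feature vectors of dimension 50
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
Z = pca_compress(X)
print(Z.shape)  # (200, 19)
```

In the paper this projection would be fit separately on each subgroup of sound units, as the abstract notes, rather than on the pooled data.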
Volume
4