Self-organizing map for estimation of close-speaking microphone speech from throat microphone speech
Date Issued
01-12-2005
Author(s)
Shahina, A.
Abstract
The message content in throat microphone speech is intelligible, but the speech is perceptually different from close-speaking microphone speech. The perceptual difference between the two speech signals depends on their acoustic characteristics. The throat microphone speech can be enhanced to make it perceptually similar to the close-speaking microphone speech. This paper uses a self-organizing map (SOM) to derive the features of close-speaking microphone speech from those of throat microphone speech. Self-organizing maps create an ordered mapping of the input signals (as in the brain) onto a layer of interconnected units. The feature vectors of the throat microphone speech are clustered using the SOM algorithm. The response of the SOM to the test input pattern (derived from the throat microphone speech) is used to derive the nearest linear prediction coefficients of the close-speaking microphone speech. These estimated spectral features are used to reconstruct the enhanced speech. Copyright © IICAI 2005.
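
The following is a minimal illustrative sketch of the general idea described in the abstract, not the paper's actual implementation: a small SOM trained on throat-microphone feature vectors, with each map unit then associated with the average close-speaking-microphone feature vector of the frames it wins, so that a throat-microphone frame can be mapped to an estimated close-speaking feature vector. All array names, feature dimensions, grid size, and learning-rate/neighbourhood schedules here are assumptions for demonstration; the paper's features are linear prediction coefficients and its configuration may differ.

```python
# Illustrative sketch only (assumed data shapes and parameters),
# not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired training features: rows are frames, columns are
# spectral features (placeholder random data stands in for real speech).
throat_feats = rng.normal(size=(500, 12))   # throat-microphone features
close_feats  = rng.normal(size=(500, 12))   # close-speaking-microphone features

map_size = (8, 8)                           # SOM grid of 8x8 units (assumed)
dim = throat_feats.shape[1]
weights = rng.normal(size=(*map_size, dim)) # one codebook vector per unit

# Grid coordinates of every unit, used for the neighbourhood function.
grid = np.stack(np.meshgrid(np.arange(map_size[0]),
                            np.arange(map_size[1]), indexing="ij"), axis=-1)

def best_matching_unit(x):
    """Index of the SOM unit whose weight vector is closest to x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), map_size)

# Train the SOM on the throat-microphone features.
n_epochs = 20
for epoch in range(n_epochs):
    lr    = 0.5 * (1 - epoch / n_epochs)        # decaying learning rate
    sigma = 3.0 * (1 - epoch / n_epochs) + 0.5  # decaying neighbourhood width
    for x in throat_feats:
        bmu = np.array(best_matching_unit(x))
        dist2 = np.sum((grid - bmu) ** 2, axis=-1)       # grid distances to BMU
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None] # Gaussian neighbourhood
        weights += lr * h * (x - weights)

# Associate each unit with the mean close-speaking feature vector of the
# throat-microphone frames it wins.
sums   = np.zeros((*map_size, dim))
counts = np.zeros(map_size)
for x, y in zip(throat_feats, close_feats):
    i, j = best_matching_unit(x)
    sums[i, j] += y
    counts[i, j] += 1
close_codebook = np.where(counts[..., None] > 0,
                          sums / np.maximum(counts, 1)[..., None],
                          weights)  # unused units fall back to their own weights

def estimate_close_features(throat_frame):
    """Estimate close-speaking features for one throat-microphone frame."""
    return close_codebook[best_matching_unit(throat_frame)]

print(estimate_close_features(throat_feats[0]))
```

In the paper, the estimated spectral features (linear prediction coefficients) would then be used to reconstruct the enhanced speech; the reconstruction step is omitted from this sketch.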