Mridangam Artist Identification from Taniavartanam Audio
Date Issued
02-01-2019
Author(s)
Abstract
The revolution in information technology has led to the availability of vast and varied collections of music on digital platforms. With the widespread use of smartphones and other personal digital devices, there has been growing interest in accessing music based on its various characteristics using information retrieval technologies. However, the unavailability of meta-tags or annotations has led to the need for technologies that automatically extract relevant properties of music from the audio. Automatically identifying meta-data such as artist information from audio, especially for instrument artists, is a very tough task, even for humans. In this paper, automatic identification of the percussion artist is attempted on mridangam audio from Carnatic music concerts using probabilistic models. Unlike speaker identification, where the voice of the speaker is unique, the timbre of percussion instruments is more or less the same across instruments; the distinctive characteristics of a musician lie in his/her style of playing the instrument. A single Gaussian mixture model (GMM) is built across the data of all musicians using tonic-normalized cent filterbank cepstral coefficient (CFCC) features. Each artist's percussion audio is converted to a sequence of GMM tokens, and sub-string matching between train and test data is used to identify the musician. The performance is evaluated on a dataset of 10 mridangam artists, and the system identifies the artist with an accuracy of 72.5%.
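The abstract describes a pipeline of shared-GMM tokenization followed by sub-string matching between train and test token sequences. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes tonic-normalized CFCC feature matrices have already been extracted, and the function names, component count, and longest-common-substring scoring rule are all hypothetical choices made for clarity.

```python
# Illustrative sketch (not the paper's code): tokenize per-frame features with one
# GMM shared across all artists, then score each artist by the longest contiguous
# run of GMM tokens shared with the test clip. CFCC extraction is assumed done.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_token_model(all_artist_features, n_components=32):
    """Fit a single GMM on frames pooled from every artist (component count is a guess)."""
    pooled = np.vstack(all_artist_features)          # shape: (total_frames, n_cfcc)
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(pooled)
    return gmm

def tokenize(gmm, features):
    """Map each feature frame to the index of its most likely Gaussian component."""
    return gmm.predict(features)                     # sequence of GMM tokens

def longest_common_substring(a, b):
    """Length of the longest contiguous token run shared by two token sequences."""
    dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i, j] = dp[i - 1, j - 1] + 1
                best = max(best, dp[i, j])
    return best

def identify_artist(gmm, train_tokens_by_artist, test_features):
    """Return the artist whose training token sequence best matches the test clip."""
    test_tokens = tokenize(gmm, test_features)
    scores = {artist: longest_common_substring(tokens, test_tokens)
              for artist, tokens in train_tokens_by_artist.items()}
    return max(scores, key=scores.get)
```

In this sketch the GMM acts purely as a vector quantizer: each frame becomes a discrete symbol, so stylistic patterns in a musician's playing show up as recurring token sub-strings that can be matched directly against the training data.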