
Mridangam Artist Identification from Taniavartanam Audio

Date Issued: 02-01-2019
Author(s): Gogineni, Krishnachaitanya; Kuriakose, Jom; Murthy, Hema A.
Affiliation: Indian Institute of Technology, Madras
DOI: 10.1109/NCC.2018.8600202
Abstract
The revolution in information technology has led to the availability of vast and varied collections of music on digital platforms. With the widespread use of smartphones and other personal digital devices, there has been growing interest in accessing music based on its various characteristics using information retrieval technologies. However, the unavailability of meta-tags or annotations has created a need for technologies that automatically extract relevant properties of music from the audio. Automatically identifying metadata from audio, such as artist information, and especially the identity of instrumental artists, is a difficult task even for humans. In this paper, automatic identification of the percussion artist is attempted on mridangam audio from Carnatic music concerts using probabilistic models. Unlike speaker identification, where the voice of each speaker is unique, the timbre of percussion instruments is more or less the same across instruments; the distinctive characteristics of a musician lie in their style of playing the instrument. A single Gaussian mixture model (GMM) is built across the data of all musicians using tonic-normalized cent filterbank cepstral coefficient (CFCC) features. Each artist's percussion audio is converted to a sequence of GMM tokens, and sub-string matching between train and test data is used to identify the musician. The performance is evaluated on a dataset of 10 mridangam artists, and the system identifies the artist with an accuracy of 72.5%.
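
The pipeline described in the abstract, a single GMM fitted over all artists' frame-level features, per-recording token sequences taken from the most likely GMM component, and sub-string matching against each artist's training tokens, can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation: it substitutes librosa MFCCs for the tonic-normalized CFCC features, and the GMM size, n-gram length, and file layout are hypothetical.

```python
# Minimal sketch of GMM tokenization + sub-string matching for artist identification.
# MFCCs stand in for the tonic-normalized CFCC features used in the paper; the GMM
# size, n-gram length, and the train_audio layout are illustrative assumptions.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture


def extract_features(path, n_coeff=13):
    """Frame-level cepstral features (MFCC stand-in for CFCC), shape (frames, coeffs)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_coeff).T


def tokenize(gmm, features):
    """Map each feature frame to the index of its most likely GMM component."""
    return gmm.predict(features)


def ngram_set(tokens, n=3):
    """All length-n sub-strings of a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def identify(train_audio, test_path, n_components=64, n=3):
    """train_audio: dict mapping artist name -> list of training audio paths."""
    # Pool every artist's frames and fit one GMM over all of them.
    all_feats = [extract_features(p) for paths in train_audio.values() for p in paths]
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=0).fit(np.vstack(all_feats))

    # Build each artist's inventory of token n-grams from their training recordings.
    artist_ngrams = {
        artist: set().union(*(ngram_set(tokenize(gmm, extract_features(p)), n)
                              for p in paths))
        for artist, paths in train_audio.items()
    }

    # Score the test recording by shared sub-strings and return the best-matching artist.
    test_ngrams = ngram_set(tokenize(gmm, extract_features(test_path)), n)
    return max(artist_ngrams, key=lambda a: len(test_ngrams & artist_ngrams[a]))
```

Counting shared token n-grams is one simple way to realize the sub-string matching step; the exact matching scheme, GMM size, and feature settings used by the authors are not given in the abstract.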