Abstract: Music information retrieval is currently an active domain of research. An interesting aspect of music information retrieval is mood classification. While Western music has received much attention, research on Indian music has been limited and mostly based on audio data. In this work, the authors propose a mood taxonomy and describe a framework for developing a multimodal dataset (audio and lyrics) for Hindi songs. For several Hindi songs, we observed that the mood annotated from the audio differed from the mood annotated from the corresponding lyrics. Finally, mood classification frameworks were developed for Hindi songs, consisting of three systems based on audio features, lyric features, and their combination. The mood classification systems based on audio and lyrics achieved F-measures of 58.2% and 55.1%, respectively, whereas the multimodal system (combining audio and lyrics) achieved the maximum F-measure of 68.6%.
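To make the three-system setup concrete, the following is a minimal sketch (not the authors' implementation) of how audio-only, lyrics-only, and multimodal mood classifiers can be compared under a weighted F-measure. The feature dimensions, the random-forest classifier, and the synthetic data are all illustrative assumptions; the paper's actual features and learning algorithm are described in the main text.

```python
# Illustrative sketch: early fusion of audio and lyric feature vectors for
# song-level mood classification, evaluated with a weighted F-measure.
# All data here is synthetic; feature sets and classifier are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

n_songs, n_audio_feats, n_lyric_feats, n_moods = 300, 20, 50, 5
X_audio = rng.normal(size=(n_songs, n_audio_feats))    # e.g. timbre/rhythm descriptors
X_lyrics = rng.normal(size=(n_songs, n_lyric_feats))   # e.g. bag-of-words/sentiment features
y = rng.integers(0, n_moods, size=n_songs)             # mood class labels

def evaluate(X, y):
    """Train a classifier and report the weighted F-measure on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="weighted")

print("audio only :", evaluate(X_audio, y))
print("lyrics only:", evaluate(X_lyrics, y))
# Multimodal system: concatenate the two feature vectors per song (early fusion).
print("multimodal :", evaluate(np.hstack([X_audio, X_lyrics]), y))
```

The sketch uses simple feature concatenation; other fusion strategies (e.g. combining the outputs of separately trained audio and lyric classifiers) are equally plausible realizations of the multimodal system described above.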