Abstract: Music is a form of time-series data, and it is challenging to design an effective music genre classification (MGC) system because of the sheer volume of music data involved. Robust MGC techniques require massive amounts of data, whose collection and annotation are time-consuming, laborious, and dependent on expert knowledge. Few studies have focused on the design of music representations extracted directly from input waveforms. In recent times, deep learning (DL) models have been widely used for their ability to automatically extract high-level features and contextual representations from raw or processed music data. This paper develops a novel deep learning-enabled music genre classification (DLE-MGC) technique. The proposed DLE-MGC technique classifies music into multiple genres through three subprocesses: preprocessing, classification, and hyperparameter optimization. In the preprocessing stage, the Pitch to Vector (Pitch2vec) approach transforms the pitches in the input musical instrument digital interface (MIDI) files into vector sequences. For classification, the DLE-MGC technique couples a bidirectional long short-term memory (BiLSTM) model with cat swarm optimization (CSO), which tunes the model's hyperparameters. The performance of the DLE-MGC technique was validated on the Lakh MIDI music dataset: it attained an accuracy of 95.87%, against 94.27% for the DBTMPE baseline, and the comparative results verified its promising performance over existing methods.
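For illustration, below is a minimal sketch of a Pitch2vec-style preprocessing step, assuming each MIDI pitch (an integer in 0-127) is mapped to a one-hot vector so that a melody becomes a vector sequence; the function name and the one-hot encoding are illustrative assumptions, not the paper's exact transformation.

import numpy as np

def pitches_to_vectors(pitches, num_pitches=128):
    # Hypothetical Pitch2vec stand-in: one-hot encode each MIDI pitch
    # so that a pitch sequence becomes a (seq_len, 128) vector sequence.
    vectors = np.zeros((len(pitches), num_pitches), dtype=np.float32)
    vectors[np.arange(len(pitches)), pitches] = 1.0
    return vectors

# Example: a short C-major motif (MIDI pitches 60, 62, 64, 67) -> (4, 128) array
sequence = pitches_to_vectors([60, 62, 64, 67])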
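Likewise, a minimal PyTorch sketch of a BiLSTM classifier over such vector sequences is given below; the single LSTM layer, hidden size, mean pooling over time, and ten genre classes are assumptions made for illustration, and in the proposed technique such hyperparameters would be selected by CSO rather than fixed by hand.

import torch
import torch.nn as nn

class BiLSTMGenreClassifier(nn.Module):
    # Assumed architecture sketch: one bidirectional LSTM layer followed
    # by a linear head; hidden_dim and num_genres would be tuned by CSO.
    def __init__(self, input_dim=128, hidden_dim=256, num_genres=10):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_genres)

    def forward(self, x):
        # x: (batch, seq_len, input_dim) pitch-vector sequences
        out, _ = self.lstm(x)          # (batch, seq_len, 2 * hidden_dim)
        pooled = out.mean(dim=1)       # average over time steps
        return self.head(pooled)       # genre logits

# Example: two random sequences of 50 pitch vectors -> (2, 10) logits
logits = BiLSTMGenreClassifier()(torch.randn(2, 50, 128))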