Int J Performability Eng ›› 2026, Vol. 22 ›› Issue (5): 253-262.doi: 10.23940/ijpe.26.05.p3.253262


Emotion-Driven Music Recommender System: A Novel Deep Learning Approach for Enhanced User Experience

Ritika Bidlan* and Sonal Chawla   

  1. Department of Computer Science & Applications, Panjab University, Chandigarh, India
  • Contact: * E-mail address: ritika_dcsa@pu.ac.in

Abstract: Emotion identification from audio is a significant challenge in human-computer interaction, as the emotional indicators within speech are usually complex and context-dependent. Conventional techniques struggle to classify emotions precisely owing to high-dimensional features and limited predictability. This paper addresses the recognition of emotions from audio with a novel approach based on a hybrid ResNet single-channel feature-tailored architecture. The proposed prediction system classifies elicited emotions robustly, with low classification error. The proposed methodology applies Principal Component Analysis for dimensionality reduction and leverages ANOVA for detailed statistical validation, improving feature selection and overall model performance. The model achieves a high accuracy of 94.50%, together with high precision, recall, and F1 scores, indicating that it is highly capable of identifying emotional states. A comparison with previous literature shows that our model performs better than both traditional machine learning approaches and other deep learning approaches. This work contributes to the development of speech emotion recognition methods that may be applied in personalized music recommendation systems and other human-computer interaction technologies.
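As an illustration only, and not the authors' implementation, the ANOVA-based feature selection described in the abstract can be sketched as a one-way ANOVA F-score ranking of features across emotion classes. The data, feature dimensions, and the helper names below are hypothetical.

```python
# Sketch of ANOVA-based feature selection: rank each feature by its
# one-way ANOVA F-statistic across emotion classes and keep the top k.
# Toy data; in the paper's pipeline this would follow MFCC extraction and PCA.

def anova_f_score(groups):
    """One-way ANOVA F-statistic for a single feature, given per-class value lists."""
    k = len(groups)                      # number of classes
    n = sum(len(g) for g in groups)      # total number of samples
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (variation of class means around the grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (variation of samples around their class mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def select_top_features(X, y, k):
    """Return the indices of the k features with the highest ANOVA F-scores."""
    classes = sorted(set(y))
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        groups = [[X[i][j] for i in range(len(X)) if y[i] == c] for c in classes]
        scores.append(anova_f_score(groups))
    return sorted(range(n_features), key=lambda j: scores[j], reverse=True)[:k]

# Hypothetical toy example: feature 0 separates the two classes, feature 1 does not.
X = [[1.0, 5.0], [1.1, 4.9], [5.0, 5.1], [5.2, 5.0]]
y = [0, 0, 1, 1]
print(select_top_features(X, y, 1))  # feature 0 ranks first
```

In the paper's described pipeline, the features retained by this kind of statistical screening would then feed the hybrid ResNet classifier.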

Key words: MFCC, ResNet, ANOVA, principal component analysis, deep learning, recommendation system