Int J Performability Eng ›› 2024, Vol. 20 ›› Issue (2): 99-111.doi: 10.23940/ijpe.24.02.p5.99111


Emotion Identification using EEG Signal with Reduced Electrodes and Time Frequency Parameters

Kalyani Wagh^a,*, K. Vasanth^b, Sagar Shinde^c, Lalitkumar Wadhwa^d, and Avinash Thakur^d

  a. JSPM Narhe Technical Campus, Maharashtra, India;
  b. Vidya Jyothi Institute of Technology, Telangana, India;
  c. PCET's - NMVPM's Nutan College of Engineering and Research, Maharashtra, India;
  d. Dr. D. Y. Patil Institute of Technology, Maharashtra, India
  • Contact: *Corresponding author. E-mail address: kalyaniwagh13@gmail.com

Abstract: Emotion recognition has become important for easier and more effective interaction between humans and computers. By better understanding the variety of human emotions, machines could enhance and improve human communication. Constructing an emotion-specific brain-computer interface based on EEG could therefore be a first step toward a neuroscientific medical tool that helps patients regain or maintain a good quality of life. Here, we used wavelet multi-resolution analysis (MRA) to extract various wavelet features from the EEG signal. Several mother wavelet functions, including Daubechies (db), Symlet (sym), Coiflet (coif), and Haar, are used to decompose the EEG signals. For each of five frequency bands, diverse features are computed, such as power, relative power, power spectral density, wavelet entropy, and statistical parameters. It has been observed that db6, with six decomposition levels yielding five separate frequency bands, gives good classification accuracy, and that the electrodes T7, T8, CP1, CP6, F3, F4, FP1, FP2, and POZ give good classification accuracy. The performance is tested using various classifiers: support vector machine (SVM), k-nearest neighbors (k-NN), decision tree (DT), and random forest (RF). It has been observed that the time-domain Hjorth parameters alone yield better classification accuracy than the other features. Maximum classification accuracies of 73.42%, 71.25%, and 67.84% are achieved for positive, negative, and neutral emotions, respectively, using the k-NN algorithm on the FP1, FP2, F3, F4, T7, T8, CP1, CP6, and POZ channels.
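The two feature stages the abstract describes, multi-level wavelet decomposition into frequency bands and the time-domain Hjorth parameters (Activity, Mobility, Complexity), can be sketched as follows. This is a minimal illustration, not the paper's pipeline: it uses the Haar wavelet (one of the mother wavelets the paper compares) implemented directly in NumPy rather than db6, and a synthetic 128 Hz signal in place of real EEG; the function names and the sampling rate are assumptions for the example.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]                      # trim to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_bands(x, levels=6):
    """Multi-resolution analysis: detail coefficients per level (finest first)
    plus the final approximation. Each detail level covers roughly half the
    frequency range of the previous one (e.g. at 128 Hz: 32-64, 16-32, ... Hz)."""
    details = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        details.append(d)
    return details, x

def hjorth(x):
    """Hjorth parameters of a 1-D signal:
    Activity = var(x), Mobility = sqrt(var(x') / var(x)),
    Complexity = Mobility(x') / Mobility(x)."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Synthetic 4-second "EEG" trace at an assumed 128 Hz sampling rate:
# a 10 Hz (alpha-band) tone plus noise.
rng = np.random.default_rng(0)
fs = 128
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

details, approx = wavelet_bands(sig, levels=6)
band_power = [np.mean(d ** 2) for d in details]   # one power value per band
act, mob, comp = hjorth(sig)                      # three features per channel
```

Per-band features (power, relative power, entropy of the coefficients) and the three Hjorth values per channel would then be concatenated into the feature vector fed to the classifiers.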

Key words: emotion recognition, EEG, wavelet transform, classifier