Int J Performability Eng, 2025, Vol. 21, Issue 12: 725-732. doi: 10.23940/ijpe.25.12.p6.725732

A Dual Firefly-Optimized Multimodal Emotion Detection Framework for Social Media

Neha Sharma* and Sanjay Tyagi   

  Department of Computer Science and Applications, Kurukshetra University, Haryana, India
  • * Corresponding author. E-mail address: nehadevsharma19@kuk.ac.in

Abstract: This paper proposes a robust multimodal emotion detection framework that leverages both the visual and textual components of meme content through modality-specific deep learning pipelines and swarm intelligence-based optimization. The proposed architecture comprises two distinct processing branches: one dedicated to visual content, using a Convolutional Neural Network (CNN) with Firefly Algorithm-based hyper-parameter tuning, and the other to textual data, processed through TF-IDF vectorization and Firefly-driven feature selection. The final emotion label is derived using a rule-based fusion strategy that combines the predictions of both modalities. Experimental evaluations on the Facebook Hateful Memes dataset demonstrate that the proposed method outperforms existing state-of-the-art techniques, achieving improvements of up to 8.3% in F1-score and 6.7% in accuracy over the methods of Abdullah et al. and Hamza et al., and highlighting the value of optimization in multimodal feature processing and decision fusion. The approach offers a lightweight yet interpretable solution for real-world meme analysis in applications such as hate speech detection, sentiment analysis, and affective computing.
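The abstract does not detail the Firefly Algorithm itself, so the sketch below illustrates how swarm-based hyper-parameter tuning of this kind typically works. The encoded hyper-parameters (learning rate, dropout rate, filter count), their bounds, and the surrogate fitness function are all illustrative assumptions; in the paper, fitness would be the validation score of the CNN trained with the candidate hyper-parameters.

```python
import numpy as np

# Minimal Firefly Algorithm sketch for hyper-parameter search.
# Bounds and fitness are illustrative stand-ins, not the paper's setup.
rng = np.random.default_rng(42)

# Each firefly encodes [learning_rate, dropout_rate, n_filters] (assumed).
LOWER = np.array([1e-4, 0.1, 16.0])
UPPER = np.array([1e-1, 0.5, 128.0])

def fitness(x):
    """Placeholder objective: in the paper this would be the validation
    accuracy (or F1-score) of a CNN trained with hyper-parameters x."""
    lr, dropout, n_filters = x
    # Toy surrogate with a known optimum, for demonstration only.
    return -((np.log10(lr) + 2.5) ** 2 + (dropout - 0.3) ** 2
             + ((n_filters - 64.0) / 64.0) ** 2)

def firefly_search(n_fireflies=15, n_iters=50,
                   beta0=1.0, gamma=1.0, alpha=0.2):
    dim = len(LOWER)
    pos = rng.uniform(LOWER, UPPER, size=(n_fireflies, dim))
    fit = np.array([fitness(p) for p in pos])
    span = UPPER - LOWER
    for _ in range(n_iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fit[j] > fit[i]:  # firefly i moves toward brighter j
                    # Distance in normalized coordinates keeps gamma scale-free.
                    r2 = np.sum(((pos[i] - pos[j]) / span) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness
                    step = alpha * (rng.random(dim) - 0.5) * span
                    pos[i] += beta * (pos[j] - pos[i]) + step
                    pos[i] = np.clip(pos[i], LOWER, UPPER)
                    fit[i] = fitness(pos[i])
    best = np.argmax(fit)
    return pos[best], fit[best]

if __name__ == "__main__":
    best_params, best_fit = firefly_search()
    print("best hyper-parameters:", best_params, "fitness:", best_fit)
```

In practice, each fitness evaluation would require a full CNN training run, so the population size and iteration count trade search quality against compute budget.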
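The rule-based fusion step can likewise be pictured as a simple late-fusion rule over the two branch outputs. The specific rule below (agreement between modalities reinforces confidence; otherwise the more confident modality wins) is a hypothetical stand-in, since the abstract does not specify the paper's actual rule.

```python
def rule_based_fusion(text_label, text_conf, img_label, img_conf,
                      agreement_bonus=0.1):
    """Hypothetical late-fusion rule combining textual and visual
    predictions; the agreement_bonus value is an assumption."""
    if text_label == img_label:
        # Agreeing modalities reinforce each other.
        return text_label, min(1.0, max(text_conf, img_conf) + agreement_bonus)
    # Otherwise defer to the more confident modality.
    return (text_label, text_conf) if text_conf >= img_conf else (img_label, img_conf)
```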

Key words: multimodal detection, convolutional neural network, hyper-parameter tuning, firefly optimization algorithm