Int J Performability Eng, 2024, Vol. 20, Issue (12): 764-774. DOI: 10.23940/ijpe.24.12.p6.764774

• Original article •

Identifying Cyber Threats in Metaverse Learning Environment using Explainable Deep Neural Networks

Deepika Singh, Shajee Mohan, and Preeti Dubey

  1. Department of Computer Science and Engineering, Sharda University, Uttar Pradesh, India
  • Contact: Deepika Singh, E-mail: 2023563074.deepika@pg.sharda.ac.in

Abstract:

The rapid integration of Artificial Intelligence and Internet of Things (AI-IoT) technologies has driven the development of the Metaverse, a key component of the approaching digital era. This convergence has significantly influenced virtual learning platforms, enabling more immersive, interactive, and effective learning experiences for students, teachers, and institutions. However, as adoption of the Metaverse grows, robust cybersecurity measures are needed to identify and neutralize online threats and protect users. This paper proposes an explainable deep neural network (DNN) to detect and mitigate network intrusion attacks in Metaverse learning environments. Using the IIoT Edge Cybersecurity dataset from Kaggle, we implemented a neural network technique to build a quantitative and dependable network intrusion detection system (NIDS). To enhance the model's interpretability, we applied Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), providing a visual understanding of its decision-making process. By processing network traffic features from networked Metaverse devices and IoT sensors, the explainable DNN accurately and transparently separates anomalous from benign Metaverse activity. The NIDS model achieves a high accuracy of 99.87%, supporting a more dependable and secure Metaverse learning environment.
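The pipeline described above — a neural network trained on network traffic features, followed by per-feature attribution to explain its decisions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data, feature names, and synthetic attack rule are invented for the example, scikit-learn's `MLPClassifier` stands in for the paper's DNN, and permutation importance stands in for SHAP/LIME attribution.

```python
# Minimal sketch of a DNN-style NIDS classifier with feature attribution.
# All data and feature semantics are synthetic and illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 1000
# Hypothetical traffic features: packet rate, mean payload size, distinct ports
X = rng.normal(size=(n, 3))
# Synthetic labeling rule: "attacks" show high packet rate and many distinct ports
y = ((X[:, 0] + X[:, 2]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Attribution: how much held-out accuracy drops when each feature is shuffled.
# (A stand-in for SHAP values; the relevant features should score highest.)
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
print(f"accuracy={acc:.3f}")
print("importances:", np.round(imp.importances_mean, 3))
```

In a SHAP- or LIME-based workflow, the attribution step would instead compute per-prediction contributions, which is what allows individual alerts (not just the model overall) to be explained to an analyst.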

Key words: IoT, education, e-learning platforms, Explainable AI (XAI), AR, VR, Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), Machine Learning (ML)