Int J Performability Eng ›› 2026, Vol. 22 ›› Issue (3): 158-166.doi: 10.23940/ijpe.26.03.p5.158166

• Original article •

LGAT: Lightweight Graph Attention Model for Real-Time Session-Based Recommendation System

Somen Kumar Roy a,*, Jyothi Pillai b, Ani Thomas c and Gopal Behera d

  a. Department of Computer Application, Bhilai Institute of Technology Durg, Chhattisgarh, India
  b. Department of Computer Science and Engineering, Bhilai Institute of Technology Durg, Chhattisgarh, India
  c. Department of Information Technology, Bhilai Institute of Technology Durg, Chhattisgarh, India
  d. Department of Computer Science and Engineering, Government College of Engineering Kalahandi, Bhawanipatna, India
  • Submitted on ; Revised on ; Accepted on
  • Contact: Somen Kumar Roy
  • About author:
    * Corresponding author.
    E-mail address: somenroy@bitdurg.ac.in

Abstract:

Session-based recommendation models such as Item-Based Collaborative Filtering (ItemCF), Session MF, and Bayesian Personalized Ranking (BPR) are effective but fail to capture sequential patterns. Recurrent models such as RNNs and GRU4Rec handle order better but struggle with long-range item dependencies and are slow to train. Neural Collaborative Filtering (NCF) captures deeper user-item interactions but ignores session context, while Graph Neural Network methods can represent complex structures but require heavy computation and are prone to over-smoothing. To address these problems, we propose the Lightweight Graph Attention (LGAT) model. LGAT represents each click sequence as a directed session graph with items as nodes and transitions between them as edges; it then applies a single self-attention layer to estimate the importance of each edge. A simple attention-pooling step aggregates the node embeddings into a compact session vector. The model predicts the next item by dot-product scoring and is trained end-to-end with cross-entropy loss. On three benchmarks (Diginetica, RetailRocket, YooChoose 1/64), LGAT outperformed six baseline models, achieving the best HR@10, NDCG@10, and MRR@10 while cutting training time by 20-30%. LGAT's novelty lies in its combination of graph structure and lightweight attention, yielding both high accuracy and real-time efficiency.
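The pipeline the abstract describes (session graph → single self-attention layer → attention pooling → dot-product scoring) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: dimensions, the pooling query (the last-clicked node), and all function names are illustrative assumptions, and embeddings are randomly initialized rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d = 50, 16
item_emb = rng.normal(scale=0.1, size=(n_items, d))  # item embedding table

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def lgat_forward(session, item_emb):
    """Score all items as the next click for one session (illustrative sketch)."""
    # 1. Directed session graph: unique clicked items become nodes,
    #    consecutive clicks become directed edges.
    nodes, idx = np.unique(session, return_inverse=True)
    h = item_emb[nodes]                          # node embeddings, shape (n, d)
    n, d_ = h.shape
    adj = np.eye(n, dtype=bool)                  # self-loops keep every row valid
    for a, b in zip(idx[:-1], idx[1:]):
        adj[a, b] = True
    # 2. One self-attention layer restricted to graph edges, so the attention
    #    weights act as learned edge importances.
    scores = h @ h.T / np.sqrt(d_)               # scaled dot-product scores
    scores = np.where(adj, scores, -np.inf)      # mask non-edges
    h = softmax(scores, axis=1) @ h              # updated node states
    # 3. Attention pooling: weight nodes against the last-click node and sum
    #    into one compact session vector.
    q = h[idx[-1]]
    alpha = softmax(h @ q)                       # (n,) pooling weights
    s = alpha @ h                                # session vector, shape (d,)
    # 4. Dot-product scoring over the full catalog; softmax yields
    #    next-item probabilities (cross-entropy loss would train these).
    return softmax(item_emb @ s)

probs = lgat_forward([3, 7, 3, 12], item_emb)
print(probs.shape)  # one probability per catalog item
```

In training, steps 1-4 would run per session and the cross-entropy loss against the true next item would update the embedding table and attention parameters end-to-end, as the abstract states.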

Key words: session-based recommendation, recommendation systems, embeddings, lightweight graph attention, attention-pooling