Knowledge Enhanced Graph Neural Networks for Explainable Recommendation

Introduction

With the explosive growth of online information, recommendation systems play an essential role in alleviating information overload. Traditional recommendation systems typically rely on Collaborative Filtering (CF), which generates recommendations from users' historical records. CF methods fall into memory-based and model-based techniques: memory-based methods include user-based and item-based CF, while model-based methods learn a predictive model, such as matrix factorization. In recent years, deep learning has proven highly effective in information retrieval and recommendation, and many deep learning-based recommenders achieve strong accuracy. However, these methods lack interpretability and transparency in their decision process. To enhance the transparency of recommendation systems and users' trust in them, explainable recommendation has gradually gained attention: it makes recommendations more transparent and interpretable, thereby enhancing the credibility of the system and user satisfaction.

Source of the Paper

The paper titled “Knowledge Enhanced Graph Neural Networks for Explainable Recommendation” was co-authored by Ziyu Lyu and Min Yang from the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Yue Wu from Chongqing University of Posts and Telecommunications; Junjie Lai from the University of Science and Technology of China; Chengming Li from Sun Yat-sen University; and Wei Zhou from Chongqing University. The paper was published in May 2023 in the IEEE Transactions on Knowledge and Data Engineering.

Research Workflow

a) Research Workflow

This paper proposes a Knowledge Enhanced Graph Neural Network (KEGNN) for explainable recommendation. The approach involves the following steps:

Knowledge Enhanced Semantic Representation Learning

First, semantic representations of users, items, and user-item interactions are learned, with an external knowledge base used to strengthen them. For each user and item, historical reviews are aggregated in temporal order to construct a text document. Global and sentence-level contextual representations are captured with a Bidirectional Long Short-Term Memory network (BiLSTM), and the semantic representations are enhanced with entities retrieved from the knowledge base.
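As an illustrative sketch (not the paper's exact architecture), the step above can be approximated as a BiLSTM over review token embeddings followed by a gated fusion with a retrieved knowledge-entity embedding; all dimensions, weights, and the fusion rule here are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate weights are stacked as [input; forget; output; candidate]."""
    H = h.size
    z = W @ x + U @ h + b
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:])
    c = f * c + i * g
    return o * np.tanh(c), c

def bilstm(tokens, W, U, b):
    """Run an LSTM forward and backward over the sequence, concatenating states."""
    H = b.size // 4
    def run(seq):
        h, c, out = np.zeros(H), np.zeros(H), []
        for x in seq:
            h, c = lstm_step(x, h, c, W, U, b)
            out.append(h)
        return out
    fwd = run(tokens)
    bwd = run(tokens[::-1])[::-1]
    return [np.concatenate([f, r]) for f, r in zip(fwd, bwd)]

D, H, T = 8, 4, 5                       # toy embedding size, hidden size, length
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
tokens = [rng.normal(size=D) for _ in range(T)]

contextual = bilstm(tokens, W, U, b)    # T contextual vectors of size 2H

# Knowledge enhancement (simplified assumption): gate the contextual vector
# with a retrieved entity embedding of the same size.
entity = rng.normal(size=2 * H)
gate = 1.0 / (1.0 + np.exp(-(contextual[0] + entity)))   # element-wise gate
enhanced = gate * contextual[0] + (1.0 - gate) * entity
print(enhanced.shape)
```

The gated sum is one common way to inject external knowledge without overwriting the contextual signal; the paper's actual fusion may differ.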

User Behavior Learning and Reasoning

Based on the learned semantic representations, a user behavior graph is constructed whose nodes are users and items and whose edges represent user-item interactions, with the knowledge-enhanced semantic representations serving as the initial node embeddings. A Graph Neural Network (GNN) then propagates information over this graph, capturing higher-order structural information and performing multi-hop reasoning.
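A minimal sketch of such propagation, assuming mean-neighbor aggregation over the bipartite user-item graph (a simplification; the paper's GNN may use different aggregation and update functions):

```python
import numpy as np

def propagate(node_emb, adj, num_hops=2):
    """Simple message passing: each hop, a node's new embedding combines its
    own state with the mean of its neighbors' states."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h = node_emb
    for _ in range(num_hops):
        neighbor_mean = (adj @ h) / deg
        h = np.tanh(h + neighbor_mean)   # self state + aggregated message
    return h

# Toy bipartite graph: nodes 0-1 are users, nodes 2-4 are items.
adj = np.zeros((5, 5))
for u, v in [(0, 2), (0, 3), (1, 3), (1, 4)]:   # user-item interactions
    adj[u, v] = adj[v, u] = 1.0

emb = np.eye(5)                                  # toy initial embeddings
out = propagate(emb, adj)
print(out.shape)
```

With two hops, a user's embedding already mixes in signals from other users who rated the same items, which is the multi-hop reasoning the text describes.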

Hierarchical Neural Collaborative Filtering

Following user behavior learning and reasoning, a hierarchical neural collaborative filtering layer predicts user-item interactions: the first level fuses the user and item representations, the second level combines this result with the user-item relation embedding, and subsequent neural network layers regress the rating.
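The two-level fusion can be sketched as nested concatenate-and-transform steps ending in a scalar regression; layer sizes, activations, and the concatenation order here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda x: np.maximum(x, 0)

def predict_rating(u, v, r, W1, W2, w_out):
    """Two-level fusion: first merge user and item, then fold in the
    user-item relation embedding, then regress a scalar rating."""
    level1 = relu(W1 @ np.concatenate([u, v]))       # user-item fusion
    level2 = relu(W2 @ np.concatenate([level1, r]))  # add relation embedding
    return float(w_out @ level2)

D = 4
u, v, r = rng.normal(size=D), rng.normal(size=D), rng.normal(size=D)
W1 = rng.normal(0, 0.3, (D, 2 * D))
W2 = rng.normal(0, 0.3, (D, 2 * D))
w_out = rng.normal(0, 0.3, D)
rating = predict_rating(u, v, r, W1, W2, w_out)
print(round(rating, 3))
```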

Explanation Generation

An explanation generation module is designed that combines a generation model with a copy mechanism: a Gated Recurrent Unit (GRU) decoder generates human-like text explanations, while the copy mechanism selects salient passages from the original reviews to include in them.
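The copy mechanism at a single decoding step can be sketched in pointer-generator style: mix the decoder's vocabulary distribution with an attention distribution over source review tokens. The mixing weight `p_gen` and the toy scores below are assumptions for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def copy_generate(vocab_logits, copy_scores, source_ids, p_gen):
    """Mixture of generating from the vocabulary (weight p_gen) and copying
    a token from the source review (weight 1 - p_gen)."""
    p_vocab = softmax(vocab_logits)
    p_copy = softmax(copy_scores)            # attention over source positions
    final = p_gen * p_vocab
    for pos, tok in enumerate(source_ids):   # scatter copy mass into vocab
        final[tok] += (1.0 - p_gen) * p_copy[pos]
    return final

V = 10                                        # toy vocabulary size
vocab_logits = np.zeros(V)                    # uniform decoder preference
copy_scores = np.array([2.0, 0.5, 0.5])       # decoder attention at one step
source_ids = [7, 3, 3]                        # source review token ids
dist = copy_generate(vocab_logits, copy_scores, source_ids, p_gen=0.6)
print(dist.sum())
```

Token 7 ends up with the highest probability because it receives most of the copy mass, which is exactly how review passages get reproduced in the explanation.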

b) Main Research Results

Experiments on three real-world datasets demonstrate that KEGNN outperforms existing methods in both prediction accuracy and explainability:

  • On the electronics dataset, KEGNN achieved a 9.74% improvement in Root Mean Square Error (RMSE) and a 4.02% improvement in Mean Absolute Error (MAE).
  • On the home and kitchen dataset, KEGNN achieved a 0.88% improvement in RMSE and a 2.59% improvement in MAE.
  • On the musical instruments dataset, KEGNN improved RMSE by 7.38% and MAE by 1.63%.

Additionally, KEGNN generates high-quality, human-like text explanations; its explainability metrics (Precision and F1 of ROUGE-1, ROUGE-2, and ROUGE-L) significantly outperform the comparison methods.
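For reference, the metrics named above are standard. RMSE and MAE measure rating-prediction error, and ROUGE-1 measures unigram overlap between a generated explanation and a reference; the example sentences are invented:

```python
import math
from collections import Counter

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rouge1(reference, candidate):
    """Unigram overlap: precision, recall, F1 between two token lists."""
    overlap = sum((Counter(reference) & Counter(candidate)).values())
    p = overlap / len(candidate)
    r = overlap / len(reference)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

print(rmse([4, 5, 3], [3.5, 5, 2]), mae([4, 5, 3], [3.5, 5, 2]))
ref = "the sound quality is great".split()
cand = "sound quality is great overall".split()
print(rouge1(ref, cand))   # (0.8, 0.8, 0.8)
```

Lower RMSE/MAE is better (the percentage improvements above are error reductions), while higher ROUGE precision/F1 is better.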

c) Research Conclusions

The KEGNN method enhances semantic representations through external knowledge bases and performs reasoning on the user behavior graph, achieving a comprehensive understanding of user behavior. Through hierarchical collaborative filtering and an explanation generation module that integrates the copy mechanism, KEGNN achieves highly accurate and interpretable recommendation results. This study demonstrates the feasibility of significantly improving the interpretability and user satisfaction of recommendation systems by integrating knowledge enhancement, graph neural networks, and deep learning methods.

d) Research Highlights

  • Knowledge Enhancement: Utilizing external knowledge bases to enhance the semantic representations of users, items, and their interactions.
  • User Behavior Graph: Constructing a user behavior graph and performing higher-order reasoning and preference propagation through Graph Neural Networks.
  • Hierarchical Collaborative Filtering: Designing hierarchical neural collaborative filtering layers to accurately predict ratings based on user-item relationships.
  • Generating Explanations: Combining the copy mechanism with GRU to generate human-like explanations, making recommendation results more intuitive and understandable.

e) Additional Information

The paper also analyzes the importance of individual model components by measuring the performance change when each module is removed, indicating each module's contribution to the final results. An error analysis reveals an imbalance in prediction accuracy across rating levels, pointing out directions for future improvement, and real cases of generated explanations concretely demonstrate the model's interpretability.

Summary

By proposing the KEGNN method, this paper fully utilizes external knowledge bases, graph neural networks, and deep learning technologies to address the trade-off between accuracy and explainability in recommendation systems. The research shows that while enhancing recommendation performance, KEGNN can provide high-quality explanations, which is crucial for improving user experience and system transparency. The method has broad application prospects for recommendation systems across various fields.