Graph Neural Network for Representation Learning of Lung Cancer

Background Introduction

With the rapid development of digital pathology, image-based diagnostic systems are becoming increasingly important in precise pathological diagnosis. These systems typically rely on Multiple Instance Learning (MIL) to analyze Whole Slide Images (WSIs), yet representing a WSI effectively remains a pressing open problem. Deep neural networks have driven breakthroughs in visual computing, but the enormous pixel count of a single WSI poses serious challenges for existing network architectures. Recently, several studies have explored graph-based models to capture the complex relationships within an image when embedding and representing WSIs.

Source of Publication

This study was conducted by Rukhma Aftab, Yan Qiang, Juanjuan Zhao, Zia Urrehman, and Zijuan Zhao, all affiliated with the School of Information and Computer Science at Taiyuan University of Technology. The paper, titled “Graph neural network for representation learning of lung cancer,” was published in the journal BMC Cancer in 2023 (DOI: https://doi.org/10.1186/s12885-023-11516-8).

Detailed Research Process

This study proposes a graph-based MIL method named MIL-GNN. The research is divided into several key steps:

Dataset Selection and Image Preprocessing

The study selected the classical MIL benchmark MUSK and image data for two lung cancer subtypes, Lung Adenocarcinoma (LUAD) and Lung Squamous Cell Carcinoma (LUSC), for validation. Patches were extracted from each WSI with a sliding-window tiling method, and a pre-trained Convolutional Neural Network (CNN) produced a feature vector for each patch, from which a fully connected graph structure was built.
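The sliding-window tiling step can be sketched as a simple enumeration of patch coordinates over the slide. This is a minimal illustration, not the paper's code: the patch size and stride below are assumed values for demonstration.

```python
# Sketch of sliding-window tiling: enumerate top-left corners of
# fixed-size patches over a WSI of given pixel dimensions.
# patch and stride values are illustrative assumptions.
def tile_coordinates(width, height, patch=256, stride=256):
    coords = []
    for y in range(0, height - patch + 1, stride):
        for x in range(0, width - patch + 1, stride):
            coords.append((x, y))
    return coords

# A 1024x512 region with non-overlapping 256 px patches yields 4x2 = 8 tiles.
print(len(tile_coordinates(1024, 512)))  # 8
```

In practice, WSI tiling pipelines also filter out background tiles (e.g. by a tissue-mask threshold) before feature extraction, which keeps the resulting graph to a manageable size.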

Application of Graph Neural Networks (GNNs)

In this graph structure, each patch is represented as a node, and the relationships between nodes are learned through a Gaussian Mixture Model (GMM). The study used a Variational Graph Auto-Encoder (VGAE) built on GMM convolution to capture interactions between patches and compress them into a compact graph-level representation.
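At the heart of GMM-based graph convolution (the GMMConv/MoNet family of operators) is a set of Gaussian kernels that score each edge's pseudo-coordinates. A minimal sketch of one such kernel, with a diagonal covariance and illustrative (untrained) parameters rather than anything from the paper:

```python
import math

# One Gaussian kernel of a GMM-based graph convolution: it assigns an
# edge with pseudo-coordinate vector u the weight
#   exp(-0.5 * sum_d (u_d - mu_d)^2 / sigma2_d)
# (diagonal covariance). In GMMConv, mu and sigma2 are learned per kernel.
def gmm_kernel_weight(u, mu, sigma2):
    return math.exp(-0.5 * sum((ud - md) ** 2 / sd
                               for ud, md, sd in zip(u, mu, sigma2)))

# An edge whose pseudo-coordinates match the kernel mean gets weight 1;
# edges farther from the mean are down-weighted smoothly.
print(gmm_kernel_weight([0.2, 0.5], mu=[0.2, 0.5], sigma2=[1.0, 1.0]))  # 1.0
```

The full layer mixes several such kernels and uses their weights to aggregate neighbor features, which is what lets the model learn soft, data-driven relationships between patches instead of relying on a fixed adjacency.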

Model Training and Experimentation Process

Model training proceeds in two key steps:

  1. Graph Structure Construction and Feature Extraction: CNN features are first extracted from each WSI's patches; the patch feature matrix then defines the graph's nodes and adjacency matrix, which are trained through the GNN layers.
  2. Classification and Evaluation: The learned graph feature vectors are applied to the classification task and evaluated with 10-fold cross-validation, using metrics such as Area Under the Curve (AUC) and F1 score to assess model performance.
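The 10-fold cross-validation protocol in step 2 can be sketched as a plain index split; this is a generic illustration of the evaluation scheme, not the authors' code, and it omits the stratification and shuffling a real pipeline would typically add:

```python
# Sketch of k-fold cross-validation index splitting: partition n samples
# into k contiguous folds whose sizes differ by at most one. Each fold
# serves once as the test set while the rest form the training set.
def k_fold_indices(n_samples, k=10):
    base, extra = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(95, k=10)
print([len(f) for f in folds])  # [10, 10, 10, 10, 10, 9, 9, 9, 9, 9]
```

Metrics such as AUC and F1 are then computed on each held-out fold and averaged, which gives a more stable performance estimate than a single train/test split.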

Special Experimental Methods and Equipment

The study designed a graph auto-encoder based on Gaussian Mixture Model convolution (GMMConv) to capture the complex feature relationships within a WSI. The PyTorch Geometric library was used for training on an NVIDIA RTX 3090 GPU, with attention to efficient GPU memory use while processing each WSI.

Research Results

Main Conclusions

The experimental results show that MIL-GNN achieved an accuracy of 97.42% on the MUSK dataset and an AUC of 94.3% on the LUAD and LUSC classification tasks. The study demonstrates the effectiveness of the graph-based MIL method in WSI representation learning and provides a new paradigm for whole-slide image learning.

Methodological Innovations and Research Value

  1. Application of Graph Neural Networks in MIL: The study captures and represents the complex relationships in WSIs using graph neural networks, enhancing the representation capability of pathological images.
  2. Efficient Graph Structure Representation Method: The use of GMMConv achieves efficient compression and representation of WSI data, significantly improving classification task performance.
  3. Interpretability and Visualization: The model’s attention to each patch can be visualized through the adjacency matrix, improving interpretability and giving pathologists a more intuitive diagnostic aid.
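The visualization idea in point 3 amounts to reading normalized edge weights as the relative importance a patch assigns to its neighbors. A minimal sketch with a hypothetical 3-node adjacency matrix (illustrative values, not the paper's learned weights):

```python
# Row-normalize an adjacency matrix so each node's outgoing edge weights
# sum to 1; each row can then be read as the relative "attention" that
# patch pays to the other patches, suitable for heat-map visualization.
def row_normalize(adj):
    out = []
    for row in adj:
        s = sum(row)
        out.append([w / s if s else 0.0 for w in row])
    return out

adj = [[0.0, 2.0, 2.0],   # hypothetical learned edge weights
       [1.0, 0.0, 3.0],
       [4.0, 0.0, 0.0]]
print(row_normalize(adj)[0])  # [0.0, 0.5, 0.5]
```

Overlaying these normalized weights back onto the patch coordinates of the slide produces the kind of attention heat map that helps pathologists see which regions drove the prediction.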

Highlights of the Study

  1. Comparison with State-of-the-Art Models: MIL-GNN significantly outperformed several advanced MIL methods (such as ABMIL and Gated-ABMIL), especially on the MUSK dataset and the TCGA lung cancer dataset, with notable gains in AUC and accuracy.
  2. Instance-Level Learning Paradigm: The study proposes a new instance-based learning approach, tightly integrating patch representations with global pathological image representation.
  3. Combination of Deep Learning and Graph Representation: Deep learning extracts patch features, which are translated into graph-node information and then compressed by the variational graph auto-encoder into representations used for classification, substantially improving the efficiency and accuracy of image analysis.

Discussion and Conclusion

The MIL-GNN method proposed in this study represents whole-slide images as dense graph structures, effectively capturing the important features within them and thereby improving classification performance. However, the model may overfit under certain configurations. Future work includes optimizing deep graph neural network models, exploring automated training mechanisms, and discovering more meaningful pathological structural features.

Significance and Value of the Study

This study not only provides an efficient method for representing whole-slide images in digital pathology but also achieves significant results in terms of interpretability and model performance. Future research will continue to refine this framework and explore more potential application scenarios based on graph representation in digital pathology.