Reliable Evaluation of Attribution Maps in CNNs: A Perturbation-Based Approach

Background and Motivation: With the remarkable success of deep learning models across a wide range of tasks, the interpretability and transparency of these models have drawn growing attention. However, while these models excel in accuracy, their decision-maki...
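
The excerpt names a perturbation-based evaluation of attribution maps but breaks off before describing the protocol. As a rough illustration of the general idea only, and not necessarily this paper's procedure, the sketch below implements a common deletion-style faithfulness check: pixels are removed in decreasing order of attributed importance and the drop in the target-class probability is tracked. The `model`, `image`, and `attribution` arguments are placeholders assumed here.

```python
import numpy as np
import torch
import torch.nn.functional as F

def deletion_score(model, image, attribution, target_class, steps=20, baseline=0.0):
    """Deletion-style perturbation check (illustrative sketch, not the paper's protocol).

    Pixels are zeroed out in order of decreasing attribution; the faster the
    target-class probability drops, the more faithful the attribution map.

    image:       torch tensor of shape (1, C, H, W)
    attribution: numpy array of shape (H, W), higher = more important
    Returns the mean target-class probability over the deletion steps
    (lower = better attribution).
    """
    model.eval()
    _, _, h, w = image.shape
    order = np.argsort(attribution.reshape(-1))[::-1]      # most important pixels first
    pixels_per_step = max(1, order.size // steps)
    perturbed = image.clone()
    probs = []
    with torch.no_grad():
        for step in range(steps + 1):
            p = F.softmax(model(perturbed), dim=1)[0, target_class].item()
            probs.append(p)
            # Zero out the next block of top-attributed pixels.
            idx = order[step * pixels_per_step:(step + 1) * pixels_per_step]
            rows, cols = np.unravel_index(idx, (h, w))
            perturbed[0, :, torch.as_tensor(rows), torch.as_tensor(cols)] = baseline
    return float(np.mean(probs))
```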

An Explainable and Personalized Cognitive Reasoning Model Based on Knowledge Graph: Toward Decision Making for General Practice

Background: General medicine, an important part of community and family healthcare, covers patients of different ages and genders, all organ systems, and a wide range of diseases. Its core concept is human-centered and family-based, emphasizing long-...

Knowledge-Enhanced Graph Topic Transformer for Explainable Biomedical Text Summarization

Research Background: With the continuous growth in the volume of biomedical literature, automatic biomedical text summarization has become an increasingly important task. In 2021 alone, 1,767,637 articles were published in the PubMed data...

Critical Observations in Model-Based Diagnosis

In model-based fault diagnosis, the ability to identify the key observational data that leads to system abnormalities is highly valuable. This paper introduces a framework and an algorithm for identifying such critical observations. The framework determines which observations are crucial for diagnosis by abstracting the raw observational data into “sub-ob...
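
The excerpt is cut off before the algorithm, but the notion of a critical observation can be illustrated with a minimal sketch, under the assumption that an observation counts as critical when the abnormality can no longer be established once it is dropped. The `is_abnormal` predicate below is a placeholder for the model-based consistency check; the paper's abstraction into “sub-observations” is not modeled here.

```python
from typing import Callable, FrozenSet, Hashable, Set

Observation = Hashable

def critical_observations(
    observations: Set[Observation],
    is_abnormal: Callable[[FrozenSet[Observation]], bool],
) -> Set[Observation]:
    """Observations whose removal alone hides the abnormality (illustrative sketch).

    `is_abnormal(subset)` stands in for the model-based check that the system
    description together with `subset` still entails a fault.
    """
    all_obs = frozenset(observations)
    if not is_abnormal(all_obs):
        raise ValueError("the full observation set does not exhibit an abnormality")
    critical = set()
    for obs in observations:
        # An observation is critical if the remaining data no longer
        # suffices to establish the abnormality.
        if not is_abnormal(all_obs - {obs}):
            critical.add(obs)
    return critical
```

With a finer-grained abstraction of each observation into sub-observations, as the abstract suggests, the same removal test would be applied at the sub-observation level rather than to whole observations.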