Predicting circRNA–Disease Associations with Shared Units and Multi-Channel Attention Mechanisms

Background Introduction: In recent years, circular RNAs (circRNAs), a novel class of non-coding RNA molecules, have been shown to play significant roles in the occurrence, development, and treatment of diseases. Owing to their unique closed circular structure, circRNAs are resistant to degradation by nucleases, making them potential biomarkers and therapeutic target...

Multi-scale Hyperbolic Contrastive Learning for Cross-subject EEG Emotion Recognition

Cross-Subject EEG Emotion Recognition Based on Multi-Scale Hyperbolic Contrastive Learning. Academic Background: Electroencephalography (EEG), as a physiological signal, plays an important role in affective computing. Compared with traditional non-physiological cues (such as facial expressions or voice), EEG signals have higher ...
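The excerpt above stops before the method itself, but the core idea behind hyperbolic contrastive learning is to compare EEG embeddings with a hyperbolic rather than Euclidean distance, so that hierarchical structure can be represented with low distortion. Below is a minimal Python/PyTorch sketch of the standard Poincaré-ball distance; whether this paper uses the Poincaré model, and how it combines scales, is not stated in the excerpt, so treat this purely as an illustration.

    import torch

    def poincare_distance(x, y, eps=1e-6):
        """Geodesic distance between points inside the unit Poincare ball.

        x, y: tensors of shape (..., d) with Euclidean norm < 1.
        d(x, y) = arccosh(1 + 2 * ||x - y||^2 / ((1 - ||x||^2) * (1 - ||y||^2)))
        """
        sq_diff = (x - y).pow(2).sum(dim=-1)
        sq_x = x.pow(2).sum(dim=-1).clamp(max=1 - eps)
        sq_y = y.pow(2).sum(dim=-1).clamp(max=1 - eps)
        arg = 1 + 2 * sq_diff / ((1 - sq_x) * (1 - sq_y))
        return torch.acosh(arg.clamp(min=1 + eps))

    # In a contrastive objective, -poincare_distance(anchor, candidate) can serve as
    # the similarity score: same-emotion EEG embeddings are pulled together while
    # different-emotion embeddings are pushed apart.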

Contrastive Decoupled Representation Learning and Regularization for Speech-Preserving Facial Expression Manipulation

Contrastive Decoupled Representation Learning for Speech-Preserving Facial Expression Manipulation. Background Introduction: In recent years, with the rapid development of virtual reality, film and television production, and human-computer interaction technologies, facial expression manipulation has become a research hotspot in the fields of computer ...

Sample-Cohesive Pose-Aware Contrastive Facial Representation Learning

Enhancing Pose Awareness in Self-Supervised Facial Representation Learning. Research Background and Problem Statement: Facial representation learning is a crucial task in computer vision. By analyzing facial images, models can extract information such as identity, emotion, and pose, thereby supporting downstream tasks like facial...

Contrastive Learning of T Cell Receptor Representations

A New Breakthrough in T Cell Receptor (TCR) Specificity Prediction: Introducing the SCEPTR Model. Academic Background: T cell receptors (TCRs) play a crucial role in the immune system, determining the specificity of immune responses through their binding to peptide-MHC complexes (pMHCs). Understanding the interaction between TCRs and specific...
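The summary is truncated before the model description; the title, however, names contrastive learning of TCR representations. As a generic illustration only (a standard InfoNCE formulation, not necessarily the exact SCEPTR objective, with the encoder left as a placeholder), a contrastive loss over a batch of paired TCR embeddings can be written as:

    import torch
    import torch.nn.functional as F

    def info_nce_loss(anchor, positive, temperature=0.07):
        """Standard InfoNCE loss over a batch of paired embeddings.

        anchor, positive: (batch, dim) embeddings of two views of the same TCR
        (e.g. two augmentations of its sequence). The other items in the batch
        serve as in-batch negatives.
        """
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        logits = anchor @ positive.t() / temperature        # (batch, batch) cosine similarities
        targets = torch.arange(anchor.size(0), device=anchor.device)
        return F.cross_entropy(logits, targets)             # diagonal entries are the positives

Embeddings trained with such an objective can then be compared with simple distance metrics to judge whether two TCRs are likely to share specificity.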

Unsupervised Domain Adaptation on Point Clouds via High-Order Geometric Structure Modeling

High-Order Geometric Structure Modeling-Based Unsupervised Domain Adaptation for Point Clouds. Research Background and Motivation: Point cloud data is a key form for describing three-dimensional space and is widely used in real-world applications such as autonomous driving and remote sensing. Point clouds can capture precise geometric information, bu...

Pulling Target to Source: A New Perspective on Domain Adaptive Semantic Segmentation

A New Perspective on Domain Adaptive Semantic Segmentation: T2S-DA. Study Background and Significance: Semantic segmentation plays a crucial role in computer vision, but its performance typically depends on large amounts of labeled data, which is costly to acquire, especially in complex scenarios. To address this, many studies turn to synthetic da...

Efficient Deep Learning-Based Automated Diagnosis from Echocardiography with Contrastive Self-Supervised Learning

Breakthrough in Automated Echocardiogram Diagnosis via Deep Learning: A Study of Contrastive Self-Supervised Learning Methods. Research Background: With the rapid development of artificial intelligence and machine learning technologies, their role in medical imaging diagnosis is becoming increasingly significant. In particular, Self-Supervised Learni...

Graph Neural Networks with Multiple Prior Knowledge for Multi-omics Data Analysis

A Graph Neural Network with Multiple Prior Knowledge for Multi-Omics Data Analysis. Background Introduction: Precision medicine is an important direction for the future of healthcare, as it provides personalized treatment plans for patients, improving treatment outcomes and reducing costs. For instance, due to the complex clinical, pathological, and molecular cha...

Mitigating Social Biases of Pre-trained Language Models via Contrastive Self-Debiasing with Double Data Augmentation

Introduction: Pre-trained language models (PLMs) are now widely applied in natural language processing, but they inherit and amplify the social biases present in their training corpora. These biases can lead to unpredictable risks in real-world applications of PLMs, such as automatic job screening systems te...
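The excerpt ends before the method, but the title indicates contrastive self-debiasing built on two augmented views of each training sentence. The sketch below is purely illustrative of that general idea; the word-swap rule, the encoder interface, and the loss form are assumptions, not the paper's actual recipe.

    import torch
    import torch.nn.functional as F

    # Toy counterfactual augmentation: swap gendered terms to create a second view.
    SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him"}

    def counterfactual_view(sentence: str) -> str:
        return " ".join(SWAPS.get(tok, tok) for tok in sentence.split())

    def debiasing_contrastive_loss(encode, sentences, temperature=0.05):
        """Pull each sentence's embedding toward that of its counterfactual view.

        encode: placeholder callable mapping a list of strings to a (batch, dim)
        tensor of sentence embeddings (e.g. a PLM's pooled outputs).
        """
        views = [counterfactual_view(s) for s in sentences]
        z1 = F.normalize(encode(sentences), dim=-1)
        z2 = F.normalize(encode(views), dim=-1)
        logits = z1 @ z2.t() / temperature
        targets = torch.arange(len(sentences), device=logits.device)
        return F.cross_entropy(logits, targets)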