Analyzing Content of Paris Climate Pledges with Computational Linguistics

The Paris Agreement is a crucial framework for global climate action, with countries outlining their climate goals and strategies through Nationally Determined Contributions (NDCs). While existing research has primarily focused on assessing the mitigation targets within NDCs, the broader textual content of these documents has received little system...

Online Signature Watermarking in the Transform Domain

Academic Background: With the rapid growth of digital content, the importance of digital signatures in identity verification and content authentication has become increasingly prominent. However, the security and integrity of digital signatures face significant challenges. To protect the authenticity of signatures and prevent tampering, digital wate...
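To make the transform-domain idea concrete, below is a minimal sketch of generic DCT-based watermark embedding in Python. The block size, the chosen mid-frequency coefficient, and the strength alpha are illustrative assumptions and not the scheme proposed in this paper.

```python
# A generic DCT-domain watermark embedding sketch (illustrative only; block size,
# coefficient position [3, 4], and strength alpha are assumptions, not the paper's method).
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(image: np.ndarray, bits: np.ndarray, alpha: float = 8.0) -> np.ndarray:
    """Embed one watermark bit per 8x8 block into a mid-frequency DCT coefficient."""
    h, w = image.shape
    out = image.astype(np.float64).copy()
    idx = 0
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if idx >= len(bits):
                return out
            block = dctn(out[r:r+8, c:c+8], norm="ortho")
            # Nudge a mid-frequency coefficient up or down according to the bit value.
            block[3, 4] += alpha if bits[idx] else -alpha
            out[r:r+8, c:c+8] = idctn(block, norm="ortho")
            idx += 1
    return out

# Example: embed 16 random bits into a synthetic 64x64 "signature" image.
rng = np.random.default_rng(0)
signature = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
watermarked = embed_watermark(signature, rng.integers(0, 2, size=16))
```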

A Communication-Efficient Distributed Frank-Wolfe Online Algorithm with an Event-Triggered Mechanism

Academic Background: In the era of big data, distributed learning has become an effective method for solving large-scale online machine learning problems. However, frequent communication and projection operations in distributed learning incur high communication and computational costs. Especially in high-dimensional constrained optimization problems...
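For context, the classic Frank-Wolfe (conditional gradient) method avoids projections entirely by calling a linear minimization oracle over the constraint set. The sketch below shows a single centralized step on an l1-ball with the usual 2/(t+2) step size; it is background only, not the paper's distributed, event-triggered algorithm, and the quadratic loss in the example is an assumption.

```python
# One projection-free Frank-Wolfe step on an l1-ball constraint (centralized baseline;
# the loss and radius are illustrative assumptions).
import numpy as np

def lmo_l1_ball(grad: np.ndarray, radius: float = 1.0) -> np.ndarray:
    """Linear minimization oracle over {x : ||x||_1 <= radius}: pick a signed vertex."""
    v = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    v[i] = -radius * np.sign(grad[i])
    return v

def frank_wolfe_step(x: np.ndarray, grad: np.ndarray, t: int) -> np.ndarray:
    """Move toward the oracle vertex with the standard step size 2 / (t + 2)."""
    gamma = 2.0 / (t + 2.0)
    return (1.0 - gamma) * x + gamma * lmo_l1_ball(grad)

# Example: a few steps on f(x) = 0.5 * ||x - b||^2 over the unit l1-ball (gradient is x - b).
b = np.array([0.6, -0.3, 0.1])
x = np.zeros_like(b)
for t in range(50):
    x = frank_wolfe_step(x, x - b, t)
```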

Rain Streak Removal Using Improved Generative Adversarial Network with Loss Function Optimization

Academic Background: In the field of computer vision, rain streaks are a common interference factor, especially in outdoor surveillance, autonomous driving, and intelligent transportation systems. Rain streaks significantly degrade image quality, affecting the recognition and analysis capabilities of visual systems. Traditional rain streak removal m...
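As a rough illustration of how derain generators are commonly trained, the sketch below pairs a binary cross-entropy adversarial term with an L1 reconstruction term. The weight lambda_pix and the surrounding networks are assumptions; this is not the optimized loss proposed in the paper.

```python
# Generic derain generator loss: adversarial term + pixel-level L1 term
# (lambda_pix and all tensors are illustrative assumptions).
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits: torch.Tensor,
                   derained: torch.Tensor,
                   clean: torch.Tensor,
                   lambda_pix: float = 100.0) -> torch.Tensor:
    # Adversarial term: push the discriminator to label derained outputs as "real".
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Pixel-level term: keep the derained output close to the clean ground truth.
    pix = F.l1_loss(derained, clean)
    return adv + lambda_pix * pix

# Quick check with random tensors standing in for network outputs.
loss = generator_loss(torch.randn(4, 1), torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
```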

Three-Way Decision Approach Based on Utility and Dynamic Localization Transformational Procedures within a Circular Q-Rung Orthopair Fuzzy Set for Ranking and Grading Large Language Models

Academic Background: With the rapid development of artificial intelligence (AI) and natural language processing (NLP), large language models (LLMs) have made significant progress in both academia and industry. However, despite the outstanding performance of LLMs in multiple NLP tasks, no single model has been able to simultaneously meet all task req...
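For readers unfamiliar with the underlying set model, a q-rung orthopair fuzzy set pairs a membership degree \(\mu\) with a non-membership degree \(\nu\) under the standard constraint below; the circular variant additionally attaches a radius to each pair, and the paper's exact circular formulation and utility functions are not reproduced here.

\[
A = \{\, \langle x,\ \mu_A(x),\ \nu_A(x) \rangle \mid x \in X \,\}, \qquad
0 \le \mu_A(x)^q + \nu_A(x)^q \le 1, \quad q \ge 1,
\]
\[
\pi_A(x) = \bigl(1 - \mu_A(x)^q - \nu_A(x)^q\bigr)^{1/q},
\]

where \(\pi_A(x)\) is the degree of indeterminacy; larger \(q\) enlarges the admissible space of \((\mu, \nu)\) pairs, which is what lets evaluators express stronger hesitation when grading the models.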

Modeling Visual Attention Based on Gestalt Theory

Background Introduction: In the field of computer vision, research on visual attention models aims to simulate how the human visual system selects regions of interest from images or natural scenes. The human brain’s ability to quickly and accurately identify salient regions in visual scenes is of great significance in tasks such as image processing,...

Improved Alternative Queuing Method of Interval-Set Dissimilarity Measures and Possibility Degrees for Multi-Expert Multi-Criteria Decision-Making

Academic Background and Problem Introduction: In the field of Multi-Expert Multi-Criteria Decision-Making (MEMCDM), effectively handling uncertainty and imprecise information has always been a core challenge. Particularly in complex scenarios involving multiple experts and decision criteria, experts’ opinions often diverge, complicating the decision...
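One widely used possibility degree for comparing two non-degenerate intervals \(a = [a^-, a^+]\) and \(b = [b^-, b^+]\) is given below as background notation; the paper's interval-set dissimilarity measures and its specific possibility-degree definition may differ from this textbook form.

\[
p(a \ge b) \;=\; \min\!\Bigl\{\, \max\!\Bigl\{ \frac{a^{+} - b^{-}}{(a^{+} - a^{-}) + (b^{+} - b^{-})},\ 0 \Bigr\},\ 1 \Bigr\}.
\]

Intuitively, \(p(a \ge b) = 1\) when \(a\) lies entirely above \(b\), \(0\) when it lies entirely below, and a value in between when the intervals overlap, which is what allows partially conflicting expert assessments to be ranked.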

Attention-Enabled Multi-Layer Subword Joint Learning for Chinese Word Embedding

Academic Background: In recent years, Chinese word embeddings have attracted significant attention in the field of Natural Language Processing (NLP). Unlike English, Chinese has a complex and diverse character structure that presents unique challenges for semantic representation. Traditional word vector models, such as Word2Vec, often fail to fully cap...
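As background on how subword information is commonly injected into Chinese embeddings, the toy sketch below averages a word vector with the mean of its character vectors, in the spirit of subword-enhanced models such as CWE. It is not the attention-enabled multi-layer architecture studied here; the lookup tables, dimensions, and example word are invented for illustration.

```python
# Toy composition of word- and character-level vectors (illustrative assumption,
# not the paper's model): word representation = average of word and character vectors.
import numpy as np

dim = 8
rng = np.random.default_rng(0)
word_vecs = {"智能": rng.normal(size=dim)}                              # word-level table
char_vecs = {"智": rng.normal(size=dim), "能": rng.normal(size=dim)}    # character-level table

def compose(word: str) -> np.ndarray:
    """Average the word vector with the mean of its character vectors."""
    chars = np.mean([char_vecs[c] for c in word], axis=0)
    return 0.5 * (word_vecs[word] + chars)

vec = compose("智能")
```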

Leveraging Graph Convolutional Networks for Semi-Supervised Learning in Multi-View Non-Graph Data

Background Introduction: In the field of machine learning, Semi-Supervised Learning (SSL) has garnered significant attention due to its ability to leverage a small amount of labeled data and a large amount of unlabeled data for learning. Particularly in scenarios where data labeling is costly, graph-based semi-supervised learning methods have become...
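For orientation, the standard GCN propagation rule of Kipf and Welling can be applied to non-graph data by first building a kNN similarity graph over the samples. The Python sketch below shows that single-view baseline; the value of k, the feature and weight shapes, and the 0/1 adjacency are assumptions, and the paper's multi-view formulation is not reproduced.

```python
# Single-view baseline: kNN graph construction + one GCN layer
# H' = ReLU(D^-1/2 (A + I) D^-1/2 H W). Parameters are illustrative assumptions.
import numpy as np

def knn_adjacency(X: np.ndarray, k: int = 5) -> np.ndarray:
    """Symmetric 0/1 adjacency connecting each sample to its k nearest neighbors."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                       # never pick the sample itself
    A = np.zeros_like(d)
    rows = np.arange(len(X))[:, None]
    A[rows, np.argsort(d, axis=1)[:, :k]] = 1.0
    return np.maximum(A, A.T)                         # symmetrize

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN layer with self-loops and symmetric degree normalization."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)            # ReLU activation

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))                         # 20 samples, 10 features (one "view")
H1 = gcn_layer(knn_adjacency(X), X, rng.normal(size=(10, 4)))
```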

A Holistic Comparative Study of Large Language Models as Emotional Support Dialogue Systems

Academic Background: In recent years, with the rapid development of large language models (LLMs), their applications in the field of natural language processing (NLP) have become increasingly widespread. LLMs such as ChatGPT and LLaMA have demonstrated powerful capabilities in language generation and comprehension, even excelling in emotional expres...