Complex Quantized Minimum Error Entropy with Fiducial Points: Theory and Application in Model Regression

Academic Background

In the fields of machine learning and signal processing, non-Gaussian noise often degrades model performance. Although the traditional Mean Squared Error (MSE) criterion is simple in theory and computation, its reliability is severely challenged by non-Gaussian noise. To address this issue, researchers have proposed various optimization criteria, among which the Minimum Error Entropy (MEE) criterion has garnered significant attention due to its excellent performance in suppressing impulse noise and outliers. However, the original MEE algorithm requires a double summation over the error samples, giving it quadratic (O(N²)) computational complexity in the number of samples, which limits its application to large-scale datasets.
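The cost of that double summation can be seen in a minimal sketch of the quadratic information-potential estimator that underlies MEE (the Gaussian kernel, bandwidth, and sample data below are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Quadratic information potential estimated with a Gaussian kernel;
    maximizing it corresponds to minimizing the error entropy.
    The double sum over all error pairs costs O(N^2) operations."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]               # all N*N pairwise differences
    kernel = np.exp(-diff**2 / (2 * sigma**2))   # Gaussian kernel on each pair
    return kernel.mean()                         # (1/N^2) * double summation

# Heavy-tailed (non-Gaussian) errors, where MSE-based criteria struggle
errors = np.random.default_rng(0).standard_t(df=2, size=1000)
ip = information_potential(errors, sigma=1.0)
```

Because the Gaussian kernel lies in (0, 1], the estimate is positive and bounded, but the pairwise difference matrix makes every evaluation scale with the square of the dataset size, which is exactly what quantization later avoids.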

To reduce this computational burden, Zheng et al. proposed the Quantized Minimum Error Entropy (QMEE), which significantly improves computational efficiency through quantization techniques. Building on that work, this study extends the technique to the complex domain, proposing the Complex Quantized Minimum Error Entropy (CQMEE), and provides theoretical proofs and experimental validation of its fundamental properties and convergence. CQMEE not only offers an efficient computational alternative but also opens new avenues for complex-valued regression tasks.

Source of the Paper

This paper is co-authored by Bingqing Lin, Guobing Qian, Zongli Ruan, Junhui Qian, and Shiyuan Wang, from the College of Electronic and Information Engineering at Southwest University, the College of Science at China University of Petroleum (East China), and the School of Microelectronics and Communication Engineering at Chongqing University, respectively. The paper was published in February 2025 in the journal Neural Networks, titled “Complex Quantized Minimum Error Entropy with Fiducial Points: Theory and Application in Model Regression.”

Research Process

1. Research Background and Problem

In regression analysis, selecting an appropriate criterion function (loss function) is crucial. Traditional MSE performs poorly in the face of non-Gaussian noise, prompting researchers to propose various optimization criteria, such as MEE and the Maximum Correntropy Criterion (MCC). MEE enhances model robustness against noise, effectively suppressing impulse noise and outliers. However, its inherent bias issue necessitates the introduction of fiducial points, leading to the development of the Minimum Error Entropy with Fiducial Points (MEEF) algorithm. Nevertheless, the double summation in MEEF results in high computational complexity, especially when handling large-scale datasets.

2. Proposal of Complex Quantized Minimum Error Entropy (CQMEE)

To address the computational complexity of MEEF, this study extends quantization techniques to the complex domain, proposing CQMEE. CQMEE significantly reduces computational complexity by approximating a large number of error samples with a compact representative subset through online Vector Quantization (VQ). Specifically, CQMEE is implemented through the following steps:

  1. Quantization Operation: Compresses the error sample set into a subset containing m members, reducing the computational burden through a quantization operator.
  2. Information Capacity Calculation: Computes the information capacity on the quantized subset, ensuring reduced computation without significantly compromising accuracy.
  3. Weight Update: Dynamically adjusts the codewords and coefficients in the quantized subset based on newly received error samples, ensuring model adaptability.
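The quantization step can be sketched with a generic online VQ rule: each incoming complex error is merged into the nearest codeword if it lies within a threshold, and otherwise opens a new codeword. This is a minimal illustration of the general technique, not the paper's exact operator; the threshold value and sample data are assumptions.

```python
import numpy as np

def online_vq(errors, epsilon=0.5):
    """Online vector quantization of complex error samples.
    Returns the codewords and the number of samples merged into each,
    so later sums over N samples become sums over m << N codewords."""
    codebook, counts = [], []
    for e in errors:
        if codebook:
            dists = np.abs(np.asarray(codebook) - e)  # modulus distance in C
            j = int(np.argmin(dists))
            if dists[j] <= epsilon:
                counts[j] += 1                        # merge into codeword j
                continue
        codebook.append(e)                            # open a new codeword
        counts.append(1)
    return np.asarray(codebook), np.asarray(counts)

rng = np.random.default_rng(1)
errs = rng.normal(size=500) + 1j * rng.normal(size=500)
code, cnt = online_vq(errs, epsilon=0.5)
```

The counts act as the coefficients mentioned in the weight-update step: any kernel sum over the raw errors is approximated by a count-weighted sum over the much smaller codebook.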

3. Theoretical Proofs and Property Analysis

This study provides theoretical proofs of CQMEE’s convergence and elaborates on its fundamental properties:

  1. Property 1: When the quantization threshold ε = 0, CQMEE is equivalent to the original MEEF.
  2. Property 2: When λ = 0, CQMEE degenerates into the Complex Maximum Correntropy Criterion (CMCC); when λ = 1 and σq → ∞, it degenerates into the complex quantized MEE without fiducial points.
  3. Property 3: The information capacity of CQMEE is positive and bounded, ensuring the stability of the optimization process.
  4. Property 4: When σ1 and σ2 are set to extremely large values, the information capacity of CQMEE approximates a weighted sum of the second-order moments of the errors about the centers {0, c1, …, cm}, thus linking it to the traditional Complex Mean Squared Error (CMSE) criterion.
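The degenerate cases in Property 2 can be illustrated with a generic λ-weighted fiducial-point cost: a convex mix of a correntropy (MCC-style) term about the origin and an MEE-style pairwise term. This exact form and its normalization are assumptions for illustration and may differ from the paper's definition; it merely shows why λ = 0 leaves only the correntropy term.

```python
import numpy as np

def gauss(x, sigma):
    """Gaussian kernel on the complex modulus of x."""
    return np.exp(-np.abs(x)**2 / (2 * sigma**2))

def weighted_cost(errors, lam, sigma1, sigma2):
    """Illustrative lambda-weighted fiducial-point cost (assumed form):
    (1 - lam) * correntropy about the fiducial point 0
    + lam * pairwise MEE-style information potential."""
    e = np.asarray(errors)
    mcc = gauss(e, sigma1).mean()                        # fiducial point at 0
    mee = gauss(e[:, None] - e[None, :], sigma2).mean()  # pairwise term
    return (1 - lam) * mcc + lam * mee

rng = np.random.default_rng(2)
e = rng.normal(size=200) + 1j * rng.normal(size=200)

# lambda = 0 recovers the pure correntropy (CMCC-style) term exactly
assert np.isclose(weighted_cost(e, 0.0, 1.0, 1.0), gauss(e, 1.0).mean())
```

The same sketch makes Property 3 plausible: both terms are means of kernel values in (0, 1], so the mixed cost stays positive and bounded for any λ in [0, 1].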

4. Experimental Validation

To validate the effectiveness of CQMEE, this study conducted linear and nonlinear regression experiments under various noise conditions. The results demonstrate that CQMEE performs excellently on noise-contaminated datasets, outperforming existing methods in accuracy while significantly improving computational efficiency. The experiments include:

  1. Linear Regression Experiments: Compared against optimization criteria such as MEEF and CMCC under different noise conditions, CQMEE maintains high accuracy while significantly reducing training time.
  2. Nonlinear Regression Experiments: On multiple standard datasets, CQMEE outperforms other methods in Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) on the test set, demonstrating its superiority in complex noise environments.
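For complex-valued regression, the RMSE and MAE metrics used in such comparisons are naturally defined on the modulus of the complex error; the toy values below are illustrative, not results from the paper.

```python
import numpy as np

def complex_rmse(y_true, y_pred):
    """Root mean square error using the modulus of the complex error."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    return np.sqrt(np.mean(np.abs(err)**2))

def complex_mae(y_true, y_pred):
    """Mean absolute error using the modulus of the complex error."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(np.abs(err))

# Toy targets and predictions; the error moduli are 1, 1, and 0.5
y_true = np.array([1 + 1j, 2 - 1j, 0.5 + 0j])
y_pred = np.array([1 + 0j, 2 + 0j, 0.0 + 0j])
rmse = complex_rmse(y_true, y_pred)  # sqrt((1 + 1 + 0.25) / 3)
mae = complex_mae(y_true, y_pred)    # (1 + 1 + 0.5) / 3
```

Working on the modulus keeps both metrics real-valued and reduces to the usual RMSE/MAE when the data happen to be real.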

Main Results and Conclusions

1. Main Results

  • Improved Computational Efficiency: Through quantization techniques, CQMEE significantly reduces the computational complexity of MEEF, especially when handling large-scale datasets, with a substantial reduction in training time.
  • High-Precision Regression: Under various noise conditions, CQMEE performs excellently in regression tasks, not only achieving higher accuracy than traditional methods but also demonstrating stronger robustness in handling complex noise.
  • Theoretical Support: The convergence and fundamental properties of CQMEE are theoretically proven, ensuring its stability and reliability in the optimization process.

2. Conclusions

As a new criterion in information-theoretic learning, CQMEE successfully addresses the high computational complexity of MEEF by introducing quantization techniques into the complex domain. Experimental results show that CQMEE performs excellently in handling noise-contaminated datasets, outperforming existing methods in terms of accuracy while significantly improving computational efficiency. The proposal of CQMEE provides a new solution for handling complex data regression tasks, with significant theoretical and practical value.

Other Valuable Information

Future research directions for this study include extending CQMEE to a broader range of kernel functions, further enhancing its applicability in various computational learning scenarios. Additionally, the application of CQMEE in fields such as communication systems, advanced signal processing, and quantum computing is also worth exploring.