Mitigating Social Biases of Pre-trained Language Models via Contrastive Self-Debiasing with Double Data Augmentation
Introduction: Pre-trained language models (PLMs) are now widely applied across natural language processing, but they inherit, and can amplify, the social biases present in their training corpora. Such biases may pose unpredictable risks when PLMs are deployed in real-world applications, such as automatic job screening systems te...