Mitigating Framing Bias with Polarity Minimization Loss: Experimental Details

This paper is available on arXiv under the CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Yejin Bang, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;

(2) Nayeon Lee, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;

(3) Pascale Fung, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology.

A. Experimental Details

BERTSCORE-F1 For assessing whether salient information is preserved, we adopted the token-embedding-based metric BERTScore-F1. We used the pre-trained ‘microsoft/deberta-xlarge-mnli’ checkpoint provided by Zhang et al. (2020), which is the state-of-the-art model for this metric.
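For reference, the metric can be computed with the public bert_score package. The snippet below is a minimal sketch; the candidate and reference strings are illustrative placeholders, not data from the paper.

```python
# Minimal sketch of computing BERTScore-F1 with the
# 'microsoft/deberta-xlarge-mnli' checkpoint (pip install bert-score).
# The candidate/reference strings are illustrative placeholders.
from bert_score import score

cands = ["Trump is scheduled to speak at CPAC this weekend."]
refs = ["Former President Trump will address the CPAC conference."]

P, R, F1 = score(
    cands,
    refs,
    model_type="microsoft/deberta-xlarge-mnli",  # checkpoint used in the paper
    verbose=False,
)
print(f"BERTScore-F1: {F1.mean().item():.4f}")
```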

A.1 Human Evaluation

We conducted the evaluation with 30 randomly selected samples. For each sample, we presented two articles generated by the two models (in random order) along with an issue sentence describing what the articles are about. The annotator was then asked to answer the question “Which article is more biased?”, following Spinde et al. (2021) and Lee et al. (2022). We collected three annotations for each sample and took the majority vote, as sketched below. Since many of the test samples are closely related to U.S. politics, we recruited three annotators who are not U.S. citizens, nationals, or residents to minimize any political bias or personal preference in the evaluation. All three annotators self-identified as politically moderate and were qualified to conduct the evaluation in English (all received their tertiary education in English).
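The majority-vote aggregation is straightforward; the sketch below illustrates it with hypothetical annotation labels (not actual evaluation data).

```python
# Minimal sketch of majority voting over three annotations per sample.
# Labels are illustrative: "A"/"B" mark which article was judged more biased.
from collections import Counter

annotations = [
    ["A", "A", "B"],  # sample 1: two of three annotators chose article A
    ["B", "B", "B"],  # sample 2: unanimous
]

majority = [Counter(votes).most_common(1)[0][0] for votes in annotations]
print(majority)  # ['A', 'B']
```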

To verify that the selection of the less biased article in each pair was not random, we conducted a binomial test on the evaluation results. The null hypothesis was “The selection of articles generated by LR-INFO (our proposed method) as less biased is random.” The test yielded a p-value of 0.019, rejecting the null hypothesis (p < 0.05). Therefore, the preference for articles generated by LR-INFO as less biased is not random.
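As a sketch of the test, assuming a hypothetical count k of pairs in which LR-INFO was judged less biased (the exact count and the choice of alternative hypothesis are not reported in this section), the computation with SciPy would look like:

```python
# Minimal sketch of the binomial test checking whether preferring LR-INFO
# as less biased could be random chance (null: selection probability = 0.5).
# k is a hypothetical count, NOT the value reported in the paper, and the
# one-sided alternative is shown only for illustration.
from scipy.stats import binomtest

n = 30  # number of evaluated sample pairs
k = 21  # hypothetical: pairs where LR-INFO was judged less biased

result = binomtest(k, n, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.3f}")  # reject the null if p < 0.05
```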

When the model is trained with the polarity minimization loss, it learns to remove bias-inducing information, whereas BARTNEUSFT-T struggles to do so. As illustrated in Table 4, our model LR-INFO removed the bias-inducing information “Trump is expected to attack President Joe Biden’s immigration policies” from the summary of the issue “Trump to speak at CPAC”, while BARTNEUSFT-T failed to remove it.

Table 4: Human Evaluation Example.

Table 5: Experimental results for our models with the proposed polarity minimization loss (LR-VALENCE, LR-AROUSAL, LR-INFO, LRC-AROUSAL, LRC-INFO) with varying weights (λ). For the framing bias metric, lower is better (↓); for the other scores, higher is better (↑). Results for our models with the polarity minimization loss (denoted with +) are reported with the best λ. A full exploration of λ is available in the Appendix and Fig. 2.
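For intuition on how the weight λ enters training, the sketch below shows one plausible way to combine a standard generation loss with a polarity minimization term. The polarity term and its values are assumptions for illustration, not the paper’s exact implementation.

```python
# Illustrative sketch: combining a generation loss with a polarity
# minimization term weighted by lambda. `pol` is a hypothetical stand-in
# for the paper's polarity term (e.g., a valence-, arousal-, or
# information-based signal).
import torch

def combined_loss(gen_loss: torch.Tensor,
                  polarity_loss: torch.Tensor,
                  lam: float = 0.5) -> torch.Tensor:
    """Total loss = generation loss + lambda * polarity minimization loss."""
    return gen_loss + lam * polarity_loss

# Usage with placeholder scalar losses:
gen = torch.tensor(2.31)   # e.g., cross-entropy from BART fine-tuning
pol = torch.tensor(0.87)   # e.g., an LR-INFO-style polarity term
print(combined_loss(gen, pol, lam=0.5))  # tensor(2.7450)
```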

