TY - GEN
T1 - Eliminating Contextual Bias in Aspect-Based Sentiment Analysis
AU - An, Ruize
AU - Zhang, Chen
AU - Song, Dawei
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
AB - Pretrained language models (LMs) have achieved remarkable results in aspect-based sentiment analysis (ABSA). However, these models may still struggle in particular cases, e.g., detecting sentiments expressed towards targeted aspects only through implicit or adversarial expressions. Since it is hard for models to align implicit or adversarial expressions with their corresponding aspects, the sentiments predicted for the targeted aspects are largely influenced by expressions towards other aspects in the sentence. We refer to this phenomenon as contextual bias. To tackle the problem, we propose a flexible aspect-oriented debiasing method (Arde) that eliminates the harmful contextual bias without adjusting the underlying LMs. Intuitively, Arde calibrates the prediction towards the targeted aspect by subtracting the bias towards the context. Favorably, Arde is theoretically supported by counterfactual reasoning theory. Experiments conducted on the SemEval benchmark show that Arde empirically improves accuracy on contextually biased aspect sentiments without degrading accuracy on unbiased ones. Driven by the recent success of large language models (LLMs, e.g., ChatGPT), we further find that even LLMs can fail to address certain contextual biases, which can nevertheless be effectively tackled by Arde.
KW - aspect-based sentiment analysis
KW - counterfactual inference
KW - implicit sentiment
UR - http://www.scopus.com/inward/record.url?scp=85189754803&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-56027-9_6
DO - 10.1007/978-3-031-56027-9_6
M3 - Conference contribution
AN - SCOPUS:85189754803
SN - 9783031560262
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 90
EP - 107
BT - Advances in Information Retrieval - 46th European Conference on Information Retrieval, ECIR 2024, Proceedings
A2 - Goharian, Nazli
A2 - Tonellotto, Nicola
A2 - He, Yulan
A2 - Lipani, Aldo
A2 - McDonald, Graham
A2 - Macdonald, Craig
A2 - Ounis, Iadh
PB - Springer Science and Business Media Deutschland GmbH
T2 - 46th European Conference on Information Retrieval, ECIR 2024
Y2 - 24 March 2024 through 28 March 2024
ER -