S-Explainer: A self-explaining NLP framework based on cooperative rationalization and multi-hop evidence reasoning

Abstract
Building self-explaining NLP models is a powerful approach in Explainable Artificial Intelligence (XAI). Selective Rationalization (SR) and Multi-Hop Question Answering (MHQA) have both received increasing research attention. SR models select text segments from the input that support the downstream prediction task and present them as the rationale for the prediction. An MHQA reasoning model first retrieves context relevant to the question from multiple documents, then combines several evidence documents for logical reasoning, and finally forms a reasoning path that yields the correct answer. However, SR models often suffer from spurious feature correlations and the degeneration problem, while MHQA reasoning is easily disrupted by multiple disjoint text fragments or entities, which can break the reasoning chain and prevent the model from reaching the correct answer. To address these challenges, we propose S-Explainer, a self-explaining NLP framework based on cooperative rationalization and multi-hop evidence reasoning. S-Explainer integrates the SR and MHQA reasoning tasks into a single framework, improving task performance and enhancing model robustness through a cooperative game and a two-stage refined selection method. Experiments on three real-world datasets verify the effectiveness of the proposed method.
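The select-then-predict pattern underlying selective rationalization can be illustrated with a minimal sketch: a selector scores input tokens, the top-scoring tokens form the rationale, and a predictor sees only that rationale. This is a toy illustration under assumed stand-in scoring and prediction functions, not the paper's S-Explainer architecture; the keyword sets and the `k` cutoff are hypothetical.

```python
# Toy select-then-predict rationalization sketch (illustrative only,
# not the S-Explainer model from the paper).

def selector(tokens, keywords):
    # Stand-in scorer: 1.0 for tokens in a hypothetical keyword set, else 0.0.
    # In a real SR model this would be a learned neural scoring module.
    return [1.0 if t in keywords else 0.0 for t in tokens]

def select_rationale(tokens, scores, k):
    # Keep the k highest-scoring tokens, preserving input order.
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return [tokens[i] for i in sorted(ranked)]

def predictor(rationale, positive_words):
    # Stand-in classifier over the rationale only: majority vote on sentiment.
    votes = sum(1 if w in positive_words else -1 for w in rationale)
    return 1 if votes > 0 else 0

tokens = "the movie was great but the seats were awful".split()
scores = selector(tokens, {"great", "awful", "movie"})
rationale = select_rationale(tokens, scores, k=2)   # ['movie', 'great']
label = predictor(rationale, {"great", "movie"})    # 1
```

Because the predictor only ever sees the selected tokens, the rationale is a faithful explanation by construction; the cooperative-game framing in the abstract concerns how the selector and predictor are trained jointly so the selection stays informative.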
| Original language | English |
|---|---|
| Article number | 114754 |
| Journal | Knowledge-Based Systems |
| Volume | 336 |
| DOIs | |
| Publication status | Published - 15 Mar 2026 |
| Externally published | Yes |
Keywords
- Multi-Hop question answering (MHQA)
- Natural language processing for IR
- NLP Interpretability
- XAI