Robustness-Eva-MRC: Assessing and analyzing the robustness of neural models in extractive machine reading comprehension

Jingliang Fang, Hua Xu, Zhijing Wu*, Kai Gao, Xiaoyin Che, Haotian Hui

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Deep neural networks, despite their remarkable success in various language understanding tasks, have been found vulnerable to adversarial attacks and subtle input perturbations, revealing a robustness shortfall. To explore this issue, this paper presents Robustness-Eva-MRC, an interactive platform designed to assess and analyze the robustness of pre-trained and large-scale language models on extractive machine reading comprehension (MRC) tasks. The platform integrates eight adversarial attack methods at the character, word, and sentence levels and applies them to five MRC datasets, thereby constructing challenging adversarial test sets. It then evaluates MRC models on both the original and adversarial sets, yielding insights into their robustness through the resulting performance gaps. Moreover, Robustness-Eva-MRC provides comprehensive visualizations and detailed case studies, enhancing the understanding of model robustness. A screencast video and additional material are available at https://github.com/distantJing/Robustness-Eva-MRC.
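The abstract outlines the platform's core loop: perturb the inputs, re-evaluate the model, and read robustness off the gap between original and adversarial performance. The sketch below is a rough illustration of that loop only, not the paper's implementation: the model checkpoint, the toy examples, the char_swap perturbation, and the exact-match scorer are all assumptions standing in for the platform's eight attack methods and five MRC datasets.

```python
# Minimal sketch of robustness evaluation via a performance gap, assuming
# a public HuggingFace extractive-QA checkpoint and a toy character-level
# perturbation. Illustrative only; not the platform's actual attack suite.
import random

from transformers import pipeline  # pip install transformers

# Any extractive-QA checkpoint works; this is a common public baseline.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def char_swap(text: str, rate: float = 0.5, seed: int = 0) -> str:
    """Hypothetical character-level attack: swap adjacent characters
    in a randomly chosen subset of words."""
    rng = random.Random(seed)
    words = text.split()
    for i, w in enumerate(words):
        if len(w) > 3 and rng.random() < rate:
            j = rng.randrange(len(w) - 1)
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

def exact_match(examples) -> float:
    """Fraction of examples whose predicted span equals the gold answer."""
    hits = 0
    for ex in examples:
        pred = qa(question=ex["question"], context=ex["context"])["answer"]
        hits += pred.strip().lower() == ex["answer"].strip().lower()
    return hits / len(examples)

# Two toy SQuAD-style examples; the real platform uses five MRC datasets.
original = [
    {"context": "The platform was released in November 2023.",
     "question": "When was the platform released?",
     "answer": "November 2023"},
    {"context": "Robustness-Eva-MRC integrates eight attack methods.",
     "question": "How many attack methods are integrated?",
     "answer": "eight"},
]
adversarial = [dict(ex, context=char_swap(ex["context"]))
               for ex in original]

em_orig, em_adv = exact_match(original), exact_match(adversarial)
print(f"EM original:    {em_orig:.2f}")
print(f"EM adversarial: {em_adv:.2f}")
print(f"robustness gap: {em_orig - em_adv:.2f}")
```

A large gap between the two exact-match scores signals brittleness to that perturbation type; the platform extends this idea across attack levels, datasets, and models, and layers visualizations and case studies on top.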

Original language: English
Article number: 200287
Journal: Intelligent Systems with Applications
Volume: 20
Publication status: Published - Nov 2023
Externally published: Yes

Keywords

  • Analysis
  • Extractive machine reading comprehension
  • Robustness
