ABioNER: A BERT-Based Model for Arabic Biomedical Named-Entity Recognition

Nada Boudjellal, Huaping Zhang*, Asif Khan, Arshad Ahmad, Rashid Naseem, Jianyun Shang, Lin Dai

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

A huge volume of data, mostly unstructured text, is added to the web every day, which significantly increases the need for information extraction and NLP systems. Named-entity recognition is a key step toward understanding text data efficiently and saving time and effort. As a globally dominant language, English accounts for most of the research conducted in this field, especially in the biomedical domain, whereas Arabic suffers from a lack of resources. This work presents a BERT-based model that identifies biomedical named entities (specifically disease and treatment entities) in Arabic text, and investigates how effectively pretraining a monolingual BERT model on a small-scale biomedical dataset enhances the model's understanding of Arabic biomedical text. The model was compared with two state-of-the-art models (AraBERT and multilingual BERT cased) and outperformed both with an 85% F1-score.
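The paper itself does not include code, but the approach it describes (an Arabic BERT encoder fine-tuned for token classification over disease and treatment tags) can be sketched as follows. This is a minimal illustration using the Hugging Face Transformers library; the base checkpoint, BIO label scheme, hyperparameters, and toy data are assumptions for demonstration, not the authors' released ABioNER implementation or corpus.

```python
# Minimal sketch (assumed, not the authors' code): fine-tune an Arabic BERT
# encoder for biomedical NER as token classification.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments, Trainer,
                          DataCollatorForTokenClassification)
from datasets import Dataset

# BIO tags for the two entity types reported in the paper.
labels = ["O", "B-DISEASE", "I-DISEASE", "B-TREATMENT", "I-TREATMENT"]
label2id = {l: i for i, l in enumerate(labels)}
id2label = {i: l for l, i in label2id.items()}

# Hypothetical base checkpoint; in the paper, a monolingual Arabic BERT is
# further pretrained on a small biomedical corpus before this fine-tuning step.
checkpoint = "aubmindlab/bert-base-arabertv02"  # assumption for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    checkpoint, num_labels=len(labels), id2label=id2label, label2id=label2id)

# Toy annotated sentence standing in for the paper's corpus:
# "يعالج الأسبرين الصداع" (aspirin treats headache).
train = Dataset.from_dict({
    "tokens": [["يعالج", "الأسبرين", "الصداع"]],
    "ner_tags": [[0, 3, 1]],  # O, B-TREATMENT, B-DISEASE
})

def tokenize_and_align(batch):
    # Tokenize pre-split words and align word-level tags to sub-word tokens.
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = enc.word_ids(batch_index=i)
        prev, lab = None, []
        for wid in word_ids:
            if wid is None:
                lab.append(-100)        # ignore special tokens in the loss
            elif wid != prev:
                lab.append(tags[wid])   # label only the first sub-word piece
            else:
                lab.append(-100)
            prev = wid
        all_labels.append(lab)
    enc["labels"] = all_labels
    return enc

train = train.map(tokenize_and_align, batched=True,
                  remove_columns=train.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("abioner-ner", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```

In practice, the same token-classification head would be attached to the biomedical-pretrained checkpoint, and F1 would be computed over the predicted entity spans on a held-out test set.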

Original language: English
Article number: 6633213
Journal: Complexity
Volume: 2021
DOIs
Publication status: Published - 2021
