TY - JOUR
T1 - TextFormer: A Query-based End-to-end Text Spotter with Mixed Supervision
AU - Zhai, Yukun
AU - Zhang, Xiaoqiang
AU - Qin, Xiameng
AU - Zhao, Sanyuan
AU - Dong, Xingping
AU - Shen, Jianbing
N1 - Publisher Copyright:
© Institute of Automation, Chinese Academy of Sciences and Springer-Verlag GmbH Germany, part of Springer Nature 2024.
PY - 2024/8
Y1 - 2024/8
AB - End-to-end text spotting is a vital computer vision task that aims to integrate scene text detection and recognition into a unified framework. Typical methods rely heavily on region-of-interest (RoI) operations to extract local features and on complex post-processing steps to produce final predictions. To address these limitations, we propose TextFormer, a query-based end-to-end text spotter with a transformer architecture. Specifically, using a query embedding per text instance, TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multitask modeling. It allows for mutual training and optimization of the classification, segmentation, and recognition branches, resulting in deeper feature sharing without sacrificing flexibility or simplicity. Additionally, we design an adaptive global aggregation (AGG) module to transform global features into sequential features for reading arbitrarily-shaped texts, which overcomes the suboptimization problem of RoI operations. Furthermore, mixed supervision exploits potential corpus information from weak annotations up to full labels, further improving both text detection and end-to-end text spotting results. Extensive experiments on various bilingual (i.e., English and Chinese) benchmarks demonstrate the superiority of our method. In particular, on the TDA-ReCTS dataset, TextFormer surpasses the state-of-the-art method by 13.2% in terms of 1-NED.
KW - End-to-end text spotting
KW - arbitrarily-shaped texts
KW - mixed supervision
KW - multitask modeling
KW - transformer
UR - http://www.scopus.com/inward/record.url?scp=85184205659&partnerID=8YFLogxK
DO - 10.1007/s11633-023-1460-6
M3 - Article
AN - SCOPUS:85184205659
SN - 2731-538X
VL - 21
SP - 704
EP - 717
JO - Machine Intelligence Research
JF - Machine Intelligence Research
IS - 4
ER -