TY - JOUR
T1 - STMAP: A novel semantic text matching model augmented with embedding perturbations
AU - Wang, Yanhao
AU - Zhang, Baohua
AU - Liu, Weikang
AU - Cai, Jiahao
AU - Zhang, Huaping
N1 - Publisher Copyright:
© 2023 The Author(s)
PY - 2024/1
Y1 - 2024/1
N2 - Semantic text matching models have achieved outstanding performance, but traditional methods may fail on few-shot learning problems, and data augmentation techniques can suffer from semantic deviation. To address this, we propose STMAP, which approaches the problem from a data augmentation perspective based on Gaussian noise and a noise-mask signal applied to the embeddings. We also employ an adaptive optimization network to dynamically optimize the multiple training targets generated by data augmentation. We evaluated our model on four English datasets, MRPC, SciTail, SICK, and RTE, achieving scores of 90.3%, 94.2%, 88.9%, and 68.8%, respectively, with state-of-the-art (SOTA) results on three of them. Furthermore, we assessed our approach on three Chinese datasets and achieved an average improvement of 1.3% over the baseline model. In the few-shot learning experiment, our model outperformed the baseline by 5%, especially when the data volume was reduced by around 40%.
AB - Semantic text matching models have achieved outstanding performance, but traditional methods may fail on few-shot learning problems, and data augmentation techniques can suffer from semantic deviation. To address this, we propose STMAP, which approaches the problem from a data augmentation perspective based on Gaussian noise and a noise-mask signal applied to the embeddings. We also employ an adaptive optimization network to dynamically optimize the multiple training targets generated by data augmentation. We evaluated our model on four English datasets, MRPC, SciTail, SICK, and RTE, achieving scores of 90.3%, 94.2%, 88.9%, and 68.8%, respectively, with state-of-the-art (SOTA) results on three of them. Furthermore, we assessed our approach on three Chinese datasets and achieved an average improvement of 1.3% over the baseline model. In the few-shot learning experiment, our model outperformed the baseline by 5%, especially when the data volume was reduced by around 40%.
KW - Adaptive networks
KW - Data augmentation
KW - Embedding perturbations
KW - Few-shot
KW - Semantic text matching
UR - http://www.scopus.com/inward/record.url?scp=85181707384&partnerID=8YFLogxK
U2 - 10.1016/j.ipm.2023.103576
DO - 10.1016/j.ipm.2023.103576
M3 - Article
AN - SCOPUS:85181707384
SN - 0306-4573
VL - 61
JO - Information Processing and Management
JF - Information Processing and Management
IS - 1
M1 - 103576
ER -