融合序列语法知识的卷积-自注意力生成式摘要方法

Senlin Luo, Ruiyi Wang, Qian Wu*, Limin Pan, Zhouting Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Abstractive summarization analyzes the core ideas of a document and rephrases them, or introduces new words, to generate a summary that covers the whole document. However, the encoder-decoder model cannot fully extract syntax, which causes the generated summary to violate grammar rules. The recurrent neural network tends to forget historical information and cannot perform parallel computation during training, so the main idea of the summary is not prominent and encoding is slow. To address these problems, a new abstractive summarization method that fuses sequential syntax into a convolution-self attention model was proposed. First, a phrase structure tree is constructed for the document and its sequential syntax is embedded into the encoder, so that the method can make better use of syntax during encoding. Then, the convolution-self attention model replaces the recurrent neural network for encoding, learning both global and local information from the document. Experimental results on the CNN/Daily Mail dataset show that the proposed method outperforms state-of-the-art methods. At the same time, the generated summaries are more grammatical, their main ideas are more salient, and the encoding speed of the model is faster.
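
The abstract describes the encoder only at a high level. The following is a minimal sketch in PyTorch, not the authors' implementation, of how a convolution-self attention encoder layer with added sequential-syntax embeddings could be organized: local context from a convolution, global context from multi-head self-attention, and syntax information injected by adding tag embeddings to the word embeddings. All layer choices, hyperparameters, and the way phrase-structure tags are embedded are illustrative assumptions.

    # Minimal sketch (not the paper's code): convolution-self attention encoder layer.
    # Sequential syntax is assumed to arrive as per-token tag embeddings that are
    # simply added to the word embeddings before encoding.
    import torch
    import torch.nn as nn

    class ConvSelfAttentionEncoderLayer(nn.Module):
        def __init__(self, d_model=512, n_heads=8, kernel_size=3):
            super().__init__()
            # Depth-wise 1D convolution captures local context around each token.
            self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                                  padding=kernel_size // 2, groups=d_model)
            # Multi-head self-attention captures global, document-level context.
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)

        def forward(self, x):
            # x: (batch, seq_len, d_model) = word embeddings + syntax-tag embeddings
            local = self.conv(x.transpose(1, 2)).transpose(1, 2)
            x = self.norm1(x + local)            # local (convolutional) view
            global_out, _ = self.attn(x, x, x)   # global (self-attention) view
            return self.norm2(x + global_out)

    # Usage sketch with toy tensors: word embeddings plus embeddings of
    # linearized phrase-structure tags (e.g. NP, VP), one per token.
    words = torch.randn(2, 30, 512)    # toy word embeddings
    syntax = torch.randn(2, 30, 512)   # toy sequential-syntax embeddings (assumed)
    encoder = ConvSelfAttentionEncoderLayer()
    hidden = encoder(words + syntax)   # (2, 30, 512)

Unlike a recurrent encoder, every layer here processes all positions in parallel, which is the source of the faster encoding the abstract reports.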

Translated title of the contribution: A Convolution-Self Attention Abstractive Summarization Method Fusing Sequential Grammar Knowledge
Original language: Chinese (Traditional)
Pages (from-to): 93-101
Number of pages: 9
Journal: Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology
Volume: 41
Issue: 1
DOI
Publication status: Published - Jan 2021

Keywords

  • Abstractive summarization
  • Attention mechanism
  • Convolution-self attention model
  • Encoder-decoder model
  • Grammatical analysis
