跨语言知识蒸馏的视频中文字幕生成

Translated title of the contribution: Cross-Lingual Knowledge Distillation for Chinese Video Captioning

Jing Yi Hou, Ya Yun Qi, Xin Xiao Wu*, Yun De Jia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Video captioning aims to automatically generate natural language descriptions of a video, which requires understanding the visual content and describing it with grammatically accurate sentences. Video captioning has wide applications in video recommendation, vision assistance, human-robot interaction, and many other fields, and has attracted growing attention in computer vision and natural language processing. Although remarkable progress has been made on English video captioning, describing a video in other languages such as Chinese remains under-explored. In this paper, we investigate Chinese video captioning. The scarcity of paired videos and Chinese captions, however, makes it difficult to train a powerful Chinese video captioning model. Since many English video captioning methods and training datasets exist, a feasible approach is to generate Chinese captions by machine-translating English captions. However, both the differences between Chinese and Western cultures and the limitations of machine translation algorithms degrade the quality of the resulting Chinese captions. To this end, we propose a cross-lingual knowledge distillation method for Chinese video captioning. Built on a two-branch structure, our method not only generates Chinese captions directly from the video content, but also takes full advantage of easily accessible English video captions as privileged information to guide Chinese caption generation. Since the Chinese and English captions are semantically correlated with respect to the video content, our method learns cross-lingual knowledge from them and uses knowledge distillation to integrate the high-level semantic information of the English captions into Chinese caption generation. Meanwhile, the end-to-end training strategy keeps the training objective consistent with the captioning objective, effectively improving the performance of Chinese video captioning. Benefiting from the knowledge distillation mechanism, our method uses English caption data only during training; after training, it directly generates Chinese captions from the input video. To verify the universality and flexibility of our cross-lingual knowledge distillation method, we evaluate it with four mainstream visual captioning models, covering CNN-RNN, RNN-RNN, and CNN-CNN structures as well as a model based on the Top-Down attention mechanism. These models are widely used as backbones in a large number of visual captioning methods. Moreover, we extend the English video captioning dataset MSVD into a cross-lingual video captioning dataset with Chinese captions, called MSVD-CN. MSVD-CN contains 1970 video clips collected from the Internet and 11758 Chinese captions, in addition to the original 41 English captions per video in MSVD. To reduce annotation mistakes caused by annotators' typos or misunderstandings of the video content, we propose two automatic inspection methods that perform semantic and syntactic checks, respectively, on the collected manual annotations during data collection. Extensive experiments are carried out on MSVD-CN using four widely used video captioning evaluation metrics: BLEU, METEOR, ROUGE-L, and CIDEr. The results demonstrate the superiority of the proposed cross-lingual knowledge distillation for Chinese video captioning. Furthermore, we report qualitative results to further illustrate the effectiveness of our method.
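To make the two-branch idea concrete, below is a minimal, hypothetical PyTorch sketch of cross-lingual knowledge distillation for captioning: a Chinese branch generates captions from the video and is trained with standard cross-entropy, while an English branch encodes the privileged English caption and its sentence-level representation guides the Chinese branch through a distillation term. All module names, the GRU-based architecture, and the MSE distillation loss are illustrative assumptions and not the authors' actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchCaptioner(nn.Module):
    """Toy two-branch captioner: a Chinese decoding branch plus an English
    encoding branch used only during training as privileged information."""

    def __init__(self, feat_dim=2048, hidden_dim=512, zh_vocab=9000, en_vocab=10000):
        super().__init__()
        self.video_encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Chinese branch: generates captions from the video at training and test time.
        self.zh_embed = nn.Embedding(zh_vocab, hidden_dim)
        self.zh_decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.zh_head = nn.Linear(hidden_dim, zh_vocab)
        # English branch: encodes the privileged English caption, training only.
        self.en_embed = nn.Embedding(en_vocab, hidden_dim)
        self.en_encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, video_feats, zh_tokens, en_tokens=None):
        _, vid_state = self.video_encoder(video_feats)        # (1, B, H) video context
        zh_hidden, _ = self.zh_decoder(self.zh_embed(zh_tokens), vid_state)
        zh_logits = self.zh_head(zh_hidden)                   # (B, L, zh_vocab)
        zh_sem = zh_hidden.mean(dim=1)                        # sentence-level semantics
        en_sem = None
        if en_tokens is not None:                             # only available while training
            _, en_state = self.en_encoder(self.en_embed(en_tokens))
            en_sem = en_state.squeeze(0)                      # (B, H)
        return zh_logits, zh_sem, en_sem

def kd_caption_loss(zh_logits, zh_targets, zh_sem, en_sem, alpha=0.5):
    # Cross-entropy on the Chinese caption plus a distillation term that pulls
    # the Chinese branch's semantics toward the English (teacher) semantics.
    ce = F.cross_entropy(zh_logits.reshape(-1, zh_logits.size(-1)), zh_targets.reshape(-1))
    distill = F.mse_loss(zh_sem, en_sem.detach())
    return ce + alpha * distill

# Toy usage: 2 videos with 8 frame features each, Chinese captions of length 6,
# and one privileged English caption per video (length 10).
model = TwoBranchCaptioner()
feats = torch.randn(2, 8, 2048)
zh_in = torch.randint(0, 9000, (2, 6))      # shifted Chinese tokens (decoder input)
zh_out = torch.randint(0, 9000, (2, 6))     # Chinese tokens (prediction target)
en_cap = torch.randint(0, 10000, (2, 10))   # privileged English caption tokens
logits, zh_sem, en_sem = model(feats, zh_in, en_cap)
loss = kd_caption_loss(logits, zh_out, zh_sem, en_sem)
loss.backward()

At inference time only the Chinese branch is needed, which matches the abstract's statement that English captions are used exclusively during training.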

Original language: Chinese (Traditional)
Pages (from-to): 1907-1921
Number of pages: 15
Journal: Jisuanji Xuebao/Chinese Journal of Computers
Volume: 44
Issue number: 9
DOIs
Publication status: Published - Sept 2021
