Editable video creation based on embedded simulation engine and GAN

Zheng Guan*, Gangyi Ding

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

With the progress of artificial intelligence, embedded AI generation of images and videos has become a hot topic, and attempts have been made to apply this technology to real-time processing on monitors, cameras, and smartphones. Using GAN embedder networks, some research attempts to transfer the style, action, and content of one video to another target video. Unfortunately, such generation is often difficult to control. We have created an Engine-GAN (E-GAN) pipeline that effectively combines a simulation-engine method with embedder-GAN content to create "real" videos that can be edited in real time, placing the images and content generated by the AI under direct control. We have made progress on the E-GAN architecture, the E-GAN workflow, tag generation, and entity stylization, and we use the cuckoo search algorithm to optimize the transfer target and improve transfer efficiency.
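The abstract names the cuckoo search algorithm but gives no details of the objective being optimized. The following is a minimal sketch of standard cuckoo search with Lévy flights (Yang and Deb), not the paper's implementation; the function names (`cuckoo_search`, `levy_flight`) and the placeholder `sphere` objective standing in for the unspecified transfer-target cost are all hypothetical.

```python
import math
import numpy as np


def levy_flight(beta, size, rng):
    """Draw Levy-stable step lengths via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)


def cuckoo_search(objective, dim, lower, upper,
                  n_nests=15, pa=0.25, n_iter=200, seed=0):
    """Minimise `objective` over the box [lower, upper]^dim with cuckoo search."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lower, upper, (n_nests, dim))
    fitness = np.array([objective(x) for x in nests])
    best = nests[fitness.argmin()].copy()

    for _ in range(n_iter):
        # New candidates: Levy flights biased toward the current best nest.
        steps = levy_flight(1.5, (n_nests, dim), rng)
        candidates = np.clip(nests + 0.01 * steps * (nests - best), lower, upper)
        cand_fit = np.array([objective(x) for x in candidates])

        # Greedy replacement: keep a candidate only if it improves its nest.
        better = cand_fit < fitness
        nests[better], fitness[better] = candidates[better], cand_fit[better]

        # Abandon a fraction `pa` of nests and re-initialise them at random.
        abandoned = rng.random(n_nests) < pa
        nests[abandoned] = rng.uniform(lower, upper, (int(abandoned.sum()), dim))
        fitness[abandoned] = np.array([objective(x) for x in nests[abandoned]])

        best = nests[fitness.argmin()].copy()

    return best, fitness.min()


if __name__ == "__main__":
    # Hypothetical stand-in for a transfer-target cost function.
    sphere = lambda x: float(np.sum(x ** 2))
    best_x, best_f = cuckoo_search(sphere, dim=4, lower=-5.0, upper=5.0)
    print(best_x, best_f)
```

In the paper's setting, the objective would presumably score candidate transfer targets so that the search favours targets that can be migrated efficiently; the sphere function above is only a runnable placeholder.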

Original language: English
Article number: 103048
Journal: Microprocessors and Microsystems
Volume: 75
DOI
Publication status: Published - June 2020
