Editable video creation based on embedded simulation engine and GAN

Zheng Guan*, Gangyi Ding

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

With the progress of artificial intelligence, the generation of images and videos by embedded AI has become a hot topic, and the technology is being applied to real-time processing in monitors, cameras, and smartphones. Using GAN embedder networks, some research attempts to transfer the style, action, and content of one video to a target video; unfortunately, such generation is often difficult to control. We propose an Engine-GAN (E-GAN) model pipeline that effectively combines an engine-based method with embedder-GAN content to create "real" videos that can be edited in real time, placing the AI-generated imagery and content under direct control. We present the E-GAN architecture, the E-GAN workflow, tag generation, and entity stylization, and we use the cuckoo search algorithm to optimize the migration target and improve migration efficiency.
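The abstract names cuckoo search as the optimizer for the migration target but gives no details, so the following is only a minimal generic sketch of the standard cuckoo search metaheuristic (Lévy-flight perturbation plus abandonment of poor nests), not the paper's actual formulation. The cost function, bounds, and all parameter values (`n_nests`, `pa`, step scale) are illustrative assumptions.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw a Levy-distributed step size via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=15, pa=0.25, iters=200, bounds=(-5.0, 5.0)):
    """Minimize `cost` over [lo, hi]^dim with a basic cuckoo search loop."""
    lo, hi = bounds
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fitness = [cost(n) for n in nests]
    best = min(range(n_nests), key=lambda i: fitness[i])
    for _ in range(iters):
        # New candidate solutions via Levy flights biased toward the best nest
        for i in range(n_nests):
            step = 0.01 * levy_step()
            new = [min(hi, max(lo, x + step * (x - nests[best][j])))
                   for j, x in enumerate(nests[i])]
            f = cost(new)
            j = random.randrange(n_nests)  # replace a random nest if improved
            if f < fitness[j]:
                nests[j], fitness[j] = new, f
        # Abandon a fraction pa of the worst nests and rebuild them at random
        order = sorted(range(n_nests), key=lambda i: fitness[i], reverse=True)
        for i in order[:int(pa * n_nests)]:
            nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
            fitness[i] = cost(nests[i])
        best = min(range(n_nests), key=lambda i: fitness[i])
    return nests[best], fitness[best]

# Example: minimize the 2-D sphere function (a stand-in for a real
# migration-cost objective, which the abstract does not specify)
random.seed(0)
sol, fval = cuckoo_search(lambda x: sum(v * v for v in x), dim=2)
```

In the paper's setting the objective would score candidate migration targets rather than a toy sphere function; the loop structure (Lévy-flight exploration plus abandonment-driven restarts) is what gives cuckoo search its balance of local refinement and global search.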

Original language: English
Article number: 103048
Journal: Microprocessors and Microsystems
Volume: 75
DOIs
Publication status: Published - Jun 2020

Keywords

  • Embedded simulation engine
  • GAN Structure
  • Video generation
