Text2Video-Zero is a zero-shot text-to-video generator. It can perform zero-shot text-to-video generation, Video Instruct-Pix2Pix (instruction-guided video editing), text and pose conditional video generation, text and canny-edge conditional video generation, and text, canny-edge and DreamBooth conditional video generation. For more information about this work, please have a look at our paper and our demo. Our code works with any Stable Diffusion base model.
This model provides DreamBooth weights for the Arcane style, to be used with edge guidance (via ControlNet) in Text2Video-Zero. We converted the original weights to the diffusers format and made them usable with ControlNet edge guidance following https://github.com/lllyasviel/ControlNet/discussions/12.
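To make the edge-guidance input concrete: the control signal is a binary edge map extracted from each frame. The project itself relies on OpenCV's Canny detector; the sketch below substitutes a plain Sobel-magnitude threshold so it runs with NumPy alone, and the function name and threshold value are hypothetical, not part of the released code.

```python
# Illustrative sketch of an edge map used for guidance: a binary image
# extracted from a (grayscale) frame. Real pipelines use OpenCV's Canny
# detector; a Sobel-magnitude threshold stands in here so the example
# needs only NumPy. sobel_edge_map and the threshold are hypothetical.
import numpy as np

def sobel_edge_map(gray, threshold=0.25):
    """Return a uint8 {0, 255} edge map from a grayscale image in [0, 1]."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    pad = np.pad(gray.astype(np.float32), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w), np.float32)
    gy = np.zeros((h, w), np.float32)
    for i in range(3):
        for j in range(3):
            window = pad[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    mag = np.hypot(gx, gy)          # gradient magnitude per pixel
    mag /= mag.max() + 1e-8         # normalize to [0, 1]
    return (mag > threshold).astype(np.uint8) * 255

# A sharp vertical boundary yields edge pixels along that boundary.
frame = np.zeros((8, 8), dtype=np.float32)
frame[:, 4:] = 1.0
edges = sobel_edge_map(frame)
```

The resulting map plays the same role as a `cv2.Canny` output: white pixels mark the edges the ControlNet conditions on.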
Developed by: Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan and Humphrey Shi
Model type: DreamBooth text-to-image and text-to-video generation model with edge control for Text2Video-Zero
Language(s): English
License: The CreativeML OpenRAIL-M license.
Model Description: This is a model for Text2Video-Zero with edge guidance and Arcane style. It can also be used with ControlNet in a text-to-image setup with edge guidance.
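A minimal sketch of that text-to-image setup in diffusers might look as follows. `ControlNetModel` and `StableDiffusionControlNetPipeline` are real diffusers classes, but the repository id passed for these DreamBooth weights and the helper name are illustrative assumptions, not confirmed names.

```python
# Hedged sketch: loading these DreamBooth weights together with a canny
# ControlNet for text-to-image generation with edge guidance.
# The DreamBooth repo id below is an illustrative placeholder.

def load_arcane_edge_pipeline(
    model_id="PAIR/text2video-zero-controlnet-canny-arcane",  # hypothetical id
    device="cuda",
):
    """Build a Stable Diffusion + ControlNet (canny edge) pipeline."""
    # Imports live inside the function because torch/diffusers are
    # heavyweight dependencies this sketch only needs when called.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        model_id, controlnet=controlnet, torch_dtype=torch.float16
    )
    return pipe.to(device)


if __name__ == "__main__":
    pipe = load_arcane_edge_pipeline()
    # Condition on an edge map and trigger the style with the DreamBooth
    # keyword "arcane style" in the prompt, e.g.:
    # image = pipe("a knight, arcane style", image=edge_map).images[0]
```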
DreamBooth Keyword: arcane style
Cite as:
@article{text2video-zero,
  title={Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators},
  author={Khachatryan, Levon and Movsisyan, Andranik and Tadevosyan, Vahram and Henschel, Roberto and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
  journal={arXiv preprint arXiv:2303.13439},
  year={2023}
}
The DreamBooth weights for the Arcane style were taken from CIVITAI.
Beware that Text2Video-Zero may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography, and violence. Text2Video-Zero in this demo is meant only for research purposes.