InFusion: Inject and Attention Fusion for Multi-Concept Zero-Shot Text-Based Video Editing

Glance AI    

TL;DR: Edit¹ your video with a pretrained Stable Diffusion² model, without any training.

+Porsche Car, +snowy winter

+Porsche Car, +landmark of autumn

+Pterosaur

+Cherry Blossom

+white poppy flowers

+sunset

Abstract

Large text-to-image diffusion models have achieved remarkable success in generating diverse, high-quality images. Moreover, these models have been successfully leveraged to edit input images simply by changing the text prompt. When these models are applied to videos, however, the main challenge is ensuring temporal consistency and coherence across frames. In this paper, we propose InFusion, a framework for zero-shot text-based video editing that leverages large pre-trained image diffusion models. Our framework specifically supports editing of multiple concepts, with pixel-level control over the diverse concepts mentioned in the editing prompt. Specifically, we inject the difference between the features obtained with the source and edit prompts from the U-Net residual blocks of the decoder layers. When these are combined with injected attention features, it becomes feasible to query the source contents and scale the edited concepts along with the injection of the unedited parts. The editing is further controlled in a fine-grained manner with mask extraction and attention fusion, which cut the edited part from the source and paste it into the denoising pipeline for the edit prompt. Our framework is a low-cost alternative to one-shot tuned models for editing since it does not require any training. We demonstrate complex concept editing with a generalised image model (Stable Diffusion v1.5) using LoRA adaptation, and our framework is compatible with all existing image diffusion techniques. Extensive experimental results demonstrate the effectiveness of our method in rendering high-quality and temporally consistent videos.
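As a rough illustration of the mask extraction and attention fusion described above, the sketch below (not the authors' released code) derives a soft mask from the cross-attention maps of the edited-concept tokens and uses it to keep the edited region from the edit branch while pasting the unedited region from the source branch. All tensor shapes, the threshold value, and the helper names (concept_mask, attention_fusion) are illustrative assumptions.

# Minimal sketch of concept-mask extraction and mask-based fusion (illustrative only).
import torch

def concept_mask(cross_attn, token_ids, thresh=0.3):
    """cross_attn: (heads, H*W, n_tokens) cross-attention probabilities.
    token_ids: indices of the edited-concept tokens in the prompt.
    Returns a binary mask of shape (1, 1, H, W)."""
    attn = cross_attn.mean(dim=0)[:, token_ids].mean(dim=-1)        # (H*W,)
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)   # normalise to [0, 1]
    hw = int(attn.numel() ** 0.5)
    return (attn > thresh).float().view(1, 1, hw, hw)

def attention_fusion(feat_src, feat_edit, mask):
    """Keep the edited concept where the mask fires, source content elsewhere."""
    return mask * feat_edit + (1.0 - mask) * feat_src

# Toy example with hypothetical shapes: 8 heads, 16x16 latent, 77 prompt tokens.
attn = torch.rand(8, 16 * 16, 77).softmax(dim=-1)
mask = concept_mask(attn, token_ids=[5, 6])                         # edited-concept tokens
src, edit = torch.randn(1, 320, 16, 16), torch.randn(1, 320, 16, 16)
fused = attention_fusion(src, edit, mask)
print(fused.shape)  # torch.Size([1, 320, 16, 16])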

Pipeline

InFusion leverages a pre-trained text-to-image model for video editing and ensures temporal consistency and editing accuracy with Inject and Attention Fusion. The denoising pipeline for the source prompt Ps generates the decoder features from the U-Net and the attention features of the source video, which are injected into the denoising pipeline (initialised with the inverted source latent z_T) for the edit prompt Pe.
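To make the two-branch structure concrete, here is a schematic, runnable sketch with a toy U-Net stand-in; it is not Stable Diffusion and not the authors' implementation. Both branches start from the same inverted latent z_T; at each denoising step the source branch runs first while a forward hook captures its decoder features, which are then injected into the edit branch. The module names, the 50/50 blend, and the simplified update rule are illustrative assumptions.

# Schematic two-branch denoising with source-feature injection (toy model).
import torch
import torch.nn as nn

class ToyDecoderBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.injected = None  # features captured from the source branch
    def forward(self, x):
        out = self.conv(x)
        if self.injected is not None:
            # Blend in the captured source features. The paper's fusion is
            # more involved; a fixed 50/50 blend keeps this sketch simple.
            out = 0.5 * out + 0.5 * self.injected
        return out

class ToyUNet(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Conv2d(ch, ch, 3, padding=1)
        self.decoder = ToyDecoderBlock(ch)
    def forward(self, z, prompt_emb):
        return self.decoder(self.encoder(z) + prompt_emb)

unet = ToyUNet()
z_T = torch.randn(1, 64, 16, 16)    # stand-in for the inverted source latent
emb_src = torch.randn(1, 64, 1, 1)  # stand-in for the Ps text embedding
emb_edit = torch.randn(1, 64, 1, 1) # stand-in for the Pe text embedding

z_src, z_edit = z_T.clone(), z_T.clone()
with torch.no_grad():
    for t in range(50, 0, -1):      # simplified denoising loop
        # 1) Source branch: capture decoder features with a forward hook.
        unet.decoder.injected = None
        feats = []
        hook = unet.decoder.register_forward_hook(lambda m, i, o: feats.append(o))
        eps_src = unet(z_src, emb_src)
        hook.remove()
        # 2) Edit branch: inject the captured source features.
        unet.decoder.injected = feats[0]
        eps_edit = unet(z_edit, emb_edit)
        # 3) Toy update in place of a real DDIM scheduler step.
        z_src = z_src - 0.01 * eps_src
        z_edit = z_edit - 0.01 * eps_edit
print(z_edit.shape)  # torch.Size([1, 64, 16, 16])

With the real Stable Diffusion v1.5 U-Net, the same hook pattern would be applied to the decoder residual blocks and attention layers, and a DDIM inversion of the source frames would supply z_T.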

Additional Results: Video Editing Using Stable Diffusion

+Grasshopper, -Frog

+Night View, -Morning View

+Ukiyo-e Style

+Santa, -Moose

BibTeX

@article{khandelwal2023infusion,
  title={InFusion: Inject and Attention Fusion for Multi Concept Zero Shot Text based Video Editing},
  author={Khandelwal, Anant},
  journal={arXiv preprint arXiv:2308.00135},
  year={2023}
}
  

Explanation

1. For better visualization, we only show the edited words on this page.
2. All results are edited directly with Stable Diffusion v1.5; we do not use any pretrained video diffusion model.
3. Our method does not require any training of Stable Diffusion v1.5.