AnimateDiff integration for ComfyUI, adapted from sd-webui-animatediff. Please read the original repo's README for more information.
- Clone this repo into the `custom_nodes` folder.
- Download motion modules and put them under `comfyui-animatediff/models/`.
  - Original modules: Google Drive | HuggingFace | CivitAI | Baidu NetDisk
  - Community modules: manshoety/AD_Stabilized_Motion | CiaraRowles/TemporalDiff
  - AnimateDiff v2: `mm_sd_v15_v2.ckpt`
Download motion LoRAs and put them under the `comfyui-animatediff/loras/` folder.
Note: LoRAs only work with the AnimateDiff v2 `mm_sd_v15_v2.ckpt` module.
Workflow: lora.json
Samples:
The sliding window feature enables you to generate GIFs without a frame-length limit. It divides the frames into smaller batches with a slight overlap. This feature is activated automatically when generating more than 16 frames. To change the trigger number and other settings, use the SlidingWindowOptions node. See the sample workflow below.
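For intuition, here is a minimal sketch of how a long animation can be split into overlapping windows. This is illustrative only, not the extension's actual code; the function name and the default values for `context_length` and `context_overlap` are assumptions for the example.

```python
# Illustrative only: split `frame_number` frames into overlapping windows.
# The real settings are exposed via the SlidingWindowOptions node.
def make_windows(frame_number, context_length=16, context_overlap=4):
    if frame_number <= context_length:
        return [list(range(frame_number))]
    windows = []
    step = context_length - context_overlap
    for start in range(0, frame_number - context_overlap, step):
        end = min(start + context_length, frame_number)
        windows.append(list(range(start, end)))
    return windows

print(make_windows(32))
# [[0..15], [12..27], [24..31]] -- each slice overlaps the previous one
```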
- `AnimateDiffSampler` is mostly the same as `KSampler`.
- `motion_module`: use `AnimateDiffLoader` to load the motion module
- `inject_method`: should be left at the default
- `frame_number`: animation length
- `latent_image`: you can pass an `EmptyLatentImage`
- `sliding_window_opts`: custom sliding window options
- Combine GIF frames and produce the GIF image
- `frame_rate`: number of frames per second (see the sketch after this list)
- `loop_count`: use 0 for an infinite loop
- `save_image`: whether the GIF should be saved to disk
- `format`: supports `image/gif`, `image/webp` (better compression), `video/webm`, `video/h264-mp4`, `video/h265-mp4`. To use the video formats, you'll need ffmpeg installed and available in `PATH`.
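As a rough mental model (not the node's actual implementation, which also handles the video formats via ffmpeg), `frame_rate` and `loop_count` map onto the per-frame duration and loop settings of the output file. A sketch with Pillow:

```python
from PIL import Image

# Illustrative sketch: combine a list of PIL.Image frames into a GIF.
# frame_rate and loop_count correspond to the node inputs described above.
def save_gif(frames, path, frame_rate=8, loop_count=0):
    duration_ms = int(round(1000 / frame_rate))  # how long each frame is shown
    frames[0].save(
        path,
        save_all=True,
        append_images=frames[1:],
        duration=duration_ms,
        loop=loop_count,  # 0 = loop forever
    )
```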
Custom sliding window options:
- `context_length`: number of frames per window. Use 16 to get the best results; reduce it if you have low VRAM.
- `context_stride` (see the sketch after this list):
  - 1: sample every frame
  - 2: sample every frame, then every second frame
  - 3: sample every frame, then every second frame, then every third frame
  - ...
- `context_overlap`: number of overlapping frames between window slices
- `closed_loop`: make the GIF a closed loop; adds more sampling steps
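One way to picture `context_stride`: each stride level adds a pass that samples frames at a coarser spacing. This is a sketch of the description above, not the extension's exact scheduler; the function name is made up for the example.

```python
# Illustrative: which frame indices each stride level visits.
def stride_passes(frame_number, context_stride):
    passes = []
    for s in range(1, context_stride + 1):
        passes.append(list(range(0, frame_number, s)))
    return passes

for p in stride_passes(8, 3):
    print(p)
# [0, 1, 2, 3, 4, 5, 6, 7]   <- every frame
# [0, 2, 4, 6]               <- every second frame
# [0, 3, 6]                  <- every third frame
```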
Load a GIF or video as images. Useful for loading a GIF as ControlNet input (a sketch of the frame selection follows below).
- `frame_start`: skip the first frames and start at `frame_start`
- `frame_limit`: only take `frame_limit` frames
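A rough equivalent of what `frame_start` and `frame_limit` do, sketched with Pillow (an assumption for illustration; the node itself may decode GIFs and videos differently):

```python
from itertools import islice
from PIL import Image, ImageSequence

# Illustrative: load frames from a GIF, skipping `frame_start` frames
# and keeping at most `frame_limit` frames.
def load_gif_frames(path, frame_start=0, frame_limit=16):
    gif = Image.open(path)
    frames = ImageSequence.Iterator(gif)
    selected = islice(frames, frame_start, frame_start + frame_limit)
    return [frame.convert("RGB") for frame in selected]
```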
Workflow: simple.json
Samples:
Workflow: sliding-window.json
Samples:
Upscale the latent output using `LatentUpscale`, then do a second pass with `AnimateDiffSampler`.
Workflow: latent-upscale.json
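Conceptually, the latent batch is resized before the second sampling pass. A minimal sketch of that resize step, assuming a simple nearest-neighbor upscale (the actual work is done by ComfyUI's `LatentUpscale` node, which offers several upscale methods):

```python
import torch
import torch.nn.functional as F

# Illustrative: upscale a batch of latents (frames, 4, h, w) before a
# second AnimateDiffSampler pass.
def upscale_latents(latents: torch.Tensor, scale: float = 1.5) -> torch.Tensor:
    return F.interpolate(latents, scale_factor=scale, mode="nearest")

latents = torch.randn(16, 4, 64, 64)   # 16 frames of 512x512-pixel latents
print(upscale_latents(latents).shape)  # torch.Size([16, 4, 96, 96])
```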
You will need the following additional nodes:
- Kosinkadink/ComfyUI-Advanced-ControlNet: apply a different weight for each latent in the batch
- Fannovel16/comfyui_controlnet_aux: ControlNet preprocessors
- Use `LatentKeyframe` and `TimestampKeyframe` from ComfyUI-Advanced-ControlNet to apply different weights for each latent index.
- Use two ControlNet modules for the two images, with the weight ramps reversed (sketched below).
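To see why the two ramps are reversed, here is a hedged sketch of per-frame ControlNet strengths using a simple linear ramp (an assumption for illustration; in the workflow the keyframes are set with `LatentKeyframe` nodes, not code):

```python
# Illustrative: per-frame ControlNet strengths for a 16-frame animation.
# The first image fades out while the second fades in (weights reversed).
frame_number = 16
weights_image_1 = [1.0 - i / (frame_number - 1) for i in range(frame_number)]
weights_image_2 = [i / (frame_number - 1) for i in range(frame_number)]

for i, (w1, w2) in enumerate(zip(weights_image_1, weights_image_2)):
    print(f"frame {i:2d}: image1={w1:.2f}  image2={w2:.2f}")
```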
Workflow: cn-2images.json
Samples:
Using a GIF (or video, or a list of images) as ControlNet input.
Workflow: cn-vid2vid.json
Samples:
It's an xformers bug accidentally triggered by the way the original AnimateDiff `CrossAttention` is passed in. The current workaround is to disable xformers with `--disable-xformers` when booting ComfyUI.
Workarounds:
- Shorten your prompt and negative prompt
- Reduce the resolution. AnimateDiff is trained on 512x512 images, so it works best with 512x512 output.
- Disable xformers with `--disable-xformers`
See: continue-revolution/sd-webui-animatediff#31
The training data used by the authors of the AnimateDiff paper contained Shutterstock watermarks. Since `mm_sd_v15` was finetuned on finer, less drastic movement, the motion module attempts to replicate the transparency of that watermark, and it does not get blurred away as it does with `mm_sd_v14`. Try other community-finetuned modules.