Intelligent Creation Team, ByteDance
- [01/06/2026] 🎉 Our video version DreamID-V is released!
- [08/11/2025] 🎉 DreamID accepted by SIGGRAPH Asia 2025!
- [04/23/2025] 🔥 Our paper DreamID is released.
- [04/21/2025] 🔥 Our project DreamID is released.
We introduce DreamID, a high-similarity, fast, and high-fidelity diffusion-based face-swapping model. You can try out our model in Dreamina.
DreamID achieves high-fidelity face swapping with unprecedented identity similarity; to our knowledge, it is currently the most identity-preserving face-swapping model. It addresses long-standing challenges in the field, such as facial shape deformation, while preserving fine-grained attributes (e.g., makeup, lighting). Moreover, DreamID remains robust under occlusions and extreme head poses.
🎨 Enjoy on Dreamina
DreamID is deployed in Dreamina, ByteDance's AI creation platform, where you can also try our more advanced customization algorithms!
1. Select "Generate" in the image generator.
2. Click to import the reference image: upload one of your own photos and select "Portrait Photography (Human Face)".
3. Use your imagination to write a prompt that fits your needs, and choose the appropriate model and image-ratio parameters.
4. Finally, enjoy your result!

This research aims to advance the field of generative AI. Users are free to create images using this tool, provided they comply with local laws and exercise responsible usage. The developers are not liable for any misuse of the tool by users.
We are recruiting people interested in face-swapping technology, for both full-time positions and internships. Feel free to contact us! Email: yefulong@bytedance.com
If you find DreamID helpful, please ⭐ the repo.
If you find this project useful for your research, please consider citing our paper:

    @misc{ye2025dreamidhighfidelityfastdiffusionbased,
          title={DreamID: High-Fidelity and Fast diffusion-based Face Swapping via Triplet ID Group Learning},
          author={Fulong Ye and Miao Hua and Pengze Zhang and Xinghui Li and Qichao Sun and Songtao Zhao and Qian He and Xinglong Wu},
          year={2025},
          eprint={2504.14509},
          archivePrefix={arXiv},
          primaryClass={cs.CV},
          url={https://arxiv.org/abs/2504.14509},
    }