A complete walkthrough for generating anime faces with Deep Convolutional GANs (DCGANs), built with PyTorch and deployed via Gradio.
The field of generative models has seen incredible advances in recent years. From generating realistic photos to deepfakes, one of the most fascinating applications of generative models is the creation of art — especially anime characters, which have a strong visual style and consistent structure.
- Train a DCGAN on anime face datasets
- Modular PyTorch code (Generator & Discriminator)
- Gradio web app for face generation
- Clean and reproducible setup
- Jupyter Notebook for exploration and storytelling
A DCGAN is a type of Generative Adversarial Network (GAN) that uses convolutional layers to generate more realistic images.
It consists of two neural networks:
- Generator (G): Takes a vector of random noise and tries to generate a realistic image.
- Discriminator (D): Takes an image and decides whether it's real (from the dataset) or fake (from the generator).
Both networks are trained together in a game-theoretic setup: the generator improves by trying to fool the discriminator, and the discriminator improves by getting better at catching the fake images.
DCGANs are especially good at generating realistic images and are relatively simple to implement using PyTorch.
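The adversarial game described above boils down to two alternating optimization steps. The sketch below illustrates one training iteration with toy stand-in networks (the real DCGAN architectures are described later in this README; the layer sizes here are purely illustrative):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real networks, just to show the training dynamic.
G = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 16))
D = nn.Sequential(nn.Linear(16, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(8, 16)    # a batch of "real" samples
noise = torch.randn(8, 100)  # latent vectors for the generator

# 1) Discriminator step: push D(real) toward 1 and D(fake) toward 0.
#    detach() stops gradients from flowing into G during D's update.
opt_d.zero_grad()
d_loss = criterion(D(real), torch.ones(8, 1)) + \
         criterion(D(G(noise).detach()), torch.zeros(8, 1))
d_loss.backward()
opt_d.step()

# 2) Generator step: try to make D output 1 on fakes (i.e., fool D).
opt_g.zero_grad()
g_loss = criterion(D(G(noise)), torch.ones(8, 1))
g_loss.backward()
opt_g.step()
```

Each network's loss is the other's gain, which is exactly the game-theoretic setup described above.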
The DCGAN architecture implemented here follows the design from the original paper (Radford et al., 2015):
Generator
- Input: Random noise vector (size 100)
- Layers: Transposed Convolutions → BatchNorm → ReLU
- Output: 3x64x64 image with tanh activation
Discriminator
- Input: 3x64x64 image
- Layers: Convolutions → BatchNorm → LeakyReLU
- Output: Single sigmoid probability
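Following the layer descriptions above, the two networks can be sketched roughly as below. Channel widths (`ngf`, `ndf`) are conventional DCGAN defaults; the actual code in `models/` may differ in details:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    # Noise vector (100) -> 3x64x64 image via transposed convolutions.
    def __init__(self, nz=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),       # -> 4x4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # -> 8x8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # -> 16x16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # -> 32x32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # -> 64x64
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    # 3x64x64 image -> single probability of being real.
    def __init__(self, ndf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ndf, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1, 1)

z = torch.randn(2, 100, 1, 1)        # noise enters as a 1x1 spatial map
img = Generator()(z)                 # -> (2, 3, 64, 64)
prob = Discriminator()(img)          # -> (2, 1), values in (0, 1)
```

Note that the noise vector is fed in as a 100x1x1 spatial map so the whole generator can be expressed as stacked transposed convolutions.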
anime-dcgan/
│
├── data/ # Dataset (use anime face dataset)
├── models/ # Generator & Discriminator architecture
├── outputs/ # Sample outputs and model checkpoints
├── utils.py # Helper functions
├── train.py # Training script
├── gradio_app.py # Gradio interface for inference
├── DCGAN_anime.ipynb # Clean notebook for walkthrough
├── requirements.txt # pip dependencies
├── LICENSE # License file
└── README.md # The current file
```bash
git clone https://github.com/codebywiam/anime-dcgan.git
cd anime-dcgan
pip install -r requirements.txt
```

The Anime Faces Dataset is already downloaded and extracted into the `data/` directory.
Training a GAN can be tricky because of its adversarial nature. The key hyperparameters used were:
- Epochs: 50
- Batch Size: 128
- Learning Rate: 0.0002
- Beta1 (Adam optimizer): 0.5
- Loss: Binary Cross Entropy
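These hyperparameters translate into the following setup (a sketch; `netG` and `netD` are placeholders for the project's Generator and Discriminator):

```python
import torch
import torch.nn as nn

# Hyperparameters from the list above.
EPOCHS, BATCH_SIZE, LR, BETA1 = 50, 128, 0.0002, 0.5

# Placeholder networks; train.py uses the DCGAN Generator/Discriminator.
netG = nn.Linear(100, 10)
netD = nn.Sequential(nn.Linear(10, 1), nn.Sigmoid())

criterion = nn.BCELoss()
# beta1 = 0.5 (instead of Adam's default 0.9) follows the DCGAN paper
# and tends to stabilize GAN training.
optG = torch.optim.Adam(netG.parameters(), lr=LR, betas=(BETA1, 0.999))
optD = torch.optim.Adam(netD.parameters(), lr=LR, betas=(BETA1, 0.999))
```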
```bash
python train.py
```

To train faster during development, you can use a subset of the dataset:

```bash
python train.py --fast-train --num-samples 10000
```

This will:
- Train the DCGAN model
- Save sample outputs every epoch in `outputs/`
- Save checkpoints for the generator and discriminator
To skip training and directly use the pretrained Generator and Discriminator:
- Generator: `checkpoints/G_trained.pth`
- Discriminator: `checkpoints/D_trained.pth`
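Restoring the pretrained weights is a standard `state_dict` round trip. A sketch with a stand-in module (the real Generator class lives in `models/`):

```python
import os
import tempfile
import torch
import torch.nn as nn

# Stand-in for the project's Generator; in practice you would
# instantiate the class from models/ and point at G_trained.pth.
netG = nn.Linear(100, 10)

# Save and reload, as train.py / gradio_app.py would with the checkpoint.
path = os.path.join(tempfile.gettempdir(), "G_trained.pth")
torch.save(netG.state_dict(), path)

netG2 = nn.Linear(100, 10)
netG2.load_state_dict(torch.load(path, map_location="cpu"))
netG2.eval()  # inference mode: freezes batchnorm stats, disables dropout
```

Calling `.eval()` matters for DCGANs because the generator contains BatchNorm layers whose behavior differs between training and inference.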
```bash
python gradio_app.py
```

This launches a browser-based app where you can:
- Generate anime characters
- Set a random seed for reproducible results
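Under the hood, seed-controlled generation simply fixes the RNG before sampling the latent vector, so the same seed reproduces the same face. A minimal sketch with a stand-in generator (`gradio_app.py` wraps a function like this in a Gradio interface):

```python
import torch
import torch.nn as nn

netG = nn.Linear(100, 3 * 64 * 64)  # stand-in for the trained Generator

def generate(seed: int) -> torch.Tensor:
    torch.manual_seed(seed)                    # fix the RNG for reproducibility
    z = torch.randn(1, 100)                    # latent vector
    with torch.no_grad():
        img = netG(z).view(3, 64, 64).tanh()   # map to [-1, 1] like the real G
    return (img + 1) / 2                       # rescale to [0, 1] for display

a = generate(42)
b = generate(42)  # same seed -> identical image
c = generate(7)   # different seed -> different image
```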
Want to explore the results and training process visually? Open `DCGAN_anime.ipynb`:

```bash
jupyter notebook DCGAN_anime.ipynb
```

This project is licensed under the MIT License. See the LICENSE file for details.

