This repository is based on Marigold, CVPR 2024 Best Paper: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation
We present ApDepth, a diffusion model and associated fine-tuning protocol for monocular depth estimation, built on Marigold. Its core innovation lies in addressing the limited feature-representation capability of diffusion models. Following Marigold, our model is derived from Stable Diffusion and fine-tuned on synthetic data (Hypersim and Virtual KITTI), and achieves strong results in object-edge refinement.
- 2025-10-25: Inspired by DepthMaster, we propose a two-stage training strategy on top of ApDepth V1-0: in the first stage, we perform foundational training with an MSE loss; in the second stage, we learn edge structures through an FFT loss (see the schematic formulation after this list). Based on this, we introduce ApDepth V1-1.
- 2025-10-09: We propose a novel diffusion-based depth estimation framework guided by pre-trained models.
- 2025-09-23: We changed Marigold from stochastic multi-step generation to deterministic one-step perception.
- 2025-08-10: Started optimizing the feature representation.
- 2025-05-08: Cloned Marigold locally.
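Schematically, writing $\hat{d}$ for the predicted depth, $d$ for the ground truth, and $\mathcal{F}$ for the 2D Fourier transform, the two stages can be summarized as below; the exact FFT-loss form and the weight $\lambda$ are a simplified illustration, not the precise formulation from our paper:

$$\mathcal{L}_{\text{stage 1}} = \lVert \hat{d} - d \rVert_2^2, \qquad \mathcal{L}_{\text{stage 2}} = \lVert \hat{d} - d \rVert_2^2 + \lambda \,\bigl\lVert \mathcal{F}(\hat{d}) - \mathcal{F}(d) \bigr\rVert_1$$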
We offer several ways to interact with ApDepth:
- Local development instructions with this codebase are given below.
The model was trained on:
- Ubuntu 22.04 LTS, Python 3.12.9, CUDA 11.8, GeForce RTX 4090 (pip)
The inference code was tested on:
- Ubuntu 22.04 LTS, Python 3.12.9, CUDA 11.8, GeForce RTX 4090 & GeForce RTX 5080 (pip)
We recommend running the code in WSL2:
- Install WSL following the installation guide.
- Install CUDA support for WSL following the installation guide.
- Find your drives under /mnt/<drive letter>/; check the WSL FAQ for more details. Navigate to the working directory of choice, for example as shown below.
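A minimal illustration, assuming your working copy will live on the Windows D: drive; adjust the drive letter and folder to your setup:

```bash
# Enter the Windows drive from WSL and move to a project folder of your choice
cd /mnt/d/projects
```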
Clone the repository (requires git):

git clone https://github.com/Haruko386/ApDepth.git
cd ApDepth

Using Conda, create a virtual environment and install the dependencies into it:
conda create -n apdepth python==3.12.9
conda activate apdepth
pip install -r requirements.txt
Keep the environment activated before running the inference script. Activate the environment again after restarting the terminal session.
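As a quick sanity check that the GPU is visible from the activated environment (this assumes PyTorch is installed via requirements.txt):

```bash
# Should print the torch version and True on a correctly configured CUDA setup
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```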
- Use selected images under input.
- Or place your images in a directory, for example under input/test-image, and run the following inference command.
This setting corresponds to our paper. For academic comparison, please run with this setting.
python run.py \
--checkpoint prs-eth/marigold-v1-0 \
--ensemble_size 1 \
--input_rgb_dir input/in-the-wild_example \
--output_dir output/in-the-wild_example

You can find all results in output/in-the-wild_example. Enjoy!
The default settings are optimized for the best result. However, the behavior of the code can be customized with the following options; a combined example is given after the list:
- Trade-off between accuracy and speed (a larger value results in better accuracy at the cost of slower inference):
  - --ensemble_size: Number of inference passes in the ensemble.
- By default, the inference script resizes input images to the processing resolution, and then resizes the prediction back to the original resolution. This gives the best quality, as Stable Diffusion, from which ApDepth is derived, performs best at 768x768 resolution.
  - --processing_res: the processing resolution; set to 0 to process the input at its original resolution. When unassigned (None), the default setting is read from the model config. Default: None.
  - --output_processing_res: produce output at the processing resolution instead of upsampling it to the input resolution. Default: False.
  - --resample_method: the resampling method used to resize images and depth predictions; one of bilinear, bicubic, or nearest. Default: bilinear.
- --half_precision or --fp16: Run with half-precision (16-bit float) for faster speed and reduced VRAM usage, but possibly suboptimal results.
- --seed: Random seed; can be set to ensure additional reproducibility. Default: None (unseeded). Note: forcing --batch_size 1 helps to increase reproducibility. To ensure full reproducibility, deterministic mode needs to be used.
- --batch_size: Batch size of repeated inference. Default: 0 (best value determined automatically).
- --color_map: Colormap used to colorize the depth prediction. Default: Spectral. Set to None to skip colored depth map generation.
- --apple_silicon: Use Apple Silicon MPS acceleration.
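For illustration, several of the options above can be combined in a single call; the output directory name is just an example, and these values are not a recommended setting:

```bash
python run.py \
    --checkpoint prs-eth/marigold-v1-0 \
    --ensemble_size 1 \
    --processing_res 0 \
    --fp16 \
    --seed 2026 \
    --batch_size 1 \
    --input_rgb_dir input/in-the-wild_example \
    --output_dir output/in-the-wild_example_fp16
```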
By default, the checkpoint is stored in the Hugging Face cache.
The HF_HOME environment variable defines its location and can be overridden, e.g.:
export HF_HOME=$(pwd)/cache

At inference, specify the checkpoint path:
python run.py \
--checkpoint checkpoints/marigold-v1-0 \
--ensemble_size 1 \
--input_rgb_dir input/in-the-wild_example \
--output_dir output/in-the-wild_example

Install additional dependencies:
pip install -r requirements+.txt -r requirements.txt

Set the data directory variable (also needed in evaluation scripts) and download the evaluation datasets into the corresponding subfolders:
export BASE_DATA_DIR=<YOUR_DATA_DIR> # Set target data directory
wget -r -np -nH --cut-dirs=4 -R "index.html*" -P ${BASE_DATA_DIR} https://share.phys.ethz.ch/~pf/bingkedata/marigold/evaluation_dataset/

Run inference and evaluation scripts, for example:
# Run inference
bash script/eval/11_infer_nyu.sh
# Evaluate predictions
bash script/eval/12_eval_nyu.sh

Or you can simply run:

bash script/eval/00_test_all.sh

You can find the results under output/eval.
Note: although the seed has been set, the results might still be slightly different on different hardware.
Based on the previously created environment, install extended requirements:
pip install -r requirements++.txt -r requirements+.txt -r requirements.txt

Set environment variables for the data and checkpoint directories:
export BASE_DATA_DIR=YOUR_DATA_DIR # directory of training data
export BASE_CKPT_DIR=YOUR_CHECKPOINT_DIR # directory of pretrained checkpoint

Download the Stable Diffusion v2 checkpoint into ${BASE_CKPT_DIR}.
Download the Depth-Anything-V2 checkpoint into DA2/checkpoints/.
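For example, both checkpoints can be fetched from the Hugging Face Hub with huggingface-cli (shipped with the huggingface_hub package). The repository IDs, the chosen Depth-Anything-V2 variant (ViT-L), and the target folder names are assumptions here; adjust them to whatever your training config expects:

```bash
# Stable Diffusion v2 weights (assumed repo ID) into the checkpoint directory
huggingface-cli download stabilityai/stable-diffusion-2 --local-dir ${BASE_CKPT_DIR}/stable-diffusion-2

# Depth-Anything-V2 weights (ViT-L variant assumed) into DA2/checkpoints/
huggingface-cli download depth-anything/Depth-Anything-V2-Large depth_anything_v2_vitl.pth --local-dir DA2/checkpoints
```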
Prepare the Hypersim and Virtual KITTI 2 datasets and save them into ${BASE_DATA_DIR}. Please refer to this README for Hypersim preprocessing.
Run the first-stage training script:

python train.py --config config/train_marigold.yaml --no_wandb

Resume from a checkpoint, e.g.:

python train.py --resume_run output/train_marigold/checkpoint/latest --no_wandb

Evaluating results
Only the U-Net is updated and saved during training. To use the inference pipeline with your training result, replace the unet folder in the train_apdepth checkpoint with the one from the checkpoint output folder, for example as sketched below. Then refer to this section for evaluation.
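A minimal sketch of that swap, assuming the inference checkpoint lives under checkpoints/marigold-v1-0 (as in the inference example above) and training was run with config/train_marigold.yaml; adjust both paths to your actual setup:

```bash
# Back up the original U-Net, then drop in the newly trained one (paths are illustrative)
mv checkpoints/marigold-v1-0/unet checkpoints/marigold-v1-0/unet.orig
cp -r output/train_marigold/checkpoint/latest/unet checkpoints/marigold-v1-0/unet
```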
Note: Although random seeds have been set, the training result might still be slightly different on different hardware. It is recommended to train without interruption.
Please refer to this instruction.
| Problem | Solution |
|---|---|
| (Windows) Invalid DOS bash script on WSL | Run dos2unix <script_name> to convert script format |
| (Windows) error on WSL: Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory | Run export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH |
Please cite our paper:
@InProceedings{haruko26apdepth,
title={ApDepth: Aiming for Precise Monocular Depth Estimation Based on Diffusion Models},
author={Haruko386 and Yuan Shuai},
booktitle = {Under review},
year={2026}
}

This work is licensed under the Apache License, Version 2.0 (as defined in the LICENSE).
By downloading and using the code and model you agree to the terms in the LICENSE.
