📄 Paper | 💻 Source Code | 🤗 Hugging Face
Architecture of the CR-Net model.
CR-Net is a novel transformer-based I2I framework that provides continuous control over illumination conditions to generate realistic and diverse images, particularly low-light ones, without requiring style samples during inference.
Smooth, continuous light-to-dark transition controlled by the phi angle
- Continuous translation: CR-Net achieves smooth, continuous translations as well as cyclic ones, e.g. across different times of day.
- Arbitrary illumination: By varying the light variable, high-quality continuous image translations between daytime and nighttime can be efficiently obtained.
- Data augmentation: The proposed model facilitates the generation of realistic and diverse low-light images for training and advancing deep learning–based computer vision applications.
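The cyclic day-night control can be pictured as mapping an angle phi to an illumination level that wraps around smoothly over a full cycle. The snippet below is only an illustrative sketch of that idea; the function name and the cosine mapping are assumptions, not CR-Net's actual API.

```python
import math

def illumination_level(phi: float) -> float:
    """Map a cyclic angle phi (radians) to an illumination level in [0, 1].

    phi = 0 corresponds to full daytime, phi = pi to full nighttime, and
    phi = 2*pi wraps back to daytime, giving a smooth cyclic transition.
    This mapping is an illustrative assumption, not CR-Net's internal code.
    """
    return 0.5 * (1.0 + math.cos(phi))

# Sweep phi over one full cycle: day -> night -> day
levels = [illumination_level(2 * math.pi * t / 8) for t in range(9)]
```

Sweeping phi like this yields one output image per step, which is how a continuous day-to-night sequence would be generated from a single input.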
To run this model, you need the proper environment. We recommend the following versions:
- Python >= 3.10 (recommended: Python 3.10)
- PyTorch >= 1.12 (recommended: PyTorch 2.1.2)
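A quick way to confirm your interpreter meets the minimum version before installing anything is a small check like the one below (the helper name is ours, not part of the repository):

```python
import sys

MIN_PYTHON = (3, 10)  # minimum version from the requirements above

def check_python(min_version=MIN_PYTHON) -> bool:
    """True if the running interpreter meets the minimum Python version."""
    return sys.version_info[:2] >= min_version

print("Python OK" if check_python() else "Python too old, need >= 3.10")
```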
Step 1: Clone the repository

```bash
git clone https://github.com/val-utehy/CR-Net.git
cd CR-Net
```

Step 2: Install dependencies

```bash
pip install -r requirements.txt
```

Note: Make sure you have installed versions of torch and torchvision compatible with your CUDA driver to leverage the GPU.
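To verify that your torch install can actually see the GPU, a guarded check such as this one works even when torch is not yet installed (the helper is a suggestion, not part of the repository):

```python
def gpu_status() -> str:
    """Report whether PyTorch can see a CUDA GPU (safe if torch is absent)."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        return f"CUDA available: {torch.cuda.get_device_name(0)}"
    return "CPU only: check that torch/torchvision match your CUDA driver"

print(gpu_status())
```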
The pretrained models are available at: link.
Note: Place all downloaded weights in ./checkpoints_v2/ast_rafael_v2_sharpening.
Please ensure the paths to the checkpoint and config (opt.pkl) are correct in the script files before running.
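A small sanity check like the following can catch a misplaced checkpoint before a script fails mid-run; it only assumes the directory layout described above (weights under ./checkpoints_v2/ast_rafael_v2_sharpening with an opt.pkl inside) and is not part of the repository:

```python
from pathlib import Path

# Default location from the note above; adjust if you placed weights elsewhere.
CKPT_DIR = Path("./checkpoints_v2/ast_rafael_v2_sharpening")

def validate_checkpoint_dir(ckpt_dir: Path) -> list:
    """Return a list of problems with the checkpoint layout (empty list = OK)."""
    problems = []
    if not ckpt_dir.is_dir():
        problems.append(f"missing directory: {ckpt_dir}")
    elif not (ckpt_dir / "opt.pkl").is_file():
        problems.append(f"missing config: {ckpt_dir / 'opt.pkl'}")
    return problems

for problem in validate_checkpoint_dir(CKPT_DIR):
    print(problem)
```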
Training code will be released soon!
a. Video Processing:
Open and edit the file test_scripts/ast_inference_video.sh. Here, you need to provide the path to the trained checkpoint and the input/output video paths.
After completing the configuration, navigate to the project’s root directory and execute the following command:
```bash
bash test_scripts/ast_inference_video.sh
```

b. Image Directory Processing:
Open and edit the file test_scripts/ast_n2h_dat.sh. Here, you need to provide the path to the trained checkpoint and the input/output image directory paths.
After completing the configuration, navigate to the project’s root directory and execute the following command:
```bash
bash test_scripts/ast_n2h.sh
```

This project is licensed under the MIT License - see the LICENSE file for details.


