Implementation of the paper:
Shi Yuchen, Han Congying, Guo Tiande, NeuroPrim: An Attention-based Model for Solving NP-hard Spanning Tree Problems, SCIENCE CHINA Mathematics, 2023.
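As the name suggests, the model is patterned after Prim's algorithm, which grows a spanning tree by attaching one vertex at a time. For reference only, here is a minimal sketch of classical Prim growth for the plain MST on Euclidean points; the function and variable names are illustrative and are not part of this repository:

```python
# Minimal reference sketch of classical Prim tree growth (not this repo's code).
# Repeatedly attaches the cheapest outside vertex to the growing tree.
import math

def prim_mst(points):
    """points: list of (x, y) tuples; returns the tree edges as (parent, child) pairs."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = {0}                                      # start the tree from vertex 0
    best = {j: (dist(0, j), 0) for j in range(1, n)}   # cheapest known link into the tree
    edges = []
    while len(in_tree) < n:
        j = min(best, key=lambda v: best[v][0])        # the greedy selection step
        _, i = best.pop(j)
        in_tree.add(j)
        edges.append((i, j))
        for v in best:                                 # relax links through the new vertex
            d = dist(j, v)
            if d < best[v][0]:
                best[v] = (d, j)
    return edges
```
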
- Python 3.10
- PyTorch 2.0.1
- TensorBoard 2.10.0
- tqdm 4.65.0
- wandb 0.15.4
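One way to install the pinned dependencies (assuming pip in a Python 3.10 environment; pick the PyTorch wheel matching your CUDA setup):

```bash
pip install torch==2.0.1 tensorboard==2.10.0 tqdm==4.65.0 wandb==0.15.4
```
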
With the default settings of `--num_batches 2500` and `--num_epochs 100`, training each problem with 20 vertices takes about 10 hours on a single RTX 3090 GPU.
```bash
python generate_data.py --graph_size 20
```
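Since `generate_data.py` credits the data-generation code of wouterkool/attention-learn-to-route (acknowledged below), the instances are presumably vertex coordinates sampled uniformly from the unit square. A minimal sketch of that idea, with illustrative names that are not the script's actual API:

```python
# Sketch of the assumed instance format: uniform random points in the unit square.
# Names are illustrative; see generate_data.py for the repository's actual options.
import torch

def sample_instances(num_instances: int, graph_size: int, seed: int = 0) -> torch.Tensor:
    """Return a (num_instances, graph_size, 2) tensor of coordinates in [0, 1]^2."""
    torch.manual_seed(seed)
    return torch.rand(num_instances, graph_size, 2)

batch = sample_instances(num_instances=10000, graph_size=20)
print(batch.shape)  # torch.Size([10000, 20, 2])
```
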
```bash
# DCMST with degree constraint d = 2
python train.py --graph_size 20 --degree_constrain 2
# MRCST
python train.py --graph_size 20 --cost_function_key mrcst
# STP
python train.py --graph_size 20 --is_stp True
```

Run `test.ipynb` to see the performance of the pretrained models on random data.
| DCMST (d=2) | MRCST | STP |
|---|---|---|
| ![]() | ![]() | ![]() |
- wouterkool/attention-learn-to-route: data generation in `generate_data.py`
- xbresson/TSP_Transformer: network built in `nets/Bresson2021TheTN.py`
- Transformer — PyTorch: Transformer used in `nets/transformer.py`
If you find our paper or repository helpful for your research or project, please consider citing it with the following BibTeX entry:
```bibtex
@article{shi2023neuroprim,
  author  = {Shi, Yuchen and Han, Congying and Guo, Tiande},
  title   = {{NeuroPrim}: An Attention-based Model for Solving NP-hard Spanning Tree Problems},
  journal = {SCIENCE CHINA Mathematics},
  year    = {2023},
  url     = {https://www.sciengine.com/SCM/doi/10.1007/s11425-022-2175-5},
  doi     = {10.1007/s11425-022-2175-5}
}
```