
Multi-Modality-Tracking/AINet-AAAI2025


AINet

RGBT Tracking via All-layer Multimodal Interactions with Progressive Fusion Mamba

Dataset

We use the LasHeR training set for training, and RGBT210, RGBT234, the LasHeR testing set, and VTUAV-ST for testing; each dataset can be obtained from its respective project page.

Environment Preparation

  1. Our code is trained and tested with Python 3.8.13, PyTorch 2.1.1, and CUDA 11.8 on an NVIDIA GeForce RTX 4090.
  2. Install causal_conv1d and mamba from our repo.
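The steps above might look like the following. The conda environment name, the PyTorch wheel index, and the local package directory names (`causal_conv1d`, `mamba`) are assumptions based on the stated versions and the instruction to install from this repo, not commands confirmed by the authors:

```shell
# Sketch of an environment matching the stated versions (Python 3.8.13,
# PyTorch 2.1.1, CUDA 11.8). Package sources are assumptions.
conda create -n ainet python=3.8.13 -y
conda activate ainet

# PyTorch 2.1.1 built against CUDA 11.8 (official wheel index).
pip install torch==2.1.1 --index-url https://download.pytorch.org/whl/cu118

# Install the bundled CUDA kernels from the repo checkout rather than PyPI,
# so they match the tracker's Mamba modifications.
pip install -e ./causal_conv1d
pip install -e ./mamba
```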

Training

  1. We adopt OSTrack as our base tracker. For our best results, the model needs to load pretrained parameters from DropMAE.
  2. Modify the relevant dataset and pretrained model paths, then run the following command:
python lib/train/run_training.py --script ostrack_twobranch --config 384 --save_dir your_save_dir
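OSTrack-style trackers typically read dataset and checkpoint locations from a local settings module. The file location and every attribute name below are hypothetical, shown only to illustrate what "modify the relevant paths" usually means in such codebases; check this repo's own settings file for the real names:

```python
# Hypothetical sketch of the path edits step 2 refers to, modeled on
# OSTrack-style settings files. Attribute names are illustrative only.
class EnvironmentSettings:
    def __init__(self):
        self.workspace_dir = '/path/to/your_save_dir'      # checkpoints land here
        self.lasher_dir = '/path/to/LasHeR/train'          # training data root
        self.pretrained_networks = '/path/to/dropmae.pth'  # DropMAE weights

def local_env_settings():
    """Return the machine-local paths used by the training scripts."""
    return EnvironmentSettings()
```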

Evaluation

Modify the relevant dataset and checkpoint paths, then run the following command.

python tracking/test.py --tracker_name ostrack_twobranch --tracker_param 384 --checkpoint_path your_checkpoint_path
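For reference, the PR/SR numbers reported for RGBT trackers are the standard benchmark metrics: precision rate (fraction of frames whose predicted center is within 20 px of ground truth) and success rate (area under the curve of overlap success across IoU thresholds). A minimal sketch of these conventions, not the repo's official evaluation script:

```python
# Illustrative PR/SR computation for axis-aligned (x, y, w, h) boxes.
# The 20 px precision threshold and the 0-to-1 IoU sweep follow common
# tracking-benchmark conventions; this is not the official toolkit.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def center_error(a, b):
    """Euclidean distance between box centers, in pixels."""
    ax, ay = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx, by = b[0] + b[2] / 2, b[1] + b[3] / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def precision_rate(preds, gts, thresh=20.0):
    """PR: fraction of frames with center error at most `thresh` px."""
    hits = sum(center_error(p, g) <= thresh for p, g in zip(preds, gts))
    return hits / len(gts)

def success_rate(preds, gts):
    """SR (AUC): mean overlap success over IoU thresholds from 0 to 1."""
    ious = [iou(p, g) for p, g in zip(preds, gts)]
    thresholds = [i / 20 for i in range(21)]  # 0.00, 0.05, ..., 1.00
    return sum(
        sum(o > t for o in ious) / len(ious) for t in thresholds
    ) / len(thresholds)
```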

Results and Models

| Model | RGBT210 (PR/SR) | RGBT234 (PR/SR) | LasHeR (PR/NPR/SR) | VTUAV (PR/SR) | Checkpoints & Results |
| ----- | --------------- | --------------- | ------------------ | ------------- | --------------------- |
| AINet | 87.5/64.8 | 89.2/67.3 | 74.2/70.1/59.1 | 89.3/77.4 | download |
