Why are most trackers written in Matlab? Slow speed, lots of extra packages, expensive licences, version-compatibility problems (gcc4.9 vs gcc5.0, oh my god)... Anyway, I hate that! C++ is fast and clear! I even doubt that FPS measured in Matlab is really meaningful, especially for real-world and embedded use! So I will re-implement these trackers in C++ day by day, keeping clarity and minimal dependencies in mind. Hope you like it!
2018/06/28 -- New feature: automatic initialization with a web camera using OpenPose.
| Included | Tracker |
|---|---|
| ☑️ | CSK |
| ☑️ | KCF |
| ☑️ | DSST |
| ☑️ | GOTURN |
| 🔨 | ECO |
| 🔨 | C-COT |
| 🔨 | SRDCF |
| 🔨 | SRDCF-Deep |

| Included | Dataset | Reference |
|---|---|---|
| ☑️ | VOT-2017 | Web |
| ☑️ | TB-2015 | Web |
| ☑️ | TLP | Web |
| ☑️ | UAV123 | Web |

| Included | Initialization method | Reference |
|---|---|---|
| ☑️ | OpenPose | Web |

| Included | OS |
|---|---|
| ☑️ | Ubuntu 16.04 |
| ☑️ | macOS Sierra |
| 🔨 | NVIDIA Jetson TX1/2 |
| 🔨 | Raspberry Pi 3 |
| 🔨 | Windows 10 |
To run a test of ECO without deep features, there is no need to install Caffe, CUDA, etc.:
```shell
cd eco
make -j`nproc`
./runecotracker.bin
```
```shell
brew install tesseract
cd eco
make -j$(sysctl -n hw.ncpu)   # macOS has no `nproc`; use sysctl for the CPU count
./runecotracker.bin
```
For the environment setup and detailed procedures (installing all the packages from scratch), refer to: [My DeeplearningSettings].
The only extra package is OpenCV 3.x (already installed if you followed the environment settings above).
Of course, for trackers that use deep features, you need to install [caffe] (maybe I will switch to Darknet in C in the future, I like Darknet 👄), and change the makefile according to your Caffe path. To compile Caffe, refer to: [Install caffe by makefile].
If you want automatic person detection with a web camera, you need to install [OpenPose].
If you want to use OpenPose, set OPENPOSE=1 in ./makefile; otherwise set OPENPOSE=0.
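Assuming the OPENPOSE switch follows the same pattern as the USE_CAFFE switch shown further down, the relevant part of ./makefile might look roughly like this; the variable names and paths here are placeholders, not the repository's actual values:

```makefile
# Hypothetical sketch of the OPENPOSE toggle; check ./makefile for the
# real variable names and adjust the paths to your OpenPose install.
OPENPOSE=1
ifeq ($(OPENPOSE), 1)
CXXFLAGS+= -DUSE_OPENPOSE
CXXFLAGS+= -I/path/to/openpose/include/
LDFLAGS+= -L/path/to/openpose/build/src/openpose -lopenpose
endif
```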
To change the dataset, edit the index in inputs/readdatasets.hpp: string databaseType = databaseTypes[1];.
To change the dataset path, edit inputs/readdatasets.cc and set the path to your own data.
Raise both arms above your nose, and the program will automatically detect the person and start tracking.
```shell
make all
./trackerscompare.bin
```
If you don't want to compile with Caffe, you cannot use deep features; set USE_CAFFE=0 in eco/makefile.
If you don't want to compile with CUDA, you likewise cannot use deep features; set USE_CUDA=0 in eco/makefile.
If you want to compile with Caffe, set USE_CAFFE=1 in eco/makefile (CUDA will be enabled automatically), and set the Caffe path for your system in eco/makefile:
```makefile
ifeq ($(USE_CAFFE), 1)
CXXFLAGS+= -DUSE_CAFFE
LDFLAGS+= -L/media/elab/sdd/mycodes/caffe/build/lib -lcaffe
CXXFLAGS+= -I/media/elab/sdd/mycodes/caffe/build/include/ -I/media/elab/sdd/mycodes/caffe/include/
endif
```
Download a pretrained [VGG_CNN_M_2048.caffemodel (370 MB)] and put it into the folder eco/model.
To use deep features, set bool useDeepFeature = true; in eco/parameters.cc.
Change the path of your test images in eco/runecotracker.cc.
To change the dataset, edit the index in eco/runecotracker.cc: string databaseType = databaseTypes[1];.
To show the heatmap during tracking, set #define DEBUG 1 in eco/parameters.cc.
```shell
cd eco
make -j`nproc`
./runecotracker.bin
```
Change the path of your test images in kcf/opencvtrackers.cc.
```shell
cd opencvtrackers
make
./opencvtrackers.bin
```
Change the path of your test images in kcf/runkcftracker.cc.
```shell
cd kcf
make -j`nproc`
./runkcftracker.bin
```
Change the path of your test images in goturn/rungoturntracker.cc.
You can download a pretrained [goturun_tracker.caffemodel (434 MB)] and put it into the folder goturn/nets.
```shell
cd goturn
make -j`nproc`
./rungoturntracker.bin
```
```shell
./classification.bin \
    /media/elab/sdd/caffe/models/bvlc_reference_caffenet/deploy.prototxt \
    /media/elab/sdd/caffe/models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \
    /media/elab/sdd/caffe/data/ilsvrc12/imagenet_mean.binaryproto \
    /media/elab/sdd/caffe/data/ilsvrc12/synset_words.txt \
    /media/elab/sdd/caffe/examples/images/cat.jpg
```
(This list is incomplete; tell me if I forgot you.)
David Held, Sebastian Thrun, Silvio Savarese.
Learning to Track at 100 FPS with Deep Regression Networks.
In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
J. F. Henriques, R. Caseiro, P. Martins, J. Batista,
"High-Speed Tracking with Kernelized Correlation Filters", TPAMI 2015.
J. F. Henriques, R. Caseiro, P. Martins, J. Batista,
"Exploiting the Circulant Structure of Tracking-by-detection with Kernels", ECCV 2012.
Martin Danelljan, Goutam Bhat, Fahad Khan, Michael Felsberg.
ECO: Efficient Convolution Operators for Tracking.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
Martin Danelljan, Andreas Robinson, Fahad Khan, Michael Felsberg.
Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking.
In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
http://www.cvl.isy.liu.se/research/objrec/visualtracking/conttrack/index.html
Martin Danelljan, Gustav Häger, Fahad Khan, Michael Felsberg.
Learning Spatially Regularized Correlation Filters for Visual Tracking.
In Proceedings of the International Conference in Computer Vision (ICCV), 2015.
http://www.cvl.isy.liu.se/research/objrec/visualtracking/regvistrack/index.html
Martin Danelljan, Gustav Häger, Fahad Khan, Michael Felsberg.
Convolutional Features for Correlation Filter Based Visual Tracking.
ICCV workshop on the Visual Object Tracking (VOT) Challenge, 2015.
http://www.cvl.isy.liu.se/research/objrec/visualtracking/regvistrack/index.html
Martin Danelljan, Gustav Häger, Fahad Khan and Michael Felsberg.
Accurate Scale Estimation for Robust Visual Tracking.
In Proceedings of the British Machine Vision Conference (BMVC), 2014.
http://www.cvl.isy.liu.se/research/objrec/visualtracking/scalvistrack/index.html
Martin Danelljan, Gustav Häger, Fahad Khan, Michael Felsberg.
Discriminative Scale Space Tracking.
Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017.
http://www.cvl.isy.liu.se/research/objrec/visualtracking/scalvistrack/index.html
N. Dalal and B. Triggs.
Histograms of oriented gradients for human detection.
In CVPR, 2005.
J. van de Weijer, C. Schmid, J. J. Verbeek, and D. Larlus.
Learning color names for real-world applications.
TIP, 18(7):1512–1524, 2009.
Y. Wu, J. Lim, and M.-H. Yang.
Online object tracking: A benchmark.
TPAMI 37(9), 1834-1848 (2015).
https://sites.google.com/site/trackerbenchmark/benchmarks/v10
Y. Wu, J. Lim, and M.-H. Yang.
Object tracking benchmark.
In CVPR, 2013.
KCF: joaofaro/KCFcpp.
DSST: liliumao/KCF-DSST; max_scale_factor and min_scale_factor are set to 10 and 0.1 to avoid divergence errors (observed on the UAV123 dataset when the object is very small, e.g. uav2/3/4...).
GOTURN: davheld/GOTURN.
ECO: martin-danelljan/ECO.