
Add Improved Association Pipeline Tracker by Stadler and Beyerer (https://ieeexplore.ieee.org/document/10223159) #1527

Merged
9 commits merged on Jul 22, 2024

Conversation

rolson24
Contributor

This is an implementation of the Improved Association Pipeline Tracker, which is the top open-algorithm tracker (not counting NVIDIA or BrinqTraq) on MOT20, and 3rd on MOT17. It is somewhat similar to BoTSORT, but it uses a single association step in which both the high- and low-score detections are matched with the tracks in one stage, with their cost values scaled so that the high-score detections don't dominate too much. The code is also quite similar to your implementation of BoTSORT, so it should be maintainable. I have not completed a full benchmark evaluation yet, but it's in the works. I have been using this implementation in a different project for quite a while, and it has consistently outperformed ByteTrack and is on par with BoTSORT.
I think it could be a good tracker to add: it is a drop-in replacement compatible with many of the existing modules, and it handles occlusions quite well.
I know this is a lot of code to review, but I hope this PR is helpful.
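The single-stage association described above can be sketched as follows. This is a minimal illustration using SciPy's Hungarian solver; the scaling scheme, function name, and `low_scale` parameter are simplifications for clarity, not the exact ImprAssoc formulation from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def combined_association(cost_high, cost_low, low_scale=0.5, max_cost=0.65):
    """One-stage association of tracks with high- and low-score detections.

    cost_high: (n_tracks, n_high) cost matrix for high-score detections.
    cost_low:  (n_tracks, n_low) cost matrix for low-score detections.
    Low-score costs are shifted upward so they only win an assignment
    when no high-score detection matches a track well.
    """
    cost = np.hstack([cost_high, cost_low + low_scale])
    rows, cols = linear_sum_assignment(cost)
    # Gate assignments by a maximum allowed cost (cf. d_max in the paper).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```

Columns with index `>= cost_high.shape[1]` correspond to low-score detections, so the caller can still tell the two detection sets apart after the single assignment pass.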

@mikel-brostrom
Owner

mikel-brostrom commented Jul 20, 2024

Sorry for my late response. I am on vacation 😄 . I ran a sanity check on the tracker, obtaining these results:

HOTA: -pedestrian                  HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA      OWTA      HOTA(0)   LocA(0)   HOTALocA(0)
MOT17-02-FRCNN                     57.264    47.679    70.949    51.734    78.494    71.864    95.946    85.552    59.965    66.918    81.416    54.482    
MOT17-04-FRCNN                     71.629    63.261    83.381    71.35     78.345    84.363    96.415    85.717    76.545    85.978    79.866    68.667    
COMBINED                           68.979    60.104    81.427    67.279    78.369    82.401    96.346    85.685    73.42     82.39     80.11     66.003    

CLEAR: -pedestrian                 MOTA      MOTP      MODA      CLR_Re    CLR_Pr    MTR       PTR       MLR       sMOTA     CLR_TP    CLR_FN    CLR_FP    IDSW      MT        PT        ML        Frag      
MOT17-02-FRCNN                     52.273    84.178    52.273    59.091    89.655    0         81.818    18.182    42.923    52        36        6         0         0         18        4         0         
MOT17-04-FRCNN                     67.262    84.75     67.262    79.167    86.928    90.476    0         9.5238    55.189    266       70        40        0         38        0         4         0         
COMBINED                           64.151    84.656    64.151    75        87.363    59.375    28.125    12.5      52.643    318       106       46        0         38        18        8         0         

Identity: -pedestrian              IDF1      IDR       IDP       IDTP      IDFN      IDFP      
MOT17-02-FRCNN                     71.233    59.091    89.655    52        36        6         
MOT17-04-FRCNN                     82.866    79.167    86.928    266       70        40        
COMBINED                           80.711    75        87.363    318       106       46        

Count: -pedestrian                 Dets      GT_Dets   IDs       GT_IDs    
MOT17-02-FRCNN                     58        88        20        22        
MOT17-04-FRCNN                     306       336       46        42        
COMBINED                           364       424       66        64        

{'HOTA': 68.979, 'MOTA': 64.151, 'IDF1': 80.711}

Which are way lower than all the implemented trackers. Not sure if something in the implementation is wrong, and I don't have time for a deep dive at the moment. Compare imprassoc to the existing trackers by running the commands from the README:

python tracking/generate_dets_n_embs.py --source ./assets/MOT17-mini/train --yolo-model yolox_x.pt --reid-model weights/osnet_x0_25_msmt17.pt
python tracking/generate_mot_results.py --dets yolox_x --embs osnet_x0_25_msmt17 --tracking-method imprassoc
python tracking/val.py --benchmark MOT17-mini --dets yolox_x --embs osnet_x0_25_msmt17 --tracking-method imprassoc

This will help you in the process of fixing possible bugs 🚀

@rolson24
Contributor Author

Thanks for the feedback!
I tried to do some hyperparameter optimization using the evolve.py script, but I was using yolov8n.pt and the detection results were not very good, so I'm guessing that was not helping the hyperparameter optimization. I will try to run the evolve.py script on the better detections and see if that gets me anywhere.
Also, I am wondering whether you used the full MOT17 train set for the other trackers' hyperparameter optimization, or just MOT17-mini.

@mikel-brostrom
Owner

The other trackers have the default parameters from their respective repo.

@rolson24
Contributor Author

I ran the other trackers on MOT17-mini and they got the following results:

OCSORT: {'HOTA': 69.078, 'MOTA': 64.387, 'IDF1': 80.813}
DeepOCSORT: {'HOTA': 69.078, 'MOTA': 64.387, 'IDF1': 80.813}
HybridSORT: {'HOTA': 69.056, 'MOTA': 64.623, 'IDF1': 80.964}
ByteTrack: {'HOTA': 68.844, 'MOTA': 64.858, 'IDF1': 81.067}
BoTSORT: {'HOTA': 69.055, 'MOTA': 64.858, 'IDF1': 81.067}
ImprAssoc: {'HOTA': 68.803, 'MOTA': 64.151, 'IDF1': 80.362}

It seems to be in line with the other trackers on MOT17-mini. Maybe we need to do the full MOT17 eval to reproduce those numbers.

@mikel-brostrom
Owner

Sorry, my bad. I only evaluated the first two sequences. In the README they are evaluated on all.

@mikel-brostrom
Owner

mikel-brostrom commented Jul 22, 2024

Some notes regarding parameters:

Maybe I am missing something, but please check the default parameters as well and set those for the tracker in impr_assoc_tracker.py. Otherwise, good job with this implementation 🚀. Good to see new SOTA trackers getting into this repo.

@mikel-brostrom
Owner

mikel-brostrom commented Jul 22, 2024

Sorry, my bad. I only evaluated the first two sequences. In the README they are evaluated on all.

I will set up a benchmarking job for sanity checking all the trackers in the CI pipeline 🚀

@rolson24
Contributor Author

Awesome!

For context I am using the match_thresh as the maximum cost value for the linear assignment problem, and I am using track_high_thresh to sort the low and high detections. In the paper it says:

"The detections are separated into $D^{h}$ and $D^{l}$ with $S_{track}$ = 0.6 and a threshold of $S_{init}$ = 0.7 is applied for initialization of tracks in the ablative experiments. For OAI, the maximum overlap $o_{max}$ is empirically set to 0.55. The best configuration of CD from Equation (4) uses DIoU for $S_{MOT}$, λ = 0.2, $o_{min}$ = 0.1, and a maximum distance $d_{max}^{h}$ = 0.65. For calculation of $d_{APP}$ the size of the feature bank is $n_{feat}$ = 15"

match_thresh is $d_{max}^{h}$ = 0.65 and track_high_thresh is $S_{track}$ = 0.6.

I will clean up the code a bit and remove the lambda parameter. Excited to have this added!
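The parameter mapping quoted above can be collected in one place. Only `match_thresh` and `track_high_thresh` are confirmed names from this discussion; the remaining keys are illustrative guesses, not the actual argument names in impr_assoc_tracker.py:

```python
# Hypothetical mapping of the paper's ablation defaults onto tracker
# parameters. Key names other than match_thresh and track_high_thresh
# are assumptions for illustration only.
PAPER_DEFAULTS = {
    "track_high_thresh": 0.60,  # S_track: high/low detection split
    "new_track_thresh": 0.70,   # S_init: track initialization threshold
    "match_thresh": 0.65,       # d_max^h: max cost in the linear assignment
    "overlap_thresh": 0.55,     # o_max: OAI maximum overlap
    "lambda_": 0.20,            # λ weighting in Equation (4)
    "min_overlap": 0.10,        # o_min in Equation (4)
    "feat_history": 15,         # n_feat: appearance feature bank size
}
```

Keeping these in a single dict (or the repo's per-tracker config) makes it easy to diff the defaults against the paper when debugging metric gaps like the one above.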

@mikel-brostrom
Owner

mikel-brostrom commented Jul 22, 2024

Some tests seem to be failing... Aaand please pull the latest from master so that it can be benchmarked against the rest of the trackers on a very small subset of MOT17 🚀. After pulling the latest you can add the tracker's name here:

TRACKERS: "ocsort bytetrack botsort hybridsort deepocsort"

@rolson24
Contributor Author

OK, it looks like the tests are still failing. I took a look at the tests, and they input the following detections into the tracker:

https://github.com/mikel-brostrom/boxmot/blob/7785235f3e358feb6660e38a2d693094351ecf8d/tests/unit/test_trackers.py#L50C1-L51C53

This fails because the confidence threshold for initializing a new track in ImprAssoc is 0.7 by default, but one of the detections has a confidence of 0.54.

Should I lower the confidence threshold or should we change the tests?
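For context, the gate that causes this failure is simple to illustrate. This is a hypothetical helper, not the repo's actual API:

```python
def init_new_tracks(unmatched_confs, new_track_thresh=0.7):
    """Keep only unmatched detections confident enough to seed a new
    track (illustrative sketch of the initialization gate)."""
    return [c for c in unmatched_confs if c >= new_track_thresh]

# A detection with confidence 0.54 never initializes a track under the
# default 0.7 threshold, so a test expecting two tracks out sees one.
```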

@mikel-brostrom
Owner

mikel-brostrom commented Jul 22, 2024

This fails because the confidence threshold for initializing a new track in ImprAssoc is 0.7 by default, but one of the detections has a confidence of 0.54

You can just increase the confidences of those dets to > 0.7. If two dets go in and two are expected out, it makes sense to raise those confidences so that no tracker deems them irrelevant.

@mikel-brostrom
Owner

mikel-brostrom commented Jul 22, 2024

Good enough for now 😄. Merging 🚀

Format      Status❔  HOTA    MOTA    IDF1
Ocsort      ✅        21.684  6.6038  13.158
Bytetrack   ✅        21.717  6.6038  13.158
Botsort     ✅        21.61   6.6038  13.158
Hybridsort  ✅        21.684  6.6038  13.158
Deepocsort  ✅        21.684  6.6038  13.158
Imprassoc   ✅        21.516  6.6038  13.158

I had to lower a few parameters to achieve similar results as the rest of the trackers:

new_track_thresh: 0.5
track_high_thresh: 0.5

@mikel-brostrom mikel-brostrom merged commit 06c72ee into mikel-brostrom:master Jul 22, 2024
12 checks passed
@mikel-brostrom
Owner

mikel-brostrom commented Jul 22, 2024

Thank you for contributing to this repo @rolson24! It may take me a few days to take a look at your other PR though 🔥

@rolson24
Contributor Author

Sounds good!

In the meantime, what is the best way to do a full eval? Should I just use this but with MOT17 instead of MOT17-mini?

# saves dets and embs under ./runs/dets_n_embs separately for each selected yolo and reid model
$ python tracking/generate_dets_n_embs.py --source ./assets/MOT17-mini/train --yolo-model yolov8n.pt yolov8s.pt --reid-model weights/osnet_x0_25_msmt17.pt
# generate MOT challenge format results based on pregenerated detections and embeddings for a specific tracking method
$ python tracking/generate_mot_results.py --dets yolov8n --embs osnet_x0_25_msmt17 --tracking-method botsort
# uses TrackEval to generate MOT metrics for the tracking results under ./runs/mot/<dets+embs+tracking-method>
$ python tracking/val.py --benchmark MOT17-mini --dets yolov8n --embs osnet_x0_25_msmt17 --tracking-method botsort

@mikel-brostrom
Owner

For best people detection model and best lightweight ReID model run:

python tracking/generate_dets_n_embs.py --source ./assets/MOT17/train --yolo-model yolox_x.pt --reid-model osnet_x1_0_dukemtmcreid.pt

With this generated you can then just feed these pre-generated detections and embeddings to the trackers of your choice

python tracking/generate_mot_results.py --dets yolox_x --embs osnet_x1_0_dukemtmcreid --tracking-method botsort

Finally, generate the MOT metrics using trackeval by:

python tracking/val.py --benchmark MOT17 --dets yolox_x --embs osnet_x1_0_dukemtmcreid --tracking-method botsort

@mikel-brostrom
Owner

FYI, the evaluation process has now been simplified into a single file

$ python3 tracking/val.py --benchmark MOT17-mini --yolo-model yolov8n.pt --reid-model osnet_x0_25_msmt17.pt --tracking-method deepocsort --verbose --source ./assets/MOT17-mini/train

It is as fast as always 😄
