The Visual Object Tracking and Segmentation challenge VOTS2024 is a continuation of the VOTS2023 challenge and, like its predecessor, no longer distinguishes between single- and multi-target tracking, nor between short- and long-term tracking. It requires tracking one or more targets simultaneously by segmentation over short or long sequences, during which the targets may disappear and later reappear in the video.
VOTS adopts a general problem formulation that covers single/multiple-target and short/long-term tracking as special cases. The tracker is initialized in the first frame with segmentation masks for all tracked targets. In each subsequent frame, the tracker has to report a segmentation mask for each target. The following figure summarizes the tracking task.
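To make the task concrete, here is a rough sketch of how a sequence is processed from the tracker's point of view. The driver function and the tracker interface below are hypothetical illustrations, not the actual toolkit API:

# Sketch of the evaluation protocol from the tracker's point of view
# (hypothetical driver and tracker interface, not the toolkit API).
# Masks are binary arrays of the frame size; an all-zero mask reports
# the target as absent in that frame.
def run_sequence(tracker, frames, init_masks):
    tracker.initialize(frames[0], init_masks)   # one mask per target, first frame
    predictions = []
    for frame in frames[1:]:
        masks = tracker.track(frame)            # one mask per target, every frame
        assert len(masks) == len(init_masks)    # order must match initialization
        predictions.append(masks)
    return predictions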
Researchers are invited to participate in two challenges: VOTS2024 and VOTSt2024. The difference between the two is that VOTSt2024 considers objects undergoing a topological transformation, such as vegetables being cut into pieces or machines being disassembled.
The VOTS2024 challenge is sponsored by the Faculty of Computer and Information Science, University of Ljubljana; the Academic and Research Network of Slovenia (ARNES); the University of Birmingham; and the Wallenberg AI, Autonomous Systems and Software Program (WASP).
As the VOT is primarily rooted in the EU, some members are restricted by law from collaborating with institutions from certain countries, such as the Russian Federation. Consequently, the VOT cannot process submissions from authors affiliated with such institutions. In these cases, the authors should consider declaring affiliations with internationally recognized professional organizations, such as IEEE, ACM, CVF or ORCID, instead. If you are uncertain about the eligibility of your institution, please contact our affiliation representative.
Follow this and this for how to create your submission. Do not forget to pack the results with the vot pack command.
Make sure that the tracker identifier in the manifest.yml (by default located inside the output zip file) matches the tracker short name you registered through our Google Form.
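If you want to double-check the identifier before uploading, a small script along these lines can print the manifest from the packed archive. This is only a convenience sketch: the zip file name is a placeholder and the exact location of manifest.yml inside the archive may differ.

import zipfile
import yaml  # PyYAML, used only to load and print the manifest

SUBMISSION_ZIP = "results.zip"  # placeholder: path to your packed submission

with zipfile.ZipFile(SUBMISSION_ZIP) as archive:
    # Locate manifest.yml anywhere in the archive and load it.
    manifest_name = next(n for n in archive.namelist() if n.endswith("manifest.yml"))
    with archive.open(manifest_name) as fh:
        manifest = yaml.safe_load(fh)

# Compare the identifier printed here with the short name from the Google Form.
print(manifest)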
Then submit your zip file in the Participate tab. Note that uploading the zip file can take a long time, as the file may be large, and some private networks (e.g., company Wi-Fi) may not allow uploading files to the challenge page.
For each submission, the evaluation runs for roughly 45 minutes to 1 hour and 30 minutes; depending on the server load, it can take even longer. To avoid bottlenecking the server, try to submit early, especially when the deadline is close.
Does the number of targets change during tracking?
All targets in the sequence are specified in the first frame. During tracking, some targets may disappear and reappear later. The number of targets differs from sequence to sequence.
Can I participate with a single-target tracker?
Sure, with a slight adjustment. You will write a wrapper that creates several independent tracker instances, each tracking one of the targets. To the toolkit, your tracker will be a multi-target tracker, while internally, you’re running independent trackers. See the examples here.
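For illustration only, the wrapper pattern can look roughly like this; SingleTargetTracker is a hypothetical stand-in for your existing tracker, and only its init/track interface matters for the sketch:

class SingleTargetTracker:
    # Hypothetical stand-in for your existing single-target tracker.
    def init(self, frame, mask):
        self.mask = mask              # placeholder internal state

    def track(self, frame):
        return self.mask              # placeholder per-frame prediction

class MultiTargetWrapper:
    # One independent single-target tracker instance per initial mask.
    def initialize(self, first_frame, init_masks):
        self.trackers = []
        for mask in init_masks:
            tracker = SingleTargetTracker()
            tracker.init(first_frame, mask)
            self.trackers.append(tracker)

    def track(self, frame):
        # Report one mask per target, in the same order as at initialization.
        return [tracker.track(frame) for tracker in self.trackers]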
Can I participate with a bounding box tracker?
Sure, with a slight extension. In previous VOT challenges we showed that box trackers achieve very good performance on segmentation tasks by running a general segmentation model on top of the predicted bounding box. So you can simply run AlphaRef (or a similar box-to-mask refinement module, such as SAM) on top of your estimated bounding box to create the per-target segmentation mask. Running a vanilla bounding box tracker is possible, but its accuracy will be low (robustness might still be high).
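As a minimal sketch of this idea (not the official baseline), here is one way to turn a tracked box into a mask with the segment-anything package; the model type and checkpoint path are placeholders:

import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load SAM once; "vit_b" and the checkpoint path are placeholders.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

def box_to_mask(frame_rgb, box_xyxy):
    # frame_rgb: HxWx3 uint8 RGB image; box_xyxy: np.array([x0, y0, x1, y1]).
    predictor.set_image(frame_rgb)
    masks, scores, _ = predictor.predict(box=box_xyxy, multimask_output=False)
    return masks[0].astype(np.uint8)  # binary HxW mask for this target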
Which datasets can I use for training?
Validation and test splits of popular tracking datasets are NOT allowed for training the model. These include: OTB, VOT, ALOV, UAV123, NUSPRO, TempleColor, AVisT, LaSOT-val, GOT10k-val, GOT10k-test, TrackingNet-val/test, TOTB. Other than the above, training splits of any dataset are allowed (including LaSOT-train, TrackingNet-train, YouTubeVOS, COCO, etc.). For transparent objects, the Trans2k dataset may be used. In case private training sets are used, we strongly encourage making them publicly available for results reproduction.
Which performance measures are you using?
The VOTS2023 performance measures are used in both the VOTS2024 and VOTSt2024 challenges; see the VOTS2023 results paper.
When will my results be publicly available?
The results for a registered tracker are revealed to the participant via email approximately 30 minutes after submission. Following many requests, we have decided to also reveal all results in the week after the challenge closes. The leaderboard data will also contain tracker registration details (without participants' personal details, the long tracker description and the source-code password). Note that a public link to the source code is mandatory for results-paper coauthorship, but it can be kept under password (revealed only to the VOTS committee) until the VOTS workshop.
Why is the analysis computed by the toolkit empty?
The VOTS2024 and VOTSt2024 evaluation datasets contain annotations for the initialization frame only, which means that the analysis cannot be computed locally by the toolkit. Thus, the results should be submitted to the server, where the analysis is computed and then reported to the user via email.
If I submit several times to the evaluation server, which submission will be used for the final score?
The final submission will be used for the final score. Please make sure that the tracker description matches the code that produced the final submission.
Will the evaluation server remain open after the VOTS2024 deadline?
After the challenge deadline, the VOTS2024 and VOTSt2024 challenges become the VOTS2024 and VOTSt2024 benchmarks, and the evaluation server will remain open. In fact, the VOTS2023 challenge results will be added to the VOTS2024 results. The results submission link on the challenge page will change to enable post-challenge submissions not included in the VOTS2024 results paper. However, all benchmark and challenge submissions will appear on the same leaderboard.
Questions regarding the VOTSt2024 challenge should be directed to the VOTS2024 committee. If you have general technical questions regarding the VOT toolkit, consult the FAQ page and the VOT support forum first. Stay tuned for the latest VOT updates: follow us on Twitter.
Start: March 1, 2024, midnight
Description: Submission for the VOTSt2024 Challenge. All submissions in this phase will receive results privately through email. Please note that to become coauthors of the results paper, your final submission needs to beat the AOT baseline, with Q: 0.49 (Acc: 0.49; Rob: 0.72; NRE: 0.13; DRE: 0.15; ADQ: 0.40).
June 23, 2024, 10 p.m.