VOTST Benchmark

Organized by tktung

Current phase: Single Phase (started July 8, 2024, midnight UTC)

Competition ends: May 14, 2025, midnight UTC

VOTST Benchmark

VOTST Benchmark is a continuation of a new sub-challenge introduced in VOTS2024 that considers general objects undergoing a topological transformation, such as vegetables cut into pieces or machines disassembled.

Visual Object Tracking and Segmentation challenge VOTS2024 is a continuation of the VOTS2023 challenge, which no longer distinguishes between single- and multi-target tracking nor between short- and long-term tracking. It requires tracking one or more targets simultaneously by segmentation over long or short sequences, while targets may disappear during tracking and reappear later in the video.

Problem statement

VOTS adopts a general problem formulation that covers single/multiple-target and short/long-term tracking as special cases. The tracker is initialized in the first frame by segmentation masks for all tracked targets. In each subsequent frame, the tracker has to report all segmentation masks (one for each target). The following figure summarizes the tracking task.

(Figure: overview of the VOTS tracking task)

Challenges

Researchers are invited to participate in two challenges: VOTS2024 and VOTSt2024. The difference between the two challenges is that VOTSt2024 considers objects undergoing a topological transformation, such as vegetables cut into pieces or machines disassembled.

Sponsors

The VOTS2024 challenge is sponsored by the Faculty of Computer and Information Science, University of Ljubljana; the Academic and Research Network of Slovenia (ARNES); the University of Birmingham; and the Wallenberg AI, Autonomous Systems and Software Program (WASP).

Participation steps

  • Follow the guidelines to integrate your tracker with the VOT toolkit and run the experiments.
  • Register your tracker on the VOTSt2024 challenge registration page, fill out the tracker description questionnaire and submit the tracker description documents: a short description for the results paper and a longer description.
  • Once registered, submit the output produced by the toolkit (see tutorial) to the VOTSt2024 challenge evaluation server (this current page). Do not forget to pack the results with the vot pack command.
  • Receive performance scores via email.
  • See Additional clarifications and FAQ below for further details.


Result submission

Follow this and this for how to create your submission. Do not forget to pack the results with the vot pack command.

Make sure that the tracker identifier in the manifest.yml (by default inside the output zip file) matches the tracker short name you registered through our Google Form.
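A mismatch between the manifest identifier and the registered short name is easy to catch before uploading. The sketch below reads manifest.yml out of the zip and compares identifiers; the file names ("results.zip", "MyTracker") and the `tracker:` key are assumptions for illustration — check your own manifest.yml for the exact key your toolkit version writes.

```python
# Sketch: verify that the tracker identifier inside the results zip matches
# the short name you registered. A minimal line-based parse avoids a YAML
# dependency; it assumes a top-level "tracker: <name>" line in manifest.yml.
import io
import zipfile

def manifest_identifier(zip_bytes: bytes) -> str:
    """Extract the assumed 'tracker' identifier from manifest.yml in the zip."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        manifest = zf.read("manifest.yml").decode("utf-8")
    for line in manifest.splitlines():
        if line.startswith("tracker:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'tracker:' entry found in manifest.yml")

# Demonstration with an in-memory zip standing in for the toolkit output.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifest.yml", "tracker: MyTracker\n")
assert manifest_identifier(buf.getvalue()) == "MyTracker"
```

In practice you would read your real archive with `open("results.zip", "rb").read()` and compare the result against the short name from the registration form.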

Then submit your zip file in the Participate tab. Note that uploading the zip file can take a long time, as the file may be large, and some private networks (e.g., company Wi-Fi) may not allow uploads to the challenge page.

For each submission, the evaluation will run for roughly 45 to 90 minutes; it can take even longer depending on server load. To avoid bottlenecking the server, try to submit early, especially when the deadline is close.

Additional clarifications

  • The short tracker description should contain a concise description (LaTeX format) for the VOT results paper appendix (see examples in the appendix of a VOT results paper). The longer description will be used by the VOTS TC for result interpretation. Write the descriptions in advance to speed up the submission process.
  • Results for a single registered tracker may be submitted to the evaluation server at most 10 times, each at least 24 h apart, to mitigate overfitting attempts. For submissions beyond the limit, an email with the subject “Maximum number of VOTS submissions reached” will be sent to avoid confusion about the situation. Registering a slightly modified tracker to increase the number of server evaluations is prohibited. The VOTS committee reserves the discretion to disqualify trackers that violate this constraint. If in doubt whether a modification is “slight”, contact the VOTS committee.
  • Submissions resulting in evaluation error do not count into the limit on max submissions.
  • Only the results of the last submission will be taken into account; previous ones are deleted. Make sure that the tracker code you link reproduces the last submission's results.
  • When coauthoring multiple submissions with a similar design, the longer description should refer to the other submissions and clearly expose the differences. If in doubt whether a change is sufficient, contact the organisers.
  • The participant can update information about the tracker (name, description, etc.) anytime before the challenge closes.
  • Only a single eu.aihub.ml account is allowed per tracker.
  • Authors are encouraged to submit their own previously published or unpublished trackers.
  • Authors may submit modified versions of third-party trackers. The submission description should clearly state what the changes were. Third-party trackers submitted without significant modification will not be accepted.
  • The VOTS2024 challenge winner is required to publicly release the pretrained tracker and the source code. In case private training sets are used, the authors are strongly encouraged to make the dataset publicly available to foster results reproducibility.

Tracker registration checklist (prepare in advance)

  • Make sure you selected the correct link depending on whether you’re submitting to the VOTS2024 or the VOTSt2024 challenge.
  • Authors, affiliations + emails, and division of work.
  • Make sure that the tracker identifier in the manifest.yml (inside the results output zip file) matches the tracker short name you registered (in the registration form).
  • Short tracker description for the results paper appendix. See examples in the VOT2022 results paper. (~800 characters with spaces when compiled, which is ~1500 characters of LaTeX text without the bibtex file)
  • Long tracker description (should detail the main ideas).
  • Bibtex file for the long and short tracker description.
  • A link to the tracker code placed in a persistent repository (GitHub, Dropbox, Google Drive, …). If the link is not yet publicly accessible, provide a password. Note that to become a co-author of the results paper, the tracker has to be publicly accessible by the VOTS2024 workshop date.

FAQ

  • Does the number of targets change during tracking?

    All targets in the sequence are specified in the first frame. During tracking, some targets may disappear and reappear later. The number of targets differs from sequence to sequence.

  • Can I participate with a single-target tracker?

    Sure, with a slight adjustment. You will write a wrapper that creates several independent tracker instances, each tracking one of the targets. To the toolkit, your tracker will be a multi-target tracker, while internally, you’re running independent trackers. See the examples here.
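The fan-out idea above can be sketched without any toolkit dependency. `SingleTargetTracker` below is a placeholder for your own tracker class, and the actual VOT toolkit integration (the `vot` Python module) is deliberately omitted — this only shows the wrapper structure, one independent instance per target.

```python
# Sketch of the wrapper: run one independent single-target tracker instance
# per object, so the toolkit sees a single multi-target tracker.

class SingleTargetTracker:
    """Placeholder single-target tracker: remembers and returns its mask."""
    def __init__(self, image, mask):
        self.mask = mask

    def track(self, image):
        return self.mask  # a real tracker would update the mask here

class MultiTargetWrapper:
    """Fans the multi-target protocol out to independent instances."""
    def __init__(self, image, masks):
        # One tracker per initialization mask from the first frame.
        self.trackers = [SingleTargetTracker(image, m) for m in masks]

    def track(self, image):
        # Report one mask per target, in the initialization order.
        return [t.track(image) for t in self.trackers]

wrapper = MultiTargetWrapper(image=None, masks=["mask_a", "mask_b"])
print(wrapper.track(None))  # one prediction per target
```

In a real integration, `__init__` would receive the first frame and all initialization masks from the toolkit handle, and `track` would be called once per subsequent frame.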

  • Can I participate with a bounding box tracker?

    Sure, with a slight extension. In previous VOT challenges we showed that box trackers achieve very good performance on segmentation tasks by running a general segmentation on top of a bounding box. So you can simply run AlphaRef (or a similar box refinement module like SAM) on top of your estimated bounding box to create the per-target segmentation mask. Running a vanilla bounding box tracker is possible, but its accuracy will be low (robustness might still be high).
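The "vanilla" fallback mentioned above amounts to reporting a rectangular mask for each estimated box. A minimal sketch of that rasterization, in pure Python with no refinement module, could look like this (the `(x, y, w, h)` box convention is an assumption; AlphaRef/SAM would replace this function with a learned refinement):

```python
# Minimal fallback sketch: turn an axis-aligned bounding box into a
# rectangular binary mask, i.e. what a vanilla box tracker would report
# without a refinement module such as AlphaRef or SAM.

def box_to_mask(box, height, width):
    """Rasterize a box given as (x, y, w, h) into a 0/1 mask of the frame size."""
    x, y, w, h = box
    return [
        [1 if x <= c < x + w and y <= r < y + h else 0 for c in range(width)]
        for r in range(height)
    ]

# A 2x2 box at (x=1, y=0) inside a 3x4 frame.
mask = box_to_mask((1, 0, 2, 2), height=3, width=4)
```

A refinement module would take the image crop around the box and produce a tighter, object-shaped mask instead of this rectangle.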

  • Which datasets can I use for training?

    Validation and test splits of popular tracking datasets are NOT allowed for training the model. These include: OTB, VOT, ALOV, UAV123, NUSPRO, TempleColor, AVisT, LaSOT-val, GOT10k-val, GOT10k-test, TrackingNet-val/test, TOTB. Apart from the above, training splits of any dataset are allowed (including LaSOT-train, TrackingNet-train, YouTubeVOS, COCO, etc.). For transparent objects, use of the Trans2k dataset is allowed. In case private training sets are used, we strongly encourage making them publicly available for results reproduction.

  • Which performance measures are you using?

    The VOTS2023 performance measures are used in both VOTS2024 and VOTSt2024 challenges, see the VOTS2023 results paper.

  • When will my results be publicly available?

    The results for a registered tracker are revealed to the participant via email approximately 30 minutes after submission. In response to many requests, we decided to also reveal all results in the week after the challenge closes. The leaderboard data will also contain tracker registration details (without participants' personal details, the long tracker description, and the source code password). Note that a public link to the source code is mandatory for results paper coauthorship, but it can be kept under password (revealed only to the VOTS committee) until the VOTS workshop.

  • Why is the analysis computed with the toolkit empty?

    The VOTS2024 and VOTSt2024 evaluation datasets contain annotations for the initialization frame only, which means that the analysis cannot be computed locally by the toolkit. Thus, the results should be submitted to the server, where the analysis is computed and then reported to the user via email.

  • If I submit several times to the evaluation server, which submission will be used for the final score?

    The final submission will be used for the final score. Please make sure that the tracker description matches the code that produced the final submission.

  • Will the evaluation server remain open after the VOTS2024 deadline?

    After the challenge deadline, the VOTS2024 and VOTSt2024 challenges become the VOTS2024 and VOTSt2024 benchmarks, and the evaluation server will remain open. In fact, the VOTS2023 challenge results will be added to the VOTS2024 results. The results submission link on the challenge page will change to enable post-challenge submissions not included in the VOTS2024 results paper. However, all benchmark and challenge submissions will appear on the same leaderboard.

More questions?

Questions regarding the VOTSt2024 challenge should be directed to the VOTS2024 committee. If you have general technical questions regarding the VOT toolkit, consult the FAQ page and the VOT support forum first. Stay tuned with the latest VOT updates: Follow us on Twitter.

Leaderboard

# Username Score
1 civa_lab 0.54
2 hello_world 0.54
3 VOTST2024_RMemAOT 0.54