# MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking
The MMOT dataset was presented in the paper MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking.
The official code and further details can be found on the GitHub repository: https://github.com/Annzstbl/MMOT.
## Introduction
MMOT is the first large-scale benchmark for drone-based multispectral multi-object tracking (MOT). It integrates spectral and temporal cues to evaluate modern tracking algorithms under real-world UAV conditions. Drone-based multi-object tracking is essential yet highly challenging due to small targets, severe occlusions, and cluttered backgrounds. MMOT addresses these challenges by providing multispectral cues that enhance object discriminability when spatial detail is degraded.
## ✨ Highlights
- 📦 Large Scale – 125 video sequences, 13.8K frames, and 488.8K annotated oriented boxes across 8 categories
- 🌈 Multispectral Imagery – 8-band MSI covering the visible to near-infrared spectrum
- 📐 Oriented Bounding Boxes (OBB) – precise orientation labels for robust aerial association
- 🚁 Real UAV Scenarios – varying altitudes (80–200 m), illumination, and dense urban traffic
- 🧩 Complete Codebase – integrates 8 representative trackers (SORT, ByteTrack, OC-SORT, BoT-SORT, MOTR, MOTRv2, MeMOTR, MOTIP)
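For a concrete sense of the 8-band imagery, the sketch below loads a single frame with NumPy once the dataset has been downloaded (see the preparation steps below). The channel-last layout and the band indices are assumptions, not an official specification, so verify them against the GitHub repository:

```python
import numpy as np

# Load one multispectral frame (path follows the layout shown below).
frame = np.load("MMOT_DATASET/train/data30-8/000001.npy")
print(frame.shape, frame.dtype)  # expect 8 spectral bands on one axis

# Pick three of the eight bands as a pseudo-RGB composite for quick
# inspection; these band indices are placeholders, not an official mapping.
pseudo_rgb = frame[..., [4, 2, 0]]
```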
## 📸 Example Visualization
Example annotations from MMOT showcasing diverse and challenging scenarios. In these scenes, where spatial features are limited by small object size, clutter, or blur, spectral cues provide critical complementary information for reliable discrimination. Zoom in for better visualization.
## 📂 Dataset Download and Preparation
The MMOT dataset can be obtained from this Hugging Face repository. On Hugging Face, each video sequence is individually packaged into a .tar file to support Croissant file generation.
Each .tar archive contains:
- Multispectral frames in .npy format
- Frame-wise MOT annotations in .txt format (one .txt file per frame)
Example (root folder):

```
MMOT_DATASET
├── train
│   ├── data30-8
│   │   ├── 000001.npy
│   │   ├── 000001.txt
│   │   ├── 000002.npy
│   │   ├── 000002.txt
│   │   └── ...
│   └── ...
└── test
```
Please download all .tar files and place them into the corresponding train/ or test/ folder.
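If you prefer to script the download, here is a minimal sketch using huggingface_hub. The repo id is a placeholder for this repository's actual id, and the patterns assume the .tar shards sit under train/ and test/ in the repository:

```python
from huggingface_hub import snapshot_download

# Fetch only the .tar shards into a local root; repo_id below is
# hypothetical -- replace it with this repository's actual id.
snapshot_download(
    repo_id="<org>/MMOT",
    repo_type="dataset",
    allow_patterns=["train/*.tar", "test/*.tar"],
    local_dir="/path/to/root",
)
```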
Then, you can automatically extract the archives and convert them to the standard MMOT format using:

```bash
python dataset/huggingface_tar_to_standard.py --root /path/to/root
```
This script unpacks all tar files and reorganizes the Hugging Face structure into the standard MMOT format described below.
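For reference, the core extraction step amounts to something like the following sketch; the official script additionally regroups frames and annotations into the npy/ and mot/ layout shown in the next section:

```python
import tarfile
from pathlib import Path

root = Path("/path/to/root")
# Unpack every per-sequence archive in place, under its split folder
# (train/*.tar and test/*.tar).
for tar_path in sorted(root.glob("*/*.tar")):
    with tarfile.open(tar_path) as tf:
        tf.extractall(tar_path.parent)
```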
## 📁 Standard Directory Layout
After processing (from either source), your dataset directory should appear as follows:
```
MMOT_DATASET
├── train
│   ├── npy
│   │   ├── data23-2
│   │   │   ├── 000001.npy
│   │   │   └── 000002.npy
│   │   ├── data23-3
│   │   └── ...
│   └── mot
│       ├── data23-2.txt
│       ├── data23-3.txt
│       └── ...
└── test
    ├── npy
    └── mot
```
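A minimal sketch for iterating this layout, pairing each sequence's frame directory with its per-sequence annotation file (names follow the tree above):

```python
from pathlib import Path

root = Path("MMOT_DATASET")
for split in ("train", "test"):
    # Each subdirectory of npy/ is one video sequence; its annotations
    # live in a same-named .txt file under mot/.
    for seq_dir in sorted((root / split / "npy").iterdir()):
        ann_file = root / split / "mot" / f"{seq_dir.name}.txt"
        n_frames = len(list(seq_dir.glob("*.npy")))
        print(f"{split}/{seq_dir.name}: {n_frames} frames, "
              f"annotations found: {ann_file.exists()}")
```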
Once the dataset has been organized in the above structure, please link it to the main project directory for unified access:

```bash
# Link dataset to project root
ln -s /path/to/MMOT_dataset ./data
```
## ⚖️ License
The MMOT dataset is released under the CC BY-NC-ND 4.0 License. It is intended for academic research only. You must attribute the original source, and you are not allowed to modify or redistribute the dataset without permission.
## 📖 Citation
If you use the MMOT dataset, code, or benchmark results in your research, please cite:
```bibtex
@inproceedings{li2025mmot,
  title     = {MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking},
  author    = {Li, Tianhao and Xu, Tingfa and Wang, Ying and Qin, Haolin and Lin, Xu and Li, Jianan},
  booktitle = {NeurIPS 2025 Datasets and Benchmarks Track},
  year      = {2025},
  url       = {https://arxiv.org/abs/2510.12565}
}
```