
MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking

The MMOT dataset was presented in the paper MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking.

The official code and further details can be found on the GitHub repository: https://github.com/Annzstbl/MMOT.

Introduction

MMOT is the first large-scale benchmark for drone-based multispectral multi-object tracking (MOT). Drone-based MOT is essential yet highly challenging due to small targets, severe occlusions, and cluttered backgrounds. MMOT bridges this gap by integrating spectral and temporal cues, providing multispectral information that enhances object discriminability under degraded spatial conditions and enabling evaluation of modern tracking algorithms under real-world UAV conditions.

✨ Highlights

  • πŸ“¦ Large Scale β€” 125 video sequences, 13.8K frames, 488.8K annotated oriented boxes across 8 categories
  • 🌈 Multispectral Imagery β€” 8-band MSI covering visible to near-infrared spectrum
  • πŸ“ Oriented Bounding Boxes (OBB) β€” precise orientation labels for robust aerial association
  • 🚁 Real UAV Scenarios β€” varying altitudes (80–200 m), illumination, and dense urban traffic
  • 🧩 Complete Codebase β€” integrates 8 representative trackers (SORT, ByteTrack, OC-SORT, BoT-SORT, MOTR, MOTRv2, MeMOTR, MOTIP)

πŸ“Έ Example Visualization

Example annotations from MMOT showcasing diverse and challenging scenarios. In these scenes, where spatial features are limited by small object size, clutter, or blur, spectral cues provide critical complementary information for reliable discrimination. Zoom in for better visualization.

πŸ“‚ Dataset Download and Preparation

The MMOT dataset can be obtained from this Hugging Face repository. On Hugging Face, each video sequence is individually packaged into a .tar file to support Croissant file generation.

Each .tar archive contains:

  • Multispectral frames in .npy format
  • Frame-wise MOT annotations in .txt format (one annotation file per frame)

Example (root folder):

MMOT_DATASET
β”œβ”€β”€ train
β”‚   β”œβ”€β”€ data30-8
β”‚   β”‚   β”œβ”€β”€ 000001.npy
β”‚   β”‚   β”œβ”€β”€ 000001.txt
β”‚   β”‚   β”œβ”€β”€ 000002.npy
β”‚   β”‚   β”œβ”€β”€ 000002.txt
β”‚   β”‚   └── ...
β”‚   └── ...
β”œβ”€β”€ test
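A frame and its annotations can be read directly with NumPy and standard file I/O. The sketch below is illustrative, not part of the official codebase: it assumes each .npy frame is an 8-band array (e.g. shape (H, W, 8)) and that each per-frame .txt holds one comma-separated record per line. The exact column order (track ID, class ID, oriented-box parameters, ...) is not specified here, so verify it against the official MMOT repository.

```python
# Illustrative loader for one MMOT frame and its per-frame annotation file.
# ASSUMPTIONS (check the official repo): frames are 8-band (H, W, 8) arrays,
# and each annotation line is a comma-separated record of numeric fields.
from pathlib import Path
import numpy as np

def load_frame(npy_path):
    """Load an 8-band multispectral frame stored as a .npy array."""
    return np.load(npy_path)

def load_annotations(txt_path):
    """Return one list of floats per non-empty line in a frame's .txt file."""
    records = []
    for line in Path(txt_path).read_text().splitlines():
        line = line.strip()
        if line:
            records.append([float(v) for v in line.split(",")])
    return records
```

For example, `load_frame("train/data30-8/000001.npy")` would return the first frame of sequence data30-8 as a NumPy array.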

Please download all .tar files and place them into the corresponding train/ or test/ folders. Then extract the archives and convert them to the standard MMOT format with:

python dataset/huggingface_tar_to_standard.py --root /path/to/root

This script extracts every tar file and reorganizes the Hugging Face layout into the standard MMOT format.
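Conceptually, the conversion moves each sequence's frames under npy/ and merges its frame-wise annotation files into one per-sequence file under mot/. The sketch below is a minimal stand-in for the official script, not a reproduction of it; it assumes each archive stores its files at the top level, and whether frame indices must be added to the merged annotation lines is an open assumption to verify against dataset/huggingface_tar_to_standard.py.

```python
# Minimal sketch (NOT the official converter) of the tar-to-standard step:
# extract each per-sequence tar into <split>/npy/<seq>/ and concatenate the
# per-frame .txt files, in frame order, into <split>/mot/<seq>.txt.
# ASSUMPTION: archives store .npy/.txt files at the top level.
import tarfile
from pathlib import Path

def tars_to_standard(root):
    root = Path(root)
    for split in ("train", "test"):
        for tar_path in sorted((root / split).glob("*.tar")):
            seq = tar_path.stem                      # e.g. "data30-8"
            npy_dir = root / split / "npy" / seq
            npy_dir.mkdir(parents=True, exist_ok=True)
            with tarfile.open(tar_path) as tf:
                tf.extractall(npy_dir)
            # Merge frame-wise annotations into one per-sequence file.
            mot_dir = root / split / "mot"
            mot_dir.mkdir(parents=True, exist_ok=True)
            lines = []
            for txt in sorted(npy_dir.rglob("*.txt")):
                lines.extend(txt.read_text().splitlines())
                txt.unlink()                         # keep only .npy frames
            (mot_dir / f"{seq}.txt").write_text("\n".join(lines) + "\n")
```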

πŸ“ Standard Directory Layout

After processing (from either source), your dataset directory should appear as follows:

MMOT_DATASET
β”œβ”€β”€ train
β”‚   β”œβ”€β”€ npy
β”‚   β”‚   β”œβ”€β”€ data23-2
β”‚   β”‚   β”‚   β”œβ”€β”€ 000001.npy
β”‚   β”‚   β”‚   └── 000002.npy
β”‚   β”‚   β”œβ”€β”€ data23-3
β”‚   β”‚   └── ...
β”‚   └── mot
β”‚       β”œβ”€β”€ data23-2.txt
β”‚       β”œβ”€β”€ data23-3.txt
β”‚       └── ...  
└── test
    β”œβ”€β”€ npy
    └── mot
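With the layout above in place, each sequence pairs a frame directory under npy/ with a same-named annotation file under mot/. A small helper for walking those pairs, written against exactly the directory names shown above, might look like this (a sketch, not part of the official codebase):

```python
# Sketch: iterate sequences in the standard MMOT layout, pairing each frame
# directory under <split>/npy/ with its annotation file under <split>/mot/.
from pathlib import Path

def iter_sequences(root, split="train"):
    """Yield (sequence_name, sorted .npy frame paths, mot .txt path)."""
    split_dir = Path(root) / split
    for seq_dir in sorted((split_dir / "npy").iterdir()):
        if seq_dir.is_dir():
            frames = sorted(seq_dir.glob("*.npy"))
            mot_file = split_dir / "mot" / f"{seq_dir.name}.txt"
            yield seq_dir.name, frames, mot_file
```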

Once the dataset is organized in the structure above, link it to the main project directory for unified access:

# Link dataset to project root
ln -s /path/to/MMOT_dataset ./data

βš–οΈ License

The MMOT dataset is released under the CC BY-NC-ND 4.0 License and is intended for academic research only. Under this license, you must credit the original source, may not use the dataset for commercial purposes, and may not distribute modified versions of it.

πŸ“– Citation

If you use the MMOT dataset, code, or benchmark results in your research, please cite:

@inproceedings{li2025mmot,
  title     = {MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking},
  author    = {Li, Tianhao and Xu, Tingfa and Wang, Ying and Qin, Haolin and Lin, Xu and Li, Jianan},
  booktitle = {NeurIPS 2025 Datasets and Benchmarks Track},
  year      = {2025},
  url       = {https://arxiv.org/abs/2510.12565}
}