Improve dataset card: Add comprehensive content, links, and sample usage
#6
by nielsr (HF Staff) - opened

README.md CHANGED
---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- video-text-to-text
---

# $\mathcal{RTV}\text{-}Bench$: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video
[arXiv](https://arxiv.org/abs/2505.02064) [Hugging Face](https://huggingface.co/datasets/xunsh/RTV-Bench) [ModelScope](https://www.modelscope.cn/datasets/Jungang/RTV-Bench)

[Paper](https://huggingface.co/papers/2505.02064) | [Project Page](https://ljungang.github.io/RTV-Bench) | [Code](https://github.com/ljungang/rtv-bench)
## 🔥 News

*   **`2025-09-20`** 🎉🎉🎉 Our paper has been accepted by NeurIPS 2025; we will update our dataset and code for the community as soon as possible!
*   **`2025-06-27`** 🎉 We updated the core evaluation code.
*   **`2025-05-17`** 🎉 We released the label JSON, named `QA.json`.
*   **`2025-05-04`** 🎉 We released the paper $\mathcal{RTV}\text{-}Bench$: [Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video](https://arxiv.org/abs/2505.02064).
*   **`2025-05-03`** 🌟 We are happy to release $\mathcal{RTV}\text{-}Bench$. You can find it on [Hugging Face](https://huggingface.co/datasets/xunsh/RTV-Bench) or [ModelScope](https://www.modelscope.cn/datasets/Jungang/RTV-Bench).

<p align="center">
    <img src="https://github.com/ljungang/rtv-bench/blob/main/asset/1_examples.png?raw=true" width="100%" height="100%" >
</p>
## TODO

- [x] Release the final label json.
- [x] Release the evaluation code.
- [ ] Construct a more comprehensive benchmark for real-time video analysis.
- [ ] ···
## 👀 $\mathcal{RTV}\text{-}Bench$ Overview

We introduce $\mathcal{RTV}\text{-}Bench$, a fine-grained benchmark for MLLM real-time video analysis, which contains **552** videos (167.2 hours) and **4,631** high-quality QA pairs. We evaluated leading MLLMs, including proprietary (_e.g._ GPT-4o, Gemini 2.0), open-source offline (_e.g._ Qwen2.5-VL, VideoLLaMA3), and open-source real-time (_e.g._ VITA-1.5, InternLM-XComposer2.5-OmniLive) models. Experimental results show that open-source real-time models largely outperform offline ones but still trail top proprietary models. Our analysis also reveals that a larger model size or a higher frame sampling rate does not significantly boost $\mathcal{RTV}\text{-}Bench$ performance, and sometimes causes slight decreases. This underscores the need for better model architectures optimized for video stream processing and long sequences to advance real-time video analysis with MLLMs. $\mathcal{RTV}\text{-}Bench$ is built on three key principles:
*   **Multi-Timestamp Question Answering (MTQA)**, where answers evolve with scene changes (see the sketch after this list);
*   **Hierarchical Question Structure**, combining basic and advanced queries; and
*   **Multi-dimensional Evaluation**, assessing the ability of continuous perception, understanding, and reasoning.
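To make the MTQA setting concrete, here is a hypothetical Python sketch of a multi-timestamp QA record: one question whose correct answer changes as the scene changes. The class and field names are illustrative only and do not reflect the actual `QA.json` schema.

```python
from bisect import bisect_right
from dataclasses import dataclass


@dataclass
class TimestampedAnswer:
    time_s: float  # timestamp (seconds) from which this answer holds
    answer: str    # the correct answer from that point in the stream


@dataclass
class MTQARecord:
    video_id: str
    question: str
    answers: list[TimestampedAnswer]  # sorted by time_s; answers evolve with the scene

    def answer_at(self, t: float) -> str:
        """Return the answer that is correct at playback time t (seconds)."""
        times = [a.time_s for a in self.answers]
        idx = bisect_right(times, t) - 1
        if idx < 0:
            raise ValueError("query time precedes the first annotated answer")
        return self.answers[idx].answer


# The same question has different correct answers at different timestamps.
record = MTQARecord(
    video_id="demo_0001",  # hypothetical id, not from the real dataset
    question="How many people are visible in the frame?",
    answers=[TimestampedAnswer(0.0, "two"), TimestampedAnswer(45.0, "three")],
)
print(record.answer_at(10.0))  # -> two
print(record.answer_at(60.0))  # -> three
```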
**Video Categories and Distribution of Question Difficulty and Query Characteristics.**
<p align="center">
    <img src="https://github.com/ljungang/rtv-bench/blob/main/asset/2_dataset_stati.png?raw=true" width="100%" height="100%" >
</p>

(Left) RTV-Bench covers 3 key domains and 16 sub-class video types.
(Center) Distribution of question difficulty levels across eight representative task types, measured by percentage-based performance ranges.
(Right) Distribution of question queries by video length, categorized into Shallow, Moderate, and Deep levels. Bar heights indicate counts, while the line chart overlays query proportions for each duration bucket.
## 🔖 Evaluation Results

<p align="center">
    <img src="https://github.com/ljungang/rtv-bench/blob/main/asset/3_evaluation.png?raw=true" width="100%" height="100%">
</p>
## 🛠️ Sample Usage (Evaluation)

To evaluate models on RTV-Bench, you can use the provided script:

```shell
bash scripts/eval/eval_model.sh
```
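To inspect the annotations directly, you can pull `QA.json` from the Hugging Face dataset repo. A minimal sketch using `huggingface_hub`, assuming the file sits at the repo root (as the News entry above suggests) and parses as a JSON list:

```python
import json

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Fetch the released annotation file from the dataset repository.
qa_path = hf_hub_download(
    repo_id="xunsh/RTV-Bench",
    filename="QA.json",    # assumed location at the repo root
    repo_type="dataset",
)

with open(qa_path, "r", encoding="utf-8") as f:
    qa_data = json.load(f)  # assumed to be a JSON list of QA entries

print(f"Loaded {len(qa_data)} QA entries")
```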
## 📑 Citation

If you find $\mathcal{RTV}\text{-}Bench$ useful for your research and applications, please cite using this BibTeX:

```bibtex
@article{xun2025rtv,
  title={RTV-Bench: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video},
  author={Xun, Shuhang and Tao, Sicheng and Li, Jungang and Shi, Yibo and Lin, Zhixin and Zhu, Zhanhui and Yan, Yibo and Li, Hanqian and Zhang, Linghao and Wang, Shikang and others},
  journal={arXiv preprint arXiv:2505.02064},
  year={2025}
}
```