HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering
Abstract
HERBench is a VideoQA benchmark that tests models' ability to integrate multiple, temporally separated visual cues, revealing significant challenges in current Video-LLMs.
Video Large Language Models (Video-LLMs) are rapidly improving, yet current Video Question Answering (VideoQA) benchmarks often allow questions to be answered from a single salient cue, under-testing reasoning that must aggregate multiple pieces of temporally separated visual evidence. We present HERBench, a VideoQA benchmark purpose-built to assess multi-evidence integration across time. Each question requires aggregating at least three non-overlapping evidential cues from distinct video segments, so neither language priors nor a single snapshot suffices. HERBench comprises 26K five-way multiple-choice questions organized into twelve compositional tasks that probe identity binding, cross-entity relations, temporal ordering, co-occurrence verification, and counting. To make evidential demand measurable, we introduce the Minimum Required Frame-Set (MRFS), the smallest number of frames a model must fuse to answer correctly, and show that HERBench imposes substantially higher demand than prior datasets (mean MRFS 5.5 vs. 2.6-4.2). Evaluating 13 state-of-the-art Video-LLMs on HERBench reveals pervasive failures: accuracies of 31-42%, only slightly above the 20% random-guess baseline. We disentangle this failure into two bottlenecks: (1) a retrieval deficit, where frame selectors overlook key evidence, and (2) a fusion deficit, where models fail to integrate information even when all necessary evidence is provided. By making cross-time evidence both unavoidable and quantifiable, HERBench establishes a principled target for advancing robust, compositional video understanding.
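To make the MRFS definition concrete, here is a minimal sketch of one way to estimate it via subset search: try frame subsets of increasing size and report the first size at which the model answers correctly. The `answer` callable, the `candidate_frames` pool, and the budget `max_k` are illustrative assumptions, not the paper's actual estimation protocol.

```python
from itertools import combinations

def estimate_mrfs(candidate_frames, question, correct_choice, answer, max_k=8):
    """Estimate the Minimum Required Frame-Set size for one question.

    candidate_frames: pre-sampled frames from the video (assumed pool).
    answer: hypothetical wrapper around a Video-LLM that takes a list of
            frames plus the question and returns one of the five choice letters.
    Returns the smallest subset size that yields the correct answer, or None
    if no subset within the budget does.
    """
    for k in range(1, min(max_k, len(candidate_frames)) + 1):
        # Exhaustive search over size-k subsets; real protocols would likely
        # subsample, since the number of subsets grows combinatorially.
        for subset in combinations(candidate_frames, k):
            if answer(list(subset), question) == correct_choice:
                return k
    return None
```

Under this reading, a mean MRFS of 5.5 means that on average no subset of fewer than roughly five to six frames suffices, whereas prior benchmarks (mean MRFS 2.6-4.2) are often solvable from far fewer frames.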
Community
Project page: https://herbench.github.io/
arXiv: https://arxiv.org/abs/2512.14870
HF dataset card: https://huggingface.co/datasets/DanBenAmi/HERBench
Code (GitHub): https://github.com/DanBenAmi/HERBench
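For a quick look at the data, the benchmark can presumably be pulled from the Hub with `datasets`; the dataset ID below comes from the card linked above, but the split name and field layout are assumptions, so inspect the card before relying on them.

```python
# Hedged quick-start: load HERBench from the Hugging Face Hub.
# The split name ("test") and field names are assumptions.
from datasets import load_dataset

ds = load_dataset("DanBenAmi/HERBench", split="test")
print(ds[0].keys())  # e.g. question, five answer choices, label, video reference
```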
Looks like a really comprehensive benchmark. Thank you for sharing.