---
dataset_info:
- config_name: all
  features:
  - name: doc_id
    dtype: string
  - name: image
    dtype: image
  - name: dataset_name
    dtype: string
  splits:
  - name: test
    num_bytes: 33863176699.362
    num_examples: 206267
  download_size: 39444733971
  dataset_size: 33863176699.362
- config_name: chartqa
  features:
  - name: doc_id
    dtype: string
  - name: image
    dtype: image
  - name: dataset_name
    dtype: string
  splits:
  - name: test
    num_bytes: 1002602606.782
    num_examples: 20882
  download_size: 955114086
  dataset_size: 1002602606.782
- config_name: default
  features:
  - name: doc_id
    dtype: string
  - name: image
    dtype: image
  - name: dataset_name
    dtype: string
  splits:
  - name: train
    num_bytes: 1957342799.05
    num_examples: 12230
  download_size: 5075559665
  dataset_size: 1957342799.05
- config_name: dude
  features:
  - name: doc_id
    dtype: string
  - name: image
    dtype: image
  - name: dataset_name
    dtype: string
  splits:
  - name: test
    num_bytes: 10850355002.6
    num_examples: 27955
  download_size: 9918891043
  dataset_size: 10850355002.6
- config_name: infovqa
  features:
  - name: doc_id
    dtype: string
  - name: image
    dtype: image
  - name: dataset_name
    dtype: string
  splits:
  - name: test
    num_bytes: 2037249851.38
    num_examples: 5485
  download_size: 1821864668
  dataset_size: 2037249851.38
- config_name: slidevqa
  features:
  - name: doc_id
    dtype: string
  - name: image
    dtype: image
  - name: dataset_name
    dtype: string
  splits:
  - name: test
    num_bytes: 6852775381.32
    num_examples: 52380
  download_size: 6477479204
  dataset_size: 6852775381.32
configs:
- config_name: all
  data_files:
  - split: test
    path: all/test-*
- config_name: chartqa
  data_files:
  - split: test
    path: chartqa/test-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: dude
  data_files:
  - split: test
    path: dude/test-*
- config_name: infovqa
  data_files:
  - split: test
    path: infovqa/test-*
- config_name: slidevqa
  data_files:
  - split: test
    path: slidevqa/test-*
task_categories:
- visual-question-answering
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---

# OpenDocVQA

OpenDocVQA is a unified collection of document visual question answering datasets for open-domain document VQA, introduced in the VDocRAG paper (CVPR 2025; see the citation below).

### Data Instances

An example of a sample looks as follows:

```json
{
  'doc_id': '...',
  'image': <PIL.Image>,
  'dataset_name': 'visualmrc'
}
```

### Data Fields

Each sample has the following fields:

- `doc_id` (string): document ID
- `image` (image): document page image, decoded as a `PIL.Image`
- `dataset_name` (string): name of the source dataset

### Stats about the datasets in OpenDocVQA

| Dataset                  | Document type | # Images | # Train & Dev | # Test |
|--------------------------|---------------|----------|---------------|--------|
| DocVQA                   | Industry      | 12,767   | 6,382         | -      |
| InfoVQA                  | Infographic   | 5,485    | 9,592         | 1,048  |
| VisualMRC                | Webpage       | 10,229   | 6,126         | -      |
| ChartQA                  | Chart         | 20,882   | -             | 150    |
| OpenWikitable            | Table         | 1,257    | 4,261         | -      |
| DUDE                     | Open          | 27,955   | 2,135         | 496    |
| MPMQA                    | Manual        | 10,018   | 3,054         | -      |
| SlideVQA                 | Slide         | 52,380   | -             | 760    |
| MHDocVQA (newly created) | Open          | 28,550   | 9,470         | -      |

## Additional Information

### License

Each publicly available sub-dataset in OpenDocVQA is governed by its own license, so when using them you must comply with the licensing conditions of each source dataset. The images of the VisualMRC and SlideVQA datasets in this repo are released under the [NTT License](https://huggingface.co/NTT-hil-insight/OpenDocVQA/blob/main/LICENSE).

### Citation

```bibtex
@inproceedings{tanaka2025vdocrag,
  author    = {Ryota Tanaka and Taichi Iki and Taku Hasegawa and Kyosuke Nishida and Kuniko Saito and Jun Suzuki},
  title     = {VDocRAG: Retrieval-Augmented Generation over Visually-Rich Documents},
  booktitle = {CVPR},
  year      = {2025}
}
```
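## Loading the dataset

A minimal sketch of loading a single config with the 🤗 `datasets` library. The repo id `NTT-hil-insight/OpenDocVQA` is taken from the license link above; the config and split names come from the YAML header.

```python
from datasets import load_dataset

# Load the ChartQA subset (it only ships a test split; see the header above).
ds = load_dataset("NTT-hil-insight/OpenDocVQA", "chartqa", split="test")

sample = ds[0]
print(sample["doc_id"])           # string document ID
print(sample["dataset_name"])     # source dataset name
sample["image"].save("page.png")  # the image feature decodes to a PIL.Image
```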
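Because every row carries a `dataset_name` field, per-source subsets can be recovered from the combined `all` config by filtering. The value `"visualmrc"` below is the one shown in the Data Instances example; other values are assumed to follow the same lower-cased naming.

```python
from datasets import load_dataset

all_test = load_dataset("NTT-hil-insight/OpenDocVQA", "all", split="test")

# Keep only pages that originate from VisualMRC.
visualmrc = all_test.filter(lambda row: row["dataset_name"] == "visualmrc")
print(len(visualmrc))
```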
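The `all` config weighs roughly 39 GB (see `download_size` in the header), so streaming is a reasonable way to inspect it without a full download:

```python
from datasets import load_dataset

stream = load_dataset(
    "NTT-hil-insight/OpenDocVQA", "all", split="test", streaming=True
)

# Peek at the first three samples without downloading the whole config.
for sample in stream.take(3):
    print(sample["doc_id"], sample["dataset_name"])
```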