nielsr (HF Staff) committed · Commit cccc7c7 · verified · 1 Parent(s): f5b0d79

Add comprehensive dataset card for UniFilter synthetic training data

This PR adds comprehensive information to the dataset card for `unifilter_train_data`, including:
- A descriptive introduction to the dataset's purpose and identity.
- Links to the paper ([`2510.15162`](https://huggingface.co/papers/2510.15162)), project page, and GitHub repository.
- The paper abstract for full context.
- Relevant metadata such as `task_categories` (`image-text-to-text`), `language` (`en`), and `tags` (`multimodal`, `data-quality`, `synthetic-data`).
- Installation and UniFilter training instructions as "Sample Usage" to illustrate how this dataset is consumed by the associated project.
- Citation and Acknowledgment sections.

This ensures better discoverability and understanding of the dataset.

Files changed (1):

README.md ADDED (+73 -0)
 
---
language:
- en
task_categories:
- image-text-to-text
tags:
- multimodal
- data-quality
- synthetic-data
---

# UniFilter Synthetic Training Data (`unifilter_train_data`)

This repository contains `UniFilter-Post-Train-Data`, the synthetic training data used for the UniFilter model, as presented in the paper [Train a Unified Multimodal Data Quality Classifier with Synthetic Data](https://huggingface.co/papers/2510.15162).

UniFilter is an efficient Multimodal Large Language Model (MLLM) designed as a unified multimodal data quality classifier: it generates quality scores that are used to filter high-quality image-text caption and interleaved document data. This dataset is used to train UniFilter; MLLMs pre-trained on the filtered data show significantly stronger zero-shot reasoning and in-context learning capabilities.

* **Project Page**: [https://victorwz.github.io/UniFilter](https://victorwz.github.io/UniFilter)
* **Code**: [https://github.com/Victorwz/UniFilter](https://github.com/Victorwz/UniFilter)

## Abstract
Multimodal Large Language Models (MLLMs) are continually pre-trained on a mixture of image-text caption data and interleaved document data, while high-quality data filtering for image-text interleaved document data remains under-explored. We propose to train an efficient MLLM as a Unified Multimodal Data Quality Classifier to filter both high-quality image-text caption and interleaved data (UniFilter). To address the challenge of collecting diverse labeled multimodal data, we introduce a semi-synthetic approach that leverages readily available raw images and generates corresponding text across four quality levels. This method enables efficient creation of sample-score pairs for both caption and interleaved document data to train UniFilter. We apply UniFilter to curate high-quality caption data from the DataComp caption dataset and interleaved data from the OBELICS image-text interleaved dataset. MLLMs pre-trained on the filtered data demonstrate significantly enhanced capabilities compared to those trained on baseline-filtered data, achieving stronger zero-shot reasoning and in-context learning capabilities. After visual supervised fine-tuning, these UniFilter-induced MLLMs achieve stronger performance on various benchmarks, highlighting the downstream benefits of high-quality multimodal pre-training. We release the synthetic training data used for training UniFilter, the UniFilter model checkpoints, and the high-quality interleaved document subset OBELICS-HQ, curated by UniFilter, to the community for reproduction and further development.

## Dataset Description
This dataset (`UniFilter-Post-Train-Data`) consists of large-scale (multimodal data example, quality score) pairs, covering both caption data and interleaved document data. These synthetic example-score pairs are used to train the UniFilter model, a unified multimodal data quality classifier that generates quality scores for both image-text caption and interleaved document data. The scores can then be used to filter for high-quality data, significantly strengthening the capabilities of pre-trained MLLMs.

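For illustration, here is a minimal sketch of loading and inspecting the pairs with the `datasets` library. The split and field names are assumptions made for this example; check the dataset viewer on the Hub for the actual schema.

```python
# Minimal sketch: load the dataset and inspect one (example, score) pair.
# NOTE: the split name and field names are assumptions; consult the
# dataset viewer for the actual schema.
from datasets import load_dataset

ds = load_dataset("weizhiwang/unifilter_train_data", split="train")
print(ds)             # features and row count
sample = ds[0]
print(sample.keys())  # inspect the available fields, e.g. the example and its quality score
```
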
## Sample Usage: UniFilter Training

This dataset is used to train the UniFilter classifier. The following sections, excerpted from the UniFilter GitHub repository, detail the installation and training process that consumes this dataset.

### Installation
If you only need quality score generation, install just the customized LLaVA package:

```Shell
# Create and activate a fresh environment
conda create -n unifilter python=3.10
conda activate unifilter
# Install the customized LLaVA package from the repo checkout
pip install -e LLaVA
pip install flash-attn==2.5.2 --no-build-isolation
```
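
After installation, a quick import check can confirm the environment works. This is a minimal sketch; the `llava` module name is an assumption based on the upstream LLaVA package:

```python
# Quick environment sanity check. Assumes the customized package installs
# under the usual `llava` module name (an assumption; adjust if this fork
# uses a different name).
import torch
import flash_attn
import llava

print("torch:", torch.__version__)
print("flash-attn:", flash_attn.__version__)
print("CUDA available:", torch.cuda.is_available())
```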

### Data Preparation for UniFilter Training
UniFilter is trained on a large-scale set of (multimodal data example, quality score) pairs, covering both caption data and interleaved document data. The synthetic multimodal example-score paired data are available at [UniFilter-Post-Train-Data](https://huggingface.co/datasets/weizhiwang/unifilter_train_data) (this dataset).

### UniFilter Training
We developed the UniFilter training and scoring codebase on top of the [LLaVA-Unified](https://github.com/Victorwz/LLaVA-Unified) repo, which adapts LLaVA to support recent LLMs and vision encoders.
The UniFilter architecture consists of three modules: the vision encoder, the visual projector, and the LLM backbone. Unlike a standard MLLM, the LLM backbone has no language-modeling head; it is replaced with a score generation head (see the sketch after this list). These modules are specified with:
- `--mm_projector_type`: the visual projector, e.g. `aapool_mlp`, an average-pooling MLP projector that produces 144 tokens per image
- `--vision_tower`: the vision encoder, e.g. SigLIP-SO-400M at 384px resolution
- `--model_name_or_path`: the LLM backbone, e.g. Qwen2.5-0.5B-Instruct
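
To make the head replacement concrete, the following is a minimal, hypothetical PyTorch sketch of the three-module design. Class and attribute names are illustrative assumptions, not the repository's actual implementation:

```python
import torch
import torch.nn as nn

# Hypothetical schematic of the three-module design described above.
# Names are illustrative; the actual implementation lives in the
# LLaVA-Unified-based codebase.
class UniFilterSketch(nn.Module):
    def __init__(self, vision_encoder, projector, llm_backbone,
                 hidden_size, num_quality_levels=4):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g. SigLIP-SO-400M (--vision_tower)
        self.projector = projector            # e.g. aapool_mlp, 144 tokens per image (--mm_projector_type)
        self.llm_backbone = llm_backbone      # e.g. Qwen2.5-0.5B-Instruct, LM head removed (--model_name_or_path)
        # Score generation head replacing the language-modeling head:
        # maps the final hidden state to logits over the four quality
        # levels used in the synthetic training data.
        self.score_head = nn.Linear(hidden_size, num_quality_levels)

    def forward(self, images, text_embeds):
        vis_tokens = self.projector(self.vision_encoder(images))  # visual tokens
        inputs = torch.cat([vis_tokens, text_embeds], dim=1)      # join with text embeddings
        hidden = self.llm_backbone(inputs_embeds=inputs).last_hidden_state
        return self.score_head(hidden[:, -1, :])  # quality logits per example
```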

#### Visual Projector Pre-Training (Stage 1)
Please download the 558K subset of the LLaVA-Pretrain caption dataset: [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain).

Training script with DeepSpeed ZeRO-2: [`pretrain.sh`](scripts/v1_5/pretrain.sh).

#### UniFilter Classifier Training (Stage 2)
Training script with DeepSpeed ZeRO-3: [`train_classifier.sh`](scripts/v1_5/train_classifier.sh).

Our training script logs metrics to wandb. The best UniFilter model is saved based on the highest quality-classification accuracy on the validation sets.
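
Because the trained classifier exists to curate pre-training data, a minimal hypothetical sketch of threshold-based filtering with its quality scores is shown below; `unifilter_score` and the threshold value are placeholders, not the repository's API:

```python
# Hypothetical downstream use: keep only examples whose predicted quality
# score clears a threshold. `unifilter_score` stands in for UniFilter
# inference; the threshold value is purely illustrative.
def filter_by_quality(examples, unifilter_score, threshold=2.5):
    return [ex for ex in examples if unifilter_score(ex) >= threshold]

# e.g. curating a high-quality subset from a raw caption pool:
# high_quality = filter_by_quality(raw_captions, unifilter_score)
```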

## Citation
Please cite our paper if you find this repository interesting or helpful:
```bibtex
@article{UniFilter,
  title={Train a Unified Multimodal Data Quality Classifier with Synthetic Data},
  author={Wang, Weizhi and Lin, Rongmei and Li, Shiyang and Lockard, Colin and Sarkhel, Ritesh and Lokegaonkar, Sanket and Shang, Jingbo and Yan, Xifeng and Zalmout, Nasser and Li, Xian},
  journal={arXiv preprint arXiv:2510.15162},
  year={2025}
}
```

## Acknowledgement
- [LLaVA](https://github.com/haotian-liu/LLaVA): the codebase we built upon for UniFilter training.