---
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- summarization
- image-to-text
- text-generation
tags:
- summarization
- vision
- DeepSeek-OCR
- multilingual
- visual-text-encoding
- random-augmentation
library_name: datasets
license: cc0-1.0
pretty_name: "BillSum Legal Document Summarization"
dataset_info:
  features:
  - name: text
    dtype: string
  - name: summary
    dtype: string
  - name: image
    dtype: image
  - name: source_dataset
    dtype: string
  - name: original_split
    dtype: string
  - name: original_index
    dtype: int64
  splits:
  - name: train
    num_examples: 22218
---

# DeepSynth - BillSum Legal Document Summarization

## Dataset Description

US Congressional bills paired with human-written summaries, specialized for legal document summarization with complex structure, formal language, and legislative terminology.

This dataset is part of the **DeepSynth** project, which uses visual text encoding for multilingual summarization with the DeepSeek-OCR vision-language model. Text documents are converted into images and processed through a frozen 380M-parameter visual encoder, enabling roughly 20x token compression while preserving document layout and structure.

### Key Features

- **Original High-Quality Images**: Full-resolution images stored once and augmented on the fly during training
- **Random Augmentation Pipeline**: Rotation, perspective, color jitter, and resize transforms for better generalization
- **Visual Text Encoding**: ~20x compression ratio (1 visual token ≈ 20 text tokens)
- **Document Structure Preservation**: Layout and formatting maintained through the image representation
- **Human-Written Summaries**: High-quality reference summaries for each document
- **Deduplication Tracking**: Source dataset and index tracking prevents duplicate samples

### Dataset Statistics

- **Total Samples**: 22,218
- **Language(s)**: English
- **Domain**: US Congressional bills
- **Average Document Length**: ~3,000 tokens
- **Average Summary Length**: ~200 tokens

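Given the averages above, the ~20x visual compression works out as follows (illustrative arithmetic only, using the card's stated averages rather than measured values):

```python
# Illustrative arithmetic for the ~20x visual compression claim.
# Figures are the dataset-card averages, not measured values.
avg_text_tokens = 3000      # average document length
compression_ratio = 20      # 1 visual token ≈ 20 text tokens
visual_tokens = avg_text_tokens // compression_ratio
print(visual_tokens)  # 150
```

So a typical bill that would cost ~3,000 text tokens is encoded in roughly 150 visual tokens.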
### Source Dataset

Based on the **BillSum** corpus of US Congressional bills.

- **Original Authors**: Kornilova & Eidelman (2019)
- **Paper**: [BillSum: A Corpus for Automatic Summarization of US Legislation](https://arxiv.org/abs/1910.00523)
- **License**: CC0 1.0 Universal (Public Domain)

## Image Augmentation Pipeline

Images are stored at **original resolution** (up to 1600×2200) and augmented during training for better generalization.

### Available Augmentation Transforms

- **Random Rotation**: ±10° rotation for orientation invariance
- **Random Perspective**: 0.1-0.2 distortion to simulate viewing angles
- **Random Resize**: 512-1600px range for multi-scale learning
- **Color Jitter**: Brightness, contrast, and saturation adjustments (±20%)
- **Random Horizontal Flip**: Optional (use with caution for text)

All transforms preserve aspect ratio, with padding where needed to keep text readable. This approach:

- **Reduces storage**: ~6x less disk space (a single image instead of six pre-rendered resolutions)
- **Increases flexibility**: Any resolution on the fly instead of pre-computed fixed sizes
- **Improves generalization**: Random transforms prevent overfitting to specific resolutions

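The transforms above can be sketched with PIL alone. This is a minimal illustrative stand-in, not the project's `create_training_transform` (assumptions: the `augment` helper is hypothetical, and perspective distortion is omitted because it needs extra transform coefficients):

```python
import random
from PIL import Image, ImageEnhance

def augment(img, rotation_degrees=10, size_range=(512, 1600),
            brightness=0.2, contrast=0.2):
    """Randomly rotate, resize (aspect-preserving), and color-jitter a page image.
    Illustrative sketch only; not the DeepSynth create_training_transform."""
    # Random rotation within ±rotation_degrees, expanding the canvas and
    # padding the new corners with white so text stays on a white page.
    angle = random.uniform(-rotation_degrees, rotation_degrees)
    img = img.rotate(angle, expand=True, fillcolor=(255, 255, 255))

    # Aspect-preserving random resize: longest side drawn from size_range.
    target = random.randint(*size_range)
    scale = target / max(img.size)
    img = img.resize((round(img.width * scale), round(img.height * scale)))

    # Brightness/contrast jitter within ±20%.
    img = ImageEnhance.Brightness(img).enhance(1 + random.uniform(-brightness, brightness))
    img = ImageEnhance.Contrast(img).enhance(1 + random.uniform(-contrast, contrast))
    return img

page = Image.new("RGB", (1600, 2200), "white")
augmented = augment(page)
```

Each call produces a differently transformed view of the same stored image, which is what makes the single-resolution storage strategy viable.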
## Dataset Structure

### Data Fields

- `text` (string): Original document text
- `summary` (string): Human-written summary
- `image` (PIL.Image): Original full-size rendered document image (up to 1600×2200)
- `source_dataset` (string): Origin dataset name
- `original_split` (string): Source split (train/validation/test)
- `original_index` (int): Original sample index, used for deduplication

### Data Example

```python
{
    'text': 'A BILL to amend the Internal Revenue Code of 1986...',
    'summary': 'This bill amends the Internal Revenue Code to...',
    'image': <PIL.Image>,  # Original resolution (up to 1600×2200)
    'source_dataset': 'billsum',
    'original_split': 'train',
    'original_index': 0
}
```

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("baconnier/deepsynth-en-legal")

# Stream to avoid downloading everything up front
dataset = load_dataset("baconnier/deepsynth-en-legal", streaming=True)
```

### Training Example with DeepSeek-OCR and Augmentation

```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from datasets import load_dataset
from deepsynth.data.transforms import create_training_transform

# Load model and processor
model = AutoModelForVision2Seq.from_pretrained("deepseek-ai/DeepSeek-OCR")
processor = AutoProcessor.from_pretrained("deepseek-ai/DeepSeek-OCR")

# Load dataset
dataset = load_dataset("baconnier/deepsynth-en-legal")

# Create the augmentation pipeline (random rotation, perspective, resize, color jitter)
transform = create_training_transform(
    target_size_range=(512, 1600),   # random resize range
    rotation_degrees=10,             # ±10° rotation
    perspective_distortion=0.1,      # perspective transform
    brightness_factor=0.2,           # ±20% brightness
    contrast_factor=0.2,             # ±20% contrast
)

# Process one sample with augmentation
sample = dataset['train'][0]
augmented_image = transform(sample['image'])  # apply random transforms
inputs = processor(
    images=augmented_image,
    text=sample['text'],
    return_tensors="pt",
)

# Fine-tune the decoder only (freeze the encoder)
for param in model.encoder.parameters():
    param.requires_grad = False

# Training loop with on-the-fly augmentation...
```

## Training Recommendations

### DeepSeek-OCR Fine-Tuning

```python
# Recommended hyperparameters with augmentation
training_args = {
    "learning_rate": 2e-5,
    "batch_size": 4,
    "gradient_accumulation_steps": 4,  # effective batch size of 16
    "num_epochs": 3,
    "mixed_precision": "bf16",
    "freeze_encoder": True,  # IMPORTANT: only fine-tune the decoder

    # Augmentation parameters
    "rotation_degrees": 10,            # random rotation ±10°
    "perspective_distortion": 0.1,     # perspective transform
    "resize_range": (512, 1600),       # random resize 512-1600px
    "brightness_factor": 0.2,          # ±20% brightness
    "contrast_factor": 0.2,            # ±20% contrast
}
```

### Expected Performance

- **Baseline (text-to-text)**: ROUGE-1 ~40-42
- **DeepSeek-OCR (visual)**: ROUGE-1 ~44-47 (typical SOTA)
- **Training Time**: ~6-8 hours on an A100 (80GB) for the full dataset
- **GPU Memory**: ~40GB with batch_size=4 and bf16 mixed precision

## Dataset Creation

This dataset was created with the **DeepSynth** pipeline:

1. **Source Loading**: Original text documents from BillSum
2. **Text-to-Image Conversion**: Documents rendered as PNG images (DejaVu Sans 12pt, Unicode support)
3. **Original-Resolution Storage**: Full-quality images stored once (up to 1600×2200)
4. **Incremental Upload**: Batches of 5,000 samples uploaded to the HuggingFace Hub
5. **Deduplication**: Source tracking prevents duplicate samples

**Note**: Images are augmented on the fly during training using random transformations (rotation, perspective, resize, color jitter) for better generalization across resolutions and conditions.

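The deduplication step can be sketched using the tracking fields each sample carries. This is an illustrative sketch, not the actual DeepSynth pipeline code; the `dedup_key`/`deduplicate` helpers are hypothetical:

```python
# Minimal deduplication sketch using the per-sample tracking fields.
# (Illustrative only; the real logic lives in the DeepSynth codebase.)
def dedup_key(sample):
    """A sample is uniquely identified by where it came from."""
    return (sample["source_dataset"], sample["original_split"], sample["original_index"])

def deduplicate(samples):
    seen, unique = set(), []
    for sample in samples:
        key = dedup_key(sample)
        if key not in seen:
            seen.add(key)
            unique.append(sample)
    return unique

batch = [
    {"source_dataset": "billsum", "original_split": "train", "original_index": 0},
    {"source_dataset": "billsum", "original_split": "train", "original_index": 1},
    {"source_dataset": "billsum", "original_split": "train", "original_index": 0},  # duplicate
]
print(len(deduplicate(batch)))  # 2
```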
### Rendering Specifications

- **Font**: DejaVu Sans 12pt (full Unicode support for multilingual text)
- **Line Wrapping**: 100 characters per line
- **Margin**: 40px
- **Background**: White (255, 255, 255)
- **Text Color**: Black (0, 0, 0)
- **Format**: PNG with lossless compression

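The rendering recipe above can be sketched with PIL. This is a simplified stand-in, not the actual DeepSynth renderer: the `render_text` helper is hypothetical, and it uses PIL's built-in bitmap font rather than DejaVu Sans 12pt so it runs without font files installed:

```python
import textwrap
from PIL import Image, ImageDraw

def render_text(text, wrap_chars=100, margin=40, line_height=16,
                width=1600, max_height=2200):
    """Render plain text as black-on-white page image.
    Sketch only: PIL default font, not the pipeline's DejaVu Sans 12pt."""
    # Wrap each paragraph at wrap_chars characters, keeping blank lines.
    lines = []
    for paragraph in text.split("\n"):
        lines.extend(textwrap.wrap(paragraph, wrap_chars) or [""])

    # Size the page to the text, capped at the maximum height.
    height = min(max_height, 2 * margin + line_height * len(lines))
    page = Image.new("RGB", (width, height), (255, 255, 255))
    draw = ImageDraw.Draw(page)

    y = margin
    for line in lines:
        if y + line_height > height - margin:
            break  # page full; a real renderer would paginate
        draw.text((margin, y), line, fill=(0, 0, 0))
        y += line_height
    return page

page = render_text("A BILL to amend the Internal Revenue Code of 1986... " * 40)
page.save("sample.png")  # PNG is lossless by default
```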
## Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{deepsynth-en-legal,
  title={{DeepSynth BillSum Legal Document Summarization: Visual Text Encoding with Random Augmentation}},
  author={Baconnier},
  year={2025},
  publisher={HuggingFace},
  howpublished={\url{https://huggingface.co/datasets/baconnier/deepsynth-en-legal}}
}
```

### Source Dataset Citation

```bibtex
@inproceedings{kornilova2019billsum,
  title={BillSum: A Corpus for Automatic Summarization of US Legislation},
  author={Kornilova, Anastassia and Eidelman, Vladimir},
  booktitle={Proceedings of the 2nd Workshop on New Frontiers in Summarization},
  year={2019}
}
```

## License

CC0 1.0 Universal (Public Domain Dedication)

**Note**: This dataset inherits the license of the original source dataset. Please review the source license before commercial use.

## Limitations and Bias

- **Legal jargon**: Heavy use of legislative and legal terminology
- **Complex structure**: Bills contain nested sections, subsections, and clauses
- **US-specific**: United States federal legislation only
- **Formal language**: Very different from conversational or news text
- **Long documents**: Bills can exceed 10,000 tokens

## Additional Information

### Dataset Curators

Created by the DeepSynth team as part of multilingual visual summarization research.

### Contact

- **Repository**: [DeepSynth GitHub](https://github.com/bacoco/DeepSynth)
- **Issues**: [GitHub Issues](https://github.com/bacoco/DeepSynth/issues)

### Acknowledgments

- **DeepSeek-OCR**: Visual encoder from DeepSeek AI
- **Source Dataset**: billsum
- **HuggingFace**: Dataset hosting and infrastructure