harpreetsahota committed on
Commit 6f9a96d · verified · 1 Parent(s): 64eee52

Update README.md

Files changed (1):
  1. README.md +186 -77

README.md CHANGED
@@ -5,24 +5,27 @@ size_categories:
  - 10K<n<100K
  task_categories:
  - object-detection
  task_ids: []
  pretty_name: CommonForms_val
  tags:
  - fiftyone
  - image
  - object-detection
- dataset_summary: '

- This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10000 samples.

  ## Installation

- If you haven''t already, install FiftyOne:

  ```bash
@@ -44,9 +47,9 @@ dataset_summary: '
  # Load the dataset
- # Note: other available arguments include ''max_samples'', etc
- dataset = load_from_hub("harpreetsahota/commonforms_val_subset")
  # Launch the App
@@ -54,19 +57,14 @@ dataset_summary: '
  session = fo.launch_app(dataset)
  ```
-
- '
  ---
  # Dataset Card for CommonForms_val
- <!-- Provide a quick summary of the dataset. -->
-
-
-
- This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10000 samples.
  ## Installation
@@ -84,141 +82,252 @@ from fiftyone.utils.huggingface import load_from_hub
  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
- dataset = load_from_hub("harpreetsahota/commonforms_val_subset")
  # Launch the App
  session = fo.launch_app(dataset)
  ```
-
  ## Dataset Details
  ### Dataset Description
- <!-- Provide a longer summary of what this dataset is. -->
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** en
- - **License:** [More Information Needed]
  ### Dataset Sources [optional]
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
  ## Uses
- <!-- Address questions around how the dataset is intended to be used. -->
-
  ### Direct Use
- <!-- This section describes suitable use cases for the dataset. -->
- [More Information Needed]
  ### Out-of-Scope Use
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
- [More Information Needed]
  ## Dataset Structure
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
- [More Information Needed]
- ## Dataset Creation
- ### Curation Rationale
- <!-- Motivation for the creation of this dataset. -->
- [More Information Needed]
- ### Source Data
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
- #### Data Collection and Processing
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
- [More Information Needed]
- #### Who are the source data producers?
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
- [More Information Needed]
- ### Annotations [optional]
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
- #### Annotation process
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
- [More Information Needed]
- #### Who are the annotators?
- <!-- This section describes the people or systems who created the annotations. -->
- [More Information Needed]
- #### Personal and Sensitive Information
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
- [More Information Needed]
- ## Bias, Risks, and Limitations
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
- [More Information Needed]
- ### Recommendations
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
- ## Citation [optional]
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
- **BibTeX:**
- [More Information Needed]
- **APA:**
- [More Information Needed]
- ## Glossary [optional]
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
- [More Information Needed]
- ## More Information [optional]
- [More Information Needed]
- ## Dataset Card Authors [optional]
- [More Information Needed]
- ## Dataset Card Contact
- [More Information Needed]

  - 10K<n<100K
  task_categories:
  - object-detection
+ - visual-question-answering
+ - visual-document-retrieval
  task_ids: []
  pretty_name: CommonForms_val
  tags:
  - fiftyone
  - image
  - object-detection
+ dataset_summary: >

+ This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10000
+ samples.

  ## Installation

+ If you haven't already, install FiftyOne:

  ```bash
  # Load the dataset
+ # Note: other available arguments include 'max_samples', etc
+ dataset = load_from_hub("Voxel51/commonforms_val_subset")

  # Launch the App
  session = fo.launch_app(dataset)
  ```
 
 
  ---

  # Dataset Card for CommonForms_val

+ CommonForms_val is a validation subset of the CommonForms dataset for form field detection. It contains 10,000 annotated document images with bounding boxes for three types of form fields: text inputs, choice buttons (checkboxes/radio buttons), and signature fields. The dataset is designed for training and evaluating object detection models that automatically detect fillable form fields in document images.

+ This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10,000 samples.

  ## Installation

 
  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
+ dataset = load_from_hub("Voxel51/commonforms_val_subset")

  # Launch the App
  session = fo.launch_app(dataset)
  ```

  ## Dataset Details

  ### Dataset Description
+ CommonForms_val is a validation subset extracted from CommonForms, a web-scale form field detection dataset introduced in the paper "CommonForms: A Large, Diverse Dataset for Form Field Detection" (Barrow, 2025). The dataset frames form field detection as an object detection problem: given an image of a document page, predict the location and type of each form field.

+ The full CommonForms dataset was constructed by filtering Common Crawl for PDFs with fillable elements, starting from 8 million documents and arriving at ~55,000 documents with over 450,000 pages. This validation subset contains 2,500 pages with 34,643 annotated form field instances across diverse languages and domains.

+ Key characteristics:
+ - **Multilingual**: approximately one-third of pages are non-English
+ - **Multi-domain**: 14 classified domains, with no single domain exceeding 25% of the dataset
+ - **High-quality annotations**: automatically extracted from interactive PDF forms with fillable fields
+ - **Three form field types**: text inputs (68.9%), choice buttons (30.7%), and signature fields (0.4%) - see the sketch below to verify this distribution
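+
+ A minimal sketch for checking the label distribution once the dataset is loaded (it assumes only the `ground_truth` field documented in the Dataset Structure section below):
+
+ ```python
+ from fiftyone.utils.huggingface import load_from_hub
+
+ dataset = load_from_hub("Voxel51/commonforms_val_subset")
+
+ # Count detections per form field type across all samples
+ print(dataset.count_values("ground_truth.detections.label"))
+ # Expected keys: 'text_input', 'choice_button', 'signature'
+ ```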
104
 
105
+ - **Curated by:** Joe Barrow (Independent Researcher)
106
+ - **Funded by:** LambdaLabs (compute grant for model training)
107
+ - **Shared by:** Joe Barrow
108
+ - **Language(s) (NLP):** Multilingual (en, and ~33% non-English including various European and other languages)
109
+ - **License:** [Check original repository - https://huggingface.co/datasets/jbarrow/CommonForms]
110
 
111
  ### Dataset Sources [optional]
112
 
113
+ - **Repository:** https://github.com/jbarrow/commonforms
114
+ - **Paper:** https://arxiv.org/abs/2509.16506
115
+ - **Demo:** https://detect.semanticdocs.org
116
+ - **Original Dataset:** https://huggingface.co/datasets/jbarrow/CommonForms
 

  ## Uses

  ### Direct Use

+ This dataset is intended for:
+
+ 1. **Training and evaluating object detection models** for form field detection in document images
+ 2. **Benchmarking form field detection systems** against the validation set
+ 3. **Research in document understanding** and intelligent document processing
+ 4. **Developing automated form preparation tools** that convert static PDFs into fillable forms
+ 5. **Computer vision research** on high-resolution document analysis
+ 6. **Multi-class object detection** with imbalanced classes (signature fields are rare)

+ The dataset is particularly useful for:
+ - Training YOLO, Faster R-CNN, or other object detection architectures (see the export sketch after this list)
+ - Fine-tuning vision transformers for document understanding
+ - Evaluating model performance across different form field types
+ - Studying the impact of high-resolution inputs on detection quality
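+
+ As a sketch of the first bullet, FiftyOne can export the annotations to YOLO format for a detector training run; the output directory below is a placeholder:
+
+ ```python
+ import fiftyone.types as fot
+ from fiftyone.utils.huggingface import load_from_hub
+
+ dataset = load_from_hub("Voxel51/commonforms_val_subset")
+
+ # Write images + YOLO-format labels to disk for training
+ dataset.export(
+     export_dir="/tmp/commonforms_yolo",  # placeholder output path
+     dataset_type=fot.YOLOv5Dataset,
+     label_field="ground_truth",
+     classes=["text_input", "choice_button", "signature"],
+ )
+ ```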
 

  ### Out-of-Scope Use

+ This dataset should **not** be used for:
+
+ 1. **OCR or text recognition** - the dataset contains only bounding boxes for form fields, not text content
+ 2. **Form understanding or semantic analysis** - there is no information about field labels, relationships, or form structure
+ 3. **Handwriting detection** - the annotations mark empty form fields, not filled-in content
+ 4. **Privacy-sensitive applications without review** - forms may be templates with sensitive field types (medical, financial, etc.)
+ 5. **Production deployment without validation** - this is a validation subset; models should be tested on appropriate test sets
+ 6. **Fine-grained form field classification** - only three broad categories are available (text, choice, signature)

  ## Dataset Structure

+ ### FiftyOne Dataset Structure

+ This dataset is stored in FiftyOne format, which provides a powerful structure for computer vision datasets (an access sketch follows the field lists):

+ **Sample-level fields:**
+ - `filepath` (string): Path to the document image file
+ - `image_id` (int): Unique identifier for the image from the original dataset
+ - `file_name` (string): Original filename (e.g., "0001104-0.png")
+ - `dataset_id` (int): Sample ID in the original dataset
+ - `ground_truth` (Detections): FiftyOne Detections object containing all form field annotations

+ **Detection-level fields (within `ground_truth`):**
+ - `label` (string): Form field type - one of:
+   - `text_input`: Text boxes and input fields (68.9% of annotations)
+   - `choice_button`: Checkboxes and radio buttons (30.7% of annotations)
+   - `signature`: Signature fields (0.4% of annotations)
+ - `bounding_box` (list): Normalized coordinates [x, y, width, height] in range [0, 1]
+   - Format: [top-left-x, top-left-y, width, height] relative to image dimensions
+ - `area` (float): Area of the bounding box in absolute pixels
+ - `iscrowd` (bool): COCO-style crowd flag (always False in this dataset)
+ - `object_id` (int): Unique identifier for the annotation
+ - `category_id` (int): Numeric category (0=text_input, 1=choice_button, 2=signature)
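+
+ A short sketch of how these fields look in code:
+
+ ```python
+ from fiftyone import ViewField as F
+ from fiftyone.utils.huggingface import load_from_hub
+
+ dataset = load_from_hub("Voxel51/commonforms_val_subset")
+
+ # Inspect one sample and its form field detections
+ sample = dataset.first()
+ print(sample.filepath, sample.file_name)
+
+ for det in sample.ground_truth.detections:
+     # bounding_box is normalized [top-left-x, top-left-y, width, height]
+     print(det.label, [round(v, 3) for v in det.bounding_box])
+
+ # Isolate the rare signature fields
+ signatures = dataset.filter_labels("ground_truth", F("label") == "signature")
+ print(len(signatures))
+ ```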
 
+ ### Image Specifications

+ - **Image dimensions:** variable, ranging from 1680×1680 to 3360×3528 pixels
+ - **Mean dimensions:** 1748×2201 pixels
+ - **Format:** RGB PNG images
+ - **Resolution:** high-resolution document renders optimized for form field detection
+ - **Unique dimensions:** 61 different image size combinations (these figures can be recomputed with the metadata sketch below)
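+
+ A minimal sketch for recomputing these statistics with FiftyOne's built-in metadata and aggregations:
+
+ ```python
+ from fiftyone.utils.huggingface import load_from_hub
+
+ dataset = load_from_hub("Voxel51/commonforms_val_subset")
+
+ # Populate per-image metadata (width, height, etc.)
+ dataset.compute_metadata()
+
+ print(dataset.bounds("metadata.width"), dataset.bounds("metadata.height"))  # (min, max) pixels
+ print(dataset.mean("metadata.width"), dataset.mean("metadata.height"))      # mean size
+ print(len(dataset.distinct("metadata.width")))  # number of distinct widths
+ ```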
 
+ ### Annotation Format

+ Annotations follow the COCO object detection format, converted to FiftyOne (a worked conversion example follows):
+ - **Original format:** COCO [x, y, width, height] in absolute pixel coordinates
+ - **FiftyOne format:** normalized [x, y, width, height] in relative coordinates [0, 1]
+ - **Bounding box validation:** invalid boxes (negative dimensions, out of bounds) are filtered during conversion
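+
+ A small sketch of the conversion described above (the helper name is ours, not from the original pipeline):
+
+ ```python
+ def coco_to_fiftyone(bbox, img_w, img_h):
+     """Convert an absolute-pixel COCO box [x, y, w, h] into
+     FiftyOne's normalized [x, y, w, h] in [0, 1]."""
+     x, y, w, h = bbox
+     return [x / img_w, y / img_h, w / img_w, h / img_h]
+
+ # e.g., a 200x50 px box at (100, 400) on a 1748x2201 page
+ print(coco_to_fiftyone([100, 400, 200, 50], 1748, 2201))
+ ```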
 
 

+ ## Dataset Creation

+ ### Curation Rationale

+ The CommonForms dataset was created to address the lack of large-scale, publicly available datasets for form field detection. Existing commercial solutions (Adobe Acrobat, Apple Preview) have limitations:
+ - They cannot detect choice buttons (checkboxes/radio buttons)
+ - They are closed-source and not reproducible
+ - No public benchmarks exist for comparison

+ The key insight is that "quantity has a quality all its own": by leveraging existing fillable PDF forms from Common Crawl as a training signal, high-quality form field detection can be achieved without manual annotation. This validation subset enables:

+ 1. **Reproducible benchmarking** of form field detection systems
+ 2. **Open-source model development** for automated form preparation
+ 3. **Research advancement** in document understanding and intelligent document processing
+ 4. **Cost-effective training** - models trained on this data cost less than $500 in compute
204
 
205
+ ### Source Data
206
 
207
+ #### Data Collection and Processing
208
 
209
+ <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
210
 
211
+ **Source:** Common Crawl PDF corpus (~8 million PDFs) prepared by the PDF Association
212
+
213
+ **Filtering Process:**
214
+ 1. Started with 8 million PDF documents from Common Crawl
215
+ 2. Applied rigorous cleaning to identify well-prepared forms with fillable elements
216
+ 3. Filtered to PDFs containing interactive form fields (text boxes, checkboxes, signature fields)
217
+ 4. Quality filtering to ensure form fields were properly annotated in the source PDFs
218
+ 5. Final dataset: ~55,000 documents with 450,000+ pages
219
+
220
+ **Processing Steps:**
221
+ 1. PDF rendering to high-resolution images (optimized for form field detection)
222
+ 2. Extraction of form field annotations from PDF metadata
223
+ 3. Conversion to COCO object detection format
224
+ 4. Train/validation/test split creation
225
+ 5. This subset represents the validation split
226
+
227
+ **Quality Assurance:**
228
+ - Ablation studies showed the cleaning process improves data efficiency vs. using all PDFs
229
+ - Annotations are automatically extracted from interactive PDF forms (no manual annotation)
230
+ - High-resolution inputs (1216px+) were found crucial for quality detection
231
+
232
+ **Data Characteristics:**
233
+ - **Multilingual:** ~33% non-English pages
234
+ - **Multi-domain:** 14 domains classified, no domain exceeds 25%
235
+ - **Diverse layouts:** Wide variety of form designs and structures
236
+ - **Real-world forms:** Government forms, applications, surveys, contracts, etc.
237
 
238
+ #### Who are the source data producers?
239
 
240
+ The source data consists of PDF forms published on the public web and crawled by Common Crawl. The original form creators include:
241
 
242
+ - **Government agencies** (federal, state, local)
243
+ - **Educational institutions**
244
+ - **Healthcare organizations**
245
+ - **Financial institutions**
246
+ - **Legal services**
247
+ - **Corporate entities**
248
+ - **Non-profit organizations**
249
 
250
+ The forms were created by professional document designers, administrative staff, and organizations worldwide. The diversity of sources contributes to the dataset's robustness across different form styles, languages, and domains.
251
 
252
+ **Note:** The forms are templates (unfilled) extracted from publicly available PDFs on the internet.
253
 
254
+ ### Annotations
255
 
256
+ #### Annotation process
257
 
258
+ **Automatic Annotation from PDF Metadata:**
259
 
260
+ The annotations in this dataset are **automatically extracted** from interactive PDF forms, not manually annotated. The process:
261
 
262
+ 1. **Source:** PDF form field metadata embedded in interactive PDFs
263
+ 2. **Extraction:** Form field locations and types are programmatically extracted from PDF structure
264
+ 3. **Mapping:** PDF form field types are mapped to three detection categories:
265
+ - PDF text fields → `text_input`
266
+ - PDF checkbox/radio button fields → `choice_button`
267
+ - PDF signature fields → `signature`
268
+ 4. **Coordinate conversion:** PDF coordinates converted to image pixel coordinates
269
+ 5. **Format standardization:** Converted to COCO object detection format
270
 
271
+ **Advantages:**
272
+ - **Scale:** Enables annotation of 450k+ pages without manual labor
273
+ - **Consistency:** Annotations are objective and derived from PDF structure
274
+ - **Cost:** No annotation costs
275
+ - **Quality:** Reflects real-world form field placement by professional designers
276
 
277
+ **Limitations:**
278
+ - Annotation quality depends on source PDF quality
279
+ - Some PDFs may have incorrectly defined form fields
280
+ - Only detects explicitly defined form fields (not visual-only fields)
281
 
282
+ #### Who are the annotators?
283
 
284
+ The annotations are **automatically generated** from PDF metadata - there are no human annotators. The "annotators" are effectively the original form designers who created the interactive PDF forms with fillable fields.
285
 
286
+ The dataset curation and extraction pipeline was developed by Joe Barrow (Independent Researcher).
287
 
 
288
 
289
+ ## Citation
290
 
291
+ **BibTeX:**
292
 
293
+ ```bibtex
294
+ @misc{barrow2025commonforms,
295
+ title = {CommonForms: A Large, Diverse Dataset for Form Field Detection},
296
+ author = {Barrow, Joe},
297
+ year = {2025},
298
+ eprint = {2509.16506},
299
+ archivePrefix = {arXiv},
300
+ primaryClass = {cs.CV},
301
+ doi = {10.48550/arXiv.2509.16506},
302
+ url = {https://arxiv.org/abs/2509.16506}
303
+ }
304
+ ```
305
 
306
+ **APA:**
307
 
308
+ Barrow, J. (2025). CommonForms: A Large, Diverse Dataset for Form Field Detection. *arXiv preprint arXiv:2509.16506*. https://doi.org/10.48550/arXiv.2509.16506
309
 
310
+ ## More Information
311
 
312
+ ### Related Resources
313
 
314
+ - **GitHub Repository:** https://github.com/jbarrow/commonforms
315
+ - **Hosted Demo:** https://detect.semanticdocs.org
316
+ - **Models:**
317
+ - FFDNet-S: https://huggingface.co/jbarrow/FFDNet-S
318
+ - FFDNet-L: https://huggingface.co/jbarrow/FFDNet-L
319
+ - **Full Dataset:** https://huggingface.co/datasets/jbarrow/CommonForms (486,969 samples)
320
 
321
+ ### Use Cases in the Wild
322
 
323
+ The CommonForms models and dataset enable:
324
+ - Automated PDF form preparation
325
+ - Document digitization workflows
326
+ - Accessibility improvements for forms
327
+ - Form field extraction for document understanding systems
328
 
329
+ ## Dataset Card Authors
330
 
331
+ - **Primary Author:** Harpreet Sahota (FiftyOne dataset curation)
332
+ - **Original Dataset:** Joe Barrow ([email protected])
333
+ - **Dataset Card Completion:** AI-assisted with human review