### Dataset Summary

Industry Documents Library (IDL) is a document dataset filtered from the [UCSF documents library](https://www.industrydocuments.ucsf.edu/), with 19 million pages kept as valid samples.
Each document exists as a collection of a pdf, a tiff image rendering the same contents, a json file containing extensive Textract OCR annotations from the [idl_data](https://github.com/furkanbiten/idl_data) project, and a .ocr file with the original, older OCR annotation. Each pdf may contain anywhere from 1 to 3000 pages.

<center>
<img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/idl_page_example.png" alt="An addendum from an internal legal document" width="600" height="300">
<p><em>An example page of one pdf document from the Industry Documents Library.</em></p>
</center>

### Usage

This instance of IDL is in [webdataset](https://github.com/webdataset/webdataset) .tar format and can be used with derived forms of the webdataset library; a sketch of reading the shards directly with webdataset is given further below. For dataloading, the `datasets` library can readily be used, and we also recommend using it with the chug library, an optimized library for distributed data loading.

```python
from datasets import load_dataset

dataset = load_dataset('pixparse/IDL-wds', streaming=True)
print(next(iter(dataset['train'])).keys())
>> dict_keys(['__key__', '__url__', 'json', 'ocr', 'pdf', 'tif'])
```
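
As a quick, unofficial sketch of what a streamed sample contains, the per-document page count (the summary above notes anywhere from 1 to 3000 pages) can be checked directly. This assumes the `pdf` entry arrives as raw bytes and that `pypdf` is installed; neither is guaranteed by the loader.

```python
import io

from datasets import load_dataset
from pypdf import PdfReader  # used here only for illustration

dataset = load_dataset('pixparse/IDL-wds', streaming=True)
sample = next(iter(dataset['train']))

# 'pdf' is assumed to hold the raw bytes of the original document.
reader = PdfReader(io.BytesIO(sample['pdf']))
print(f"{sample['__key__']}: {len(reader.pages)} pages")

# 'json' carries the Textract OCR annotations and 'ocr' the older annotation;
# depending on the decoder they may arrive as parsed objects or as raw bytes.
print(type(sample['json']), type(sample['ocr']))
```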

For faster download, you can directly use the `huggingface_hub` library. Make sure `hf_transfer` is installed prior to downloading, and check that you have enough space locally.

```python
import os

# hf_transfer must be installed (`pip install hf_transfer`) for this flag to take effect.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

import huggingface_hub as hf

# Downloads every shard of the dataset repository into the local HF cache.
hf.snapshot_download("pixparse/IDL-wds", repo_type="dataset", local_dir_use_symlinks=False)
```
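
Once downloaded (or streamed over HTTP), the .tar shards can also be read with the webdataset library itself, as mentioned in the Usage paragraph above. The snippet below is a minimal sketch rather than official tooling: it lists the repository's .tar files through `huggingface_hub` instead of assuming a shard naming pattern, and it leaves the pdf, tif and ocr entries undecoded.

```python
import webdataset as wds
from huggingface_hub import HfApi

# Enumerate the .tar shards of the repository instead of hard-coding their names.
api = HfApi()
shard_urls = [
    f"https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/{name}"
    for name in api.list_repo_files("pixparse/IDL-wds", repo_type="dataset")
    if name.endswith(".tar")
]

# .decode() parses the json entries; pdf, tif and ocr entries are left as raw bytes
# unless extra handlers are passed.
dataset = wds.WebDataset(shard_urls).decode()
for sample in dataset:
    print(sample["__key__"], sorted(sample.keys()))
    break
```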

That way, columnar documents can be better separated. This is a basic heuristic, but it should improve the overall readability of the documents.

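The column-separation heuristic is implemented earlier in the README by a `get_columnar_separators(page, min_prominence=0.3, num_bins=10, kernel_width=...)` helper. As an illustrative sketch only (not the dataset's actual implementation), such a heuristic can take a smoothed histogram of word-box x-centers and treat prominent valleys as column separators; the normalized `[x0, y0, x1, y1]` box format below is an assumption made for the example.

```python
import numpy as np
from scipy.signal import find_peaks


def get_columnar_separators(page, min_prominence=0.3, num_bins=10, kernel_width=3):
    """Estimate x-positions that separate text columns on a page.

    `page` is assumed here to be a list of word boxes in normalized
    [x0, y0, x1, y1] coordinates; the real annotation layout may differ.
    """
    # Histogram of word-center x-coordinates: columns show up as peaks,
    # gutters between columns as valleys.
    centers = [(x0 + x1) / 2 for x0, _, x1, _ in page]
    hist, bin_edges = np.histogram(centers, bins=num_bins, range=(0.0, 1.0))

    # Smooth the histogram so small gaps inside a column do not register as gutters.
    kernel = np.ones(kernel_width) / kernel_width
    smoothed = np.convolve(hist, kernel, mode="same")

    # Prominent dips of the smoothed histogram are the candidate separators.
    valleys, _ = find_peaks(-smoothed, prominence=min_prominence * smoothed.max())
    return [(bin_edges[v] + bin_edges[v + 1]) / 2 for v in valleys]
```
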
<div style="text-align: center;">
<img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/bounding_boxes_straight.png" alt="Numbered bounding boxes on a document" style="width: 600px; height: 800px; object-fit: cover; display: inline-block;">
<img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/arrows_plot_straight.png" alt="A simple representation of reading order" style="width: 600px; height: 800px; object-fit: cover; display: inline-block;">
</div>
<p style="text-align: center;"><em>Standard reading order for a single-column document. On the left, bounding boxes are ordered; on the right, a rendition of the corresponding reading order is given.</em></p>

<div style="text-align: center;">
<img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/bounding_boxes.png" alt="Numbered bounding boxes on a document" style="width: 600px; height: 800px; object-fit: cover; display: inline-block;">
<img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/arrows_plot.png" alt="A simple representation of reading order" style="width: 600px; height: 800px; object-fit: cover; display: inline-block;">
</div>
<p style="text-align: center;"><em>Heuristic-driven columnar reading order for a two-column document. On the left, bounding boxes are ordered; on the right, a rendition of the corresponding reading order is given. Some inaccuracies remain, but the overall reading order is preserved.</em></p>

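To make the ordering in these figures concrete, the sketch below (same normalized `[x0, y0, x1, y1]` assumption as above, purely illustrative) sorts words into reading order once column separators are known: columns left to right, and top to bottom within each column.

```python
def reading_order(words, separators):
    """Order (text, box) pairs column by column, then top to bottom.

    `words` is assumed to be a list of (text, [x0, y0, x1, y1]) pairs in
    normalized page coordinates; `separators` are x-positions between columns.
    """
    bounds = sorted(separators)
    columns = [[] for _ in range(len(bounds) + 1)]
    for text, (x0, y0, x1, y1) in words:
        center_x = (x0 + x1) / 2
        # Index of the column this word falls into: count separators to its left.
        col = sum(center_x >= s for s in bounds)
        columns[col].append((y0, x0, text))
    # Left-to-right over columns, top-to-bottom inside each column.
    return [text for col in columns for _, _, text in sorted(col)]
```
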
For each pdf document, we store statistics on the number of pages per shard and the number of valid samples per shard. A valid sample is one that can be encoded and then decoded, which we verified for every sample.

hf.snapshot_download("pixparse/pdfa-english-train", repo_type="dataset", local_dir_use_symlinks=False)
|
| 176 |
```
|
| 177 |
### Data, metadata and statistics

<center>
<img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/idl_page_example.png" alt="An addendum from an internal legal document" width="600" height="300">
<p><em>An example page of one pdf document from the Industry Documents Library.</em></p>
</center>

The metadata for each document has been formatted as follows: each `pdf` is paired with a `json` file with the structure below. Entries have been shortened for readability.