---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: date
    dtype: string
  - name: file_path
    dtype: string
  - name: offset
    dtype: int64
  - name: token_count
    dtype: int64
  - name: language
    dtype: string
  - name: page_average_lid
    dtype: string
  - name: page_average_lid_score
    dtype: float64
  - name: full_doc_lid
    dtype: string
  - name: full_doc_lid_score
    dtype: float64
  - name: per_page_languages
    list: string
  - name: is_truncated
    dtype: bool
  - name: extractor
    dtype: string
  - name: page_ends
    list: int64
  splits:
  - name: train
    num_bytes: 43954704
    num_examples: 7537
  download_size: 24068621
  dataset_size: 43954704
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
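Per the `configs` block above, the dataset exposes a single `default` config with a `train` split, so it can be loaded with the standard `datasets` API. The sketch below is illustrative only; the repository ID is a placeholder, since the card does not state it here:

```python
from datasets import load_dataset

# NOTE: "username/dataset-name" is a placeholder; substitute the actual
# repository ID of this dataset on the Hugging Face Hub.
ds = load_dataset("username/dataset-name", split="train")

# Each example carries the fields listed in the metadata above,
# e.g. the extracted text and its token count.
print(ds[0]["token_count"])
print(ds[0]["text"][:200])
```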
## Sampling Methodology
This dataset was created using reservoir sampling, a statistically unbiased random sampling algorithm in which every document in the source dataset has an equal probability of being included. This makes the 10M-token sample representative of the full dataset's characteristics.
- **Source Dataset:** HuggingFaceFW/finepdfs
- **Sample Size:** 10M tokens
- **Content:** High-quality, textbook-style PDFs
Reservoir sampling enables rapid experimentation and ablation studies without processing the entire source dataset, while maintaining statistical validity of results.
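For reference, the sketch below shows the standard reservoir-sampling procedure (Algorithm R) applied to a document stream. It is illustrative only: the function name, the fixed document count `k`, and how a roughly 10M-token budget would map onto that count are assumptions, not the exact pipeline used to build this dataset.

```python
import random
from typing import Iterable, List

def reservoir_sample(stream: Iterable[dict], k: int, seed: int = 0) -> List[dict]:
    """Uniformly sample k items from a stream of unknown length in one pass."""
    rng = random.Random(seed)
    reservoir: List[dict] = []
    for i, doc in enumerate(stream):
        if i < k:
            reservoir.append(doc)
        else:
            # Replace a random slot with probability k / (i + 1), which keeps
            # every document seen so far equally likely to remain in the sample.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = doc
    return reservoir
```

Because the reservoir is updated in a single pass, the source corpus never needs to be materialized or shuffled in memory, which is what makes small representative subsets cheap to produce.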
For details on how this dataset was used in optimal pre-training data composition research, see the blog post.
## Citation
If you use this dataset, please cite:
```bibtex
@article{sharma2025billion,
  title={The 1 Billion Token Challenge: Finding the Perfect Pre-training Mix},
  author={Sharma, Asankhaya},
  year={2025},
  url={https://huggingface.co/blog/codelion/optimal-dataset-mixing/}
}
```
For more details, see the blog post.