# Mathematical Documents Dataset
This dataset contains 36,661 scientific documents with OCR-extracted text, filtered from the CommonCrawl PDF corpus by a classifier that estimates the probability of mathematical content. Each document's score is included in the metadata.
## Quick Start

```python
import json

# Load metadata
with open("metadata.jsonl") as f:
    for line in f:
        doc = json.loads(line)
        doc_id = doc['doc_id']

        # Read the extracted text for each page:
        # texts/{doc_id}/page_1.md, page_2.md, ...
        with open(f"texts/{doc_id}/page_1.md") as page:
            text = page.read()
        print(text)
        break
```
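The metadata can also be loaded with the `datasets` library's generic JSON builder; a minimal sketch (local `data_files` land in a single `train` split by default):

```python
from datasets import load_dataset

# JSONL loads directly via the generic "json" builder
meta = load_dataset("json", data_files="metadata.jsonl", split="train")

print(meta.column_names)   # e.g. ['doc_id', 'pdf_path', 'num_pages', 'mean_proba']
print(meta[0]['doc_id'])
```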
## Dataset Structure

```
math-docs-dataset/
├── metadata.jsonl            # Document metadata with probability scores
├── metadata_updated.jsonl    # Updated metadata (if applicable)
├── token_counts.jsonl        # Token counts per document
├── token_stats.json          # Aggregate token statistics
├── texts/                    # OCR-extracted text (2.5 GB)
│   └── {doc_id}/
│       ├── page_1.md
│       ├── page_2.md
│       └── ...
└── samples/                  # 50 sample documents for preview
    ├── pdfs/
    │   └── {doc_id}.pdf
    ├── texts/
    │   └── {doc_id}/
    └── sample_metadata.jsonl
```
## Statistics
- Total documents: 36,661
- Total pages: 885,333
- Average pages per document: 24.1
- Mean probability range: [0.8007, 1.0000]
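These figures can be recomputed directly from `metadata.jsonl`; a minimal sketch using the fields documented under Metadata Fields below:

```python
import json

# Recompute the headline statistics from the metadata
with open("metadata.jsonl") as f:
    docs = [json.loads(line) for line in f]

total_pages = sum(d['num_pages'] for d in docs)
probas = [d['mean_proba'] for d in docs]

print(f"Total documents: {len(docs):,}")
print(f"Total pages: {total_pages:,}")
print(f"Average pages per document: {total_pages / len(docs):.1f}")
print(f"Mean probability range: [{min(probas):.4f}, {max(probas):.4f}]")
```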
### Token Statistics
- Total tokens: 756,843,504
- Average tokens per document: 20,644
- Average tokens per page: 854
Token counts were calculated with tiktoken using the `cl100k_base` encoding (the GPT-4 tokenizer).
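For reference, a count computed the same way (a sketch of the method, not the exact counting script used):

```python
import tiktoken

# cl100k_base is the encoding used by the GPT-4 tokenizer
enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

print(count_tokens("Let $f(x) = x^2$; then $f'(x) = 2x$."))
```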
## Accessing Full PDFs

Due to their size (30+ GB), the full PDF files are hosted on Wasabi S3 storage.
### Download All PDFs

```bash
# Install the AWS CLI if needed
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install -i ~/.local/aws-cli -b ~/.local/bin

# Download all PDFs (no authentication required)
aws s3 sync s3://igor-bucket/math_docs_dataset/pdfs/ ./pdfs/ \
    --endpoint-url=https://s3.eu-central-1.wasabisys.com \
    --no-sign-request
```
### Download a Specific PDF

```bash
# Download a single document (substitute a real doc_id)
aws s3 cp s3://igor-bucket/math_docs_dataset/pdfs/{doc_id}.pdf ./pdfs/ \
    --endpoint-url=https://s3.eu-central-1.wasabisys.com \
    --no-sign-request
```
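If the AWS CLI is not available, the same anonymous download works from Python with boto3; a sketch, where the `doc_id` is a placeholder to substitute:

```python
import os

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned config mirrors the --no-sign-request CLI flag
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-1.wasabisys.com",
    config=Config(signature_version=UNSIGNED),
)

doc_id = "example-doc-id"  # placeholder; use a real doc_id from metadata.jsonl
os.makedirs("pdfs", exist_ok=True)
s3.download_file(
    "igor-bucket",
    f"math_docs_dataset/pdfs/{doc_id}.pdf",
    f"pdfs/{doc_id}.pdf",
)
```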
### Preview Samples

50 sample PDFs, along with their extracted text and metadata, are included in the `samples/` directory so you can preview the dataset without downloading the full PDF corpus.
## Metadata Fields

Each entry in `metadata.jsonl` contains:

- `doc_id`: Unique document identifier
- `pdf_path`: Relative path to the PDF file
- `num_pages`: Number of pages in the document
- `mean_proba`: Mean probability that the document contains mathematical content
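If you want this schema enforced while loading, the fields map onto a small dataclass; a sketch (construction raises `TypeError` on missing or unexpected keys):

```python
import json
from dataclasses import dataclass

@dataclass
class DocMeta:
    doc_id: str
    pdf_path: str
    num_pages: int
    mean_proba: float

with open("metadata.jsonl") as f:
    record = json.loads(next(f))

meta = DocMeta(**record)
print(meta.doc_id, meta.num_pages, meta.mean_proba)
```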
## Data Collection
- Source: CommonCrawl PDF corpus
- Filtering: Documents classified by mathematical content probability
- Text Extraction: dots.ocr
## Usage Examples
### Load and Process Documents

```python
import json
from pathlib import Path

# Load metadata
docs = []
with open("metadata.jsonl") as f:
    for line in f:
        docs.append(json.loads(line))

# Filter high-quality math documents
high_quality = [d for d in docs if d['mean_proba'] > 0.95]
print(f"Found {len(high_quality)} high-quality documents")

# Read a document's text, sorting pages numerically
# (lexicographic order would put page_10.md before page_2.md)
def read_document(doc_id):
    text_dir = Path(f"texts/{doc_id}")
    pages = sorted(text_dir.glob("page_*.md"),
                   key=lambda p: int(p.stem.split("_")[1]))
    full_text = []
    for page_file in pages:
        with open(page_file) as f:
            full_text.append(f.read())
    return "\n\n".join(full_text)

# Example usage
doc = high_quality[0]
text = read_document(doc['doc_id'])
print(f"Document {doc['doc_id']}: {len(text)} characters")
```
### Token Analysis

```python
import json

# Load aggregate token statistics
with open("token_stats.json") as f:
    stats = json.load(f)

print(f"Total tokens: {stats['total_tokens']:,}")
print(f"Avg tokens/doc: {stats['avg_tokens_per_doc']:.0f}")

# Load per-document token counts
with open("token_counts.jsonl") as f:
    for line in f:
        doc_tokens = json.loads(line)
        # Process individual document token counts
        break
```
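To see the spread rather than just the averages, the per-document counts can be aggregated; a sketch (the name of the count field in `token_counts.jsonl` is an assumption — inspect one record and adjust the key if it differs):

```python
import json
import statistics

# Field name 'num_tokens' is an assumption; check one record
# of token_counts.jsonl and adjust if the key differs.
with open("token_counts.jsonl") as f:
    counts = sorted(json.loads(line)['num_tokens'] for line in f)

print(f"Median tokens/doc: {statistics.median(counts):,.0f}")
print(f"90th percentile:   {counts[int(0.9 * len(counts))]:,}")
```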
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{math_docs_dataset,
  title={Mathematical Documents Dataset},
  author={Your Name},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/your-username/math-docs-dataset}
}
```
## License
MIT License
## Contact
For questions or issues, please open an issue on the dataset repository.