---
language: en
license: mit
task_categories:
- text-generation
tags:
- stylometry
- authorship-attribution
- literary-analysis
- fitzgerald
- classic-literature
- project-gutenberg
size_categories:
- n<1K
pretty_name: F. Scott Fitzgerald Corpus
---
# ContextLab F. Scott Fitzgerald Corpus
## Dataset Description
This dataset contains works of **F. Scott Fitzgerald** (1896-1940), preprocessed for computational stylometry research. The texts were sourced from [Project Gutenberg](https://www.gutenberg.org/) and cleaned for use in the paper ["A Stylometric Application of Large Language Models"](https://arxiv.org/abs/2510.21958) (Stropkay et al., 2025).
The corpus comprises **8 books** by F. Scott Fitzgerald, among them The Great Gatsby, Tender Is the Night, and This Side of Paradise. All text has been converted to **lowercase** and stripped of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.
### Quick Stats
- **Books:** 8
- **Total characters:** 3,363,535
- **Total words:** 592,393 (approximate)
- **Average book length:** 420,441 characters
- **Format:** Plain text (.txt files)
- **Language:** English (lowercase)
## Dataset Structure
### Books Included
Each `.txt` file contains the complete text of one book:
| File | Title |
|------|-------|
| `4368.txt` | Flappers and Philosophers |
| `64317.txt` | The Great Gatsby |
| `6695.txt` | Tales of the Jazz Age |
| `68229.txt` | All the Sad Young Men |
| `805.txt` | This Side of Paradise |
| `9830.txt` | The Beautiful and Damned |
| `gutenberg_net_au_ebooks03_0301261.txt` | Tender Is the Night |
| `gutenberg_net_au_fsf_PAT-HOBBY.txt` | The Pat Hobby Stories |
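If you want to select books by title rather than filename, the table above can be mirrored as a plain dictionary. This is an ad hoc convenience (the constant names below are not part of the dataset); the mapping itself comes straight from the table:
```python
# Filename-to-title mapping, copied from the table above
FITZGERALD_BOOKS = {
    "4368.txt": "Flappers and Philosophers",
    "64317.txt": "The Great Gatsby",
    "6695.txt": "Tales of the Jazz Age",
    "68229.txt": "All the Sad Young Men",
    "805.txt": "This Side of Paradise",
    "9830.txt": "The Beautiful and Damned",
    "gutenberg_net_au_ebooks03_0301261.txt": "Tender Is the Night",
    "gutenberg_net_au_fsf_PAT-HOBBY.txt": "The Pat Hobby Stories",
}

# Invert the mapping to look up a filename by title
TITLE_TO_FILE = {title: fname for fname, title in FITZGERALD_BOOKS.items()}
print(TITLE_TO_FILE["The Great Gatsby"])  # -> 64317.txt
```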
### Data Fields
- **text:** Complete book text (lowercase, cleaned)
- **filename:** Project Gutenberg ID
### Data Format
All files are plain UTF-8 text:
- Lowercase characters only
- Punctuation and structure preserved
- Paragraph breaks maintained
- No chapter headings or non-narrative text
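These properties are easy to spot-check after loading. A minimal sketch, assuming the corpus loads with one `text` field per record as in the usage examples below:
```python
from datasets import load_dataset

corpus = load_dataset("contextlab/fitzgerald-corpus")

for book in corpus['train']:
    text = book['text']
    # Lowercase only: lowercasing the text should be a no-op
    print("all lowercase:   ", text == text.lower())
    # Paragraph breaks maintained (blank-line separators assumed)
    print("paragraph breaks:", "\n\n" in text)
```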
## Usage
### Load with `datasets` library
```python
from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/fitzgerald-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
```
### Load a specific file
```python
from datasets import load_dataset

# Load a single book by filename (64317.txt is The Great Gatsby)
dataset = load_dataset(
    "contextlab/fitzgerald-corpus",
    data_files="64317.txt"  # Specific Gutenberg ID
)
text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
```
### Download files directly
```python
from huggingface_hub import hf_hub_download

# Download one book (64317.txt is The Great Gatsby)
file_path = hf_hub_download(
    repo_id="contextlab/fitzgerald-corpus",
    filename="64317.txt",
    repo_type="dataset"
)
with open(file_path, 'r') as f:
    text = f.read()
```
### Use for training language models
```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/fitzgerald-corpus")

# Tokenize (GPT-2 has no pad token, so reuse the EOS token for padding)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize_function, batched=True, remove_columns=['text'])

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Causal-LM collator pads batches and builds labels from the input IDs
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)
trainer.train()
```
### Analyze text statistics
```python
from datasets import load_dataset
import numpy as np
corpus = load_dataset("contextlab/fitzgerald-corpus")
# Calculate statistics
lengths = [len(book['text']) for book in corpus['train']]
print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")
```
## Dataset Creation
### Source Data
Most texts were sourced from [Project Gutenberg](https://www.gutenberg.org/), a library of over 70,000 free eBooks in the public domain; as their filenames indicate, two works come from Project Gutenberg Australia.
**Project Gutenberg Links:**
- Books are identified by their Project Gutenberg ID numbers (filenames)
- Example: `64317.txt` corresponds to https://www.gutenberg.org/ebooks/64317 (a small helper for reconstructing these links appears below)
- The two files prefixed `gutenberg_net_au_` (Tender Is the Night and The Pat Hobby Stories) come from [Project Gutenberg Australia](https://gutenberg.net.au/)
- All works are in the public domain
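For the numerically named files, the Project Gutenberg page can be reconstructed from the filename stem. A minimal illustrative helper (the function is hypothetical, not part of the dataset):
```python
def gutenberg_url(filename):
    """Return the Project Gutenberg page for a numeric filename, else None."""
    stem = filename.removesuffix(".txt")
    if stem.isdigit():
        return f"https://www.gutenberg.org/ebooks/{stem}"
    return None  # non-numeric names (the gutenberg_net_au_ files) have no PG page

print(gutenberg_url("64317.txt"))  # -> https://www.gutenberg.org/ebooks/64317
```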
### Preprocessing Pipeline
The raw Project Gutenberg texts underwent the following preprocessing:
1. **Header/footer removal:** Project Gutenberg license text and metadata removed
2. **Lowercase conversion:** All text converted to lowercase for stylometry
3. **Chapter heading removal:** Chapter titles and numbering removed
4. **Non-narrative text removal:** Tables of contents, dedications, etc. removed
5. **Encoding normalization:** Converted to UTF-8
6. **Structure preservation:** Paragraph breaks and punctuation maintained
**Why lowercase?** Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes capitalization as a confounding variable.
**Preprocessing code:** Available at https://github.com/ContextLab/llm-stylometry
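The actual pipeline lives in the repository linked above. The sketch below only illustrates the kinds of steps described; the marker and heading patterns are assumptions, not the project's real rules:
```python
import re

def clean_gutenberg_text(raw: str) -> str:
    """Illustrative cleanup along the lines described above (not the real pipeline)."""
    text = raw
    # Strip Project Gutenberg header/footer (marker format is an assumption)
    start = re.search(r"\*\*\* START OF.*?\*\*\*", text, re.DOTALL)
    end = re.search(r"\*\*\* END OF.*?\*\*\*", text, re.DOTALL)
    if start and end:
        text = text[start.end():end.start()]
    # Remove chapter headings such as "CHAPTER I" or "Chapter 12"
    text = re.sub(r"^\s*chapter\s+[ivxlc\d]+.*$", "", text,
                  flags=re.IGNORECASE | re.MULTILINE)
    # Lowercase for stylometry
    text = text.lower()
    # Preserve paragraph breaks but collapse longer runs of blank lines
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()
```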
## Considerations for Using This Dataset
### Known Limitations
- **Historical language:** Reflects the vocabulary, grammar, and cultural context of Jazz Age America
- **Lowercase only:** All text converted to lowercase (not suitable for case-sensitive analysis)
- **Incomplete corpus:** May not include all of F. Scott Fitzgerald's writings (only public-domain works available through Gutenberg)
- **Cleaning artifacts:** Some formatting irregularities may remain from the Gutenberg sources
- **Public domain only:** Limited to works whose copyrights have expired
### Intended Use Cases
- **Stylometry research:** Authorship attribution, style analysis
- **Language modeling:** Training author-specific models
- **Literary analysis:** Computational study of F. Scott Fitzgerald's writing
- **Historical NLP:** Analyzing language patterns of Jazz Age America
- **Educational:** Teaching computational text analysis
### Out-of-Scope Uses
- Case-sensitive text analysis
- Modern language applications
- Factual information retrieval
- Complete scholarly editions (use academic sources)
## Citation
If you use this dataset in your research, please cite:
```bibtex
@article{StroEtal25,
title={A Stylometric Application of Large Language Models},
author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
journal={arXiv preprint arXiv:2510.21958},
year={2025}
}
```
## Additional Information
### Dataset Curator
[ContextLab](https://www.context-lab.com/), Dartmouth College
### Licensing
MIT License - Free to use with attribution
### Contact
- **Paper & Code:** https://github.com/ContextLab/llm-stylometry
- **Issues:** https://github.com/ContextLab/llm-stylometry/issues
- **Contact:** Jeremy R. Manning ([email protected])
### Related Resources
Explore datasets for all 8 authors in the study:
- [Jane Austen](https://huggingface.co/datasets/contextlab/austen-corpus)
- [L. Frank Baum](https://huggingface.co/datasets/contextlab/baum-corpus)
- [Charles Dickens](https://huggingface.co/datasets/contextlab/dickens-corpus)
- [F. Scott Fitzgerald](https://huggingface.co/datasets/contextlab/fitzgerald-corpus)
- [Herman Melville](https://huggingface.co/datasets/contextlab/melville-corpus)
- [Ruth Plumly Thompson](https://huggingface.co/datasets/contextlab/thompson-corpus)
- [Mark Twain](https://huggingface.co/datasets/contextlab/twain-corpus)
- [H.G. Wells](https://huggingface.co/datasets/contextlab/wells-corpus)