# AI Alignment Research Dataset
The AI Alignment Research Dataset is a collection of documents related to AI Alignment and Safety, drawn from various books, research papers, and alignment-related blog posts. This is a work in progress: components are still being cleaned so that they can be updated more regularly.
## Sources

Here is the list of sources along with sample contents:
- agisf - recommended readings from AGI Safety Fundamentals
- aisafety.info - Stampy's FAQ
- arxiv - relevant research papers
- blogs - entire websites automatically scraped:
  - AI Impacts
  - AI Safety Camp
  - carado.moe
  - Cold Takes
  - DeepMind technical blogs
  - DeepMind AI Safety Research
  - EleutherAI
  - generative.ink
  - Gwern Branwen's blog
  - Jack Clark's Import AI
  - MIRI
  - Jacob Steinhardt's blog
  - ML Safety Newsletter
  - Transformer Circuits Thread
  - OpenAI Research
  - Victoria Krakovna's blog
  - Eliezer Yudkowsky's blog
- eaforum - selected posts
- lesswrong - selected posts
- special_docs - individual documents curated from various resources; make a suggestion for sources not already in the dataset
- youtube - playlists & channels
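
Each source above is also exposed as its own dataset configuration (see Usage below). As a minimal sketch, you can list the configurations currently available on the Hub with the `datasets` library:

```python
from datasets import get_dataset_config_names

# One configuration per source listed above.
configs = get_dataset_config_names('StampyAI/alignment-research-dataset')
print(configs)
```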
## Keys

All entries contain the following keys:
- id - string of unique identifier
- source - string of data source listed above
- title - string of document title
- authors - list of author names as strings
- text - full text of document content
- url - string of valid link to text content
- date_published - publication date in UTC format
Additional keys may be available depending on the source document.
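
For illustration, a single entry might look roughly like this (a hypothetical sketch: the values, the URL, and the exact timestamp format are invented for the example):

```python
entry = {
    "id": "abc123",                             # hypothetical values for illustration
    "source": "arxiv",
    "title": "An example paper title",
    "authors": ["Jane Doe", "John Smith"],
    "text": "Full text of the document...",
    "url": "https://arxiv.org/abs/...",         # placeholder link
    "date_published": "2022-06-09T00:00:00Z",   # assumed UTC timestamp format
}
```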
## Usage

Execute the following code to download and parse the files:
```python
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset')
```
To get the data for a specific source only, pass its name as the second argument, e.g.:
```python
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
```
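
The result is a standard `datasets` object, so the usual access patterns apply. A minimal sketch, assuming the default `train` split:

```python
posts = data['train']
print(posts[0]['title'])  # inspect a single entry

# Count posts whose title mentions "alignment", guarding against missing titles.
count = sum('alignment' in (row['title'] or '').lower() for row in posts)
print(count)
```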
## Limitations and Bias

The LessWrong posts are overweighted toward content on doom and existential risk, so please be careful when training or fine-tuning generative language models on this dataset.
## Contributing

The scraper used to generate this dataset is open-sourced on GitHub and currently maintained by volunteers at StampyAI / AI Safety Info. Learn more or join us on Discord.
## Rebuilding info

This README contains metadata about the number of rows and the features of each configuration, which should be rebuilt whenever the datasets change. To do so, run:

```bash
datasets-cli test ./alignment-research-dataset --save_info --all_configs
```
## Citing the Dataset
For more information, see the paper and the LessWrong post. Please use the following citation when using the dataset:
Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).