### Dataset Summary

RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.

| Dataset       | Token Count  |
|---------------|--------------|
| Commoncrawl   | 878 Billion  |
| C4            | 175 Billion  |
| GitHub        | 59 Billion   |
| Books         | 26 Billion   |
| ArXiv         | 28 Billion   |
| Wikipedia     | 24 Billion   |
| StackExchange | 20 Billion   |
| Total         | 1.2 Trillion |

The dataset consists of 2084 jsonl files, whose URLs are listed in the following file:

```
https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt
```

To download the data, one can use the following command:

```
wget -i https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt
```
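If you only need a few of the files, a minimal Python sketch along the same lines (assuming nothing beyond the `urls.txt` listing above; the output directory name is arbitrary) is:

```python
import os
import urllib.request

INDEX_URL = "https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt"
OUT_DIR = "redpajama_subset"  # arbitrary local directory for this sketch

os.makedirs(OUT_DIR, exist_ok=True)

# Fetch the list of jsonl file URLs.
with urllib.request.urlopen(INDEX_URL) as resp:
    urls = [line.strip() for line in resp.read().decode("utf-8").splitlines() if line.strip()]

# Download only the first few files as an example subset.
for url in urls[:3]:
    filename = os.path.join(OUT_DIR, url.split("/")[-1])
    print(f"downloading {url} -> {filename}")
    urllib.request.urlretrieve(url, filename)
```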
A smaller 1B-token sample of the dataset can be found [here](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).

A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).

### Languages

Primarily English, though the Wikipedia slice contains multiple languages.

## Dataset Structure

The dataset structure is as follows:

```
{
    "text": ...,
    "meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
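As a quick illustration of this record layout, the sketch below reads one of the downloaded jsonl files with the Python standard library (the filename is a placeholder; any file from `urls.txt` works):

```python
import json

# Placeholder filename; substitute any jsonl file downloaded via urls.txt.
path = "example.jsonl"

with open(path, "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        text = record["text"]          # the raw document text
        meta = record.get("meta", {})  # source-specific metadata
        print(meta.get("source"), meta.get("url"), len(text))
```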
## Dataset Creation

This dataset was created to follow the LLaMa paper as closely as possible and reproduce its recipe.

### Source Data

#### Commoncrawl

We download five dumps from Commoncrawl and run them through the official `cc_net` pipeline. We then deduplicate at the paragraph level and filter out low-quality text using a linear classifier trained to distinguish Wikipedia reference paragraphs from random Commoncrawl samples.
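The README does not pin down the classifier implementation; as one plausible reading, the sketch below trains a linear (logistic regression) quality filter on bag-of-words features, with Wikipedia-reference paragraphs as positives and random Commoncrawl paragraphs as negatives. The feature choice, variable names, and threshold are illustrative only.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

# Illustrative training data: paragraphs referenced by Wikipedia (label 1)
# versus random Commoncrawl paragraphs (label 0). Placeholders only.
wiki_ref_paragraphs = ["..."]
random_cc_paragraphs = ["..."]

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
X = vectorizer.transform(wiki_ref_paragraphs + random_cc_paragraphs)
y = [1] * len(wiki_ref_paragraphs) + [0] * len(random_cc_paragraphs)

clf = LogisticRegression(max_iter=1000).fit(X, y)

def keep_paragraph(paragraph: str, threshold: float = 0.5) -> bool:
    """Keep a paragraph if the classifier scores it as Wikipedia-reference-like."""
    prob = clf.predict_proba(vectorizer.transform([paragraph]))[0, 1]
    return prob >= threshold
```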
#### C4

C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
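As an illustration of what "bringing the data into our own format" can look like, the sketch below maps a C4 record (which, in the Hugging Face release of C4, carries `text`, `url`, and `timestamp` fields) onto the `{"text", "meta"}` layout shown above; treat the exact field and `source` values as assumptions.

```python
import json

def c4_to_redpajama(c4_record: dict) -> dict:
    """Convert a C4 record into the {"text", "meta"} layout used by this dataset.

    Assumes the record exposes "text", "url", and "timestamp" fields,
    as in the Hugging Face release of C4.
    """
    return {
        "text": c4_record["text"],
        "meta": {
            "url": c4_record.get("url", ""),
            "timestamp": c4_record.get("timestamp", ""),
            "source": "c4",
            "language": "en",
        },
    }

# Example usage with a dummy record.
example = {"text": "Hello world.", "url": "http://example.com", "timestamp": "2019-04-01T00:00:00Z"}
print(json.dumps(c4_to_redpajama(example)))
```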
#### GitHub

The raw GitHub data is downloaded from Google BigQuery. We deduplicate at the file level, filter out low-quality files, and keep only projects distributed under the MIT, BSD, or Apache licenses.
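A minimal sketch of the file-level deduplication and license filtering described above; the record field names (`content`, `license`) are assumptions for illustration, not the actual BigQuery schema.

```python
import hashlib

ALLOWED_LICENSE_PREFIXES = ("mit", "bsd", "apache")

def filter_and_dedup(records):
    """Yield records with a permissive license whose content has not been seen before.

    Each record is assumed to be a dict with "content" and "license" keys;
    the real BigQuery schema may differ.
    """
    seen_hashes = set()
    for record in records:
        license_name = (record.get("license") or "").lower()
        if not license_name.startswith(ALLOWED_LICENSE_PREFIXES):
            continue
        digest = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
        if digest in seen_hashes:  # exact file-level duplicate
            continue
        seen_hashes.add(digest)
        yield record
```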
#### Wikipedia

We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains text in 20 different languages. The dataset comes in a preprocessed format in which hyperlinks, comments, and other formatting boilerplate have been removed.
#### Gutenberg and Books3

The PG19 subset of the Gutenberg Project and the Books3 dataset are downloaded from Huggingface. After downloading, we use simhash to remove near duplicates.
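For readers unfamiliar with simhash, here is a compact sketch of the idea (a 64-bit token-hash version; the real pipeline's tokenization, bit width, and Hamming-distance threshold may differ):

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """Compute a simple token-level simhash fingerprint of a document."""
    weights = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest()[:8], "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if weights[i] > 0)

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def is_near_duplicate(doc_a: str, doc_b: str, max_distance: int = 3) -> bool:
    """Treat two documents as near duplicates if their fingerprints differ in only a few bits."""
    return hamming_distance(simhash(doc_a), simhash(doc_b)) <= max_distance
```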
#### ArXiv

ArXiv data is downloaded from Amazon S3 (the `arxiv` requester-pays bucket). We keep only LaTeX source files and remove preambles, comments, macros, and bibliographies.
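A rough regex-based sketch of that kind of LaTeX cleanup (real .tex sources have many edge cases, and macro expansion is omitted, so this is illustrative only):

```python
import re

def clean_latex(source: str) -> str:
    """Strip preamble, comments, and bibliography from a LaTeX source file (simplified)."""
    # Keep only the body: everything between \begin{document} and \end{document}.
    match = re.search(r"\\begin\{document\}(.*?)\\end\{document\}", source, flags=re.DOTALL)
    body = match.group(1) if match else source

    # Remove comments: a '%' not escaped as '\%', through end of line.
    body = re.sub(r"(?<!\\)%.*", "", body)

    # Drop the bibliography environment if present.
    body = re.sub(r"\\begin\{thebibliography\}.*?\\end\{thebibliography\}", "", body, flags=re.DOTALL)

    return body.strip()
```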
#### Stackexchange

The Stack Exchange split of the dataset is downloaded from the [Internet Archive](https://archive.org/download/stackexchange). We keep only the posts from the 28 largest sites, remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
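A sketch of the grouping step. The Stack Exchange dumps on the Internet Archive ship a `Posts.xml` per site in which questions have `PostTypeId="1"` and answers have `PostTypeId="2"` with a `ParentId` pointing at their question; the sketch pairs them and sorts answers by `Score`. Anything beyond that schema (and the HTML stripping) is simplified here.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def group_question_answers(posts_xml_path: str):
    """Group Posts.xml rows into (question, [answers sorted by score]) pairs."""
    questions = {}
    answers = defaultdict(list)

    for _, row in ET.iterparse(posts_xml_path, events=("end",)):
        if row.tag != "row":
            continue
        post_type = row.get("PostTypeId")
        if post_type == "1":    # question
            questions[row.get("Id")] = row.attrib.copy()
        elif post_type == "2":  # answer, attached to its question via ParentId
            answers[row.get("ParentId")].append(row.attrib.copy())
        row.clear()             # keep memory bounded on large dumps

    for qid, question in questions.items():
        ranked = sorted(answers.get(qid, []), key=lambda a: int(a.get("Score", "0")), reverse=True)
        yield question, ranked
```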
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
-->