Upload README.md with huggingface_hub
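For context, this commit title is the default message produced by the `huggingface_hub` client. Below is a minimal sketch of the kind of call that generates such a commit; the `repo_id` is a placeholder, not the actual repository name.

```python
# Minimal sketch: pushing a README to a dataset repo with huggingface_hub.
# The repo_id below is hypothetical -- the real repository id is not shown here.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",      # local file to push
    path_in_repo="README.md",         # destination path inside the repo
    repo_id="your-org/your-dataset",  # placeholder repo id
    repo_type="dataset",
    commit_message="Upload README.md with huggingface_hub",
)
```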
README.md CHANGED
@@ -5,7 +5,7 @@ configs:
   - split: train
     path: "*.parquet"
 ---
-Gemstones Training Dataset -
+Gemstones Training Dataset - Sequential version
 
 This data is a reprocessed version of the first 1B rows of the Dolma v1.7 dataset (https://huggingface.co/datasets/allenai/dolma).
 
@@ -15,7 +15,7 @@ reproduce the training batches across the gpus is/was to run the training code.
 This repo is the result of an attempt to simulate the way in which the training code loaded the data and
 stream it out to a portable file format for use in downstream analyses of the model suite.
 
-# Sharding format:
+# Sharding format: sequential
 
 This version of the dataset approximates the order of the dataset _as if_ a model was being trained
 on a single gpu without data parallelism. In reality, specific subsets of the data were loaded by the distributed
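As a usage note, the YAML config in the diff maps the `train` split to `"*.parquet"`, so every parquet shard in the repo belongs to that split and can be read back with the `datasets` library. A minimal sketch, assuming a hypothetical `REPO_ID` (the real repository id is not shown in this excerpt):

```python
# Sketch: streaming the parquet shards declared by the YAML config above.
# REPO_ID is a placeholder; substitute the actual dataset repository id.
from datasets import load_dataset

REPO_ID = "your-org/gemstones-sequential"  # hypothetical id

# streaming=True avoids downloading the full dataset up front; rows arrive
# in shard-file order, which for this "sequential" sharding format should
# approximate the single-gpu training order described in the README.
ds = load_dataset(REPO_ID, split="train", streaming=True)
for i, row in enumerate(ds):
    print(row.keys())  # inspect the schema of the first few rows
    if i >= 2:
        break
```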