Update README.md

### Dataset Summary

**This is a subset of the DiffusionDB 2M dataset which has been turned into pixel-style art.**

DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users.

### Subset

DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs. The pixelated version of the data was taken from DiffusionDB 2M and has only 2,000 examples.

|Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
|:--|--:|--:|--:|--:|--:|
|DiffusionDB-pixelart|2k|~1.5k|~1.6GB|`images/`|`metadata.parquet`|

1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images; DiffusionDB Large is a superset of DiffusionDB 2M.
2. Images in DiffusionDB-pixelart are stored in `png` format.

## Dataset Structure

We use a modularized file structure to distribute DiffusionDB. The 2k images in DiffusionDB-pixelart are split into folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters.

```bash
# DiffusionDB 2k
./
├── images
│   ├── part-000001
```

```python
from datasets import load_dataset

dataset = load_dataset('jainr3/diffusiondb-pixelart', 'large_random_1k')
```
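The modularized layout described above stores 1,000 images per `part-` folder, so an image's folder can be computed from its index. A minimal sketch — the `part_folder` helper and the six-digit zero-padding are assumptions inferred from the `part-000001` naming, not code from the dataset repo:

```python
def part_folder(image_index: int, images_per_part: int = 1000) -> str:
    """Assumed mapping from a 1-based image index to its `part-` folder name."""
    part_number = (image_index - 1) // images_per_part + 1
    return f"part-{part_number:06d}"

print(part_folder(1))     # part-000001
print(part_folder(1001))  # part-000002
```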

#### Method 2. Use the PoloClub Downloader

This repo includes a Python downloader, [`download.py`](https://github.com/poloclub/diffusiondb/blob/main/scripts/download.py), that allows you to download and load DiffusionDB from your command line. Below is an example of loading a subset of DiffusionDB.

##### Usage/Examples

The script is run with the following command-line arguments:

- `-i` `--index` - File to download, or the lower bound of a range of files if `-r` is also set.
- `-r` `--range` - Upper bound of the range of files to download if `-i` is set.
- `-o` `--output` - Name of a custom output directory. Defaults to the current directory if not set.
- `-z` `--unzip` - Unzip the file(s) after downloading.
- `-l` `--large` - Download from DiffusionDB Large. Defaults to DiffusionDB 2M.

###### Downloading a single file

The specific file to download is supplied as the number at the end of the filename on Hugging Face. The script will automatically pad the number and generate the URL.

```bash
python download.py -i 23
```
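For illustration, the padding step might look like the following sketch — the six-digit width and `.zip` naming are inferred from the file names on Hugging Face, not taken from `download.py` itself:

```python
def padded_zip_name(index: int) -> str:
    # Pad the part number to six digits, matching names like part-000023.zip
    return f"part-{index:06d}.zip"

print(padded_zip_name(23))  # part-000023.zip
```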

###### Downloading a range of files

The lower and upper bounds of the set of files to download are set by the `-i` and `-r` flags respectively.

```bash
python download.py -i 1 -r 2000
```

Note that this range will download the entire dataset. The script will ask you to confirm that you have 1.7TB free at the download destination.

###### Downloading to a specific directory

The script defaults to the location of the dataset's `part` .zip files at `images/`. If you wish to change the download location, you should move these files as well or use a symbolic link.

```bash
python download.py -i 1 -r 2000 -o /home/$USER/datahoarding/etc
```

Again, the script will automatically add the `/` between the directory and the filename when it downloads.
-
###### Setting the files to unzip once they've been downloaded
|
| 270 |
-
|
| 271 |
-
The script is set to unzip the files _after_ all files have downloaded as both can be lengthy processes in certain circumstances.
|
| 272 |
-
|
| 273 |
-
```bash
|
| 274 |
-
python download.py -i 1 -r 2000 -z
|
| 275 |
-
```
|
| 276 |
-
|
| 277 |
-
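When `-z` is set, the post-download unzip pass could look roughly like this sketch — `unzip_all` is a hypothetical helper for illustration, not the script's actual code:

```python
import pathlib
import zipfile

def unzip_all(directory: str = ".") -> list[str]:
    """Extract every downloaded part-*.zip, after all downloads have finished."""
    extracted = []
    for zip_path in sorted(pathlib.Path(directory).glob("part-*.zip")):
        # Unpack part-000001.zip into a part-000001/ folder next to it
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(zip_path.with_suffix(""))
        extracted.append(zip_path.name)
    return extracted
```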

#### Method 3. Use `metadata.parquet` (Text Only)

If your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the `metadata.parquet` table.

```python
from urllib.request import urlretrieve

import pandas as pd

# Download the parquet table
table_url = 'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/metadata.parquet'
urlretrieve(table_url, 'metadata.parquet')

# Read the table using Pandas
metadata_df = pd.read_parquet('metadata.parquet')
```

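Once loaded, the metadata can be sliced like any pandas DataFrame. A small sketch using a stand-in table — the `prompt` column exists in the real metadata, but the rows here are made up:

```python
import pandas as pd

# Stand-in rows; in practice this would be pd.read_parquet('metadata.parquet')
metadata_df = pd.DataFrame({
    "image_name": ["a.png", "b.png", "c.png"],
    "prompt": ["pixel art castle", "oil painting of a dog", "pixel art cat"],
})

# Keep only rows whose prompt mentions "pixel art"
pixel_df = metadata_df[metadata_df["prompt"].str.contains("pixel art")]
print(list(pixel_df["image_name"]))  # ['a.png', 'c.png']
```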
## Dataset Creation
### Curation Rationale