---
dataset_info:
- config_name: all
features:
- name: image
dtype: image
- name: depth
dtype: image
- name: mask
dtype: image
splits:
- name: train
num_bytes: 22712394633.0
num_examples: 12000
download_size: 22677841215
dataset_size: 22712394633.0
- config_name: blender
features:
- name: image
dtype: image
- name: depth
dtype: image
- name: mask
dtype: image
splits:
- name: train
num_bytes: 11049913446.0
num_examples: 6000
download_size: 11050639109
dataset_size: 11049913446.0
- config_name: default
features:
- name: image
dtype: image
- name: depth
dtype: image
- name: mask
dtype: image
splits:
- name: train
num_examples: 12000
download_size: 21573770746
dataset_size: 21702540415
- config_name: sdxl
features:
- name: image
dtype: image
- name: depth
dtype: image
- name: mask
dtype: image
splits:
- name: train
num_bytes: 11662481187.0
num_examples: 6000
download_size: 11626921894
dataset_size: 11662481187.0
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- config_name: blender
data_files:
- split: train
path: blender/train-*
- config_name: sdxl
data_files:
- split: train
path: sdxl/train-*
language:
- en
license: gpl-3.0
tags:
- vision
- image segmentation
- instance segmentation
- object detection
- synthetic
- sim-to-real
- depth estimation
- image to image
annotations_creators:
- machine-generated
pretty_name: SynWBM Dataset
size_categories:
- 10K<n<100K
task_categories:
- object-detection
- image-segmentation
- depth-estimation
- image-to-image
task_ids:
- instance-segmentation
- semantic-segmentation
---
# The SynWBM (Synthetic White Button Mushrooms) Dataset!

Synthetic dataset of white button mushrooms (*Agaricus bisporus*) with instance segmentation masks and depth maps.
### Dataset Summary
The SynWBM Dataset is a collection of synthetic images of white button mushrooms. It combines rendered (Blender) and generated (Stable Diffusion XL) synthetic images for training mushroom segmentation models. Each image is annotated with instance segmentation masks and depth maps. Samples are explicitly marked by their creation method (rendered from the Blender scene or generated with Stable Diffusion XL) to facilitate sim-to-real performance tests across the different synthetic subsets. The dataset does not provide real-world samples for testing; it contains only synthetic training data.
### Supported Tasks and Leaderboards
The dataset supports tasks such as semantic segmentation, instance segmentation, object detection, depth estimation, image-to-image, and pre-training models for sim-to-real transfer.
## Dataset Structure
### Data samples
Each sample is a 1024x1024x3 image in PNG format. The annotations are 1024x1024x1 PNG images representing the instance segmentation masks (stored as uint16) and 1024x1024x1 PNG images representing depth data.
### Data Fields
The data fields are:
1) 'image': 1024x1024x3 PNG image
2) 'depth': 1024x1024x1 PNG image
3) 'mask': 1024x1024x1 uint16 PNG image
⚠️ Note: Instance masks are stored as uint16 PNG images, where each pixel value corresponds to an instance ID. Many default image viewers will render them as black; for visualization, we recommend converting them to RGB with a color palette.
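As noted above, the raw uint16 masks appear black in most viewers. A minimal visualization sketch using NumPy and Pillow (the tiny synthetic mask below stands in for a real `example["mask"]`):

```python
import numpy as np
from PIL import Image

# Tiny synthetic uint16 instance mask (in practice: np.array(example["mask"]))
mask = np.zeros((8, 8), dtype=np.uint16)
mask[2:4, 2:4] = 1  # instance 1
mask[5:7, 5:7] = 2  # instance 2

# Assign each instance ID a pseudo-random RGB color; background (ID 0) stays black
rng = np.random.default_rng(seed=0)
palette = rng.integers(0, 256, size=(int(mask.max()) + 1, 3), dtype=np.uint8)
palette[0] = 0

# Index the palette with the instance IDs to get an (H, W, 3) uint8 image
colored = palette[mask]
Image.fromarray(colored).save("mask_colored.png")
```

The same palette-indexing trick scales to the full 1024x1024 masks, since each pixel value is simply an index into the color table.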
### Data Splits
The dataset contains only a training split for all image collections, but provides three configs (`all`, `blender` and `sdxl`) to load the images generated by different methods.
### Limitations
- The dataset only contains synthetic images; no real-world samples are provided.
- Stable Diffusion XL generations may contain artifacts or unrealistic structures.
- Depth annotations are derived from rendering/generation pipelines and may not reflect real-world sensor noise.
## Usage
You can load the dataset using 🤗 Datasets.
⚠️ **Note**: By default, `load_dataset` will download the *entire* dataset locally.
If you only want to iterate over the data without downloading everything first, use the `streaming=True` argument (see example below).
```python
from datasets import load_dataset
# Load all synthetic images
ds = load_dataset("ABC-iRobotics/SynWBM", name="all")
# Load only Blender-rendered images
ds_blender = load_dataset("ABC-iRobotics/SynWBM", name="blender")
# Load only SDXL-generated images
ds_sdxl = load_dataset("ABC-iRobotics/SynWBM", name="sdxl")
# Example visualization
example = ds["train"][0]
image, mask, depth = example["image"], example["mask"], example["depth"]
```
**Streaming mode (recommended for large datasets):**
```python
from datasets import load_dataset

# Load in streaming mode: data is streamed on the fly without a full download
ds_stream = load_dataset("ABC-iRobotics/SynWBM", name="all", streaming=True)

# Iterate through samples
for example in ds_stream["train"]:
    image, mask, depth = example["image"], example["mask"], example["depth"]
    # process the example...
```
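For instance segmentation training, the uint16 mask can be split into one binary mask per instance. A minimal sketch, again using a tiny synthetic mask in place of a real `example["mask"]`:

```python
import numpy as np

# Tiny synthetic uint16 instance mask (in practice: np.array(example["mask"]))
mask = np.zeros((8, 8), dtype=np.uint16)
mask[1:3, 1:3] = 1
mask[4:6, 4:6] = 7

# Instance IDs present in the mask; 0 is background and is excluded
instance_ids = np.unique(mask)
instance_ids = instance_ids[instance_ids != 0]

# One boolean (H, W) mask per instance, e.g. as Mask R-CNN-style targets
binary_masks = np.stack([mask == i for i in instance_ids])
```

Note that instance IDs are not necessarily contiguous, so iterating over `np.unique` values is safer than assuming IDs run from 1 to N.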
### Intended Use
The dataset is intended for research in:
- instance segmentation of mushrooms
- sim-to-real transfer learning
- synthetic-to-real domain adaptation
- benchmarking data generation pipelines
## Dataset Creation
### Curation Rationale
The dataset was created to train models on synthetic data for mushroom instance segmentation in real-world images.
### Source Data
The data is generated using two methods:
- Blender-rendered images are produced from a fully procedural Blender scene, with annotations generated using the [Blender Annotation Tool (BAT)](https://github.com/ABC-iRobotics/blender_annotation_tool)
- Stable Diffusion XL images are generated using ComfyUI with a [custom workflow](https://huggingface.co/datasets/ABC-iRobotics/SynWBM/blob/main/mushroom_worflow.json) provided in this repository.
## Citation Information
Coming Soon
## License
This dataset is released under the [GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).