---
dataset_info:
  - config_name: all
    features:
      - name: image
        dtype: image
      - name: depth
        dtype: image
      - name: mask
        dtype: image
    splits:
      - name: train
        num_bytes: 22712394633
        num_examples: 12000
    download_size: 22677841215
    dataset_size: 22712394633
  - config_name: blender
    features:
      - name: image
        dtype: image
      - name: depth
        dtype: image
      - name: mask
        dtype: image
    splits:
      - name: train
        num_bytes: 11049913446
        num_examples: 6000
    download_size: 11050639109
    dataset_size: 11049913446
  - config_name: default
    features:
      - name: image
        dtype: image
      - name: depth
        dtype: image
      - name: mask
        dtype: image
    splits:
      - name: train
        num_examples: 12000
    download_size: 21573770746
    dataset_size: 21702540415
  - config_name: sdxl
    features:
      - name: image
        dtype: image
      - name: depth
        dtype: image
      - name: mask
        dtype: image
    splits:
      - name: train
        num_bytes: 11662481187
        num_examples: 6000
    download_size: 11626921894
    dataset_size: 11662481187
configs:
  - config_name: all
    data_files:
      - split: train
        path: all/train-*
  - config_name: blender
    data_files:
      - split: train
        path: blender/train-*
  - config_name: sdxl
    data_files:
      - split: train
        path: sdxl/train-*
language:
  - en
license: gpl-3.0
tags:
  - vision
  - image segmentation
  - instance segmentation
  - object detection
  - synthetic
  - sim-to-real
  - depth estimation
  - image to image
annotations_creators:
  - machine-generated
pretty_name: SynWBM Dataset
size_categories:
  - 10K<n<100K
task_categories:
  - object-detection
  - image-segmentation
  - depth-estimation
  - image-to-image
task_ids:
  - instance-segmentation
  - semantic-segmentation
---

# The SynWBM (Synthetic White Button Mushrooms) Dataset

*(Sample example image)*

Synthetic dataset of white button mushrooms (Agaricus bisporus) with instance segmentation masks and depth maps.

## Dataset Summary

The SynWBM Dataset is a collection of synthetic images of white button mushrooms. It combines rendered images (created with Blender) and generated images (created with Stable Diffusion XL) for training mushroom segmentation models. Each image is annotated with an instance segmentation mask and a depth map. Every sample is explicitly marked with its creation method (rendered from the Blender scene or generated using Stable Diffusion XL) to facilitate sim-to-real performance tests on the different synthetic subsets. The dataset does not provide real-world samples for testing; it contains only the synthetic training data.

## Supported Tasks and Leaderboards

The dataset supports tasks such as semantic segmentation, instance segmentation, object detection, depth estimation, image-to-image, and pre-training models for sim-to-real transfer.

## Dataset Structure

### Data Samples

The samples of the dataset are 1024x1024x3 PNG images. The annotations are 1024x1024x1 PNG images: instance segmentation masks stored as uint16, and depth maps.

### Data Fields

The data fields are:

  1. 'image': 1024x1024x3 PNG image
  2. 'depth': 1024x1024x1 PNG image
  3. 'mask': 1024x1024x1 uint16 PNG image

⚠️ Note: Instance masks are stored as uint16 PNG images, where each pixel value corresponds to an instance ID. Many default image viewers will render them as black; for visualization, we recommend converting them to RGB with a color palette.
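
To preview a mask, you can remap the instance IDs to colors yourself. The snippet below is a minimal sketch, not part of the dataset tooling: it assumes that pixel value 0 is background (not stated on this card), that the mask loads as a PIL image, and `colorize_instance_mask` is a hypothetical helper name.

```python
import numpy as np
from PIL import Image

def colorize_instance_mask(mask_img, seed=0):
    """Map each instance ID in a uint16 mask to a random RGB color.

    Assumes pixel value 0 is background (an assumption, not stated
    by the dataset card).
    """
    ids = np.array(mask_img, dtype=np.uint16)  # (H, W) array of instance IDs
    rng = np.random.default_rng(seed)
    palette = rng.integers(0, 256, size=(int(ids.max()) + 1, 3), dtype=np.uint8)
    palette[0] = 0  # keep background black
    return Image.fromarray(palette[ids])  # (H, W, 3) RGB preview

# Example (with a sample loaded as shown in the Usage section below):
# colorize_instance_mask(example["mask"]).save("mask_preview.png")
```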

### Data Splits

The dataset contains only a training split for all image collections, but provides three configs (`all`, `blender`, and `sdxl`) for loading the images generated by the different methods.

## Limitations

  - The dataset only contains synthetic images; no real-world samples are provided.
  - Stable Diffusion XL generations may contain artifacts or unrealistic structures.
  - Depth annotations are derived from rendering/generation pipelines and may not reflect real-world sensor noise.

## Usage

You can load the dataset using 🤗 Datasets.

⚠️ Note: By default, `load_dataset` will download the entire dataset locally. If you only want to iterate over the data without downloading everything first, use the `streaming=True` argument (see the example below).

```python
from datasets import load_dataset

# Load all synthetic images
ds = load_dataset("ABC-iRobotics/SynWBM", name="all")

# Load only Blender-rendered images
ds_blender = load_dataset("ABC-iRobotics/SynWBM", name="blender")

# Load only SDXL-generated images
ds_sdxl = load_dataset("ABC-iRobotics/SynWBM", name="sdxl")

# Example visualization
example = ds["train"][0]
image, mask, depth = example["image"], example["mask"], example["depth"]
```

Streaming mode (recommended for large datasets):

```python
from datasets import load_dataset

# Load in streaming mode: data is streamed on the fly without full download
ds_stream = load_dataset("ABC-iRobotics/SynWBM", name="all", streaming=True)

# Iterate through samples
for example in ds_stream["train"]:
    image, mask, depth = example["image"], example["mask"], example["depth"]
    # process the example...
```
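
For object detection or instance segmentation training, per-instance binary masks and bounding boxes can be derived from the uint16 instance mask. The sketch below assumes pixel value 0 marks the background (an assumption, not stated on this card); `instances_from_mask` is a hypothetical helper, not part of the dataset.

```python
import numpy as np

def instances_from_mask(mask_img):
    """Split a uint16 instance mask into per-instance binary masks and boxes.

    Boxes are (x_min, y_min, x_max, y_max) in pixel coordinates.
    Assumes 0 is background (an assumption, not stated by the dataset card).
    """
    ids = np.array(mask_img, dtype=np.uint16)
    instances = []
    for inst_id in np.unique(ids):
        if inst_id == 0:
            continue  # skip background
        binary = ids == inst_id  # (H, W) boolean mask for this instance
        ys, xs = np.nonzero(binary)
        box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
        instances.append({"id": int(inst_id), "mask": binary, "box": box})
    return instances

# e.g. targets = instances_from_mask(example["mask"])
```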

## Intended Use

The dataset is intended for research in:

  - instance segmentation of mushrooms
  - sim-to-real transfer learning
  - synthetic-to-real domain adaptation
  - benchmarking data generation pipelines

## Dataset Creation

### Curation Rationale

The dataset was created to train instance segmentation models on synthetic data, with the goal of segmenting mushroom instances in real-world images.

### Source Data

The data is generated using two methods:

  - Blender-rendered images are produced from a fully procedural Blender scene, with annotations generated using the Blender Annotation Tool (BAT).
  - Stable Diffusion XL images are generated using ComfyUI with a custom workflow provided in this repository.

## Citation Information

Coming Soon

## License

This dataset is released under the GNU General Public License v3.0.