---
license: mit
task_categories:
- visual-question-answering
- image-classification
language:
- en
tags:
- robotics
- condition-checking
- multi-modal
- vision-language
size_categories:
- n<1K
---
# Condition Checking Dataset
This dataset contains condition checking conversations for robotics applications, with embedded base64 images from multiple camera viewpoints.
## Dataset Description
- Task: Visual condition checking (True/False questions about robot states)
- Modality: Multi-modal (text + images)
- Domain: Robotics manipulation tasks
- Format: Conversational format suitable for VLM training
## Dataset Structure

### Data Fields

- `id`: Unique identifier for each sample
- `images`: Dictionary containing base64-encoded images from multiple camera viewpoints
- `conversations`: List of conversation turns (human question + assistant answer)
### Camera Viewpoints
The dataset includes images from 5 camera viewpoints:
- `observation_images_chest`
- `observation_images_left_eye`
- `observation_images_left_wrist`
- `observation_images_right_eye`
- `observation_images_right_wrist`
Each sample contains approximately 30 images total (6 per camera).
### Sample Structure

```json
{
  "id": "frame_index_position_part",
  "images": {
    "camera_key": ["base64_image_1", "base64_image_2", ...],
    ...
  },
  "conversations": [
    {
      "from": "human",
      "value": "Here are the observations... condition: (object is grasped) ..."
    },
    {
      "from": "gpt",
      "value": "True"
    }
  ]
}
```
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("jeffshen4011/condition-checking-dataset")

# Access a sample
sample = dataset["train"][0]
print(f"Question: {sample['conversations'][0]['value'][:100]}...")
print(f"Answer: {sample['conversations'][1]['value']}")
print(f"Number of camera views: {len(sample['images'])}")
```
## Dataset Statistics
- Training samples: 9
- Camera viewpoints: 5
- Images per sample: ~30
- Image format: Base64-encoded PNG
- Task type: Binary classification (True/False)
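
Since every assistant answer is the literal string "True" or "False", exact string matching is enough to score a model on this task. The sketch below is a minimal example under that assumption; `predict` is a hypothetical placeholder for your model's inference call, not part of this dataset.

```python
from datasets import load_dataset

def predict(sample):
    # Hypothetical stand-in for a VLM inference call; always answers "True".
    return "True"

dataset = load_dataset("jeffshen4011/condition-checking-dataset")
train = dataset["train"]

# Ground-truth answers are the assistant turn of each conversation
labels = [s["conversations"][1]["value"] for s in train]
preds = [predict(s) for s in train]

accuracy = sum(p == t for p, t in zip(preds, labels)) / len(labels)
print(f"Accuracy: {accuracy:.2%}")
```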
## Applications
This dataset is designed for:
- Training vision-language models for robotics condition checking
- Multi-modal reasoning tasks
- Robot state verification
- Visual question answering in manipulation contexts
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{condition_checking_dataset,
  title={Condition Checking Dataset for Robotics},
  author={Research Team},
  year={2025},
  url={https://huggingface.co/datasets/jeffshen4011/condition-checking-dataset}
}
```