---
task_categories:
- robotics
language:
- en
tags:
- RDT
- rdt
- RDT 2
- manipulation
- bimanual
- ur5e
- webdataset
- vision-language-action
license: apache-2.0
---

## Dataset Summary

This dataset provides shards in the **WebDataset** format for fine-tuning [RDT-2](https://rdt-robotics.github.io/rdt2/) or other policy models on **bimanual manipulation**.
Each sample packs:

* a **binocular RGB image** (left + right wrist cameras concatenated horizontally)
* a **relative action chunk** (continuous control, 0.8s, 30Hz)
* a **discrete action token sequence** (e.g., from a [Residual VQ action tokenizer](https://huggingface.co/robotics-diffusion-transformer/RVQActionTokenizer))
* a **metadata JSON** with an instruction key `sub_task_instruction_key` used to index the corresponding instruction in `instructions.json`

Data were collected on a **bimanual UR5e** setup.

---

## Supported Tasks

* **Instruction-conditioned bimanual manipulation**, including:
  - Pouring water: different water bottles and cups
  - Cleaning the desktop: different dustpans and paper balls
  - Folding towels: towels of different sizes and colors
  - Stacking cups: cups of different sizes and colors

---

## Data Structure

### Shard layout

Shards are named `shard-*.tar`. Inside each shard:

```
shard-000000.tar
β”œβ”€β”€ 0.image.jpg          # binocular RGB, H=384, W=768, C=3, uint8
β”œβ”€β”€ 0.action.npy         # relative actions, shape (24, 20), float32
β”œβ”€β”€ 0.action_token.npy   # action tokens, shape (27,), int16 ∈ [0, 1024)
β”œβ”€β”€ 0.meta.json          # metadata; includes "sub_task_instruction_key"
β”œβ”€β”€ 1.image.jpg
β”œβ”€β”€ 1.action.npy
β”œβ”€β”€ 1.action_token.npy
β”œβ”€β”€ 1.meta.json
└── ...
shard-000001.tar
shard-000002.tar
...
```

> **Image:** binocular wrist cameras concatenated horizontally β†’ `np.ndarray` of shape `(384, 768, 3)` with `dtype=uint8` (stored as JPEG).
> 
> **Action (continuous):** `np.ndarray` of shape `(24, 20)`, `dtype=float32` (24-step chunk, 20-D control).
> 
> **Action tokens (discrete):** `np.ndarray` of shape `(27,)`, `dtype=int16`, values in `[0, 1024)`.
> 
> **Metadata:** `meta.json` contains at least `sub_task_instruction_key`, which points to an entry in the top-level `instructions.json`.
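
For a quick sanity check outside of any data loader, a shard can be opened with Python's standard `tarfile` module. Below is a minimal sketch, assuming the member names follow the layout above:

```python
import io
import json
import tarfile

import numpy as np
from PIL import Image

# Read the first sample of a shard directly, without a data loader.
with tarfile.open("shards/shard-000000.tar") as tar:
    image = Image.open(io.BytesIO(tar.extractfile("0.image.jpg").read()))
    action = np.load(io.BytesIO(tar.extractfile("0.action.npy").read()))
    tokens = np.load(io.BytesIO(tar.extractfile("0.action_token.npy").read()))
    meta = json.load(tar.extractfile("0.meta.json"))

print(image.size)    # (768, 384): W=768, H=384
print(action.shape)  # (24, 20), float32
print(tokens.shape)  # (27,), int16 in [0, 1024)
print(meta["sub_task_instruction_key"])
```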


---

## Example Data Instance

```json
{
  "image": "0.image.jpg",
  "action": "0.action.npy",
  "action_token": "0.action_token.npy",
  "meta": {
    "sub_task_instruction_key": "fold_cloth_step_3"
  }
}
```
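
The instruction key is resolved against the top-level `instructions.json`. A hypothetical entry (the real key names and value format may differ) might look like:

```json
{
  "fold_cloth_step_3": "Fold the towel in half from left to right."
}
```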

---

## How to Use

### 1) Official guidelines for fine-tuning the RDT-2 series

Follow the example [scripts](https://github.com/thu-ml/RDT2/blob/cf71b69927f726426c928293e37c63c4881b0165/data/utils.py#L48) and [guidelines](https://github.com/thu-ml/RDT2/blob/cf71b69927f726426c928293e37c63c4881b0165/data/utils.py#L48) in the RDT2 repository.
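
If the shards are hosted on the Hugging Face Hub, they can be fetched locally first; the repo id below is a placeholder:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id; substitute this dataset's actual Hub id.
local_dir = snapshot_download(
    repo_id="<org>/<this-dataset>",
    repo_type="dataset",
)
```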

### 2) Minimal loading example

```python
import os
import glob
import json
import random

import webdataset as wds


def no_split(src):
    yield from src

def get_train_dataset(shards_dir):
    shards = sorted(glob.glob(os.path.join(shards_dir, "shard-*.tar")))
    assert shards, f"No shards under {shards_dir}"
    random.shuffle(shards)

    # Split shards across workers only when there are enough to go around.
    num_workers = wds.utils.pytorch_worker_info()[-1]
    workersplitter = wds.split_by_worker if len(shards) > num_workers else no_split

    dataset = (
        wds.WebDataset(
            shards,
            shardshuffle=False,
            nodesplitter=no_split,
            workersplitter=workersplitter,
            resampled=True,
        )
        .repeat()
        .shuffle(8192, initial=8192)
        .decode("pil")
        .map(
            lambda sample: {
                "image": sample["image.jpg"],
                "action_token": sample["action_token.npy"],
                "meta": sample["meta.json"],
            }
        )
        .with_epoch(nsamples=(2048 * 30 * 60 * 60))    # one epoch = 2048 hours of 30 Hz samples
    )
    
    return dataset

with open(os.path.join("<Dataset Directory>", "instructions.json")) as fp:
    instructions = json.load(fp)
dataset = get_train_dataset(os.path.join("<Dataset Directory>", "shards"))
```
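
Iterating the dataset and resolving each instruction might then look like this (assuming `instructions.json` maps each key directly to an instruction string):

```python
# Pull a few samples and look up their language instructions.
for i, sample in enumerate(dataset):
    instruction = instructions[sample["meta"]["sub_task_instruction_key"]]
    print(sample["image"].size, sample["action_token"].shape, instruction)
    if i == 2:
        break
```
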
---

## Ethical Considerations

* Contains robot teleoperation/automation data. No PII is present by design.
* Ensure safe deployment/testing on real robots; follow lab safety and manufacturer guidelines.

---

## Citation

If you use this dataset, please cite it and the RDT-2 project. For example:

```bibtex
@software{rdt2,
    title={RDT2: Enabling Zero-Shot Cross-Embodiment Generalization by Scaling Up UMI Data},
    author={RDT Team},
    url={https://github.com/thu-ml/RDT2},
    month={September},
    year={2025}
}
```

---

## License

* **Dataset license:** Apache-2.0 (unless otherwise noted by the maintainers of your fork/release).
* Ensure compliance when redistributing derived data or models.

---

## Maintainers & Contributions

We welcome fixes and improvements to the conversion scripts and docs (see https://github.com/thu-ml/RDT2/tree/main#troubleshooting).
Please open issues/PRs with:

* OS + Python versions
* Minimal repro code
* Error tracebacks
* Any other helpful context