OfficerChul committed · verified
Commit a4af5e2 · 1 Parent(s): b33e005

Upload folder using huggingface_hub

Files changed (3)
  1. README.md +102 -3
  2. download_android_control.ipynb +119 -0
  3. episode_goals.jsonl +0 -0
README.md CHANGED
@@ -1,3 +1,102 @@
- ---
- license: apache-2.0
- ---
+ # Android Control Dataset
+
+ ## Overview
+
+ This directory contains two dataset files (`and_ctrl_train.json` and `and_ctrl_test.json`) derived from the [Android Control](https://github.com/google-research/google-research/tree/master/android_control) project by Google Research. These datasets have been formatted specifically for GUI grounding training in LLaMA-Factory.
+
+ ## Dataset Description
+
+ The Android Control dataset consists of episodes, each containing multiple steps. Each step includes:
+ - **Step instructions**: Natural language instructions for UI interactions
+ - **Actions**: The type of action to perform (click, scroll, input text, etc.)
+ - **Coordinates**: Precise x, y coordinates for the action
+
+ The data has been extracted and formatted to train models for mobile UI understanding and interaction tasks.
+
+ ## Files
+
+ - `and_ctrl_train.json`: Training dataset
+ - `and_ctrl_test.json`: Test/evaluation dataset
+ - `download_android_control.ipynb`: Jupyter notebook for downloading images and processing the original data
+
+ ## Data Format
+
+ Each entry in the JSON files follows the LLaMA-Factory conversation format:
+
+ ```json
+ {
+   "messages": [
+     {
+       "role": "system",
+       "content": "You are a helpful assistant that can identify what action to perform on mobile UI Screenshot given the user instruction."
+     },
+     {
+       "role": "user",
+       "content": "<image>Click on the Recording 2"
+     },
+     {
+       "role": "assistant",
+       "content": "{\"action_type\": \"click\", \"x\": 561, \"y\": 535}"
+     }
+   ],
+   "images": ["and_ctrl/out_episode_18557_step_001.png"]
+ }
+ ```
+
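For a quick sanity check of this format, the following is a minimal sketch that loads one training sample and parses the assistant turn back into an action dict (the assistant content is stored as a JSON string in the example above). The `data/datasets/` path is an assumption based on the layout described in the setup section and may need adjusting.

```python
import json

# Load the training split (path assumed from the setup section below).
with open("data/datasets/and_ctrl_train.json", encoding="utf-8") as f:
    samples = json.load(f)

sample = samples[0]
user_turn = sample["messages"][1]["content"]            # "<image>..." instruction
action = json.loads(sample["messages"][2]["content"])   # assistant turn is a JSON string

print(user_turn)
# Not every action type carries coordinates (e.g. wait, navigate_back), so use .get()
print(action["action_type"], action.get("x"), action.get("y"))
print(sample["images"])
```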
+ ## Setup Instructions
+
+ To use these datasets in LLaMA-Factory:
+
+ 1. **Create the image directory**:
+    ```bash
+    mkdir -p data/and_ctrl
+    ```
+
+ 2. **Download images**:
+    Run the provided `download_android_control.ipynb` notebook to download and process the original images. The notebook will:
+    - Download TFRecord files from Google Cloud Storage (`gs://gresearch/android_control/`)
+    - Extract images and save them directly to the `and_ctrl/` directory
+    - Name the images automatically using the convention `out_episode_{episode_id}_step_{step_number}.png`
+    - Generate an `and_ctrl.json` file with the processed data
+
+ 3. **Dataset files**:
+    - Images: stored in the `data/and_ctrl/` folder
+    - Training dataset: `and_ctrl_train.json` in `data/datasets/`
+    - Test dataset: `and_ctrl_test.json` in `data/datasets/`
+
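The README does not show how the files are registered with LLaMA-Factory, so the entry below is only a sketch of what a `data/dataset_info.json` registration for the multimodal ShareGPT-style format might look like. The dataset key and the `file_name` path are assumptions; check the exact fields against the LLaMA-Factory data documentation.

```json
{
  "and_ctrl_train": {
    "file_name": "datasets/and_ctrl_train.json",
    "formatting": "sharegpt",
    "columns": {
      "messages": "messages",
      "images": "images"
    },
    "tags": {
      "role_tag": "role",
      "content_tag": "content",
      "user_tag": "user",
      "assistant_tag": "assistant",
      "system_tag": "system"
    }
  }
}
```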
+ ## Dataset Statistics
+
+ **Total samples**: Train: 82,944 | Test: 904
+
+ | Action Type | Train | Test |
+ |-------------|-------|------|
+ | click | 51,793 (62.44%) | 125 (13.83%) |
+ | scroll | 11,005 (13.27%) | 125 (13.83%) |
+ | input_text | 5,966 (7.19%) | 125 (13.83%) |
+ | wait | 5,657 (6.82%) | 125 (13.83%) |
+ | open_app | 5,572 (6.72%) | 125 (13.83%) |
+ | navigate_back | 2,909 (3.51%) | 125 (13.83%) |
+ | long_press | 42 (0.05%) | 125 (13.83%) |
+ | navigate_home | 0 (0.00%) | 29 (3.21%) |
+
+ **Note**: The training dataset shows a natural distribution with click actions being dominant (62.44%), while the test dataset is intentionally balanced with most action types having equal representation (~13.83% each). The `navigate_home` action appears only in the test set.
+
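The counts above can be re-checked (for example after regenerating the data) with a short script along these lines; the file paths are again assumptions based on the layout described earlier.

```python
import json
from collections import Counter

def action_distribution(path):
    """Count action types by parsing the assistant turn of every sample."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    counts = Counter(
        json.loads(sample["messages"][-1]["content"])["action_type"]
        for sample in data
    )
    return counts, sum(counts.values())

for split in ("data/datasets/and_ctrl_train.json", "data/datasets/and_ctrl_test.json"):
    counts, total = action_distribution(split)
    print(f"{split}: {total} samples")
    for action, n in counts.most_common():
        print(f"  {action:<14} {n:>6}  ({n / total:.2%})")
```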
+ ## Training Usage
+
+ These datasets are specifically formatted for training multimodal language models to:
+ - Understand mobile UI screenshots
+ - Ground natural language instructions to specific UI elements
+ - Generate precise action coordinates for UI automation
+ - Learn mobile app interaction patterns
+
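As a concrete starting point, LoRA SFT runs in LLaMA-Factory are typically driven by a YAML config. The sketch below is modeled on the project's example configs; the model, template, and hyperparameters are placeholders, not settings used by the dataset authors, and key names should be verified against the current LLaMA-Factory examples.

```yaml
### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct   # any vision-language model supported by LLaMA-Factory

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset (must be registered in data/dataset_info.json, e.g. as and_ctrl_train)
dataset: and_ctrl_train
template: qwen2_vl
cutoff_len: 2048

### training
output_dir: saves/and_ctrl_lora
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 1.0
bf16: true
```

Such a config is usually launched with `llamafactory-cli train <config>.yaml`.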
+ ## Source and Attribution
+
+ Original dataset: [Google Research Android Control](https://github.com/google-research/google-research/tree/master/android_control)
+
+ The Android Control dataset was created by Google Research for advancing mobile UI understanding and automation research.
+
+ ## Notes
+
+ - The images are referenced with relative paths starting with `and_ctrl/`
+ - Each action includes the action type and necessary parameters (coordinates, text, direction, etc.)
+ - The test set can be used for evaluating model performance on unseen mobile UI interactions
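Because the image paths are relative, a quick check like the sketch below (paths assumed as in the setup section) can confirm that every referenced screenshot exists under the image root before training starts.

```python
import json
from pathlib import Path

DATA_DIR = Path("data")  # directory that contains the and_ctrl/ image folder

with open(DATA_DIR / "datasets" / "and_ctrl_test.json", encoding="utf-8") as f:
    samples = json.load(f)

referenced = [img for sample in samples for img in sample["images"]]
missing = [img for img in referenced if not (DATA_DIR / img).exists()]
print(f"{len(missing)} of {len(referenced)} referenced images are missing")
```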
download_android_control.ipynb ADDED
@@ -0,0 +1,119 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/google-research/google-research.git\n",
+ "!pip install tensorflow"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Download images"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "CgoFvmqCtps3",
+ "outputId": "64381a1a-5e3d-4ce4-92dd-6a11e43f61ed"
+ },
+ "outputs": [],
+ "source": "# -*- coding: utf-8 -*-\nimport io\nimport json\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Tuple\n\nimport tensorflow as tf\nfrom PIL import Image\n\n# =========================\n# TF.Feature -> Python value conversion\n# =========================\ndef feature_to_list_bytes(f: tf.train.Feature) -> List[bytes]:\n    return list(f.bytes_list.value)\n\ndef feature_to_list_int(f: tf.train.Feature) -> List[int]:\n    return list(f.int64_list.value)\n\ndef feature_to_list_float(f: tf.train.Feature) -> List[float]:\n    return list(f.float_list.value)\n\ndef get_feature(example: tf.train.Example, key: str) -> Optional[tf.train.Feature]:\n    fmap = example.features.feature\n    return fmap[key] if key in fmap else None\n\n# =========================\n# Extract coordinates from an action dict (normalized and pixel values both supported)\n# =========================\ndef extract_points_from_action(\n    action: Dict[str, Any],\n    img_w: int,\n    img_h: int\n) -> List[Tuple[int, int]]:\n    \"\"\"\n    Returns: [(x_px, y_px), ...]\n    - a single point (tap) gives length 1\n    - a drag/swipe gives the start and end points\n    - otherwise []\n    \"\"\"\n    pts: List[Tuple[int, int]] = []\n\n    def to_px(x: float, y: float, normalized: Optional[bool]=None) -> Tuple[int,int]:\n        if normalized is None:\n            normalized = (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0)\n        if normalized:\n            return (int(round(x * img_w)), int(round(y * img_h)))\n        else:\n            return (int(round(x)), int(round(y)))\n\n    # 1) top-level x, y\n    if \"x\" in action and \"y\" in action:\n        pts.append(to_px(float(action[\"x\"]), float(action[\"y\"]), None))\n\n    # 2) point / click / tap / press / long_press / long_tap\n    for k in [\"point\", \"click\", \"tap\", \"press\", \"long_press\", \"long_tap\"]:\n        if k in action and isinstance(action[k], dict):\n            px = action[k]\n            if \"x\" in px and \"y\" in px:\n                pts.append(to_px(float(px[\"x\"]), float(px[\"y\"]), None))\n        if k in action and isinstance(action[k], list):\n            for px in action[k]:\n                if isinstance(px, dict) and \"x\" in px and \"y\" in px:\n                    pts.append(to_px(float(px[\"x\"]), float(px[\"y\"]), None))\n\n    # 3) from/to, start/end\n    for a, b in [(\"from\", \"to\"), (\"start\", \"end\")]:\n        if a in action and b in action and isinstance(action[a], dict) and isinstance(action[b], dict):\n            ax, ay = action[a].get(\"x\"), action[a].get(\"y\")\n            bx, by = action[b].get(\"x\"), action[b].get(\"y\")\n            if ax is not None and ay is not None and bx is not None and by is not None:\n                pts.append(to_px(float(ax), float(ay), None))\n                pts.append(to_px(float(bx), float(by), None))\n\n    # 4) start_x/start_y/end_x/end_y\n    cand = {\"start_x\": None, \"start_y\": None, \"end_x\": None, \"end_y\": None}\n    found = False\n    for ck in cand.keys():\n        if ck in action:\n            cand[ck] = float(action[ck])\n            found = True\n    if found and cand[\"start_x\"] is not None and cand[\"start_y\"] is not None:\n        pts.append(to_px(cand[\"start_x\"], cand[\"start_y\"], None))\n        if cand[\"end_x\"] is not None and cand[\"end_y\"] is not None:\n            pts.append(to_px(cand[\"end_x\"], cand[\"end_y\"], None))\n\n    # drop duplicate points\n    uniq: List[Tuple[int,int]] = []\n    seen = set()\n    for p in pts:\n        if p not in seen:\n            uniq.append(p)\n            seen.add(p)\n    return uniq\n\n# =========================\n# Episode parsing\n# =========================\ndef load_episode_from_example(ex: tf.train.Example) -> Dict[str, Any]:\n    f = ex.features.feature\n\n    screenshots_bytes = feature_to_list_bytes(f[\"screenshots\"])\n    a11y_bytes_list = feature_to_list_bytes(f[\"accessibility_trees\"])\n    widths = feature_to_list_int(f[\"screenshot_widths\"])\n    heights = feature_to_list_int(f[\"screenshot_heights\"])\n\n    actions_json_list = [b.decode(\"utf-8\") for b in feature_to_list_bytes(f[\"actions\"])]\n    step_insts = [b.decode(\"utf-8\") for b in feature_to_list_bytes(f[\"step_instructions\"])]\n    actions = [json.loads(s) for s in actions_json_list]\n\n    goal = f[\"goal\"].bytes_list.value[0].decode(\"utf-8\")\n    episode_id = int(f[\"episode_id\"].int64_list.value[0]) if f[\"episode_id\"].int64_list.value else int(\n        f[\"episode_id\"].bytes_list.value[0].decode(\"utf-8\")\n    )\n\n    assert len(screenshots_bytes) == len(widths) == len(heights), \"screenshot/width/height lengths do not match\"\n    assert len(actions) == len(step_insts) == (len(screenshots_bytes) - 1), \\\n        \"actions/step_instructions must have length len(screenshots) - 1\"\n\n    return {\n        \"episode_id\": episode_id,\n        \"goal\": goal,\n        \"screenshots\": screenshots_bytes,\n        \"a11y\": a11y_bytes_list,\n        \"widths\": widths,\n        \"heights\": heights,\n        \"actions\": actions,\n        \"step_instructions\": step_insts,\n    }\n\n# =========================\n# Action mapping & utilities\n# =========================\ndef _center_xy(w: int, h: int) -> Tuple[int,int]:\n    return (int(round(w/2)), int(round(h/2)))\n\ndef _norm_dir(d: Optional[str]) -> str:\n    if not d: return \"down\"\n    d = str(d).lower()\n    if d in [\"up\",\"down\",\"left\",\"right\"]:\n        return d\n    if d in [\"u\",\"top\"]: return \"up\"\n    if d in [\"d\",\"bottom\"]: return \"down\"\n    if d in [\"l\"]: return \"left\"\n    if d in [\"r\"]: return \"right\"\n    return \"down\"\n\ndef map_action(\n    action: Dict[str, Any],\n    w: int,\n    h: int,\n    pts: List[Tuple[int,int]],\n) -> Optional[Dict[str, Any]]:\n    \"\"\"\n    Allowed mappings:\n    click -> {\"type\": \"touch\", \"x\": <x>, \"y\": <y>}\n    long_press -> {\"type\": \"long_touch\", \"x\": <x>, \"y\": <y>}\n    input_text -> {\"type\": \"set_text\", \"text\": \"...\", \"x\": <x>, \"y\": <y>}\n    scroll -> {\"type\": \"scroll\", \"direction\": \"up|down|left|right\", \"x\": <center_x>, \"y\": <center_y>}\n    navigate_home -> {\"type\": \"press\", \"key\": \"home\"}\n    navigate_back -> {\"type\": \"press\", \"key\": \"back\"}\n    \"\"\"\n    atype = (action.get(\"action_type\") or action.get(\"type\") or action.get(\"action\") or \"\").lower()\n    x, y = (pts[0] if pts else _center_xy(w, h))\n\n    if atype in [\"click\", \"tap\", \"press\", \"click_view\"]:\n        return {\"type\": \"touch\", \"x\": x, \"y\": y}\n\n    if atype in [\"long_press\", \"long_tap\", \"long_click\"]:\n        return {\"type\": \"long_touch\", \"x\": x, \"y\": y}\n\n    if atype in [\"input_text\", \"set_text\", \"type_text\", \"enter_text\", \"text\"]:\n        text = action.get(\"text\") or action.get(\"input_text\") or action.get(\"value\") or \"\"\n        return {\"type\": \"set_text\", \"text\": str(text), \"x\": x, \"y\": y}\n\n    if atype in [\"scroll\", \"swipe\"]:\n        if len(pts) >= 2:\n            cx = (pts[0][0] + pts[1][0]) // 2\n            cy = (pts[0][1] + pts[1][1]) // 2\n        else:\n            cx, cy = _center_xy(w, h)\n        return {\"type\": \"scroll\", \"direction\": _norm_dir(action.get(\"direction\")), \"x\": cx, \"y\": cy}\n\n    if atype in [\"navigate_home\", \"home\", \"press_home\"]:\n        return {\"type\": \"press\", \"key\": \"home\"}\n\n    if atype in [\"navigate_back\", \"back\", \"press_back\"]:\n        return {\"type\": \"press\", \"key\": \"back\"}\n\n    # everything else (open_app, wait, etc.) -> not saved\n    return None\n\ndef save_clean_image(img_bytes: bytes, episode_id: int, step_idx: int, base_dir: str = \"and_ctrl\") -> str:\n    \"\"\"\n    out_episode_{EP}_step_{STEP:03d}.png (no overlay)\n    \"\"\"\n    Path(base_dir).mkdir(parents=True, exist_ok=True)\n    fname = f\"out_episode_{episode_id}_step_{step_idx:03d}.png\"\n    fpath = Path(base_dir) / fname\n    Image.open(io.BytesIO(img_bytes)).convert(\"RGB\").save(fpath)\n    # Return just the relative path from base_dir\n    return f\"{base_dir}/{fname}\"\n\n# =========================\n# Export messages to JSON\n# =========================\ndef export_messages(ds, limit_episodes: int = 5, out_json: str = \"and_ctrl.json\", image_dir: str = \"and_ctrl\"):\n    \"\"\"\n    From the given TFRecordDataset, take the steps of the first N episodes,\n    keep only the allowed actions, and save them to and_ctrl.json in the requested format.\n    \"\"\"\n    all_items: List[Dict[str, Any]] = []\n    ep_cnt = 0\n\n    for raw in ds:\n        ex = tf.train.Example()\n        ex.ParseFromString(raw.numpy())\n        ep = load_episode_from_example(ex)\n\n        ep_id = ep[\"episode_id\"]\n        for i, (action, inst) in enumerate(zip(ep[\"actions\"], ep[\"step_instructions\"])):\n            w, h = ep[\"widths\"][i], ep[\"heights\"][i]\n            img_bytes = ep[\"screenshots\"][i]\n            pts = extract_points_from_action(action, w, h)\n            mapped = map_action(action, w, h, pts)\n            if not mapped:\n                continue  # skip\n\n            img_path = save_clean_image(img_bytes, ep_id, i, base_dir=image_dir)\n\n            all_items.append({\n                \"messages\": [\n                    {\"role\": \"user\", \"content\": f\"<image>\\n{inst}\"},\n                    # stored as a Python dict string (single quotes), not JSON\n                    {\"role\": \"assistant\", \"content\": str(mapped)}\n                ],\n                \"images\": [img_path]\n            })\n\n        ep_cnt += 1\n        if ep_cnt >= limit_episodes:\n            break\n\n    with open(out_json, \"w\", encoding=\"utf-8\") as f:\n        json.dump(all_items, f, ensure_ascii=False, indent=2)\n\n    print(f\"[DONE] episodes processed: {ep_cnt}, items saved: {len(all_items)} -> {out_json}\")\n\n# =========================\n# Execution entry point\n# =========================\ndef main():\n    # adjust the path pattern here if needed\n    filenames = tf.io.gfile.glob('gs://gresearch/android_control/android_control*')\n    ds = tf.data.TFRecordDataset(filenames, compression_type='GZIP')\n    export_messages(ds, limit_episodes=50, out_json=\"and_ctrl.json\", image_dir=\"and_ctrl\")\n\nif __name__ == \"__main__\":\n    main()"
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "9wgmlG4CxlkL",
+ "outputId": "8dfce9bd-7003-4637-98ad-6843f2aeb9f9"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "46.5 GiB gs://gresearch/android_control/android_control*\n"
+ ]
+ }
+ ],
+ "source": [
+ "!gsutil du -sh gs://gresearch/android_control/android_control*\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Download Images and episodes"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "eP8n5YdK8Mns"
+ },
+ "outputs": [
+ {
+ "ename": "",
+ "evalue": "",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[1;31mRunning cells with 'and_ctrl (Python 3.12.11)' requires the ipykernel package.\n",
+ "\u001b[1;31mInstall 'ipykernel' into the Python environment. \n",
+ "\u001b[1;31mCommand: '/home/work/kyochul/and_ctrl/bin/python -m pip install ipykernel -U --force-reinstall'"
+ ]
+ }
+ ],
+ "source": [
+ "import os"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "gpuType": "T4",
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "and_ctrl",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+ }
episode_goals.jsonl ADDED
The diff for this file is too large to render. See raw diff