Look and Tell — A Dataset for Multimodal Grounding Across Egocentric and Exocentric Views
This page hosts the KTH-ARIA Referential / "Look and Tell" dataset, introduced in our poster "Look and Tell: A Dataset for Multimodal Grounding Across Egocentric and Exocentric Views", presented at the NeurIPS 2025 SpaVLE Workshop (SPACE in Vision, Language, and Embodied AI), San Diego.
Dataset Description
This dataset investigates whether people visually and verbally synchronize when identifying an object, using Aria smart glasses to record eye gaze and speech. Participants identified food items from a recipe while wearing the glasses, which captured their eye movements and speech in real time. The recordings enable analysis of gaze–speech synchronization and provide a rich resource for studying how people visually and verbally ground references in real environments.
Key Features
- Dual perspectives: Egocentric (first-person via ARIA glasses) and exocentric (third-person via GoPro camera) video recordings
- Gaze tracking: Eye-tracking data synchronized with video
- Audio & transcription: Speech recordings with automatic word-level transcription (WhisperX)
- Referential expressions: Natural language references to objects with temporal and spatial grounding
- Recipe metadata: Ingredient locations and preparation steps with spatial annotations
- 125 recordings: 25 participants × 5 recipes
- Total duration: 3.7 hours (average recording: 108 seconds)
Dataset Details
- Curated by: KTH Royal Institute of Technology
- Language(s): English
- License: CC BY-NC-ND 4.0
- Participants: 25 individuals (7 men, 18 women)
- Data Collection Setup: Participants memorized the ingredients and steps of five recipes and verbally described the steps while wearing ARIA glasses
Direct Use
This dataset is suitable for research in:
- Referential expression grounding
- Gaze and speech synchronization
- Egocentric video understanding
- Multi-modal cooking activity recognition
- Spatial reasoning with language
- Human-robot interaction and multimodal dialogue systems
- Eye-tracking studies in task-based environments
Out-of-Scope Use
- The dataset is not intended for commercial applications; the CC BY-NC-ND 4.0 license prohibits commercial use
- The dataset must not be used in contexts where privacy-sensitive information about participants could be inferred or manipulated
Dataset Structure
data/
  par_01/
    raw/
      rec_01/
        ego_video.mp4                  # Egocentric video (ARIA glasses)
        exo_video.mp4                  # Exocentric video (GoPro camera)
        audio.wav                      # Audio recording
        ego_gaze.csv                   # Gaze tracking data
      rec_02/
      ...
    annotations/
      v1/
        rec_01/
          whisperx_transcription.tsv   # ASR word-level transcription
          references.csv               # Referential expressions with gaze fixations
        rec_02/
        ...
  par_02/
  ...
  manifests/
    metadata.parquet                   # Dataset metadata
    metadata.csv                       # CSV version
    recipes.json                       # Recipe details with ingredient locations
    schema.md                          # Data format documentation
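The layout above can be traversed directly with the standard library. Below is a minimal sketch for enumerating recordings; the paths follow the tree above, and the loader script further down provides a higher-level interface.

import pathlib

data_root = pathlib.Path('data')

# Collect (participant, recording) pairs from the raw/ folders shown above
recordings = sorted(
    (par_dir.name, rec_dir.name)
    for par_dir in data_root.glob('par_*')
    for rec_dir in (par_dir / 'raw').glob('rec_*')
)

for participant_id, recording_id in recordings:
    rec_dir = data_root / participant_id / 'raw' / recording_id
    print(participant_id, recording_id, 'has gaze:', (rec_dir / 'ego_gaze.csv').exists())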
Data Fields
Raw Data
Egocentric Video (ego_video.mp4)
- First-person perspective from ARIA glasses
- 30 FPS
- Captures participant's point of view during cooking
Exocentric Video (exo_video.mp4)
- Third-person perspective from GoPro camera
- 30 FPS
- Captures overall scene and participant actions
Audio (audio.wav)
- Sample rate: 48kHz
- Format: WAV
- Contains participant's verbal instructions
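Basic audio properties can be checked with the standard-library wave module; the path below is illustrative and follows the layout in Dataset Structure.

import wave

# Read header information from one recording's audio track
with wave.open('data/par_01/raw/rec_01/audio.wav') as wav:
    sample_rate = wav.getframerate()              # expected: 48000 Hz
    duration_sec = wav.getnframes() / sample_rate

print(f"{sample_rate} Hz, {duration_sec:.1f} s")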
Gaze Data (ego_gaze.csv)
- Real-time eye movement tracking from ARIA glasses
- Timestamp-synchronized with video
- Gaze coordinates and fixation data
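The gaze stream loads directly with pandas. The exact column names are documented in manifests/schema.md, so the sketch below only inspects what it finds rather than assuming them.

import pandas as pd

# Load gaze samples for one recording (path follows the layout above)
gaze = pd.read_csv('data/par_01/raw/rec_01/ego_gaze.csv')

print(f"{len(gaze)} gaze samples")
print('columns:', gaze.columns.tolist())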
Annotations
Transcription (whisperx_transcription.tsv)
- Word-level automatic speech recognition (WhisperX)
- Timestamps for each word
- Speaker diarization
References (references.csv)
- Referential expressions (e.g., "the red paprika")
- Temporal alignment with video and speech
- Gaze fixations during utterances
- Object references with spatial grounding
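Both annotation files load directly with pandas. The path below follows the directory layout above, and the column contents should be checked against manifests/schema.md.

import pandas as pd

ann_dir = 'data/par_01/annotations/v1/rec_01'

# Word-level WhisperX output (tab-separated)
words = pd.read_csv(f'{ann_dir}/whisperx_transcription.tsv', sep='\t')

# Referential expressions with their gaze fixations
references = pd.read_csv(f'{ann_dir}/references.csv')

print(f"{len(words)} transcribed words, {len(references)} referential expressions")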
Metadata
metadata.parquet - One row per recording with:
- participant_id: Participant identifier (par_01 to par_25)
- recording_id: Recording identifier (rec_01 to rec_05)
- recording_uid: Unique recording ID (par_XX_rec_YY)
- recipe_id: Recipe identifier (recipe_01 to recipe_05)
- duration_sec: Video duration in seconds
- ego_fps, exo_fps: Frame rates
- has_*: Boolean flags for data availability
- n_references: Number of referential expressions
- notes: Data quality notes
recipes.json - Recipe details including:
- Recipe name and preparation steps
- Ingredients with spatial locations
- Surface mapping (table, countertop, cupboard shelf, window surface)
- Location IDs for spatial grounding
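A sketch of filtering recordings via the manifests; the internal structure of recipes.json is only inspected here, not assumed, and should be verified against manifests/schema.md.

import json
import pandas as pd

metadata = pd.read_parquet('data/manifests/metadata.parquet')
with open('data/manifests/recipes.json') as f:
    recipes = json.load(f)

# Keep recordings of recipe_01 that include gaze data
subset = metadata[(metadata['recipe_id'] == 'recipe_01') & metadata['has_gaze']]
print(f"{len(subset)} recordings of recipe_01 with gaze data")

print('recipes.json entries:', len(recipes))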
Dataset Statistics
- Total recordings: 125
- Total participants: 25
- Recordings per participant: 5
- Unique recipes: 5
- Average recording duration: 108 seconds
- Total dataset duration: 3.7 hours
Dataset Creation
Curation Rationale
The dataset was created to explore how gaze and speech synchronize in referential communication and whether object location influences this synchronization. It provides a rich resource for multimodal grounding research across egocentric and exocentric perspectives.
Source Data
Data Collection and Processing
- Hardware: ARIA smart glasses, GoPro camera
- Collection Method: Participants wore ARIA glasses while describing recipe ingredients and steps, allowing real-time capture of gaze and verbal utterances
- Annotation Process:
  - Temporal correlation between gaze and speech detected using Python scripts
  - Automatic transcription using WhisperX
  - Referential expressions annotated with gaze fixations
Who are the source data producers?
KTH students involved in the project:
- Gong, Yanliang
- Hafsteinsdóttir, Kristín
- He, Yiyan
- Lin, Wei-Jun
- Lindh, Matilda
- Liu, Tianyun
- Lu, Yu
- Yan, Jingyi
- Zhang, Ruopeng
- Zhang, Yulu
Loading the Dataset
Using the metadata
import pandas as pd
import json
# Load metadata
metadata = pd.read_parquet('data/manifests/metadata.parquet')
# Load recipes
with open('data/manifests/recipes.json') as f:
    recipes = json.load(f)
# Filter recordings
recipe_1_recordings = metadata[metadata['recipe_id'] == 'recipe_01']
Using the provided loader script
from scripts.load_dataset import ARIAReferentialDataset
# Initialize dataset
dataset = ARIAReferentialDataset('data')
# Load a specific recording
recording = dataset.load_recording('par_01', 'rec_01')
print(f"Recipe: {recording['recipe']['name']}")
print(f"Duration: {recording['metadata']['duration_sec']:.1f}s")
print(f"Has gaze: {recording['metadata']['has_gaze']}")
print(f"References: {recording['metadata']['n_references']}")
# Access data
gaze_df = recording['gaze']
references_df = recording['references']
See scripts/load_dataset.py for complete examples.
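As a sketch of the gaze–speech alignment analysis the dataset is designed for, the snippet below counts gaze samples that fall inside each referential expression's time window. The column names ('timestamp_sec', 'start_sec', 'end_sec', 'expression') are illustrative assumptions; substitute the actual names documented in manifests/schema.md.

from scripts.load_dataset import ARIAReferentialDataset

dataset = ARIAReferentialDataset('data')
recording = dataset.load_recording('par_01', 'rec_01')

gaze_df = recording['gaze']
references_df = recording['references']

# For each referential expression, count gaze samples recorded while it was spoken.
# 'timestamp_sec', 'start_sec', 'end_sec', and 'expression' are assumed column names.
for _, ref in references_df.iterrows():
    in_window = gaze_df[
        (gaze_df['timestamp_sec'] >= ref['start_sec'])
        & (gaze_df['timestamp_sec'] <= ref['end_sec'])
    ]
    print(f"{ref['expression']!r}: {len(in_window)} gaze samples during the utterance")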
Citation
If you use this dataset in your research, please cite:
@misc{deichler2025lookandtell,
  title={Look and Tell: A Dataset for Multimodal Grounding Across Egocentric and Exocentric Views},
  year={2025},
  eprint={2510.22672},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.22672},
  note={Presented at the NeurIPS 2025 SpaVLE Workshop}
}
License
This dataset is released under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
You are free to:
- Share — copy and redistribute the material in any medium or format
Under the following terms:
- Attribution — You must give appropriate credit
- NonCommercial — You may not use the material for commercial purposes
- NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material
Contact
For questions or issues, please open an issue on this dataset repository or contact the KTH Royal Institute of Technology team.
Acknowledgments
This work was conducted at KTH Royal Institute of Technology. We thank all participants who contributed their data to this research.