Japanese TV Commercial Video Dataset
Dataset Description
This dataset contains Japanese TV commercial (CM) videos with hierarchical scene detection annotations.
Dataset Summary
- Total Videos: 100
- Total Duration: 39.5 minutes (2370 seconds)
- Total Scenes: 2,320
- Average Duration per Video: 23.7s
- Average Scenes per Video: 23.2
Languages
Commercial videos contain a mix of:
- Japanese (primary)
- English (secondary)
Dataset Structure
dataset/
├── videos/ # Video files (.mp4)
│ ├── CM_000.mp4
│ ├── CM_001.mp4
│ └── ...
├── thumbnails/ # Representative thumbnails for each video
│ ├── CM_000.jpg
│ ├── CM_001.jpg
│ └── ...
├── scenes/ # Scene detection results
│ ├── CM_000/
│ │ ├── scenes.json # Scene boundaries and metadata
│ │ └── scene_*.jpg # Thumbnails for each scene
│ └── ...
├── metadata.parquet # Dataset metadata (recommended)
└── metadata.csv # Human-readable metadata
Data Fields
metadata.parquet / metadata.csv:
- video_id: Unique identifier (e.g., "CM_000")
- video_path: Path to the video file
- thumbnail: Path to the representative thumbnail image
- duration: Video duration in seconds
- num_scenes: Total number of detected scenes
- num_groups: Number of scene groups
- scenes_json_path: Path to the scenes.json file
- scene_thumbnails_dir: Directory containing scene thumbnails
- level1_count, level2_count, level3_count: Scene counts per detection level
- level1_threshold, level2_threshold, level3_threshold: Detection thresholds used
scenes.json (per video):
{
  "video_name": "CM_000",
  "video_duration": 29.66,
  "total_scenes": 15,
  "scenes": [
    {
      "scene_number": 1,
      "start_time": 0.0,
      "end_time": 2.102,
      "duration": 2.102,
      "start_timecode": "00:00:00.000",
      "end_timecode": "00:00:02.102",
      "thumbnail": "scene_001.jpg",
      "level": 1,
      "threshold": 5.0
    }
  ]
}
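As a quick illustration of how these per-scene records can be consumed, here is a minimal sketch that parses one scenes.json file and summarizes scenes by detection level; the file path is a placeholder and assumes the directory layout shown above.
import json
from collections import defaultdict

# Placeholder path; adjust to your local copy of the dataset.
with open("scenes/CM_000/scenes.json", "r") as f:
    data = json.load(f)

# Group scenes by detection level (1 = coarse, 3 = fine).
scenes_by_level = defaultdict(list)
for scene in data["scenes"]:
    scenes_by_level[scene["level"]].append(scene)

for level, level_scenes in sorted(scenes_by_level.items()):
    total = sum(s["duration"] for s in level_scenes)
    print(f"Level {level}: {len(level_scenes)} scenes, {total:.2f}s total")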
Scene Detection Methodology
Scenes are detected hierarchically with PySceneDetect at three thresholds (a detection sketch follows the lists below):
- Level 1 (threshold: 5.0): Major scene changes (coarse)
- Level 2 (threshold: 3.0): Medium scene changes
- Level 3 (threshold: 1.0): Subtle scene changes (fine)
Each scene includes:
- Precise start/end timestamps
- Duration
- Detection level (indicating cut intensity)
- Thumbnail image
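The exact detector configuration used to build the dataset is not specified beyond the thresholds above, so the following is only a sketch: it assumes PySceneDetect's ContentDetector and reuses the three threshold values to produce coarse-to-fine scene lists.
from scenedetect import detect, ContentDetector

# Threshold values mirror the three levels above; ContentDetector is an
# assumption -- the card does not name the detector actually used.
LEVELS = {1: 5.0, 2: 3.0, 3: 1.0}

video_path = "videos/CM_000.mp4"  # placeholder path
for level, threshold in LEVELS.items():
    scene_list = detect(video_path, ContentDetector(threshold=threshold))
    print(f"Level {level} (threshold={threshold}): {len(scene_list)} scenes")
    for start, end in scene_list[:3]:
        print(f"  {start.get_seconds():.3f}s -> {end.get_seconds():.3f}s")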
Usage
Load with Hugging Face Datasets
from datasets import load_dataset
# Load metadata
dataset = load_dataset("your-username/tv-commercial-videos")
# Access first video
sample = dataset["train"][0]
print(f"Video: {sample['video_path']}")
print(f"Duration: {sample['duration']}s")
print(f"Scenes: {sample['num_scenes']}")
Load with Pandas
import json
import pandas as pd

# Load metadata
df = pd.read_parquet("metadata.parquet")

# Load scene data for a specific video
with open(df.iloc[0]['scenes_json_path'], 'r') as f:
    scenes = json.load(f)
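Continuing from the pandas example, scene thumbnails can be opened alongside the parsed boundaries; joining the thumbnail filenames from scenes.json onto scene_thumbnails_dir is an assumption about how the paths fit together.
import os
from PIL import Image

row = df.iloc[0]
for scene in scenes["scenes"][:3]:
    # Thumbnail filenames (e.g., "scene_001.jpg") come from scenes.json;
    # joining them onto scene_thumbnails_dir is assumed, not documented.
    img = Image.open(os.path.join(row["scene_thumbnails_dir"], scene["thumbnail"]))
    print(scene["scene_number"], scene["start_timecode"], img.size)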
Use Cases
This dataset is suitable for:
- Video segmentation: Scene boundary detection
- Content analysis: Commercial structure analysis
- Computer vision: Object detection in commercial contexts
- Temporal analysis: Shot duration patterns
- Multi-modal learning: Video + audio + text
- Advertisement research: Creative patterns in commercials
Limitations
- Videos are sourced from Japanese TV commercials (specific domain)
- Scene detection is automated and may have occasional errors
- No manual verification of scene boundaries
- No semantic labels (e.g., product categories, themes)
Citation
If you use this dataset, please cite:
@dataset{tv_commercial_videos_2024,
title={Japanese TV Commercial Video Dataset with Scene Detection},
author={Your Name},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/datasets/your-username/tv-commercial-videos}
}
License
This dataset is released under the CC BY 4.0 license.
Contact
For questions or issues, please open an issue on the dataset repository.