---
license: cc-by-nc-4.0
task_categories:
- video-classification
language:
- en
tags:
- code
pretty_name: StrokeVision Bench
size_categories:
- 1K<n<10K
---
# StrokeVision-Bench: A Multimodal Video and 2D Pose Benchmark for Tracking Stroke Recovery
StrokeVision-Bench is an action recognition dataset of short video segments of stroke patients performing the Box and Block Test.
StrokeVision-Bench contains 1,036 annotated videos (1 s @ 30 FPS) categorized into four clinically meaningful action classes (Non-task movement, Grasping, Transport with block, Transport without block), with each sample represented in two modalities: raw video segments and 2D skeletal keypoints. We benchmark several state-of-the-art video- and skeleton-based action classification methods to establish performance baselines for this domain and to facilitate future research in automated stroke rehabilitation assessment.
## Dataset Summary
- Samples: 1,036 short videos (1 s @ 30 FPS)
- Modalities: RGB frames, 2D skeleton keypoints
- Action classes: Non-task movement, Grasping, Transport with block, Transport without block
- Keypoints: COCO 17-keypoint format
- Train-Test Split: 827 train segments, 209 test segments
**Paper**: https://arxiv.org/abs/2509.07994
## Dataset Structure
```
videos/
├── grasping/
├── non_task/
├── transport_with_block/
└── transport_without_block/
keypoints/
├── grasping/
├── non_task/
├── transport_with_block/
└── transport_without_block/
annotations/
├── train.csv
└── val.csv
```
The `videos` folder contains the raw video segments, separated by class. The `keypoints` folder contains the 2D skeletal keypoints as `.npy` files of shape `(30, 17, 2)` (frames × keypoints × (x, y) coordinates), likewise separated by class.
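The keypoint layout above can be sketched as follows. This is a minimal illustration of the `(30, 17, 2)` convention using a synthetic array; the file name in the comment is only an example of the naming scheme described below, not a guaranteed path.

```python
import numpy as np

# Synthetic stand-in for one keypoint file. A real file would be loaded with
# np.load("keypoints/<class>/<file_name>.npy"), e.g. under keypoints/grasping/.
clip = np.zeros((30, 17, 2), dtype=np.float32)

# Axis meanings: 30 frames (1 s @ 30 FPS), 17 COCO keypoints, (x, y) per keypoint.
frames, joints, coords = clip.shape

# COCO keypoint 0 is the nose; this slices its (x, y) track across all frames.
nose_track = clip[:, 0, :]   # shape (30, 2)
```

Indexing is `clip[frame, keypoint, coordinate]`, so per-joint trajectories fall out of a single slice as shown.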
Each instance in the annotations contains the following features:
- `subject_id`: Subject identifier (P01-P04)
- `file_name`: File name of the instance within the `videos` and `keypoints` directories, formatted as `{subject_id}_segment{segment_id}`
- `label`: Action class (Non-task movement, Grasping, Transport with block, Transport without block)
- `hand`: Hand being used (left, right)

Example entry: `P01,P01_segment0201,transport_with_block,left`
You can load the annotations directly with pandas and access files via the dataset's `videos` and `keypoints` folders.
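A minimal loading sketch, with assumptions labeled: the column order is taken from the example row above, the CSV is assumed headerless (hence the explicit `names=`), and the video file extension is a guess; for brevity the snippet parses an inline copy of that row rather than `annotations/train.csv` itself.

```python
import io
import pandas as pd

# Column schema inferred from the example entry in this card.
cols = ["subject_id", "file_name", "label", "hand"]

# Inline stand-in for annotations/train.csv; for the real file use
# pd.read_csv("annotations/train.csv", names=cols) -- drop names= if it has a header.
csv_text = "P01,P01_segment0201,transport_with_block,left\n"
train = pd.read_csv(io.StringIO(csv_text), names=cols)

# Resolve both modality paths for a sample from its label and file name.
row = train.iloc[0]
kp_path = f"keypoints/{row.label}/{row.file_name}.npy"
video_path = f"videos/{row.label}/{row.file_name}.mp4"  # .mp4 is an assumption
```

From there, `np.load(kp_path)` yields the `(30, 17, 2)` keypoint array for that sample.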
## License
This dataset is released under the CC BY-NC 4.0 license.
## Citation
```bibtex
@inproceedings{strokevisionbench,
title = {StrokeVision-Bench: A Multimodal Video and 2D Pose Benchmark for Tracking Stroke Recovery},
author = {David Robinson and Animesh Gupta and Rizwan Qureshi and Qiushi Fu and Mubarak Shah},
booktitle = {Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP)},
year = {2025}
}
```