---
license: cc-by-nc-4.0
task_categories:
  - video-classification
language:
  - en
tags:
  - code
pretty_name: StrokeVision Bench
size_categories:
  - 1K<n<10K
---

# StrokeVision-Bench: A Multimodal Video and 2D Pose Benchmark for Tracking Stroke Recovery

StrokeVision-Bench is an action recognition dataset of short video segments of stroke patients performing the Box and Block Test.

StrokeVision-Bench contains 1,036 annotated videos (1 s @ 30 FPS) categorized into four clinically meaningful action classes (Non-task movement, Grasping, Transport with block, Transport without block), with each sample represented in two modalities: raw video segments and 2D skeletal keypoints. We benchmark several state-of-the-art video- and skeleton-based action classification methods to establish performance baselines for this domain and to facilitate future research in automated stroke rehabilitation assessment.

## Dataset Summary

- **Samples:** 1,036 short videos (1 s @ 30 FPS)
- **Modalities:** RGB frames, 2D skeleton keypoints
- **Action classes:** Non-task movement, Grasping, Transport with block, Transport without block
- **Keypoints:** COCO 17-keypoint format
- **Train-test split:** 827 train segments, 209 test segments

**Paper:** [arXiv:2509.07994](https://arxiv.org/abs/2509.07994)

## Dataset Structure

- `videos/`
  - `grasping/`
  - `non_task/`
  - `transport_with_block/`
  - `transport_without_block/`
- `keypoints/`
  - `grasping/`
  - `non_task/`
  - `transport_with_block/`
  - `transport_without_block/`
- `annotations/`
  - `train.csv`
  - `val.csv`

The `videos/` folder contains the raw video segments, separated by class. The `keypoints/` folder contains the 2D skeletal keypoints as `.npy` files of shape `(30, 17, 2)` (frames × keypoints × x/y coordinates), separated by class.
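A minimal sketch of how a keypoint segment can be inspected with NumPy. The array below is synthesized with the documented shape purely for illustration; in practice you would `np.load` an actual file such as `keypoints/grasping/<file_name>.npy`:

```python
import numpy as np

# Synthetic stand-in with the documented layout: 30 frames (1 s @ 30 FPS),
# 17 COCO keypoints, (x, y) per keypoint. With the real dataset you would do:
# kps = np.load("keypoints/grasping/<file_name>.npy")
kps = np.random.rand(30, 17, 2).astype(np.float32)

n_frames, n_keypoints, n_coords = kps.shape  # (30, 17, 2)
wrist_xy = kps[:, 9, :]  # index 9 is the left wrist in the COCO 17-keypoint order
```

Per-joint trajectories like `wrist_xy` (one (x, y) pair per frame) are the typical input for skeleton-based action classifiers.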

Each instance in the annotations contains the following features:

- `subject_id`: the subject of the instance (P01-P04)
- `file_name`: file name of the instance within the `videos/` and `keypoints/` directories, formatted as `{subject_id}_segment{segment_id}`
- `label`: action class (Non-task movement, Grasping, Transport with block, Transport without block)
- `hand`: which hand is being used (left, right)

Example entry: `P01,P01_segment0201,transport_with_block,left`

You can load the annotations directly with pandas and access files via the dataset's videos and keypoints folders.
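For example, a sketch of parsing one annotation row and resolving the corresponding files with pandas. The in-memory CSV stands in for `annotations/train.csv`, the column names follow the schema listed above, and the `.mp4` video extension is an assumption — check the actual files in your copy:

```python
import io
import pandas as pd

# Miniature stand-in for annotations/train.csv; column order matches the
# documented example entry. With the real dataset:
# train = pd.read_csv("annotations/train.csv", names=cols)
csv_text = "P01,P01_segment0201,transport_with_block,left\n"
cols = ["subject_id", "file_name", "label", "hand"]
train = pd.read_csv(io.StringIO(csv_text), names=cols)

# Resolve on-disk paths for one row; the label doubles as the class directory.
row = train.iloc[0]
video_path = f"videos/{row['label']}/{row['file_name']}.mp4"  # extension assumed
kps_path = f"keypoints/{row['label']}/{row['file_name']}.npy"
```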

## License

This dataset is released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.

## Citation

```bibtex
@inproceedings{strokevisionbench,
  title     = {StrokeVision-Bench: A Multimodal Video and 2D Pose Benchmark for Tracking Stroke Recovery},
  author    = {David Robinson and Animesh Gupta and Rizwan Qureshi and Qiushi Fu and Mubarak Shah},
  booktitle = {Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP)},
  year      = {2025}
}
```