nielsr (HF Staff) committed on
Commit 7637c72 · verified · 1 Parent(s): 56efe6e

Add dataset card for HAROOD benchmark


This PR adds a dataset card for the HAROOD benchmark, which includes:
- Links to the paper (KDD 2026) and the official GitHub repository.
- Metadata including task categories and descriptive tags.
- Sample usage code snippets derived from the GitHub README.
- A list of supported OOD generalization algorithms.
- Citation information for the research paper.

Files changed (1)
  1. README.md +77 -0
README.md ADDED
@@ -0,0 +1,77 @@
---
task_categories:
- other
tags:
- human-activity-recognition
- sensor-data
- time-series
- out-of-distribution
---
# HAROOD: A Benchmark for Out-of-distribution Generalization in Sensor-based Human Activity Recognition

[**Paper**](https://huggingface.co/papers/2512.10807) | [**GitHub Repository**](https://github.com/AIFrontierLab/HAROOD)

HAROOD is a modular and reproducible benchmark framework for studying out-of-distribution (OOD) generalization in sensor-based human activity recognition (HAR). It unifies preprocessing pipelines, standardizes five realistic OOD scenarios (cross-person, cross-position, cross-dataset, cross-time, and cross-device), and implements 16 representative algorithms across CNN and Transformer architectures.
## Key Features

- **6 public HAR datasets** unified under a single framework.
- **5 realistic OOD scenarios**: cross-person, cross-position, cross-dataset, cross-time, and cross-device.
- **16 generalization algorithms** spanning Data Manipulation, Representation Learning, and Learning Strategies.
- **Backbone support**: includes both CNN- and Transformer-based architectures.
- **Standardized splits**: provides train/val/test model selection protocols (a minimal illustrative sketch follows this list).
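To make the split protocol concrete, here is a minimal sketch of what a cross-person (leave-one-person-out) split looks like. This is an illustration only: the function and field names are hypothetical, and the authoritative split logic lives in the official repository.

```python
# Hypothetical illustration of a cross-person (leave-one-person-out) split.
# The authoritative protocol is implemented in the official HAROOD repository.
def leave_one_person_out(samples, test_person, val_fraction=0.2):
    """samples: iterable of (features, label, person_id) tuples."""
    in_dist = [s for s in samples if s[2] != test_person]  # seen persons
    test = [s for s in samples if s[2] == test_person]     # held-out person
    n_val = max(1, int(len(in_dist) * val_fraction))
    # Model selection uses in-distribution data only; the held-out
    # person is never touched until final evaluation.
    return in_dist[:-n_val], in_dist[-n_val:], test
```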
## Usage

The benchmark is designed to be modular. Below are examples of how to run experiments using the official implementation:

### Run with a YAML config

```python
from core import train
results = train(config='./config/experiment.yaml')
```
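For reference, a minimal `./config/experiment.yaml` might look like the sketch below. The field names are inferred from the dict and override examples in this card; they are an assumption, not the repository's documented schema.

```yaml
# Hypothetical experiment.yaml; keys mirror the dict/override examples
# in this card rather than a documented schema.
algorithm: CORAL
batch_size: 32
lr: 0.002
max_epoch: 200
```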
### Run with a Python dict

```python
from core import train
config_dict = {
    'algorithm': 'CORAL',
    'batch_size': 32,
}
results = train(config=config_dict)
```
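Because `train` accepts a plain dict, a small comparison across algorithms reduces to a loop. The sketch below assumes that settings not passed explicitly fall back to the framework's defaults; the algorithm names come from the Supported Algorithms list further down.

```python
from core import train

# Hypothetical sweep over a few supported algorithms; unspecified
# settings are assumed to fall back to the framework's defaults.
for algo in ['ERM', 'CORAL', 'DANN']:
    results = train(config={'algorithm': algo, 'batch_size': 32})
    print(algo, results)
```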
### Override parameters

```python
from core import train
results = train(
    config='./config/experiment.yaml',
    lr=2e-3,
    max_epoch=200,
)
```
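Keyword overrides like these presumably take precedence over the values in the YAML file, which makes this form convenient for quick hyperparameter sweeps without editing config files.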
## Supported Algorithms

The benchmark implements 16 algorithms across three main categories:

- **Data Manipulation**: Mixup, DDLearn.
- **Representation Learning**: ERM, DANN, CORAL, MMD, VREx, LAG.
- **Learning Strategy**: MLDG, RSC, GroupDRO, ANDMask, Fish, Fishr, URM, ERM++.
## Citation

If you use HAROOD in your research, please cite the following paper:

```bibtex
@inproceedings{lu2026harood,
  title={HAROOD: A Benchmark for Out-of-distribution Generalization in Sensor-based Human Activity Recognition},
  author={Lu, Wang and Zhu, Yao and Wang, Jindong},
  booktitle={The 32nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)},
  year={2026}
}
```