Update README
README.md (CHANGED)
@@ -174,4 +174,128 @@ configs:
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- code
pretty_name: EffiBench-X
size_categories:
- n<1K
---
# Dataset Card for EffiBench-X

**EffiBench-X** is the first multi-language benchmark designed specifically to evaluate the efficiency of LLM-generated code across six programming languages: Python, C++, Java, JavaScript, Ruby, and Golang. The dataset comprises 623 competitive programming problems released after October 2023 to mitigate data contamination, each paired with human expert solutions that serve as efficiency baselines.

## Dataset Details

### Dataset Description

EffiBench-X addresses critical limitations in existing code generation benchmarks by providing:
- **Multi-language evaluation** across Python, C++, Java, JavaScript, Ruby, and Golang
- **Efficiency-focused metrics** including execution time, memory peak, and memory integral
- **Recent competitive programming problems** (post-October 2023) to avoid data contamination
- **Human expert baselines** for reliable efficiency comparison

- **Curated by:** Yuhao Qing, Boyu Zhu, Mingzhe Du, Zhijiang Guo, Terry Yue Zhuo, Qianru Zhang, Jie M. Zhang, Heming Cui, Siu-Ming Yiu, Dong Huang, See-Kiong Ng, Luu Anh Tuan
- **Institutions:** HKU, UCL, NTU, NUS, HKUST, Monash University, CSIRO's Data61, KCL
- **Language(s) (NLP):** English
- **License:** Apache License 2.0

### Dataset Sources

- **Repository:** [EffiBench-X (GitHub)](https://github.com/EffiBench/EffiBench-X)
- **Dataset:** [EffiBench/effibench-x](https://huggingface.co/datasets/EffiBench/effibench-x)
- **Paper:** [arXiv:2505.13004](https://arxiv.org/abs/2505.13004)
- **Problem Sources:**
  - [LeetCode](https://leetcode.com)
  - [Aizu Online Judge](https://onlinejudge.u-aizu.ac.jp/)
  - [AtCoder](https://atcoder.jp)
  - [CodeChef](https://www.codechef.com)
  - [Codeforces](https://codeforces.com)

## Uses

### Direct Use

- **Benchmarking LLM code generation efficiency**: Evaluate models on runtime performance, memory usage, and correctness across multiple languages (a minimal loading sketch follows this list)
- **Cross-language performance analysis**: Compare model capabilities across different programming paradigms
- **Model development**: Train and fine-tune models for efficient code generation
- **Research**: Study efficiency gaps between LLM-generated and human expert code
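
As a starting point, the benchmark can be pulled with the standard `datasets` library. The snippet below is a minimal loading sketch: it assumes only the dataset ID and the `test` split shown in this card's configuration, and the printed fields are those listed under Dataset Structure.

```python
from datasets import load_dataset

# Load the single "test" split declared in the YAML configs above.
ds = load_dataset("EffiBench/effibench-x", split="test")
print(len(ds))  # 623 problems

sample = ds[0]
print(sample["title"], sample["difficulty"], sample["source"], sample["type"])
```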

### Out-of-Scope Use

- **Production deployment without validation**: Solutions should be verified before production use
- **Security-critical applications**: The dataset focuses on algorithmic efficiency, not security
- **Non-competitive programming domains**: Problems are algorithmic in nature and may not represent all software engineering contexts

## Dataset Structure

The dataset contains 623 problems categorized into:
- **Functional problems**: Implement specific functions/classes with I/O handled by test templates
- **Standard I/O problems**: Complete programs reading from stdin and writing to stdout

Key fields per record include (see the inspection sketch after this list):

- `id`, `title`, `title_slug`, `description`, `description_md`, `difficulty`, `tags`, `source`, `url`, `type`
- Limits: `time_limit_nanos`, `memory_limit_bytes`
- Code artifacts:
  - `starter_code`: language-keyed starter snippets
  - `solutions`: language-keyed canonical solutions (e.g., for `cpp`, `golang`, `java`, `javascript`, `python3`, `ruby`)
  - `test_case_generator`: executable code string that programmatically produces tests
  - `evaluator`: executable code string to determine pass/fail given expected vs. program output
  - `generated_tests`: serialized tests produced by the generator
  - `test_runners`: language-keyed runner templates for executing solutions
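
For orientation, a single record can be inspected as follows. This is only a sketch: the field names come from the list above, but how the language-keyed fields are serialized (native dict vs. JSON-encoded string) is an assumption that the small helper below guards against.

```python
import json

from datasets import load_dataset

ds = load_dataset("EffiBench/effibench-x", split="test")
rec = ds[0]

def as_dict(field):
    # Language-keyed fields may arrive as dicts or as JSON-encoded strings.
    return field if isinstance(field, dict) else json.loads(field)

print(rec["type"], rec["time_limit_nanos"], rec["memory_limit_bytes"])
print(sorted(as_dict(rec["solutions"]).keys()))  # e.g. cpp, golang, java, javascript, python3, ruby
print(rec["test_case_generator"][:120])          # executable code string that produces tests
print(rec["evaluator"][:120])                    # executable code string that judges outputs
```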

All problems are from competitive programming platforms, with release dates after October 2023 to minimize data contamination.

## Dataset Creation

### Curation Rationale

Existing code generation benchmarks primarily focus on functional correctness, pay limited attention to efficiency, and are often restricted to Python. EffiBench-X addresses three critical limitations:

1. **Language diversity**: Extends beyond Python to include statically typed languages (C++, Java, Go) and dynamically typed languages (Python, JavaScript, Ruby)
2. **Data contamination**: Uses recent problems (post-October 2023) to avoid memorization effects
3. **Complexity**: Features algorithmically challenging problems requiring optimization techniques

### Source Data

#### Data Collection and Processing

Problems are curated from competitive programming platforms and restricted to those released after October 2023 to minimize data contamination. Each problem includes:
- Human expert solutions verified for correctness and efficiency
- 100 programmatically generated test cases
- Test runners and evaluators for automated assessment
- Cross-language validation to ensure consistency (a per-platform tally sketch follows this list)
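
To sanity-check the collection described above, the problems can be tallied by platform and problem type. This is a small illustrative sketch; the field names are taken from the Dataset Structure section, and no particular counts are asserted here.

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("EffiBench/effibench-x", split="test")

# Break the 623 problems down by source platform and by problem type.
print(Counter(ds["source"]))
print(Counter(ds["type"]))
```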

#### Who are the source data producers?

- **Problem creators**: Competitive programming platforms and contest organizers
- **Solution authors**: Human expert programmers from competitive programming communities
- **Dataset curators**: EffiBench research team

## Citation

Please cite our paper if you use this dataset:

```bibtex
@article{qing2025effibench,
  title={EffiBench-X: A Multi-Language Benchmark for Measuring Efficiency of LLM-Generated Code},
  author={Qing, Yuhao and Zhu, Boyu and Du, Mingzhe and Guo, Zhijiang and Zhuo, Terry Yue and Zhang, Qianru and Zhang, Jie M and Cui, Heming and Yiu, Siu-Ming and Huang, Dong and Ng, See-Kiong and Tuan, Luu Anh},
  journal={Advances in Neural Information Processing Systems},
  year={2025}
}
```

## More Information

- **Dataset Statistics**: 623 problems, 100 test cases per problem, 6 languages
- **Evaluation**: Sandboxed execution environment measuring execution time, memory peak, and memory integral for consistent performance comparisons (see the illustrative sketch after this list)
- For detailed information and benchmark results, please refer to the [paper](https://arxiv.org/abs/2505.13004) and the [GitHub repository](https://github.com/EffiBench/EffiBench-X).
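
To make the three efficiency metrics concrete, here is a rough, illustrative sketch of measuring a single run of a candidate program by sampling its memory while it executes. It is not the benchmark's actual harness (that lives in the GitHub repository); `psutil` and the `solution.py` command are assumptions used only for illustration.

```python
import subprocess
import time

import psutil  # third-party process sampler; an assumption of this sketch


def profile_run(cmd, interval=0.01):
    """Return (execution time in s, memory peak in bytes, memory integral in byte-seconds)."""
    start = time.perf_counter()
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    ps = psutil.Process(proc.pid)
    peak, integral, last = 0, 0.0, start
    while proc.poll() is None:
        now = time.perf_counter()
        try:
            rss = ps.memory_info().rss   # resident set size in bytes
        except psutil.NoSuchProcess:
            break
        peak = max(peak, rss)            # memory peak
        integral += rss * (now - last)   # memory integral: memory held over elapsed time
        last = now
        time.sleep(interval)
    return time.perf_counter() - start, peak, integral


elapsed, peak, integral = profile_run(["python3", "solution.py"])  # hypothetical candidate program
print(f"{elapsed:.3f} s, peak {peak} B, integral {integral:.0f} B*s")
```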

## Dataset Card Contact

For questions and feedback, please open an issue on our [GitHub repository](https://github.com/EffiBench/EffiBench-X).