Update README

README.md CHANGED

@@ -187,7 +187,7 @@ size_categories:
 ---
 # Dataset Card for EffiBench-X
 
-**EffiBench-X** is the first multi-language benchmark designed specifically to evaluate the efficiency of LLM-generated code across six programming languages: Python, C++, Java, JavaScript, Ruby, and Golang. The dataset comprises 623 competitive programming problems
+**EffiBench-X** is the first multi-language benchmark designed specifically to evaluate the efficiency of LLM-generated code across six programming languages: Python, C++, Java, JavaScript, Ruby, and Golang. The dataset comprises 623 competitive programming problems paired with human expert solutions as efficiency baselines.
 
 ## Dataset Details
 
@@ -196,7 +196,6 @@ size_categories:
 EffiBench-X addresses critical limitations in existing code generation benchmarks by providing:
 - **Multi-language evaluation** across Python, C++, Java, JavaScript, Ruby, and Golang
 - **Efficiency-focused metrics** including execution time, memory peak, and memory integral
-- **Recent competitive programming problems** (post-October 2023) to avoid data contamination
 - **Human expert baselines** for reliable efficiency comparison
 
 - **Curated by:** Yuhao Qing, Boyu Zhu, Mingzhe Du, Zhijiang Guo, Terry Yue Zhuo, Qianru Zhang, Jie M. Zhang, Heming Cui, Siu-Ming Yiu, Dong Huang, See-Kiong Ng, Luu Anh Tuan
@@ -249,7 +248,7 @@ Key fields per record include:
 - `generated_tests`: serialized tests produced by the generator
 - `test_runners`: language-keyed runner templates for executing solutions
 
-All problems are from competitive programming platforms
+All problems are from competitive programming platforms.
 
 ## Dataset Creation
 
@@ -265,7 +264,7 @@ Existing code generation benchmarks primarily focus on functional correctness wi
 
 #### Data Collection and Processing
 
-Problems are curated from competitive programming platforms
+Problems are curated from competitive programming platforms. Each problem includes:
 - Human expert solutions verified for correctness and efficiency
 - 100 programmatically generated test cases
 - Test runners and evaluators for automated assessment