dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: title_slug
dtype: string
- name: description
dtype: string
- name: description_md
dtype: string
- name: difficulty
dtype: string
- name: tags
list: string
- name: source
dtype: string
- name: url
dtype: string
- name: type
dtype: string
- name: release_timestamp
dtype: int64
- name: release_date
dtype: string
- name: time_limit_nanos
dtype: int64
- name: memory_limit_bytes
dtype: int64
- name: starter_code
struct:
- name: c
dtype: string
- name: cpp
dtype: string
- name: csharp
dtype: string
- name: dart
dtype: string
- name: elixir
dtype: string
- name: erlang
dtype: string
- name: golang
dtype: string
- name: java
dtype: string
- name: javascript
dtype: string
- name: kotlin
dtype: string
- name: php
dtype: string
- name: python
dtype: string
- name: python3
dtype: string
- name: racket
dtype: string
- name: ruby
dtype: string
- name: rust
dtype: string
- name: scala
dtype: string
- name: swift
dtype: string
- name: typescript
dtype: string
- name: solutions
struct:
- name: cpp
struct:
- name: code
dtype: string
- name: memory
dtype: int64
- name: memoryDistribution
dtype: string
- name: runtime
dtype: int64
- name: runtimeDistribution
dtype: string
- name: golang
struct:
- name: code
dtype: string
- name: memory
dtype: int64
- name: memoryDistribution
dtype: string
- name: runtime
dtype: int64
- name: runtimeDistribution
dtype: string
- name: java
struct:
- name: code
dtype: string
- name: memory
dtype: int64
- name: memoryDistribution
dtype: string
- name: runtime
dtype: int64
- name: runtimeDistribution
dtype: string
- name: javascript
struct:
- name: code
dtype: string
- name: memory
dtype: int64
- name: memoryDistribution
dtype: string
- name: runtime
dtype: int64
- name: runtimeDistribution
dtype: string
- name: python3
struct:
- name: code
dtype: string
- name: memory
dtype: int64
- name: memoryDistribution
dtype: string
- name: runtime
dtype: int64
- name: runtimeDistribution
dtype: string
- name: ruby
struct:
- name: code
dtype: string
- name: memory
dtype: int64
- name: memoryDistribution
dtype: string
- name: runtime
dtype: int64
- name: runtimeDistribution
dtype: string
- name: test_case_generator
dtype: string
- name: evaluator
dtype: string
- name: generated_tests
dtype: string
- name: test_runners
struct:
- name: cpp
dtype: string
- name: golang
dtype: string
- name: java
dtype: string
- name: javascript
dtype: string
- name: python3
dtype: string
- name: ruby
dtype: string
splits:
- name: test
num_bytes: 3865548641
num_examples: 623
download_size: 2341977516
dataset_size: 3865548641
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- code
pretty_name: EffiBench-X
size_categories:
- n<1K
# Dataset Card for EffiBench-X
EffiBench-X is the first multi-language benchmark designed specifically to evaluate the efficiency of LLM-generated code across six programming languages: Python, C++, Java, JavaScript, Ruby, and Golang. The dataset comprises 623 competitive programming problems paired with human expert solutions as efficiency baselines.
## Dataset Details
### Dataset Description
EffiBench-X addresses critical limitations in existing code generation benchmarks by providing:
- Multi-language evaluation across Python, C++, Java, JavaScript, Ruby, and Golang
- Efficiency-focused metrics, including execution time, memory peak, and memory integral
- Human expert baselines for reliable efficiency comparison
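Of the metrics above, the memory integral is the least standard: it aggregates memory usage over the whole run, not just the peak. A minimal stdlib-only sketch of one way to approximate it, assuming evenly spaced resident-memory samples and trapezoidal integration (the sampling scheme and values are illustrative assumptions, not the benchmark's actual harness):

```python
def memory_integral(samples, interval_s):
    """Approximate the memory-time integral (in byte-seconds) from
    evenly spaced resident-memory samples via the trapezoidal rule."""
    if len(samples) < 2:
        return 0.0
    total = 0.0
    for a, b in zip(samples, samples[1:]):
        total += (a + b) / 2 * interval_s
    return total

# Hypothetical trace: resident memory in bytes, sampled every 0.1 s.
trace = [10_000_000, 30_000_000, 30_000_000, 20_000_000]
print(memory_integral(trace, 0.1))  # ~7.5e6 byte-seconds
```

Two programs with the same peak can thus get very different integrals if one holds its allocation much longer than the other.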
- Curated by: Yuhao Qing, Boyu Zhu, Mingzhe Du, Zhijiang Guo, Terry Yue Zhuo, Qianru Zhang, Jie M. Zhang, Heming Cui, Siu-Ming Yiu, Dong Huang, See-Kiong Ng, Luu Anh Tuan
- Institutions: HKU, UCL, NTU, NUS, HKUST, Monash University, CSIRO's Data61, KCL
- Language(s) (NLP): English
- License: Apache License 2.0
### Dataset Sources
- Repository: EffiBench-X (GitHub)
- Dataset: EffiBench/effibench-x
- Paper: arXiv:2505.13004
- Problem Sources:
## Uses
### Direct Use
- Benchmarking LLM code generation efficiency: Evaluate models on runtime performance, memory usage, and correctness across multiple languages
- Cross-language performance analysis: Compare model capabilities across different programming paradigms
- Model development: Train and fine-tune models for efficient code generation
- Research: Study efficiency gaps between LLM-generated and human expert code
### Out-of-Scope Use
- Production deployment without validation: Solutions should be verified before production use
- Security-critical applications: The dataset focuses on algorithmic efficiency, not security
- Non-competitive programming domains: Problems are algorithmic in nature and may not represent all software engineering contexts
## Dataset Structure
The dataset contains 623 problems categorized into:
- Functional problems: Implement specific functions/classes with I/O handled by test templates
- Standard I/O problems: Complete programs reading from stdin and writing to stdout
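The two problem types above imply different execution paths. A hedged sketch of how a harness might dispatch on the record's `type` field, using the schema's `test_runners` and `solutions` fields; the literal type values and the `{{solution}}` placeholder convention are illustrative assumptions, not the benchmark's actual API:

```python
def assemble_program(record, lang):
    """Build the source text to execute for one record and language.

    Assumption: "functional" records are spliced into a runner template
    via a `{{solution}}` placeholder; "stdio" records are already
    complete programs that read stdin and write stdout.
    """
    if record["type"] == "functional":
        runner = record["test_runners"][lang]
        return runner.replace("{{solution}}", record["solutions"][lang]["code"])
    return record["solutions"][lang]["code"]

# Made-up minimal record for the functional case.
record = {
    "type": "functional",
    "test_runners": {"python3": "# runner\n{{solution}}\n# driver"},
    "solutions": {"python3": {"code": "def solve(x):\n    return x"}},
}
print(assemble_program(record, "python3"))
```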
Key fields per record include:
- `id`, `title`, `title_slug`, `description`, `description_md`, `difficulty`, `tags`, `source`, `url`, `type`
- Limits: `time_limit_nanos`, `memory_limit_bytes`
- Code artifacts:
  - `starter_code`: language-keyed starter snippets
  - `solutions`: language-keyed canonical solutions (e.g., for `cpp`, `golang`, `java`, `javascript`, `python3`, `ruby`)
  - `test_case_generator`: executable code string that programmatically produces tests
  - `evaluator`: executable code string that determines pass/fail given expected vs. program output
  - `generated_tests`: serialized tests produced by the generator
  - `test_runners`: language-keyed runner templates for executing solutions
All problems are from competitive programming platforms.
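As a quick illustration of the limit fields, the raw nanosecond and byte values convert directly to familiar units; a minimal stdlib-only sketch against the schema above (the sample record values are made up):

```python
def limits_human(record):
    """Convert a record's raw `time_limit_nanos` / `memory_limit_bytes`
    fields to seconds and MiB."""
    return {
        "time_limit_s": record["time_limit_nanos"] / 1e9,
        "memory_limit_mib": record["memory_limit_bytes"] / 2**20,
    }

# Made-up example record with only the limit fields populated.
record = {"time_limit_nanos": 2_000_000_000, "memory_limit_bytes": 268_435_456}
print(limits_human(record))  # {'time_limit_s': 2.0, 'memory_limit_mib': 256.0}
```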
## Dataset Creation
### Curation Rationale
Existing code generation benchmarks primarily focus on functional correctness with limited attention to efficiency, often restricted to Python. EffiBench-X addresses three critical limitations:
- Language diversity: Extends beyond Python to include both statically typed languages (C++, Java, Go) and dynamically typed languages (Python, JavaScript, Ruby)
- Data contamination: Uses recent problems (post-October 2023) to avoid memorization effects
- Complexity: Features algorithmically challenging problems requiring optimization techniques
### Source Data
#### Data Collection and Processing
Problems are curated from competitive programming platforms. Each problem includes:
- Human expert solutions verified for correctness and efficiency
- 100 programmatically generated test cases
- Test runners and evaluators for automated assessment
- Cross-language validation to ensure consistency
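Since the generator and evaluator are stored as executable code strings, a harness can run them directly against a candidate program. A toy, stdlib-only sketch of that flow; the entry-point names (`generate_tests`, `evaluate`) and the test-case shape are assumptions for illustration, not the dataset's actual conventions:

```python
# Toy stand-ins for the dataset's `test_case_generator` and `evaluator`
# code-string fields (the real ones are much larger).
generator_src = (
    "def generate_tests():\n"
    "    return [{'input': '1 2', 'expected': '3'}]\n"
)
evaluator_src = (
    "def evaluate(expected, actual):\n"
    "    return expected.strip() == actual.strip()\n"
)

def run_eval(generator_src, evaluator_src, program):
    """Exec the generator and evaluator strings, then judge `program`
    (a callable from input text to output text) on each generated case."""
    ns_gen, ns_eval = {}, {}
    exec(generator_src, ns_gen)
    exec(evaluator_src, ns_eval)
    results = []
    for case in ns_gen["generate_tests"]():
        actual = program(case["input"])
        results.append(ns_eval["evaluate"](case["expected"], actual))
    return results

# A trivial "solution" that sums the two integers on its input line.
print(run_eval(generator_src, evaluator_src,
               lambda s: str(sum(map(int, s.split())))))  # [True]
```

In the real benchmark the program under test runs in a sandboxed subprocess with the time and memory limits enforced, rather than as an in-process callable.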
#### Who are the source data producers?
- Problem creators: Competitive programming platforms and contest organizers
- Solution authors: Human expert programmers from competitive programming communities
- Dataset curators: EffiBench research team
## Citation
Please cite our paper if you use this dataset:
```bibtex
@article{qing2025effibench,
  title={EffiBench-X: A Multi-Language Benchmark for Measuring Efficiency of LLM-Generated Code},
  author={Qing, Yuhao and Zhu, Boyu and Du, Mingzhe and Guo, Zhijiang and Zhuo, Terry Yue and Zhang, Qianru and Zhang, Jie M and Cui, Heming and Yiu, Siu-Ming and Huang, Dong and Ng, See-Kiong and Tuan, Luu Anh},
  journal={Advances in Neural Information Processing Systems},
  year={2025}
}
```
## More Information
- Dataset Statistics: 623 problems, 100 test cases per problem, 6 languages
- Evaluation: Sandboxed execution environment for consistent performance measurements
- For detailed information and benchmark results, please refer to the paper and GitHub repository
## Dataset Card Contact
For questions and feedback, please open an issue on our GitHub repository.