Size: 1M < n < 10M
ArXiv: 2303.03004
Tags: programming-language, code, program-synthesis, automatic-code-repair, code-retrieval, code-translation
License:
Commit: update data loader

Changed file: xCodeEval.py (+32, -4)
@@ -148,8 +148,7 @@ _DESCRIPTIONS = {
 18. `prob_desc_created_at`: The Unix timestamp when the problem was released. Use the `datetime` lib in Python to parse it to a human-readable format.
 19. `file_name`: Name of the source jsonl file from which the data is loaded.
 20. `hidden_unit_tests`: a list of unit tests returned as a string. Use `json.loads(hidden_unit_tests)` to load the data.
-
-Objective: Given a source code in lang, generate a code in target lang."""
+Objective: Given a source code (`source_code`) in `lang_cluster`, generate code in the target programming language."""
 ),
 "program_synthesis": textwrap.dedent(
 """### Key Definitions
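
The `prob_desc_created_at` and `hidden_unit_tests` fields kept as context in this hunk can be decoded with the standard library. A minimal sketch, assuming `prob_desc_created_at` is a Unix timestamp stored as a string and `hidden_unit_tests` is a JSON-encoded list; the sample values below are made up, only the field names come from the loader:

```python
import json
from datetime import datetime, timezone

# Hypothetical record: only the field names are taken from the loader above.
sample = {
    "prob_desc_created_at": "1593610500",
    "hidden_unit_tests": '[{"input": "3\\n", "output": ["6"]}]',
}

# Parse the release timestamp into a human-readable UTC datetime.
created_at = datetime.fromtimestamp(int(sample["prob_desc_created_at"]), tz=timezone.utc)
print(created_at.isoformat())  # e.g. 2020-07-01T13:35:00+00:00

# `hidden_unit_tests` is a JSON list serialized as a string; json.loads recovers it.
for test in json.loads(sample["hidden_unit_tests"]):
    print(test["input"], test["output"])
```
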
@@ -161,7 +160,20 @@ _DESCRIPTIONS = {
 6. `code_uid`: A unique ID for the sample. It is not important for model training. If you find any issue with the sample, you can report it to us mentioning the `code_uid`.
 7. `difficulty`: Difficulty rating of the problem indicated by `src_uid`. The higher, the harder.
 8. `exec_outcome`: Execution outcome status. See [Section 4.1](https://arxiv.org/pdf/2303.03004.pdf) for the list of possible outcomes. The `exec_outcome` flags in the training data come from a pre-run environment. However, the training data doesn't include unit tests, to avoid potential hacks. We provide unit tests only for the dev and test data.
-
+9. `prob_desc_description`: Problem description in textual format; math operations are written in LaTeX.
+10. `prob_desc_input_from`: How the program should read the unit-test input.
+11. `prob_desc_output_to`: Where the program should write the result of the unit test.
+12. `prob_desc_time_limit`: Time limit to solve the problem.
+13. `prob_desc_memory_limit`: Memory limit to solve the problem.
+14. `prob_desc_input_spec`: How and in what order the input will be given to the program. It also includes the data range, types, and sizes.
+15. `prob_desc_output_spec`: How the outputs should be printed. Most of the time the unit-test results are matched with an *exact string match* or a *floating-point comparison* with a precision boundary.
+16. `prob_desc_sample_inputs`: A sample input for the code that is expected to solve the problem described in `description`.
+17. `prob_desc_sample_outputs`: The expected output for the `sample_input` of a code that solves the problem described in `description`.
+18. `prob_desc_notes`: Explanation of `sample_inputs` & `sample_outputs`.
+19. `prob_desc_created_at`: The Unix timestamp when the problem was released. Use the `datetime` lib in Python to parse it to a human-readable format.
+20. `file_name`: Name of the source jsonl file from which the data is loaded.
+21. `hidden_unit_tests`: a list of unit tests returned as a string. Use `json.loads(hidden_unit_tests)` to load the data.
+Objective: Given a `src_uid`, read the problem description from `problem_descriptions.jsonl` and generate a solution for it."""
 ),
 "retrieval_code_code": textwrap.dedent(
 """### Key Definitions
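
Because a `program_synthesis` sample only carries a `src_uid`, the problem statement has to be joined in from `problem_descriptions.jsonl`, as the objective above says. A rough sketch of that join, assuming the file is line-delimited JSON keyed by `src_uid`; the inner field names of the description records and the local path are assumptions, not taken from this loader:

```python
import json

# Index problem descriptions by src_uid so samples can be joined against them.
# "problem_descriptions.jsonl" is the file referenced by the objective above.
problems = {}
with open("problem_descriptions.jsonl", encoding="utf-8") as fh:
    for line in fh:
        record = json.loads(line)
        problems[record["src_uid"]] = record

def build_synthesis_prompt(sample: dict) -> str:
    """Assemble a plain-text prompt for one program_synthesis sample."""
    problem = problems[sample["src_uid"]]
    return "\n\n".join([
        problem.get("description", ""),
        "Input: " + problem.get("input_spec", ""),
        "Output: " + problem.get("output_spec", ""),
        "Limits: " + problem.get("time_limit", "") + ", " + problem.get("memory_limit", ""),
    ])
```
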
@@ -169,6 +181,7 @@ _DESCRIPTIONS = {
 2. `negative_code`: list of negative codes for `nl`
 3. `src_uid`: A specific identifier that shows which problem the code is associated with. This identifier is **important** for the training of the model. The problem referred to by the `src_uid` provides a natural description of the problem that the code successfully solved. Refer to [Structure of `problem_descriptions.jsonl`](./README.md#structure-of-problem_descriptionsjsonl)
 4. `source_code`: A source code given as the input query.
+5. `file_name`: Name of the source jsonl file from which the data is loaded.
 Objective: Given a `source_code`, retrieve similar source code from `retrieval_corpus`."""
 ),
 "retrieval_nl_code": textwrap.dedent(
@@ -177,12 +190,14 @@ _DESCRIPTIONS = {
 2. `positive_code`: list of positive codes for `nl`
 3. `negative_code`: list of negative codes for `nl`
 4. `src_uid`: A specific identifier that shows which problem the code is associated with. This identifier is **important** for the training of the model. The problem referred to by the `src_uid` provides a natural description of the problem that the code successfully solved. Refer to [Structure of `problem_descriptions.jsonl`](./README.md#structure-of-problem_descriptionsjsonl)
+5. `file_name`: Name of the source jsonl file from which the data is loaded.
 Objective: Given an `nl` (problem description), retrieve similar source code from `retrieval_corpus`."""
 ),
 "retrieval_corpus": textwrap.dedent(
 """### Key Definitions
 1. `idx`: unique index for each sample in a specific language (read the language from the filename).
 2. `source_code`: A source code given as a retrieval document.
+3. `file_name`: Name of the source jsonl file from which the data is loaded.
 Objective: Use the `retrieval_corpus` to answer queries from `retrieval_nl_code` and `retrieval_code_code`."""
 ),
 "code_compilation": textwrap.dedent(
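
Both retrieval configurations query the shared `retrieval_corpus`, whose documents are addressed by `idx` and, after this change, `file_name`. A minimal sketch of wiring a query split to the corpus; the scoring function is a deliberately naive placeholder (a real setup would use a learned retriever), and the exact row layout is assumed:

```python
from typing import Callable, Dict, Iterable, List, Tuple

def build_corpus_index(corpus_rows: Iterable[dict]) -> Dict[tuple, str]:
    """Index retrieval_corpus rows by (file_name, idx) so queries can address documents."""
    return {(row.get("file_name"), row["idx"]): row["source_code"] for row in corpus_rows}

def token_overlap(query: str, doc: str) -> float:
    """Naive Jaccard overlap on whitespace tokens, used only as a placeholder scorer."""
    q, d = set(query.split()), set(doc.split())
    return len(q & d) / max(len(q | d), 1)

def rank_corpus(query: str, corpus: Dict[tuple, str],
                score: Callable[[str, str], float] = token_overlap) -> List[Tuple[float, tuple]]:
    """Rank every corpus document against one query (an nl text or a source_code)."""
    scored = [(score(query, code), key) for key, code in corpus.items()]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```
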
@@ -194,6 +209,7 @@ _DESCRIPTIONS = {
 5. `code_uid`: A unique ID for the sample. It is not important for model training. If you find any issue with the sample, you can report it to us mentioning the `code_uid`.
 6. `src_uid`: A specific identifier that shows which problem the code is associated with. This identifier is **important** for the training of the model. The problem referred to by the `src_uid` provides a natural description of the problem that the code successfully solved. Refer to [Structure of `problem_descriptions.jsonl`](./README.md#structure-of-problem_descriptionsjsonl)
 7. `difficulty`: Difficulty rating of the problem indicated by `src_uid`. The higher, the harder.
+8. `file_name`: Name of the source jsonl file from which the data is loaded.
 Objective: Given a `source_code`, the objective is to classify whether the code compiles or not (label: `compilation_error`)."""
 ),
 "tag_classification": textwrap.dedent(
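
`code_compilation` is a binary classification task over `source_code`. A small sketch of turning records into (text, label) pairs; the label column name `compilation_error` comes from the objective above, but treating it as a boolean-like field is an assumption:

```python
from typing import Iterable, Iterator, Tuple

def to_compilation_pairs(records: Iterable[dict]) -> Iterator[Tuple[str, int]]:
    """Yield (source_code, label) pairs for the code_compilation task.

    Anything truthy in the assumed `compilation_error` field is mapped to 1
    ("does not compile"), everything else to 0 ("compiles").
    """
    for record in records:
        yield record["source_code"], 1 if record.get("compilation_error") else 0

# Hypothetical usage once a split has been loaded:
# train_pairs = list(to_compilation_pairs(train_split))
```
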
@@ -205,7 +221,19 @@ _DESCRIPTIONS = {
 5. `code_uid`: A unique ID for the sample. It is not important for model training. If you find any issue with the sample, you can report it to us mentioning the `code_uid`.
 6. `src_uid`: A specific identifier that shows which problem the code is associated with. This identifier is **important** for the training of the model. The problem referred to by the `src_uid` provides a natural description of the problem that the code successfully solved. Refer to [Structure of `problem_descriptions.jsonl`](./README.md#structure-of-problem_descriptionsjsonl)
 7. `difficulty`: Difficulty rating of the problem indicated by `src_uid`. The higher, the harder.
-
+8. `prob_desc_description`: Problem description in textual format; math operations are written in LaTeX.
+9. `prob_desc_input_from`: How the program should read the unit-test input.
+10. `prob_desc_output_to`: Where the program should write the result of the unit test.
+11. `prob_desc_time_limit`: Time limit to solve the problem.
+12. `prob_desc_memory_limit`: Memory limit to solve the problem.
+13. `prob_desc_input_spec`: How and in what order the input will be given to the program. It also includes the data range, types, and sizes.
+14. `prob_desc_output_spec`: How the outputs should be printed. Most of the time the unit-test results are matched with an *exact string match* or a *floating-point comparison* with a precision boundary.
+15. `prob_desc_sample_inputs`: A sample input for the code that is expected to solve the problem described in `description`.
+16. `prob_desc_sample_outputs`: The expected output for the `sample_input` of a code that solves the problem described in `description`.
+17. `prob_desc_notes`: Explanation of `sample_inputs` & `sample_outputs`.
+18. `prob_desc_created_at`: The Unix timestamp when the problem was released. Use the `datetime` lib in Python to parse it to a human-readable format.
+19. `file_name`: Name of the source jsonl file from which the data is loaded.
+Objective: Given a `source_code`, the objective is to classify the code into multi-label tags (label: `tags`)."""
 ),
 }
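
Since `tag_classification` is multi-label, the `tags` target has to be turned into a multi-hot vector before training. A standard-library sketch; the assumption that `tags` is a list of strings comes from the objective, not from the code shown in this diff:

```python
from typing import Dict, Iterable, List

def build_tag_vocabulary(records: Iterable[dict]) -> Dict[str, int]:
    """Collect every tag seen in the data and fix an index for each one."""
    vocab = sorted({tag for record in records for tag in record.get("tags", [])})
    return {tag: i for i, tag in enumerate(vocab)}

def to_multi_hot(record: dict, tag_to_index: Dict[str, int]) -> List[float]:
    """Encode one record's tags as a multi-hot 0/1 vector."""
    target = [0.0] * len(tag_to_index)
    for tag in record.get("tags", []):
        if tag in tag_to_index:
            target[tag_to_index[tag]] = 1.0
    return target
```
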
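
The `_DESCRIPTIONS` keys touched here (`program_synthesis`, `retrieval_code_code`, `retrieval_nl_code`, `retrieval_corpus`, `code_compilation`, `tag_classification`, plus the translation task) presumably match the loader's configuration names, so the updated loader should remain usable through the standard `datasets` API. A sketch; the repository id below is a placeholder, not the real Hub path:

```python
from datasets import load_dataset

# "org/xCodeEval" is a placeholder repository id; substitute the actual Hub path.
# The configuration name "program_synthesis" mirrors one of the _DESCRIPTIONS keys above.
ds = load_dataset("org/xCodeEval", "program_synthesis", split="train", trust_remote_code=True)

print(ds.column_names)      # expected to include src_uid, difficulty, exec_outcome, ...
print(ds[0]["src_uid"])
```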