Size: 1M<n<10M
ArXiv: 2303.03004
Tags: programming-language, code, program-synthesis, automatic-code-repair, code-retrieval, code-translation
	
Commit e99541e · Parent: 467d25a
root committed: "update split naming"
Files changed: xCodeEval.py (+7 −7)
 
    	
xCodeEval.py CHANGED

@@ -107,8 +107,8 @@ _DESCRIPTIONS = {
 	13. `apr_id`: A unique ID for the apr sample. It is not important for model training. If you find any issue with the sample, you can report it to us mentioning the `apr_id`.
 	14. `difficulty`: Difficulty rating of the problem indicated by `src_uid`. The higher the harder.
 	15. `tags`: List of potential algorithmic techniques required to write the program.
-	16. `bug_exec_outcome`: A pre-run execution outcome of `bug_source_code`. Follow [Section 4.1](https://arxiv.org/pdf/2303.03004.pdf) to know the potential list of outcomes. The `exec_outcome` flags in the training data comes from a pre-run environmeent. However, training data doesn't  includes unit-test to avoid potential hacks. We provide unit test for only
-	17. `fix_exec_outcome`: A pre-run execution outcome of `fix_source_code`. Follow [Section 4.1](https://arxiv.org/pdf/2303.03004.pdf) to know the potential list of outcomes. The `exec_outcome` flags in the training data comes from a pre-run environmeent. However, training data doesn't  includes unit-test to avoid potential hacks. We provide unit test for only
+	16. `bug_exec_outcome`: A pre-run execution outcome of `bug_source_code`. Follow [Section 4.1](https://arxiv.org/pdf/2303.03004.pdf) to know the potential list of outcomes. The `exec_outcome` flags in the training data comes from a pre-run environmeent. However, training data doesn't  includes unit-test to avoid potential hacks. We provide unit test for only compact and titan test data.
+	17. `fix_exec_outcome`: A pre-run execution outcome of `fix_source_code`. Follow [Section 4.1](https://arxiv.org/pdf/2303.03004.pdf) to know the potential list of outcomes. The `exec_outcome` flags in the training data comes from a pre-run environmeent. However, training data doesn't  includes unit-test to avoid potential hacks. We provide unit test for only compact and titan test data.
 	18. `potential_dominant_fix_op`: A potential fix op recommended by difflib.
 	19. `lang_cluster`: A generic programming language name the value of `lang` belongs to.
 	20. `prob_desc_description`: Problem description in textual format, math operations are written in latex.
@@ -133,7 +133,7 @@ _DESCRIPTIONS = {
 	3. `code_uid`: A unique ID for the sample. It is not important for model training. If you find any issue with the sample, you can report it to us mentioning the `code_uid`.
 	4. `src_uid`: A specific identifier that shows which problem the code is associated with. This identifier is **important** for the training of the model. The problem referred to by the `src_uid` provides a natural description of the problem that the code successfully solved. Refer to [Structure of `problem_descriptions.jsonl`](./README.md#structure-of-problem_descriptionsjsonl)
 	5. `difficulty`: Difficulty rating of the problem indicated by `src_uid`. The higher the harder.
-	6. `exec_outcome`: Execution outcome status. Follow [Section 4.1](https://arxiv.org/pdf/2303.03004.pdf) to know the potential list of outcomes. The `exec_outcome` flags in the training data comes from a pre-run environmeent. However, training data doesn't  includes unit-test to avoid potential hacks. We provide unit test for only
+	6. `exec_outcome`: Execution outcome status. Follow [Section 4.1](https://arxiv.org/pdf/2303.03004.pdf) to know the potential list of outcomes. The `exec_outcome` flags in the training data comes from a pre-run environmeent. However, training data doesn't  includes unit-test to avoid potential hacks. We provide unit test for only compact and titan test data.
 	7. `lang_cluster`: A generic programming language name the value of `lang` belongs to.
 	8. `prob_desc_description`: Problem description in textual format, math operations are written in latex.
 	9. `prob_desc_input_from`: How the program should take the unit test.
@@ -159,7 +159,7 @@ _DESCRIPTIONS = {
 	5. `src_uid`: A specific identifier that shows which problem the code is associated with. This identifier is **important** for the training of the model. The problem referred to by the `src_uid` provides a natural description of the problem that the code successfully solved. Refer to [Structure of `problem_descriptions.jsonl`](./README.md#structure-of-problem_descriptionsjsonl)
 	6. `code_uid`: A unique ID for the sample. It is not important for model training. If you find any issue with the sample, you can report it to us mentioning the `code_uid`.
 	7. `difficulty`: Difficulty rating of the problem indicated by `src_uid`. The higher the harder.
-	8. `exec_outcome`: Execution outcome status. Follow [Section 4.1](https://arxiv.org/pdf/2303.03004.pdf) to know the potential list of outcomes. The `exec_outcome` flags in the training data comes from a pre-run environmeent. However, training data doesn't  includes unit-test to avoid potential hacks. We provide unit test for only
+	8. `exec_outcome`: Execution outcome status. Follow [Section 4.1](https://arxiv.org/pdf/2303.03004.pdf) to know the potential list of outcomes. The `exec_outcome` flags in the training data comes from a pre-run environmeent. However, training data doesn't  includes unit-test to avoid potential hacks. We provide unit test for only compact and titan test data.
 	9. `prob_desc_description`: Problem description in textual format, math operations are written in latex.
 	10. `prob_desc_input_from`: How the program should take the unit test.
 	11. `prob_desc_output_to`: Where the program should output the result of the unit test.
@@ -2437,7 +2437,7 @@ class xCodeEval(datasets.GeneratorBasedBuilder):
                 },
             ),
             datasets.SplitGenerator(
-                name=
+                name='compact',
                 gen_kwargs={
                     "filepaths": validation_downloaded_files,
                     "problem_description_file": prob_desc_file,
@@ -2445,7 +2445,7 @@ class xCodeEval(datasets.GeneratorBasedBuilder):
                 },
             ),
             datasets.SplitGenerator(
-                name=
+                name='titan',
                 gen_kwargs={
                     "filepaths": test_downloaded_files,
                     "problem_description_file": prob_desc_file,
@@ -2456,7 +2456,7 @@ class xCodeEval(datasets.GeneratorBasedBuilder):
         if task_name == "code_translation":
             split_info.append(
                 datasets.SplitGenerator(
-                    name="
+                    name="compact_small",
                     gen_kwargs={
                         "filepaths": validation_small_downloaded_files,
                         "problem_description_file": prob_desc_file,
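The net effect of the commit is that the loader now exposes descriptively named splits: 'compact' (built from the validation files) and 'titan' (built from the test files) for every task, plus an extra 'compact_small' split for code_translation only; per the updated field descriptions, 'compact' and 'titan' are the splits that ship with unit tests. A minimal sketch of that split layout, assuming plain dicts in place of the real `datasets.SplitGenerator` objects (the `gen_kwargs` keys and file-variable roles are taken from the diff; the function itself is hypothetical):

```python
# Hypothetical sketch of xCodeEval's split layout after this commit.
# Plain dicts stand in for datasets.SplitGenerator; only the split names
# and gen_kwargs keys mirror the diff.

def build_split_info(task_name, validation_files, test_files,
                     validation_small_files, prob_desc_file):
    """Return the renamed splits and the files each one reads."""
    split_info = [
        # 'compact' and 'titan' are the descriptive names introduced by
        # this commit; they are the splits that come with unit tests.
        {"name": "compact",
         "gen_kwargs": {"filepaths": validation_files,
                        "problem_description_file": prob_desc_file}},
        {"name": "titan",
         "gen_kwargs": {"filepaths": test_files,
                        "problem_description_file": prob_desc_file}},
    ]
    # Only the code_translation task gains the extra reduced split.
    if task_name == "code_translation":
        split_info.append(
            {"name": "compact_small",
             "gen_kwargs": {"filepaths": validation_small_files,
                            "problem_description_file": prob_desc_file}})
    return split_info
```

Under this naming, a consumer would request `split="compact"` or `split="titan"` rather than a generic validation/test split, and `split="compact_small"` only when loading the code_translation task.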