Commit 13135f2
Parent(s): 4ebe417

End of training

README.md CHANGED
@@ -16,9 +16,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Accuracy: 0.
-- Best Accuracy: 0.
+- Loss: 0.1651
+- Accuracy: 0.9439
+- Best Accuracy: 0.9439
 
 ## Model description
 
@@ -37,23 +37,24 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 
+- learning_rate: 1.238e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.05
-- training_steps: 
+- training_steps: 915
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Best Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|
-| 0.
-| 0.
-| 0.
-| 0.
+| 0.4914        | 0.19  | 183  | 0.2747          | 0.8956   | 0.8956        |
+| 0.2639        | 0.37  | 366  | 0.3623          | 0.8925   | 0.8956        |
+| 0.2105        | 0.56  | 549  | 0.2257          | 0.9224   | 0.9224        |
+| 0.1669        | 0.74  | 732  | 0.1651          | 0.9439   | 0.9439        |
+| 0.1037        | 0.93  | 915  | 0.1676          | 0.9408   | 0.9439        |
 
 
 ### Framework versions
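The schedule implied by the new hyperparameters (cosine decay with a 5% linear warmup over 915 training steps, peaking at 1.238e-05) can be sketched in plain Python. This is an illustrative reimplementation of the standard cosine-with-warmup shape, not code from this repository; the function name `lr_at` is made up for the sketch.

```python
import math

# Values taken from the hyperparameter list in the diff above.
PEAK_LR = 1.238e-05
TOTAL_STEPS = 915
WARMUP_STEPS = int(0.05 * TOTAL_STEPS)  # lr_scheduler_warmup_ratio: 0.05 -> 45 steps

def lr_at(step: int) -> float:
    """Learning rate at a given optimizer step (illustrative)."""
    if step < WARMUP_STEPS:
        # Linear warmup from 0 up to the peak learning rate.
        return PEAK_LR * step / WARMUP_STEPS
    # Cosine decay from the peak back down to 0 over the remaining steps.
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(0))             # 0.0 at the start of warmup
print(lr_at(WARMUP_STEPS))  # peak learning rate once warmup ends
print(lr_at(TOTAL_STEPS))   # decays back to 0.0 at step 915
```

In the `transformers` ecosystem this shape corresponds to `lr_scheduler_type: cosine` with `warmup_ratio`; the eval checkpoints in the results table (steps 183, 366, 549, 732, 915) all fall on the decay portion of this curve.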