# results_lora
This model is a LoRA adapter fine-tuned from [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.5548
## Model description
More information needed
## Intended uses & limitations
More information needed
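Although the intended task is undocumented, the adapter can be attached to the base model for inference. A minimal sketch, assuming the adapter is published as `ngdangkhanh/results_lora` and using a purely illustrative prompt:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

# Load the frozen base model and attach the LoRA adapter weights.
base = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")
model = PeftModel.from_pretrained(base, "ngdangkhanh/results_lora")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")

# Illustrative prompt only; the actual training task is not documented above.
inputs = tokenizer("def greet(name):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```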
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
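
A sketch reconstructing this configuration with `Seq2SeqTrainingArguments`; `output_dir` and the per-epoch evaluation/save strategies are assumptions, and the LoRA-specific settings (rank, alpha, target modules) are not documented, so they are omitted:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the listed hyperparameters; output_dir and the
# per-epoch evaluation/save strategies are assumptions.
args = Seq2SeqTrainingArguments(
    output_dir="results_lora",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    fp16=True,             # mixed_precision_training: Native AMP
    eval_strategy="epoch", # assumed: the table below reports per-epoch validation loss
    save_strategy="epoch", # assumed
)
```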
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 2.2238 | 1.0 | 941 | 0.4982 |
| 0.5068 | 2.0 | 1882 | 0.4804 |
| 0.4874 | 3.0 | 2823 | 0.4810 |
| 0.4821 | 4.0 | 3764 | 0.4825 |
| 0.4811 | 5.0 | 4705 | 0.4766 |
| 0.4801 | 6.0 | 5646 | 0.4801 |
| 0.4743 | 7.0 | 6587 | 0.4825 |
| 0.4732 | 8.0 | 7528 | 0.4769 |
| 0.4717 | 9.0 | 8469 | 0.4731 |
| 0.4675 | 10.0 | 9410 | 0.4779 |
| 0.4667 | 11.0 | 10351 | 0.4795 |
| 0.4637 | 12.0 | 11292 | 0.4822 |
| 0.4575 | 13.0 | 12233 | 0.4817 |
| 0.4505 | 14.0 | 13174 | 0.4884 |
| 0.4452 | 15.0 | 14115 | 0.4935 |
| 0.4413 | 16.0 | 15056 | 0.4993 |
| 0.4386 | 17.0 | 15997 | 0.4943 |
| 0.4398 | 18.0 | 16938 | 0.5011 |
| 0.4263 | 19.0 | 17879 | 0.5114 |
| 0.4293 | 20.0 | 18820 | 0.4993 |
| 0.4255 | 21.0 | 19761 | 0.5084 |
| 0.4245 | 22.0 | 20702 | 0.5193 |
| 0.4153 | 23.0 | 21643 | 0.5032 |
| 0.4164 | 24.0 | 22584 | 0.5182 |
| 0.4012 | 25.0 | 23525 | 0.5239 |
| 0.3999 | 26.0 | 24466 | 0.5245 |
| 0.3969 | 27.0 | 25407 | 0.5293 |
| 0.3988 | 28.0 | 26348 | 0.5221 |
| 0.4033 | 29.0 | 27289 | 0.5394 |
| 0.3918 | 30.0 | 28230 | 0.5299 |
| 0.3841 | 31.0 | 29171 | 0.5396 |
| 0.3805 | 32.0 | 30112 | 0.5426 |
| 0.3733 | 33.0 | 31053 | 0.5435 |
| 0.3840 | 34.0 | 31994 | 0.5532 |
| 0.3762 | 35.0 | 32935 | 0.5476 |
| 0.3710 | 36.0 | 33876 | 0.5485 |
| 0.3629 | 37.0 | 34817 | 0.5529 |
| 0.3675 | 38.0 | 35758 | 0.5512 |
| 0.3671 | 39.0 | 36699 | 0.5543 |
| 0.3596 | 40.0 | 37640 | 0.5548 |
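
Validation loss bottoms out at 0.4731 at epoch 9 and then drifts upward while training loss keeps falling, suggesting the later epochs overfit. A hedged sketch of guarding against this with the stock `EarlyStoppingCallback`, building on the sketches above; the dataset variables are hypothetical since the data is undocumented:

```python
from transformers import EarlyStoppingCallback, Seq2SeqTrainer

# Assumed extras so the best checkpoint (epoch 9 here) is restored
# rather than the final, overfit one.
args.load_best_model_at_end = True
args.metric_for_best_model = "eval_loss"
args.greater_is_better = False

trainer = Seq2SeqTrainer(
    model=model,                  # PEFT-wrapped model from the sketch above
    args=args,                    # training arguments from the sketch above
    train_dataset=train_dataset,  # hypothetical: the dataset is undocumented
    eval_dataset=eval_dataset,    # hypothetical
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```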
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- PyTorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0