# myBit-Llama2-jp-127M-test-6
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 5.4087
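
The checkpoint can be loaded with the standard `transformers` Auto classes. The sketch below is a minimal example, not taken from the card; `trust_remote_code=True` is an assumption that only matters if the repository ships a custom BitLlama model class rather than a stock Llama config, and the Japanese prompt is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "HachiML/myBit-Llama2-jp-127M-test-6"

# trust_remote_code=True is an assumption: it is only needed if the repo
# defines a custom model class instead of a stock Llama configuration.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("こんにちは、", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```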
 
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 4.8e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- num_epochs: 1
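
A hypothetical reconstruction of these values as a `TrainingArguments` object is sketched below. The `output_dir`, the 100-step eval/logging cadence (inferred from the results table), and reading 96 as a per-device batch size are assumptions not stated in the card.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
# output_dir, the 100-step cadence, and per-device batching are assumptions.
training_args = TrainingArguments(
    output_dir="myBit-Llama2-jp-127M-test-6",
    learning_rate=4.8e-5,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="polynomial",
    warmup_steps=250,
    num_train_epochs=1,
    evaluation_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)
```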
 
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.8677 | 0.04 | 100 | 9.1385 | 
| 8.4868 | 0.07 | 200 | 7.7575 | 
| 7.2146 | 0.11 | 300 | 6.8688 | 
| 6.6972 | 0.14 | 400 | 6.5702 | 
| 6.4628 | 0.18 | 500 | 6.3746 | 
| 6.3058 | 0.22 | 600 | 6.2362 | 
| 6.1813 | 0.25 | 700 | 6.1241 | 
| 6.0708 | 0.29 | 800 | 6.0228 | 
| 5.963 | 0.33 | 900 | 5.9109 | 
| 5.8577 | 0.36 | 1000 | 5.7948 | 
| 5.7614 | 0.4 | 1100 | 5.7155 | 
| 5.6876 | 0.43 | 1200 | 5.6376 | 
| 5.6044 | 0.47 | 1300 | 5.5631 | 
| 5.5538 | 0.51 | 1400 | 5.5045 | 
| 5.5007 | 0.54 | 1500 | 5.4649 | 
| 5.4556 | 0.58 | 1600 | 5.4282 | 
| 5.4246 | 0.62 | 1700 | 5.3917 | 
| 5.3982 | 0.65 | 1800 | 5.3762 | 
| 5.3854 | 0.69 | 1900 | 5.3546 | 
| 5.365 | 0.72 | 2000 | 5.3447 | 
| 5.3579 | 0.76 | 2100 | 5.3473 | 
| 5.3552 | 0.8 | 2200 | 5.3463 | 
| 5.3682 | 0.83 | 2300 | 5.3630 | 
| 5.3743 | 0.87 | 2400 | 5.3718 | 
| 5.3957 | 0.91 | 2500 | 5.3887 | 
| 5.4079 | 0.94 | 2600 | 5.4010 | 
| 5.423 | 0.98 | 2700 | 5.4087 | 
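
Note that the validation loss reaches its minimum (about 5.34) around steps 2000–2200 and drifts upward afterwards, so the reported final loss of 5.4087 reflects the last checkpoint rather than the best one. For intuition, the final loss converts to perplexity as follows, assuming it is the mean per-token cross-entropy in nats (the `Trainer` default):

```python
import math

# Perplexity of the final checkpoint, assuming the reported loss is the
# mean per-token cross-entropy in nats.
final_eval_loss = 5.4087
print(f"{math.exp(final_eval_loss):.1f}")  # ~223.3
```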

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
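
When reproducing results, it may help to confirm that the installed versions match the list above; a minimal check:

```python
import datasets
import tokenizers
import torch
import transformers

# Expected versions, per the list above.
print(transformers.__version__)  # 4.38.2
print(torch.__version__)         # 2.1.0+cu121
print(datasets.__version__)      # 2.18.0
print(tokenizers.__version__)    # 0.15.2
```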
 