Update README.md

--- a/README.md
+++ b/README.md
@@ -7,6 +7,7 @@ base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
 model-index:
 - name: airoboros-lora-out
   results: []
+pipeline_tag: text-generation
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 # airoboros-lora-out
 
-This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) on the
+This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) on the `jondurbin/airoboros-3.1` dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.7230
 
@@ -29,7 +30,7 @@ More information needed
 
 ## Training and evaluation data
 
-
+https://wandb.ai/wing-lian/airoboros-tinyllama
 
 ## Training procedure
 
@@ -81,4 +82,4 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
-- PEFT 0.6.0
+- PEFT 0.6.0
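For context on what this change documents: the card now tags the repo as `text-generation` and lists PEFT 0.6.0, so the published weights are a LoRA adapter over the TinyLlama base model. Below is a minimal inference sketch under those assumptions; the `adapter_id` is a placeholder, substitute this repository's actual id.

```python
# Minimal sketch: attach the LoRA adapter to the TinyLlama base model and
# generate text (the card's `pipeline_tag` is text-generation).
# `adapter_id` is a hypothetical placeholder for this repository's id.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T"
adapter_id = "wing-lian/airoboros-lora-out"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)  # wraps base with LoRA weights

inputs = tokenizer("Write a short story about an axolotl.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If a standalone checkpoint is preferred, `model.merge_and_unload()` folds the adapter weights into the base model before saving or serving.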