Update README.md
README.md CHANGED
@@ -20,6 +20,14 @@ Entire dataset was trained on 4 x A100 80GB. For 3 epoch, training took 85 hours
 
 This is a full fine tuned model.
 
+Links for quantized models are given below.
+
+**Exllama**
+
+Exllama v2: [Link](https://huggingface.co/bartowski/Code-290k-6.7B-Instruct-exl2)
+
+Extremely thankful to [Bartowski](https://huggingface.co/bartowski) for making a quantized version of the model.
+
 
 **Example Prompt**:
 