---
tags:
- gguf
- llama.cpp
- unsloth
---
# llamabotomy-test - GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
A super tiny version of Llama's 1B-parameter model, quantized at the lowest precision Unsloth offers. Training this one on junk data and destroying the weights should fully lobotomize it, but it honestly works a little too well for something around 500 MB.
Shoutout to Unsloth's quantization magic, I guess...
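For reference, the export step looks roughly like the sketch below. This is not the exact script used; it assumes Unsloth's `FastLanguageModel` loader and `save_pretrained_gguf` helper, and the `q3_k_s` quantization string is inferred from the file name further down.

```python
# Minimal sketch of the Unsloth finetune-then-export flow (assumptions noted above).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # assumed base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# ... finetuning on (junk) data would happen here ...

# Export the merged model to GGUF at the lowest precision offered.
model.save_pretrained_gguf(
    "llamabotomy-test",
    tokenizer,
    quantization_method="q3_k_s",  # assumed to be an accepted method string
)
```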
## Available model files
- `llama-3.2-1b-instruct.Q3_K_S.gguf`
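To try the file directly with llama.cpp, something like the following should work (a sketch; the binary name and flags can vary between builds, and the prompt is illustrative):

```bash
# Run the quantized GGUF with llama.cpp's CLI
./llama-cli -m llama-3.2-1b-instruct.Q3_K_S.gguf -p "Hello, who are you?" -n 64
```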
## Ollama
An Ollama Modelfile is included for easy deployment.
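A minimal Modelfile for this GGUF might look like the sketch below (the bundled Modelfile may differ, and `llamabotomy-test` is just an example name):

```
FROM ./llama-3.2-1b-instruct.Q3_K_S.gguf
```

Create and run it with:

```bash
ollama create llamabotomy-test -f Modelfile
ollama run llamabotomy-test
```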