---
tags:
- gguf
- llama.cpp
- unsloth
---
# llamabotomy-test - GGUF
This model was finetuned and converted to GGUF format using Unsloth.
A super tiny version of the Llama 3.2 1B Instruct model, quantized at the lowest precision Unsloth offers. Training it on junk data and destroying the weights was supposed to fully lobotomize it, but it honestly works a little too well for something around 500 MB. Shoutout to Unsloth's quantization magic, I guess...
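For reference, the export step looks roughly like the sketch below. This is a minimal sketch, not the exact script used for this repo: the base model name, output directory, and sequence length are placeholders, and it assumes `q3_k_s` is among the `quantization_method` values Unsloth accepts.

```python
from unsloth import FastLanguageModel

# Load the (already fine-tuned) 1B model for export.
# The model name here is a placeholder, not the exact checkpoint
# behind this repo.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Convert to GGUF via llama.cpp, quantized to Q3_K_S
# (the precision shipped in this repo).
model.save_pretrained_gguf(
    "llamabotomy-test-gguf",
    tokenizer,
    quantization_method="q3_k_s",
)
```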
## Available Model files

- `llama-3.2-1b-instruct.Q3_K_S.gguf`
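One way to run the file locally is through llama-cpp-python (Python bindings for llama.cpp). The sketch below assumes the GGUF sits in the current directory and that the bindings are installed with `pip install llama-cpp-python`; adjust the path and context size as needed.

```python
from llama_cpp import Llama

# Load the Q3_K_S GGUF from the current directory
# (point model_path at wherever you downloaded it).
llm = Llama(
    model_path="llama-3.2-1b-instruct.Q3_K_S.gguf",
    n_ctx=2048,      # context window
    verbose=False,
)

# Llama 3.2 Instruct is chat-tuned, so use the chat completion API.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say something coherent, if you still can."}],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])
```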
## Ollama

An Ollama Modelfile is included for easy deployment; a sketch of what it contains follows below.
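A minimal Modelfile for this quant might look like the following. This is a sketch rather than the exact file shipped in the repo, and it assumes the GGUF is in the same directory as the Modelfile.

```
# Point Ollama at the local Q3_K_S GGUF
FROM ./llama-3.2-1b-instruct.Q3_K_S.gguf
```

Build and run it with `ollama create llamabotomy-test -f Modelfile` followed by `ollama run llamabotomy-test`. Depending on how the chat template metadata came through in the GGUF, you may also need a TEMPLATE block matching Llama 3.2's chat format.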