llamabotomy-test - GGUF

This model was finetuned and converted to GGUF format using Unsloth.

A super tiny version of Meta's 1B-parameter Llama 3.2 Instruct model, quantized to Q3_K_S, one of the lowest precisions Unsloth offers. Training it on junk data and destroying the weights should fully lobotomize it, but it honestly works a little too well for something around 500 MB. Shoutout to Unsloth's quantization magic, I guess...

Available model files:

  • llama-3.2-1b-instruct.Q3_K_S.gguf

Ollama

An Ollama Modelfile is included for easy deployment.
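For reference, a minimal Modelfile for a local GGUF like this one typically just points at the file (the Modelfile actually shipped with this repo may set additional parameters or a chat template; the exact contents here are an assumption):

```
# Minimal sketch of an Ollama Modelfile for this GGUF
# (assumes the .gguf file sits next to the Modelfile)
FROM ./llama-3.2-1b-instruct.Q3_K_S.gguf
```

With the Modelfile and the GGUF in the same directory, `ollama create llamabotomy-test -f Modelfile` registers the model locally (the name `llamabotomy-test` is just an example), and `ollama run llamabotomy-test` starts an interactive session with it.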

Model details:

  • Format: GGUF
  • Size: 1B params
  • Architecture: llama