M3.2-24B-Loki-V1.3-GGUF
This repository contains GGUF model files for M3.2-24B-Loki-V1.3, quantized using llama.cpp.
- Base model: M3.2-24B-Loki-V1.3
- Quantization methods processed in this job: Q8_0, Q6_K, Q5_K_M, Q5_0, Q5_K_S, Q4_K_M, Q4_K_S, Q4_0, Q3_K_L, Q3_K_M, Q3_K_S, Q2_K, BF16
- Importance matrix used: No
This specific upload is for the BF16 quantization.
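For local inference, any of the files above can be pulled straight from the Hub and loaded with llama.cpp's Python bindings. The snippet below is a minimal sketch, not part of this repository: the GGUF filename passed to `hf_hub_download` is an assumption, so check the repository's file list for the exact name of the quant you want.

```python
# Minimal sketch: download one quantization from the Hub and run it with
# llama-cpp-python (pip install huggingface_hub llama-cpp-python).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename -- verify the exact name in the repo's file list.
model_path = hf_hub_download(
    repo_id="CrucibleLab-TG/M3.2-24B-Loki-V1.3-GGUF",
    filename="M3.2-24B-Loki-V1.3-Q4_K_M.gguf",
)

# Load the model and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
output = llm("Write a haiku about quantization.", max_tokens=64)
print(output["choices"][0]["text"])
```

Q4_K_M is used here only as an example, since it is a common balance of file size and quality; the same call works for any of the quants listed above.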
Model tree for CrucibleLab-TG/M3.2-24B-Loki-V1.3-GGUF
- Base model: mistralai/Mistral-Small-3.1-24B-Base-2503
- Finetuned: CrucibleLab/M3.2-24B-Loki-V1.3