CrucibleLab-TG/M3.2-24B-Loki-V1.2-GGUF
Tags: GGUF · llama.cpp · bf16 · conversational
License: MIT
Branch: main · 225 GB · 1 contributor · History: 27 commits
Latest commit: Darkhn · "Upload README.md with huggingface_hub" · 420cdd9 (verified) · 3 months ago
| File | Size | Storage | Last commit message | Updated |
|---|---|---|---|---|
| .gitattributes | 2.38 kB | | Add BF16 GGUF quant: M3.2-24B-Loki-V1.2-BF16.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-BF16.gguf | 47.2 GB | xet | Add BF16 GGUF quant: M3.2-24B-Loki-V1.2-BF16.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q2_K.gguf | 8.89 GB | xet | Add Q2_K GGUF quant: M3.2-24B-Loki-V1.2-Q2_K.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q3_K_L.gguf | 12.4 GB | xet | Add Q3_K_L GGUF quant: M3.2-24B-Loki-V1.2-Q3_K_L.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q3_K_M.gguf | 11.5 GB | xet | Add Q3_K_M GGUF quant: M3.2-24B-Loki-V1.2-Q3_K_M.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q3_K_S.gguf | 10.4 GB | xet | Add Q3_K_S GGUF quant: M3.2-24B-Loki-V1.2-Q3_K_S.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q4_0.gguf | 13.4 GB | xet | Add Q4_0 GGUF quant: M3.2-24B-Loki-V1.2-Q4_0.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q4_K_M.gguf | 14.3 GB | xet | Add Q4_K_M GGUF quant: M3.2-24B-Loki-V1.2-Q4_K_M.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q4_K_S.gguf | 13.5 GB | xet | Add Q4_K_S GGUF quant: M3.2-24B-Loki-V1.2-Q4_K_S.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q5_0.gguf | 16.3 GB | xet | Add Q5_0 GGUF quant: M3.2-24B-Loki-V1.2-Q5_0.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q5_K_M.gguf | 16.8 GB | xet | Add Q5_K_M GGUF quant: M3.2-24B-Loki-V1.2-Q5_K_M.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q5_K_S.gguf | 16.3 GB | xet | Add Q5_K_S GGUF quant: M3.2-24B-Loki-V1.2-Q5_K_S.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q6_K.gguf | 19.3 GB | xet | Add Q6_K GGUF quant: M3.2-24B-Loki-V1.2-Q6_K.gguf | 3 months ago |
| M3.2-24B-Loki-V1.2-Q8_0.gguf | 25.1 GB | xet | Add Q8_0 GGUF quant: M3.2-24B-Loki-V1.2-Q8_0.gguf | 3 months ago |
| README.md | 542 Bytes | | Upload README.md with huggingface_hub | 3 months ago |
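
A single quant from the table above can be fetched without cloning the full 225 GB repository. Below is a minimal sketch using the huggingface_hub Python client, assuming it is installed (`pip install huggingface_hub`); the Q4_K_M file is chosen arbitrarily as an example, and any filename from the table works the same way.

```python
from huggingface_hub import hf_hub_download

# Download one quant from this repo; `filename` can be any entry from the table above.
# Q4_K_M (~14.3 GB) is used here purely as an example choice.
model_path = hf_hub_download(
    repo_id="CrucibleLab-TG/M3.2-24B-Loki-V1.2-GGUF",
    filename="M3.2-24B-Loki-V1.2-Q4_K_M.gguf",
)

# The returned local path can then be handed to a llama.cpp-compatible runtime,
# e.g. `llama-cli -m <model_path>` (the exact CLI name depends on the llama.cpp build).
print(model_path)
```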