Active filters: quantllm

codewithdark/Llama-3.2-3B-4bit · 3B · 6 downloads
codewithdark/Llama-3.2-3B-GGUF-4bit · 3B · 5 downloads
codewithdark/Llama-3.2-3B-4bit-mlx · Text Generation · 3B · 60 downloads
QuantLLM/Llama-3.2-3B-4bit-mlx · Text Generation · 3B · 17 downloads
QuantLLM/Llama-3.2-3B-2bit-mlx · Text Generation · 3B · 10 downloads
QuantLLM/Llama-3.2-3B-8bit-mlx · Text Generation · 3B · 35 downloads
QuantLLM/Llama-3.2-3B-5bit-mlx · Text Generation · 3B · 16 downloads
QuantLLM/Llama-3.2-3B-5bit-gguf · 3B · 2 downloads
QuantLLM/Llama-3.2-3B-2bit-gguf · 3B · 7 downloads
QuantLLM/functiongemma-270m-it-8bit-gguf · 0.3B · 2 downloads · 1 like
QuantLLM/functiongemma-270m-it-4bit-gguf · 0.3B · 6 downloads
QuantLLM/functiongemma-270m-it-4bit-mlx · Text Generation · 0.3B · 20 downloads
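The repo names above encode both the runtime format (GGUF vs. MLX) and the quantization bit-width. A minimal sketch of grouping the catalog programmatically by those two attributes; the repo names are copied from the listing, while the `classify` helper and its naming heuristic are assumptions for illustration, not an official API:

```python
import re
from collections import defaultdict

# Repo names copied verbatim from the listing above.
repos = [
    "codewithdark/Llama-3.2-3B-4bit",
    "codewithdark/Llama-3.2-3B-GGUF-4bit",
    "codewithdark/Llama-3.2-3B-4bit-mlx",
    "QuantLLM/Llama-3.2-3B-4bit-mlx",
    "QuantLLM/Llama-3.2-3B-2bit-mlx",
    "QuantLLM/Llama-3.2-3B-8bit-mlx",
    "QuantLLM/Llama-3.2-3B-5bit-mlx",
    "QuantLLM/Llama-3.2-3B-5bit-gguf",
    "QuantLLM/Llama-3.2-3B-2bit-gguf",
    "QuantLLM/functiongemma-270m-it-8bit-gguf",
    "QuantLLM/functiongemma-270m-it-4bit-gguf",
    "QuantLLM/functiongemma-270m-it-4bit-mlx",
]

def classify(repo: str) -> tuple[str, int]:
    """Infer (format, bits) from the repo name alone (hypothetical heuristic)."""
    name = repo.lower()
    fmt = "mlx" if "mlx" in name else "gguf" if "gguf" in name else "other"
    bits = int(re.search(r"(\d+)bit", name).group(1))
    return fmt, bits

# Group repos by inferred format, sorted by bit-width within each group.
by_format = defaultdict(list)
for repo in repos:
    fmt, bits = classify(repo)
    by_format[fmt].append((bits, repo))

for fmt in sorted(by_format):
    for bits, repo in sorted(by_format[fmt]):
        print(f"{fmt:5s} {bits}-bit  {repo}")
```

Repos whose names carry no explicit format marker (e.g. `codewithdark/Llama-3.2-3B-4bit`) fall into an `other` bucket, since the format cannot be read off the name.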