nm-testing/tinyllama-fp8-dynamic-compressed • 1B • Updated • 407
nm-testing/SmolLM-1.7B-Instruct-quantized.w4a16 • Text Generation • 0.4B • Updated • 11
nm-testing/SmolLM-360M-Instruct-quantized.w4a16 • 0.1B • Updated • 6
nm-testing/SmolLM-135M-Instruct-quantized.w4a16 • Text Generation • 71.6M • Updated • 10
nm-testing/Mixtral-8x7B-Instruct-v0.1-W4A16-channel-quantized • 6B • Updated • 649
nm-testing/Meta-Llama-3-8B-Instruct-fp8-compressed • 8B • Updated • 8
nm-testing/Phi-3-mini-128k-instruct-FP8 • 4B • Updated • 855
nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8-quantized • 47B • Updated • 9
nm-testing/Mixtral-8x7B-Instruct-v0.1-W8A16-quantized • 12B • Updated • 642
nm-testing/Mixtral-8x7B-Instruct-v0.1-W4A16-quantized • 6B • Updated • 637
nm-testing/tinyllama-oneshot-w8a8-dynamic-token-v2-asym • Text Generation • 1B • Updated • 22
nm-testing/Qwen2-1.5B-Instruct-FP8W8 • Text Generation • 2B • Updated • 10
nm-testing/Meta-Llama-3-8B-Instruct-W4A16-ACTORDER-compressed-tensors-test • Text Generation • 2B • Updated • 8
nm-testing/Meta-llama3-8b-Instruct-quant-FP8 • Text Generation • 8B • Updated • 14
nm-testing/Meta-llama3-8b-Instruct-SmoothQuant-Fp8 • Text Generation • 8B • Updated • 12
nm-testing/Meta-Llama-3-8B-Instruct-nonuniform-test • Text Generation • 8B • Updated • 10.5k
nm-testing/Meta-Llama-3-8B-Instruct-Non-Uniform-compressed-tensors • Text Generation • 8B • Updated • 10
nm-testing/Meta-Llama-3-8B-Instruct-W8A8-FP8-Channelwise-compressed-tensors • Text Generation • 8B • Updated • 18 • 2
nm-testing/Meta-Llama-3-8B-Instruct-FP8-K-V • Text Generation • 8B • Updated • 26
nm-testing/Qwen2-0.5B-Instruct • Text Generation • 0.6B • Updated • 12
nm-testing/Meta-Llama-3-8B-Instruct-W4A16-compressed-tensors-test • Text Generation • 2B • Updated • 12
nm-testing/TinyLlama-1.1B-compressed-tensors-kv-cache-scheme • Text Generation • 0.4B • Updated • 2.46k
nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test-bos • Text Generation • 8B • Updated • 9
nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test • Text Generation • 8B • Updated • 3.29k
nm-testing/Meta-Llama-3-8B-Instruct-W4-Group128-A16-Test • Text Generation • 2B • Updated • 8
nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Per-Token-Test • Text Generation • 8B • Updated • 80
nm-testing/tinyllama-oneshot-w8a16-per-channel • Text Generation • 0.4B • Updated • 647
nm-testing/Meta-Llama-3-8B-Instruct-W8A8-Dyn-Per-Token-2048-Samples • Text Generation • 8B • Updated • 11
nm-testing/Meta-Llama-3-8B-Instruct-W8A8-Dyn-Per-Token • Text Generation • 8B • Updated • 7
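The repositories above carry quantized checkpoints (FP8, W4A16, W8A16, W8A8, compressed-tensors variants). As a minimal sketch, assuming vLLM is installed and supports the chosen checkpoint's quantization format, one of them could be loaded and queried as follows; the model ID, prompt, and sampling settings are illustrative only:

```python
# Sketch: load one of the listed quantized checkpoints with vLLM and run a prompt.
# Assumptions: vLLM is installed, the checkpoint's compressed-tensors/FP8 format is
# supported by the installed version, and the hardware can hold the model.
from vllm import LLM, SamplingParams

llm = LLM(model="nm-testing/tinyllama-fp8-dynamic-compressed")

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Explain FP8 dynamic quantization in one sentence."], params)

for out in outputs:
    print(out.outputs[0].text)
```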