This model, finding1/DeepSeek-V3.1-Terminus-MLX-mixed_4_6, was converted to MLX format from deepseek-ai/DeepSeek-V3.1-Terminus using mlx-lm version 0.28.0 together with pull request #494. It was quantized with --q-bits 4 and the mixed_4_6 quant predicate; the console reported an effective 4.809 bits per weight.
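The conversion command, as reported:

```shell
mlx_lm.convert --quantize --q-bits 4 --mlx-path MLX-mixed_4_6 --quant-predicate mixed_4_6 --hf-path deepseek-ai/DeepSeek-V3.1-Terminus
```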

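A minimal usage sketch following the standard mlx-lm loading pattern; mlx-lm must be installed (e.g. pip install mlx-lm), and the prompt text is illustrative, not from the model card:

```python
from mlx_lm import load, generate

# Download the quantized weights from the Hub and load model plus tokenizer.
model, tokenizer = load("finding1/DeepSeek-V3.1-Terminus-MLX-mixed_4_6")

# Illustrative prompt; wrap it in the model's chat template if one is defined.
prompt = "Explain mixed 4/6-bit quantization in one paragraph."
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```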
Model size: 671B params (Safetensors)
Tensor types: BF16, U32, F32