DavidAU posted an update 4 days ago
Tiny but mighty: LFM 1.2B - 11 distills / fine-tunes, exceeding all benchmarks at 300-700+ T/S on GPU and 60+ T/S on CPU.

Almost all of them exceed the LFM 1.2B benchmarks, which are already very impressive.
All benchmarks posted.

A specialized merge of several of these fine-tunes by @nightmedia FAR exceeds the benchmarks set by the already impressive LFM.

(LFM2.5-1.2B-MEGABRAIN-Thinking-Polaris-ClaudeHOPUS-Deepseek-GLM)

Included are GLM 4.7 Flash, DeepSeek, Claude, Kimi V2, and other distill fine-tunes.

Here is the collection (quants by mradermacher):

https://huggingface.co/collections/DavidAU/lfm-12b-sota-400-700-t-s-enhanced-fine-tunes-distills

Thanks much.

You said "here is the collection" followed by "quants by mradermacher", so I was expecting the quants to be at that link. But they aren't; I have to search for them under mradermacher's account.


For each model, the quants are listed on the model page under "Quantizations".
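
For anyone who would rather script this than click through each model page, here is a minimal sketch using the huggingface_hub Python client to search mradermacher's account for quantized repos. The search string and the "-GGUF" naming convention are assumptions for illustration, not confirmed repo names from this collection.

```python
# Minimal sketch: list quantized repos from mradermacher's account.
# Assumes the huggingface_hub package is installed and that the quant
# repos follow the common "<model-name>-GGUF" naming; adjust the search
# string to match the specific fine-tune you are after.
from huggingface_hub import HfApi

api = HfApi()

# Search mradermacher's uploads for quants of the LFM fine-tunes.
quant_repos = api.list_models(author="mradermacher", search="LFM2")

for repo in quant_repos:
    print(repo.id)  # e.g. repos typically ending in "-GGUF" or "-i1-GGUF"
```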