DavidAU's Collections
Dark Planet Series (see "source" coll. for FP)
Grand Horror 16.5B+ Horror and Fiction Generation
Dark MOEs - Mixture of Experts - Uncensored Creative Models
Dark Champion Collection MOE - Mixture Of Experts
200+ Roleplay, Creative Writing, Uncensored, NSFW models.
OpenAI-GPT 20B, 37B, 120B: Neo, reg, uncensored, ablit.
Dark / Evil / NSFW Reasoning Models (gguf/source)
100 Coder/Programming - MOE, Reasoning, Reg, Imatrix, Fused.
Coders and Programmers: 128 expert / A3B Coders
Coders and Programmers: 1x to 10x MOE - Mixture of Experts
Coders and Programmers: 11B and 12B Fused Models
Coders and Programmers: 4B Models with Brainstorm 20x (6B)
Coders and Programmers: 0.8B, 2.4B, 3.4B
Qwen3 - 30B-A3B (128 experts) and higher
Qwen3 - 16B-A3B - 64 experts + Brainstorm versions
Thinking / Reasoning Models - Reg and MOEs.
MOE - Reasoning - Gated IQ Multi-Tier Models
MOE/Mixture of Experts Models (see also "source" coll.)
Qwen 3 / 2.5 Reasoning/Thinking REG + MOEs.
Qwen 3 - Horror / Neo Imatrix / Max Qs / 32-256k ctx
Gemma The Writer Series - GGUF / Source
Gemma3 - Modified / Augmented Models
Rogue Creative Series - GGUF / Incl Source too.
Brainstorm Adapter Models - Augmented/Expanded Reasoning
David's Software, DOCs and How To for Models.
Higher Precision GGUFs / Imatrix Plus
X-Quants - State of Mind Adjusted Quants
Long Context - 16k,32k,64k,128k,200k,256k,512k,1000k
Instruct Models - Better instruction following.
10B / 10.7B / 11B models (except MOEs)
WS - Qwen3 4B - Fiction on Fire - Merge/pruning series.
WS - Dark Planet Wordstorm Project - Random Prune / Form.
WS - Wordstorm 10 Part Series incl Full Source
Source files for GGUF, EXL2, AWQ, GPTQ, HQQ, etc.
Reasoning Source files for GGUF, EXL2, AWQ, GPTQ
Reasoning Adapters / LORAs -> Any model to reasoning
Experiments in Merging Top Models
Older models - 1b,3b,4b - Upgraded Quants Q6/Q8
Older Models - High Quality / Hard to Find
Older Models: Solar Models (Q6) - Exceptional Performance
Older Models: Mini-MOEs - Mixture of Experts 2x, x4 and x8
50 Leaderboards, Benchs, GGUF Tools, and Utilities
2000+ Run LLMs here - Directly in your browser
Older models - 1b,3b,4b - Upgraded Quants Q6/Q8
updated 17 days ago
Quants of models not available in Q6 / Q8, generated from the original FP16 / FP32 files.
This collection has no items.
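For reference, Q6 / Q8 GGUF quants of this kind are typically produced with llama.cpp's converter and quantizer. The sketch below is a minimal, assumed workflow, not DavidAU's exact pipeline; the model directory and file names are placeholders.

# Minimal sketch: build Q6_K / Q8_0 GGUF quants from an original FP16/FP32 checkpoint
# using llama.cpp's convert_hf_to_gguf.py and llama-quantize. Paths are hypothetical.
import subprocess
from pathlib import Path

model_dir = Path("models/original-fp16-model")        # placeholder HF model directory
f16_gguf = Path("models/original-fp16-model-f16.gguf")

# 1) Convert the FP16/FP32 checkpoint to a full-precision GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", str(model_dir),
     "--outtype", "f16", "--outfile", str(f16_gguf)],
    check=True,
)

# 2) Quantize the full-precision GGUF down to Q6_K and Q8_0.
for qtype in ("Q6_K", "Q8_0"):
    out = f16_gguf.with_name(f16_gguf.stem.replace("-f16", f"-{qtype}") + ".gguf")
    subprocess.run(["./llama-quantize", str(f16_gguf), str(out), qtype], check=True)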