#5 · Upload IMG_3134.jpeg · opened about 15 hours ago by top40ent
#4 · "Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj" · 7 · opened 12 days ago by MrRyukami
#3 · XPU not working: "No backend can handle 'dequantize_per_tensor_fp8': eager: x: device xpu not in {'cuda', 'cpu'}" · 3 · opened 16 days ago by AI-Joe-git
#2 · Create README.md · opened 16 days ago by dayz1593572159
#1 · Fp8 text encoder · 🔥 👍 9 5 · opened 17 days ago by kakkkarotto