gguf-node test pack
locate gguf in the Add Node > extension dropdown menu (between 3d and api; the second-to-last option)
setup (in general)
- drag gguf file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- drag clip or encoder(s) to the text_encoders folder (./ComfyUI/models/text_encoders)
- drag controlnet adapter(s), if any, to the controlnet folder (./ComfyUI/models/controlnet)
- drag lora adapter(s), if any, to the loras folder (./ComfyUI/models/loras)
- drag vae decoder(s) to the vae folder (./ComfyUI/models/vae)
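The folder layout above can be sanity-checked with a few lines of Python. This is a minimal sketch, not part of the node: the subfolder names follow the list above, while the ComfyUI root path is whatever your install uses.

```python
# Sketch: verify the ComfyUI model subfolders used by gguf-node exist.
from pathlib import Path

# subfolder names from the setup list above
MODEL_SUBFOLDERS = [
    "diffusion_models",  # gguf model file(s)
    "text_encoders",     # clip / text encoder(s)
    "controlnet",        # controlnet adapter(s), if any
    "loras",             # lora adapter(s), if any
    "vae",               # vae decoder(s)
]

def check_model_folders(comfy_root):
    """Map each expected subfolder to whether it exists under models/."""
    models = Path(comfy_root) / "models"
    return {name: (models / name).is_dir() for name in MODEL_SUBFOLDERS}
```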
run it straight (no installation needed; recommended)
- get the comfy pack with the new gguf-node (pack)
- run the .bat file in the main directory
 
or, for existing users (alternative method)
- you could git clone the node to your ./ComfyUI/custom_nodes (more details here)
- either navigate to ./ComfyUI/custom_nodes first, or drag and drop the node clone (gguf repo) there
workflow
- drag any workflow json file to the activated browser; or
- drag any generated output file (e.g., picture, video, etc., which contains the workflow metadata) to the activated browser
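Generated outputs carry the workflow as embedded metadata; for PNG outputs, ComfyUI stores it in PNG text chunks (commonly under the `workflow` and `prompt` keys), so it can be read back with Pillow. A sketch under that assumption:

```python
# Sketch: pull the embedded ComfyUI workflow out of a generated PNG.
import json
from PIL import Image

def read_workflow(png_path):
    """Return the workflow dict embedded in a ComfyUI output PNG, or None."""
    with Image.open(png_path) as im:
        # ComfyUI writes the workflow JSON into PNG text chunks
        raw = im.info.get("workflow") or im.info.get("prompt")
    return json.loads(raw) if raw else None
```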
 
simulator
- design your own prompt; or
- generate random prompt/descriptor(s) with the simulator (might not be applicable to all models)
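A toy sketch of the idea behind the simulator: assemble a random prompt from descriptor pools. The word pools here are invented for illustration and are not the node's actual vocabulary.

```python
import random

# invented example pools, not the node's real descriptor lists
SUBJECTS = ["anime girl with fennec ears", "victorian maid", "fluffy fox"]
SETTINGS = ["in a candlelit kitchen", "in a foggy forest", "by a bright window"]

def random_prompt(seed=None):
    """Build one random subject + setting descriptor string."""
    rng = random.Random(seed)  # seed makes the draw reproducible
    return f"{rng.choice(SUBJECTS)} {rng.choice(SETTINGS)}"
```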
 

- Prompt
  - anime style anime girl with massive fennec ears and one big fluffy tail, she has blonde long hair blue eyes wearing a maid outfit with a long black gold leaf pattern dress, walking slowly to the front with sweetie smile, holding a fancy black forest cake with candles on top in the kitchen of an old dark Victorian mansion lit by candlelight with a bright window to the foggy forest
 

booster
- drag safetensors file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the safetensors model; click Queue (run); track the progress from the console
- when it is done, the boosted fp32 safetensors will be saved to the output folder (./ComfyUI/output)
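Conceptually, the booster step amounts to upcasting every tensor to fp32 before re-saving. A minimal sketch assuming torch is installed; the real node also handles file I/O and metadata:

```python
import torch

def upcast_state_dict(state):
    """Upcast every tensor in a state dict to fp32 (what 'boosting' does)."""
    return {name: t.to(torch.float32) for name, t in state.items()}
```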
 
cutter (beta)
- drag safetensors file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the safetensors model; click Queue (run); track the progress from the console
- when it is done, the half-size fp8_e4m3fn safetensors will be saved to the output folder (./ComfyUI/output)
 
convertor (alpha)
- drag safetensors file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the safetensors model; click Queue (run); track the progress from the console
- the converted gguf file will be saved to the output folder (./ComfyUI/output)
 
convertor (reverse)
- drag gguf file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the gguf model; click Queue (run); track the progress from the console
- the reverse-converted safetensors file will be saved to the output folder (./ComfyUI/output)
 
convertor (zero)
- drag safetensors file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the safetensors model; click Queue (run); track the progress from the console
- the converted gguf file will be saved to the output folder (./ComfyUI/output)
- zero means no restrictions; unlike alpha, any form of safetensors can be converted, and the pig architecture will be applied to the output gguf
 
latest feature: the gguf vae🐷 loader is now working in gguf-node
- a gguf vae reduces your machine's memory consumption
- convert your safetensors vae to a gguf vae using convertor (zero)
- then use it with the new gguf vae loader
- like the gguf clip loaders, the gguf vae loader is compatible with both safetensors and gguf file(s)
 
disclaimer
- some models (original files), as well as parts of the code, were obtained from or provided by others, and we may not be able to identify the creator/contributor(s) behind them unless the source specifies it; we would rather leave the credit blank than write anonymous/unnamed/unknown
- we hope to make more effort to trace the sources; if it is your work, do let us know and we will credit it properly; thanks for everything
 
reference
- sd3.5, sdxl from stabilityai
- flux from black-forest-labs
- pixart from pixart-alpha
- lumina from alpha-vllm
- aura from fal
- mochi from genmo
- hyvid from tencent
- wan from wan-ai
- ltxv from lightricks
- cosmos from nvidia
- pig architecture from connector
- comfyui from comfyanonymous
- comfyui-gguf from city96
- llama.cpp from ggerganov
- llama-cpp-python from abetlen
- gguf-connector (pypi|repo)
- gguf-node (pypi|repo|pack)
 