ritam5013 committed
Commit 47ef9a5 · verified · 1 Parent(s): 4fd4548

Publish adapter: higgsfield

Files changed (4)
  1. README.md +41 -0
  2. manifest.md +14 -0
  3. run.json +6 -0
  4. token_mod.pt +3 -0
README.md ADDED
@@ -0,0 +1,41 @@
+ ---
+ license: mit
+ base_model: stabilityai/stable-diffusion-3.5-medium
+ library_name: diffusers
+ tags:
+ - sd3.5
+ - adapter
+ - higgsfield
+ inference: false
+ ---
+
+ # phenomenalai/sd3-token-mod-1024
+
+ Adapter for `stabilityai/stable-diffusion-3.5-medium` trained with Higgsfield.
+
+ ## Usage
+
+ ```python
+ import torch
+ from diffusers import StableDiffusion3Pipeline
+ from higgsfield.adapters.token_mod import GlobalTokenModulator
+ from huggingface_hub import hf_hub_download
+
+ base_model = "stabilityai/stable-diffusion-3.5-medium"
+ repo_id = "phenomenalai/sd3-token-mod-1024"
+
+ pipe = StableDiffusion3Pipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16).to("cuda")
+ # Download the adapter weights from the Hub
+ adapter_path = hf_hub_download(repo_id, filename="token_mod.pt")
+ # Instantiate the modulator and load the trained weights
+ mod = GlobalTokenModulator(num_tokens=333, embed_dim=4096, out_channels=pipe.transformer.config.in_channels)
+ mod.load_state_dict(torch.load(adapter_path, map_location="cpu"))
+ mod.to("cuda").eval()
+ # Apply the bias produced by `mod` inside your own inference loop.
+ ```
+
+ ## Files
+ - `token_mod.pt`: adapter weights
+ - `run.json`: training metadata
+ - `manifest.md`: human-readable run summary
+
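The README's usage snippet stops at the point where the bias is applied. The sketch below shows one possible way to finish that step, assuming the modulator maps SD3 prompt embeddings of shape `(batch, 333, 4096)` (77 CLIP tokens + 256 T5 tokens, padded to 4096 dims) to a per-channel offset added to the initial latents. The adapter's forward signature and the exact point at which the training code injects the bias are not documented in this commit, so treat those parts as assumptions rather than the published recipe.

```python
import torch
from diffusers import StableDiffusion3Pipeline
from higgsfield.adapters.token_mod import GlobalTokenModulator
from huggingface_hub import hf_hub_download

repo_id = "phenomenalai/sd3-token-mod-1024"
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16
).to("cuda")

mod = GlobalTokenModulator(
    num_tokens=333, embed_dim=4096, out_channels=pipe.transformer.config.in_channels
)
mod.load_state_dict(
    torch.load(hf_hub_download(repo_id, filename="token_mod.pt"), map_location="cpu")
)
mod.to("cuda").eval()

prompt = "a watercolor fox in a snowy forest"

with torch.no_grad():
    # SD3 prompt embeddings: CLIP (77 tokens) + T5 (256 tokens) = 333 tokens x 4096 dims.
    prompt_embeds, _, _, _ = pipe.encode_prompt(
        prompt=prompt, prompt_2=prompt, prompt_3=prompt, do_classifier_free_guidance=False
    )
    # Assumption: the modulator returns a per-latent-channel bias for this prompt.
    bias = mod(prompt_embeds.float()).view(1, -1, 1, 1)

# Assumption: apply the bias as an additive offset to the initial noise latents
# (the training code may instead inject it at every denoising step).
latents = torch.randn(
    (1, pipe.transformer.config.in_channels, 128, 128),  # 1024x1024 output
    device="cuda", dtype=torch.bfloat16,
)
latents = latents + bias.to(dtype=latents.dtype)

image = pipe(prompt=prompt, latents=latents, num_inference_steps=28, guidance_scale=4.5).images[0]
image.save("token_mod_sample.png")
```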
manifest.md ADDED
@@ -0,0 +1,14 @@
+ # Higgsfield Run Manifest
+
+ - Model: stabilityai/stable-diffusion-3.5-medium
+ - Epochs: 2
+ - Batch size: 16
+ - Precision: bf16
+
+ ## Artifacts
+ - Final adapter: runs/sd35_token_mod/final/token_mod.pt
+ - Run JSON: runs/sd35_token_mod/final/run.json
+
+ ## Notes
+ - Trained adapter: GlobalTokenModulator (token-to-latent bias)
+ - Base components (VAE/Transformer/Encoders) were frozen.
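For context, the frozen-base setup described in the notes is typically a one-liner per module in diffusers. This is an illustrative sketch of that setup, not the actual Higgsfield training code:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16
)

# Keep the VAE, the MMDiT transformer, and all three text encoders frozen;
# only the adapter's parameters would be handed to the optimizer.
for module in (pipe.vae, pipe.transformer,
               pipe.text_encoder, pipe.text_encoder_2, pipe.text_encoder_3):
    module.requires_grad_(False)
    module.eval()
```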
run.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "model_id": "stabilityai/stable-diffusion-3.5-medium",
+ "epochs": 2,
+ "batch_size": 16,
+ "precision": "bf16"
+ }
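The run metadata can be read back programmatically when auditing or reproducing the run; a small sketch using the Hub client, with the repo id taken from the README above:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch run.json from the adapter repo and load the training configuration.
run_path = hf_hub_download("phenomenalai/sd3-token-mod-1024", filename="run.json")
with open(run_path) as f:
    run = json.load(f)

print(f'{run["model_id"]}: {run["epochs"]} epochs, batch size {run["batch_size"]}, {run["precision"]}')
```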
token_mod.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4107a373703cb581741aa06724fc6ffa3bd3194e1618ffa10e2b771233bdb802
+ size 5753711