config(core): Update default image model to Qwen/Qwen-Image
- [config] Change default `img_model_name` to `Qwen/Qwen-Image` (app.py:348)
- [config] Change default `img_provider` to `fal-ai` (app.py:354)
- [config] Modify image generation `presets` list (app.py:414-419)
- [ui] Update `img_model_name` placeholder text (app.py:350)
- [ui] Increase `gr.Chatbot` height for `chatbot_display` (app.py:207)
- [ui] Reorder "Supported Providers" section (app.py:491-498)
- [docs] List `Qwen/Qwen-Image` as default model in table (README.md:91)
- [docs] Reorder model entries in "The app requires:" table (README.md:91-93)
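Taken together, the config changes amount to the following defaults and preset order (a minimal sketch pulled from the diff; `IMG_DEFAULTS` and `PRESETS` are illustrative names, not identifiers from app.py):

```python
# New image-generation defaults introduced by this commit.
IMG_DEFAULTS = {
    "model": "Qwen/Qwen-Image",  # app.py:348
    "provider": "fal-ai",        # app.py:354
}

# Reordered preset list (app.py:414-419): (button label, model id, provider).
PRESETS = [
    ("Qwen (Fal.ai)", "Qwen/Qwen-Image", "fal-ai"),
    ("Qwen (Replicate)", "Qwen/Qwen-Image", "replicate"),
    ("FLUX.1 (Nebius)", "black-forest-labs/FLUX.1-dev", "nebius"),
    ("SDXL (HF)", "stabilityai/stable-diffusion-xl-base-1.0", "hf-inference"),
]

# Sanity check: the first preset now matches the new defaults.
assert (IMG_DEFAULTS["model"], IMG_DEFAULTS["provider"]) == PRESETS[0][1:]
```

The reorder puts the default model/provider pairing first in both the README table and the preset buttons.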
README.md
CHANGED

```diff
@@ -88,9 +88,9 @@ The app requires:
 
 | Model | Provider | Description |
 |-------|----------|-------------|
-| `…
+| `Qwen/Qwen-Image` | Fal.ai, Replicate | Advanced image generation (default) |
 | `black-forest-labs/FLUX.1-dev` | Nebius, Together | State-of-the-art image model |
-| `…
+| `stabilityai/stable-diffusion-xl-base-1.0` | HF Inference, NScale | High-quality SDXL model |
 
 ## 🎨 Usage Examples
 
```
app.py
CHANGED

```diff
@@ -203,7 +203,7 @@ with gr.Blocks(title="HF-Inferoxy AI Hub", theme=gr.themes.Soft()) as demo:
             chatbot_display = gr.Chatbot(
                 label="Chat",
                 type="messages",
-                height=…
+                height=1000,
                 show_copy_button=True
             )
 
@@ -345,13 +345,13 @@ with gr.Blocks(title="HF-Inferoxy AI Hub", theme=gr.themes.Soft()) as demo:
                 with gr.Group():
                     gr.Markdown("**🤖 Model & Provider**")
                     img_model_name = gr.Textbox(
-                        value="…
+                        value="Qwen/Qwen-Image",
                         label="Model Name",
-                        placeholder="e.g., stabilityai/stable-diffusion-xl-base-1.0"
+                        placeholder="e.g., Qwen/Qwen-Image or stabilityai/stable-diffusion-xl-base-1.0"
                     )
                     img_provider = gr.Dropdown(
                         choices=["hf-inference", "fal-ai", "nebius", "nscale", "replicate", "together"],
-                        value="…
+                        value="fal-ai",
                         label="Provider",
                         interactive=True
                     )
@@ -412,10 +412,10 @@ with gr.Blocks(title="HF-Inferoxy AI Hub", theme=gr.themes.Soft()) as demo:
                     gr.Markdown("**🎯 Popular Presets**")
                     preset_buttons = []
                     presets = [
-                        ("SDXL (HF)", "stabilityai/stable-diffusion-xl-base-1.0", "hf-inference"),
-                        ("FLUX.1 (Nebius)", "black-forest-labs/FLUX.1-dev", "nebius"),
                         ("Qwen (Fal.ai)", "Qwen/Qwen-Image", "fal-ai"),
-                        ("…
+                        ("Qwen (Replicate)", "Qwen/Qwen-Image", "replicate"),
+                        ("FLUX.1 (Nebius)", "black-forest-labs/FLUX.1-dev", "nebius"),
+                        ("SDXL (HF)", "stabilityai/stable-diffusion-xl-base-1.0", "hf-inference"),
                     ]
 
                     for name, model, provider in presets:
@@ -489,12 +489,12 @@ with gr.Blocks(title="HF-Inferoxy AI Hub", theme=gr.themes.Soft()) as demo:
     - Higher inference steps = better quality but slower generation
 
     **Supported Providers:**
+    - **fal-ai**: High-quality image generation (default for images)
     - **hf-inference**: Core API with comprehensive model support
     - **cerebras**: High-performance inference
     - **cohere**: Advanced language models with multilingual support
     - **groq**: Ultra-fast inference, optimized for speed
     - **together**: Collaborative AI hosting, wide model support
-    - **fal-ai**: High-quality image generation
     - **nebius**: Cloud-native services with enterprise features
     - **nscale**: Optimized inference performance
     - **replicate**: Collaborative AI hosting
```