All HF Hub posts

MonsterMMORPG 
posted an update 2 days ago
How to Use SwarmUI Presets & Workflows in ComfyUI + Custom Model Paths Setup for ComfyUI & SwarmUI

Full tutorial link: https://www.youtube.com/watch?v=EqFilBM3i7s

Info

Generating a workflow inside SwarmUI and using it in ComfyUI is literally one click. In this tutorial I will show you the easiest way to use our 40+ amazing generative AI presets made for SwarmUI inside ComfyUI. You will be able to get the very best outcomes from all AI models such as SDXL, FLUX, Z Image Turbo, Wan 2.1, Wan 2.2, FLUX 2, Qwen Image, Qwen Image Edit, FLUX Kontext, image outpainting, image inpainting, and many more. Moreover, I will show how to use custom model paths in ComfyUI and SwarmUI to unify your models in the same folder, avoid model duplication, and save a massive amount of disk space.
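
As a quick reference for the shared-folder part, here is a minimal sketch of the ComfyUI side: it writes an extra_model_paths.yaml that points at one shared model root. The folder layout, section name, and keys below are assumptions based on ComfyUI's bundled extra_model_paths.yaml.example, so check that file (and the tutorial) for what your install actually supports.

```python
# Sketch: point ComfyUI at a shared model folder via extra_model_paths.yaml
# so SwarmUI and ComfyUI can read the same files instead of duplicating them.
from pathlib import Path
import yaml  # pip install pyyaml

SHARED_ROOT = Path("/data/ai-models")          # one folder holding all models (assumption)
COMFYUI_DIR = Path("~/ComfyUI").expanduser()   # your ComfyUI install (assumption)

config = {
    "shared_models": {                         # arbitrary section name
        "base_path": str(SHARED_ROOT),
        "checkpoints": "checkpoints/",
        "diffusion_models": "diffusion_models/",
        "loras": "loras/",
        "vae": "vae/",
        "clip": "clip/",
        "upscale_models": "upscale_models/",
    }
}

out = COMFYUI_DIR / "extra_model_paths.yaml"
out.write_text(yaml.safe_dump(config, sort_keys=False))
print(f"Wrote {out} -- restart ComfyUI so it picks up the shared paths.")
```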

prithivMLmods 
posted an update 3 days ago
Update: the TRELLIS.2 (text-to-3D, image-to-3D) Gradio demo with embedded Rerun and improved visualization in the 3D model previewer is now available on Hugging Face. Generate assets and view them in the 3D viewer, powered and streamlined by Microsoft’s TRELLIS.2 and Tongyi-MAI’s Z-Image-Turbo models.

🤗 TRELLIS.2 (Demo): prithivMLmods/TRELLIS.2-Text-to-3D
🕹️ GitHub: https://github.com/PRITHIVSAKTHIUR/TRELLIS.2-Text-to-3D-RERUN
🕹️ Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
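
If you want to drive the Space from a script instead of the web UI, gradio_client can do it. A minimal sketch follows; the endpoint name and arguments are not documented in this post, so treat the predict() call as a placeholder and check view_api() first.

```python
# Sketch: query the TRELLIS.2 Space programmatically with gradio_client.
from gradio_client import Client  # pip install gradio_client

client = Client("prithivMLmods/TRELLIS.2-Text-to-3D")
client.view_api()  # prints the named endpoints and their real parameters

# Example call shape only -- replace api_name/arguments with what view_api() reports.
# result = client.predict("a ceramic teapot", api_name="/generate")
# print(result)  # typically file path(s) to the generated 3D asset
```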

To learn more, visit the app page or the respective model pages!
dhruv3006 
posted an update 1 day ago
Git is powerful, but it’s also one of the biggest sources of developer mistakes.

What is a Git GUI, and how does it help here?

A Git GUI makes version control visual, predictable, and easier to reason about, especially when things go wrong.

That’s exactly why we built Git GUI in Voiden.

Instead of relying on memorized commands, Voiden lets you see what Git is doing before it does it.

What Voiden’s Git GUI helps developers do
• View exact file and line-level changes before committing
• Stage only intended changes (no accidental commits)
• Clearly distinguish staged vs unstaged files
• Inspect visual diffs with full context
• Understand branches, commit history, and repo state instantly

When Git behavior is hidden, errors increase. Voiden’s Git GUI doesn’t abstract Git away; it explains Git.
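
Voiden's internals aren't shown in this post, but the staged-vs-unstaged split such a GUI visualizes comes straight from Git's porcelain output. Purely as an illustration of that, here is a minimal Python sketch:

```python
# Sketch: in `git status --porcelain`, the first status letter is the index
# (staged) side and the second is the working-tree (unstaged) side.
import subprocess

out = subprocess.run(
    ["git", "status", "--porcelain"],
    capture_output=True, text=True, check=True,
).stdout

staged, unstaged = [], []
for line in out.splitlines():
    index_state, worktree_state, path = line[0], line[1], line[3:]
    if index_state not in (" ", "?"):
        staged.append(path)      # change already added to the index
    if worktree_state != " ":
        unstaged.append(path)    # change not yet staged (or untracked)

print("staged:  ", staged)
print("unstaged:", unstaged)
```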

Whether you’re new to Git or an experienced developer who prefers clarity, this is Git you can reason about.

Version control should feel safe, not stressful.

What Git pain points slow you down today?

Try out Git GUI in beta: https://voiden.md (now on Linux and Mac)

Reubencf 
posted an update 1 day ago
As 2025 is ending, I would like to thank everyone for trying out
Reubencf/Nano_Banana_Editor

I'm looking forward to building and releasing more for the open-source community in the future.

DawnC 
posted an update 2 days ago
VividFlow: AI Image-to-Video Generation 🎬✨

Bring your images to life with cinematic motion! VividFlow transforms any static image (portraits, artwork, products, or landscapes) into dynamic videos with professional animation quality.
The system supports both curated motion templates and custom natural language prompts, giving you complete creative freedom to describe camera movements, subject actions, and atmospheric effects in your own words.

What's Inside?
🎭 Smart Motion Templates — 8 curated categories from fashion cinematography to wildlife animations, each with tested prompts that prevent common artifacts like phantom hands in portraits

⚡ Optimized Engine — Powered by Wan2.2-I2V-A14B with Lightning LoRA distillation and FP8 quantization for memory-efficient inference

🎯 Full Creative Control — Seed-based reproducibility for consistent results, adjustable duration from half a second to five seconds, optional AI prompt expansion with Qwen2.5 for enhanced descriptions, and real-time resolution preview
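
For reference, here is a rough sketch of what seed-reproducible Wan 2.2 image-to-video looks like with diffusers. It is not the Space's exact pipeline (which adds Lightning LoRA distillation and FP8 quantization), and the checkpoint id, resolution, and call arguments are assumptions to verify against the model card.

```python
# Sketch: seed-reproducible image-to-video with Wan 2.2 I2V via diffusers.
# Needs a large GPU; the hosted Space uses a lighter, optimized setup.
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-A14B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("portrait.png")                   # any still image
generator = torch.Generator("cuda").manual_seed(42)  # same seed -> same motion

frames = pipe(
    image=image,
    prompt="slow cinematic push-in, soft natural lighting",
    height=480, width=832,
    num_frames=49,            # roughly 3 s at 16 fps (assumption)
    guidance_scale=5.0,
    generator=generator,
).frames[0]

export_to_video(frames, "vividflow_sketch.mp4", fps=16)
```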

Current Performance & Development Roadmap
VividFlow runs on ZeroGPU, with generation taking about 3-4 minutes for a 3-second video. While I am actively optimizing the pipeline to reduce this time, the current version prioritizes output stability and quality; the results are worth the wait!

Future development focuses on dedicated GPU deployment for faster processing, batch generation to create multiple variations at once, and expanding our motion template library based on what the community wants to see.

👉 Try it now: DawnC/VividFlow

If VividFlow brings motion to your creative vision, please show your support with a ❤️; your engagement influences future development priorities!

#AI #ImageToVideo #GenerativeAI #VideoGeneration #DeepLearning
MikeDoes 
posted an update 1 day ago
Anonymizing a prompt is half the battle. Reliably de-anonymizing the response is the other.

To build a truly reliable privacy pipeline, you have to test it. A new Master's thesis does just that, and our data was there for every step.

We're excited to showcase this work on handling confidential data in LLM prompts from Nedim Karavdic at Mälardalen University. To build their PII anonymization pipeline, they first trained a custom NER model. We're proud that the Ai4Privacy pii-masking-200k dataset was used as the foundational training data for this critical first step.

But it didn't stop there. The research also used our dataset to create the parallel data needed to train and test the generative "Seek" models for de-anonymization. It's a win-win when our open-source data not only helps build the proposed "better solution" but also helps prove why it's better by enabling a rigorous, data-driven comparison.
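
For anyone who wants to poke at the same starting point, here is a minimal sketch that loads the masking data and runs an off-the-shelf NER model over one sample. The Hub dataset id, the column handling, and the model are assumptions / stand-ins, not the thesis's actual pipeline.

```python
# Sketch: inspect the Ai4Privacy masking dataset and tag one sample with a
# general-purpose NER model (a stand-in for a custom-trained PII model).
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("ai4privacy/pii-masking-200k", split="train")
print(ds.features)   # check the real column names before relying on them
sample = ds[0]

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",        # generic NER, not PII-specific
    aggregation_strategy="simple",
)
text = next(v for v in sample.values() if isinstance(v, str))  # first text field
print(ner(text))
```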

🔗 Check out the full thesis for a great deep-dive into building a practical, end-to-end privacy solution: https://www.diva-portal.org/smash/get/diva2:1980696/FULLTEXT01.pdf

#OpenSource
#DataPrivacy
#LLM
#Anonymization
#AIsecurity
#HuggingFace
#Ai4Privacy
#Worldslargestopensourceprivacymaskingdataset
eaddario 
posted an update 3 days ago
Experimental global target bits‑per‑weight quantization of allenai/Olmo-3-7B-Instruct and allenai/Olmo-3-7B-Think

Unlike standard llama.cpp quantizations that rely on fixed type heuristics (e.g., Q4_K_M), the Target BPW approach optimizes per-tensor precision where it matters most and produces high-quality models that meet a precise global file-size target.

Key Advantages:
- VRAM Maximization: Can generate high quality models sized exactly to fit hardware constraints (e.g., fitting the model into exactly 24GB VRAM).
- Data-Driven Precision: Quantization mix is determined by actual weight error sensitivity rather than hardcoded rules, often yielding better PPL/KLD size trade-offs.
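
To make the allocation idea concrete, here is a toy sketch (not the actual implementation) of greedily spending a global bits-per-weight budget on the tensors where extra precision buys the most error reduction; all sizes, sensitivities, and the error model are made up for illustration.

```python
# Toy sketch of target-BPW allocation: upgrade the tensor with the best
# "error reduced per extra bit spent" until the global budget is exhausted.
tensors = {            # name: (num_weights, relative error sensitivity) -- illustrative only
    "attn_q":   (4_000_000, 0.9),
    "attn_k":   (4_000_000, 0.4),
    "ffn_up":   (11_000_000, 0.7),
    "ffn_down": (11_000_000, 1.0),
    "embed":    (65_000_000, 0.2),
}
levels = [3.0, 4.0, 5.0, 6.0]   # candidate bits-per-weight per tensor
target_bpw = 4.25               # global file-size target in bits/weight

total_weights = sum(n for n, _ in tensors.values())
budget = target_bpw * total_weights

assign = {name: 0 for name in tensors}                     # index into `levels`
spent = sum(levels[0] * n for n, _ in tensors.values())    # start at lowest precision

def gain(name):
    n, sens = tensors[name]
    i = assign[name]
    if i + 1 >= len(levels):
        return None
    extra_bits = (levels[i + 1] - levels[i]) * n
    # crude model: quantization error shrinks with precision, scaled by sensitivity
    error_drop = sens * n * (2.0 ** -levels[i] - 2.0 ** -levels[i + 1])
    return error_drop / extra_bits, extra_bits

while True:
    candidates = [(name, *g) for name in tensors if (g := gain(name))]
    candidates = [c for c in candidates if spent + c[2] <= budget]
    if not candidates:
        break
    best = max(candidates, key=lambda c: c[1])
    assign[best[0]] += 1
    spent += best[2]

for name in tensors:
    print(f"{name:9s} -> {levels[assign[name]]} bpw")
print(f"achieved {spent / total_weights:.3f} bpw (target {target_bpw})")
```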

Full benchmarks (PPL, KLD, ARC, MMLU, etc.) and methodology are in the models' cards.

eaddario/Olmo-3-7B-Instruct-GGUF
eaddario/Olmo-3-7B-Think-GGUF