Jake (jake661)
0 followers · 10 following
AI & ML interests
None yet
Recent Activity
Reacted to obsxrver's post with ❤️ (about 1 month ago):
If you’ve been wanting to train your own Wan 2.2 Video LoRAs but are intimidated by the hardware requirements, parameter-tweaking insanity, or the installation nightmare, I built a solution that handles it all for you. This is currently the easiest, fastest, and cheapest way to get a high-quality training run done.

Why this method?
* Zero Setup: No installing Python, CUDA, or hunting for dependencies. You launch a pre-built [Vast.AI](http://Vast.AI) template, and it's ready in minutes.
* Full WebUI: Drag and drop your videos/images, edit captions, and click "Start." No terminal commands required.
* Extremely Cheap: You can rent a dual RTX 5090 node, train a full LoRA in 2-3 hours, and auto-shutdown. Total cost is usually $3 or less.
* Auto-Save: It automatically uploads your finished LoRA to your cloud storage (Google Drive/S3/Dropbox) and kills the instance so you don't pay for a second longer than necessary (see the sketch after this post).

How it works:
1. Click the Vast.AI template link (in the repo).
2. Open the WebUI in your browser.
3. Upload your dataset and press Train.
4. Come back in an hour to find your LoRA in your Google Drive.

It supports both Text-to-Video and Image-to-Video, and optimizes for dual-GPU setups (training High/Low noise simultaneously) to cut training time in half.

Repo + Template Link: https://github.com/obsxrver/wan22-lora-training

Let me know if you have questions.
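For the curious, below is a minimal sketch of what the auto-save step amounts to if done by hand: copy the finished LoRA to cloud storage and tear down the rented instance. It assumes rclone is configured with a Google Drive remote named `gdrive` and the Vast.AI CLI (`vastai`) is installed and logged in; the file path, remote name, and `INSTANCE_ID` variable are hypothetical placeholders, and the template in the repo handles all of this automatically.

```python
import os
import subprocess

# Sketch of the "auto-save" step: upload the trained LoRA, then destroy the
# rented instance so billing stops. All names below are placeholders; the
# template in the repo performs the equivalent of this automatically.

LORA_PATH = "output/wan22_lora.safetensors"      # hypothetical output file
REMOTE_DIR = "gdrive:wan22-loras/"               # assumes an rclone remote named "gdrive"
INSTANCE_ID = os.environ.get("INSTANCE_ID", "")  # set to your Vast.AI instance ID

def upload_and_shutdown() -> None:
    # Copy the LoRA to Google Drive (or any other rclone-supported remote).
    subprocess.run(["rclone", "copy", LORA_PATH, REMOTE_DIR], check=True)
    # Destroy the instance via the Vast.AI CLI so you stop paying for it.
    if INSTANCE_ID:
        subprocess.run(["vastai", "destroy", "instance", INSTANCE_ID], check=True)

if __name__ == "__main__":
    upload_and_shutdown()
```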
New activity in obsxrver/wan2.2-t2v-bdsm (about 2 months ago): "Thanks for another lora!"
Organizations
Models: 0 (none public yet)
Datasets: 0 (none public yet)