I made a code sniping agent to detect when new AI papers with code (and weights) are released, and then automatically create a Gradio demo on Hugging Face 🧙
I call this agent CheatCode (https://github.com/jbilcke-hf/CheatCode) because it skips so many steps that it kinda feels like breaking the rules of the AI tech release game 😅
As with any experimental technology, there is still room for improvement 👩🏻‍🔬:
- Currently the demos are all generated in one go and not built or tested by the agent itself. A more robust version should loop over the deployed app to fix build/runtime issues.
- There is still a bit of human curation done to avoid making demos for things that can't really be demonstrated on ZeroGPU (e.g. tasks taking several minutes).
- Some papers can actually be showcased in a variety of ways, which isn't really supported yet (see Demo 2).
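For the curious, here is a rough sketch of what the sniping loop looks like conceptually. This is my simplified reconstruction, not the actual CheatCode source: the exact shape of the daily_papers response, the code/weights filter, and the "my-user" namespace are all placeholders.

```python
import requests
from huggingface_hub import HfApi

api = HfApi()  # requires a logged-in token with write access

# Poll the daily papers feed (I'm assuming its JSON shape here)
papers = requests.get("https://huggingface.co/api/daily_papers").json()

for entry in papers:
    paper = entry.get("paper", {})
    # Hypothetical filter: only keep papers that appear to ship code
    if "github" not in str(paper).lower():
        continue
    space_id = f"my-user/demo-{paper.get('id', 'unknown')}"
    api.create_repo(repo_id=space_id, repo_type="space",
                    space_sdk="gradio", exist_ok=True)
    # In CheatCode the app.py is generated by an LLM from the paper and its
    # code repo; here it's just a placeholder so the sketch stays self-contained
    api.upload_file(
        path_or_fileobj=b"import gradio as gr\ngr.Interface(lambda x: x, 'text', 'text').launch()\n",
        path_in_repo="app.py",
        repo_id=space_id,
        repo_type="space",
    )
```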
✨ Compresses long sequences visually to bypass token limits
✨ Reduces computational and memory costs
✨ Preserves meaning through multimodal encoding
✨ Built on GLM-4.1V-9B-Base
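To make the first point concrete, here is a toy sketch of the core trick: render long text into an image so the VLM reads it as (far fewer) vision tokens. This is just my illustration of the idea, not the paper's actual rendering pipeline.

```python
from PIL import Image, ImageDraw, ImageFont
import textwrap

def render_text_to_image(text: str, width: int = 1024) -> Image.Image:
    # Wrap the text and draw it line by line on a white canvas
    font = ImageFont.load_default()
    lines = textwrap.wrap(text, width=110) or [""]
    line_height = 16
    img = Image.new("RGB", (width, line_height * len(lines) + 20), "white")
    draw = ImageDraw.Draw(img)
    for i, line in enumerate(lines):
        draw.text((10, 10 + i * line_height), line, fill="black", font=font)
    return img

# A document worth thousands of text tokens becomes one image,
# which the model then reads back with far fewer vision tokens
page = render_text_to_image("a very long document " * 500)
page.save("page_0.png")
```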
✨ Any prior in → 3D world out
✨ Mix camera, intrinsics, depth as priors
✨ Predict point clouds, normals, Gaussians & more in one pass
✨ Unified architecture for all 3D tasks
If you've ever trained a VLM, you know this problem: nobody shares their data mixtures. It's a black box that makes replicating SOTA work impossible. We wanted to change that.
FineVision unifies 200 sources into 24 million samples. With 17.3 million images and 9.5 billion answer tokens, it's the largest open resource of its kind.
In the paper, we share how we built it:
🔍 finding and cleaning data at scale
🧹 removing excessive duplicates across sources
🤗 decontaminating against 66 public benchmarks
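As a toy illustration of what decontamination means in practice (my own simplification, not the actual FineVision pipeline, which also has to handle near-duplicates and images): drop any training sample whose normalized question matches a benchmark question.

```python
import re

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so formatting differences don't hide a match
    return re.sub(r"\s+", " ", text.lower()).strip()

# Toy benchmark set; in reality this would hold questions from all 66 benchmarks
benchmark_questions = {normalize("What is shown in the image?")}

samples = [
    {"question": "What is shown   in the IMAGE?"},      # contaminated -> dropped
    {"question": "How many dogs are in the picture?"},  # clean -> kept
]
clean = [s for s in samples if normalize(s["question"]) not in benchmark_questions]
print(len(clean))  # 1
```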
My favorite part is Figure 6 (in the video!). It's our visual diversity analysis. It shows that FineVision isn't just bigger; it's more balanced and conceptually richer than other open datasets. NVIDIA's Eagle 2 paper highlighted just how critical this visual diversity is, and our results confirm it: models trained on FineVision consistently outperform those trained on any other open dataset on 11 benchmarks!
🎉 To celebrate the paper, I'm also releasing a concatenated and shuffled version of the full dataset!
👉 HuggingFaceM4/FineVision_full_shuffled
It’s ready to stream, so you can start training your own models right away:
```python
from datasets import load_dataset

# Stream the dataset instead of downloading it in full
d = load_dataset("HuggingFaceM4/FineVision_full_shuffled", split="train", streaming=True)

# Grab the first sample to check everything works
print(next(iter(d)))
```
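Because streaming returns an IterableDataset, the first samples arrive in seconds, no need to pull all 24 million rows to disk before training starts, and you can shuffle a buffer or shard across workers before feeding it into your training loop.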
A big shoutout to the first authors: Luis Wiedmann and Orr Zohar. They are rockstars!
deepseek-ai/DeepSeek-OCR is out! 🔥 my take ⤵️
> pretty insane it can parse and re-render charts in HTML
> it uses CLIP and SAM features concatenated, so better grounding
> very efficient vision-tokens-to-performance ratio
> covers 100 languages
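On the second point, the fusion idea is roughly this. A hedged sketch based on my reading of the post, not DeepSeek's actual code; all shapes are made up.

```python
import torch

# Patch features from two encoders looking at the same image
clip_feats = torch.randn(1, 256, 1024)  # semantic features (CLIP-style)
sam_feats = torch.randn(1, 256, 768)    # fine-grained spatial features (SAM-style)

# Concatenate along the channel dimension, then project into the LLM's space
fused = torch.cat([clip_feats, sam_feats], dim=-1)  # [1, 256, 1792]
proj = torch.nn.Linear(fused.shape[-1], 4096)
vision_tokens = proj(fused)                         # ready for the decoder
print(vision_tokens.shape)  # torch.Size([1, 256, 4096])
```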
✨ Trained on Honey-Data-15M, a 15M-sample SFT corpus with dual-level CoT reasoning
✨ Backed by HoneyPipe, a transparent & reproducible open data curation suite