Jet-Nemotron: Efficient Language Model with Post Neural Architecture Search
Abstract
Jet-Nemotron, a family of hybrid-architecture language models developed with PostNAS, matches or exceeds the accuracy of leading full-attention models while delivering significantly higher generation throughput.
We present Jet-Nemotron, a new family of hybrid-architecture language models, which matches or exceeds the accuracy of leading full-attention models while significantly improving generation throughput. Jet-Nemotron is developed using Post Neural Architecture Search (PostNAS), a novel neural architecture exploration pipeline that enables efficient model design. Unlike prior approaches, PostNAS begins with a pre-trained full-attention model and freezes its MLP weights, allowing efficient exploration of attention block designs. The pipeline includes four key components: (1) learning optimal full-attention layer placement and elimination, (2) linear attention block selection, (3) designing new attention blocks, and (4) performing hardware-aware hyperparameter search. Our Jet-Nemotron-2B model achieves comparable or superior accuracy to Qwen3, Qwen2.5, Gemma3, and Llama3.2 across a comprehensive suite of benchmarks while delivering up to 53.6x generation throughput speedup and 6.1x prefilling speedup. It also achieves higher accuracy on MMLU and MMLU-Pro than recent advanced MoE full-attention models, such as DeepSeek-V3-Small and Moonlight, despite their larger scale with 15B total and 2.2B activated parameters.
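To make the PostNAS recipe more concrete, the sketch below illustrates its core idea in plain PyTorch: keep the pre-trained MLP weights frozen and search only over the attention layers, deciding which ones retain full attention and which are swapped for a linear-attention block. Everything here is an assumption standing in for the paper's actual setup: the toy `LinearAttention` module, the `Block` wrapper, the exhaustive `search_full_attention_placement` routine, and the `dummy_score` proxy are illustrative placeholders, not the authors' implementation, which operates on a pre-trained LLM and evaluates placements on downstream benchmarks.

```python
# Illustrative PostNAS-style search sketch (PyTorch). All components here are
# hypothetical stand-ins: the toy LinearAttention block, the Block wrapper,
# the exhaustive placement search, and dummy_score are not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from itertools import combinations


class LinearAttention(nn.Module):
    """Toy softmax-free (kernelized) linear attention, O(n) in sequence length."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.heads, self.dim_head = heads, dim // heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_out = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        b, n, d = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.heads, self.dim_head).transpose(1, 2) for t in (q, k, v))
        q, k = F.elu(q) + 1, F.elu(k) + 1                      # positive feature map
        kv = torch.einsum("bhnd,bhne->bhde", k, v)             # global key-value state
        z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6)
        out = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
        return self.to_out(out.transpose(1, 2).reshape(b, n, d))


class Block(nn.Module):
    """Transformer block that can run either full or linear attention; its MLP stays frozen."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.full_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.linear_attn = LinearAttention(dim, heads)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.use_full_attention = True  # toggled by the placement search below

    def forward(self, x):
        h = self.norm1(x)
        if self.use_full_attention:
            h, _ = self.full_attn(h, h, h, need_weights=False)
        else:
            h = self.linear_attn(h)
        x = x + h
        return x + self.mlp(self.norm2(x))


def freeze_mlps(blocks):
    """Mimic PostNAS's key constraint: keep the pre-trained MLP weights fixed."""
    for block in blocks:
        for p in block.mlp.parameters():
            p.requires_grad = False


def search_full_attention_placement(blocks, score_fn, keep_full=2):
    """Exhaustively choose which `keep_full` layers keep full attention (the rest go linear)."""
    best_score, best_placement = float("-inf"), None
    for placement in combinations(range(len(blocks)), keep_full):
        for i, block in enumerate(blocks):
            block.use_full_attention = i in placement
        score = score_fn(blocks)
        if score > best_score:
            best_score, best_placement = score, placement
    return best_placement, best_score


if __name__ == "__main__":
    torch.manual_seed(0)
    dim, depth = 64, 6
    blocks = nn.ModuleList(Block(dim) for _ in range(depth))
    freeze_mlps(blocks)
    x = torch.randn(2, 32, dim)  # stand-in batch; the paper scores placements on real tasks

    def dummy_score(model):  # placeholder proxy metric, not a real accuracy measure
        with torch.no_grad():
            h = x
            for blk in model:
                h = blk(h)
            return -h.pow(2).mean().item()

    placement, score = search_full_attention_placement(blocks, dummy_score, keep_full=2)
    print(f"Best full-attention layers: {placement} (score={score:.4f})")
```

The sketch only captures the frozen-MLP, attention-only search structure of step (1); in the full pipeline, the placement is learned rather than enumerated, and it is followed by linear attention block selection, the design of a new attention block, and a hardware-aware hyperparameter search.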
Community
Is this publication really so weak, or does it contribute so little, that it did not appear in the ranking of the best daily papers?
Will the model be released to the public as open source? If so, when?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- MoE-Inference-Bench: Performance Evaluation of Mixture of Expert Large Language and Vision Models (2025)
- OverFill: Two-Stage Models for Efficient Language Model Decoding (2025)
- LExI: Layer-Adaptive Active Experts for Efficient MoE Model Inference (2025)
- Efficient Attention Mechanisms for Large Language Models: A Survey (2025)
- Z-Pruner: Post-Training Pruning of Large Language Models for Efficiency without Retraining (2025)
- TriangleMix: A Lossless and Efficient Attention Pattern for Long Context Prefilling (2025)
- Scaling Linear Attention with Sparse State Expansion (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
It's available now; here is the link:
https://huggingface.co/collections/jet-ai/jet-nemotron-68ac76e8356b5399ef83ac9c