---
license: apache-2.0
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
tags:
- reinforcement-learning
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64ed568ccf6118a9379a61b8/BHITqJU33sXqf-Jbytrxg.png" width="100"/>
<b><span style="font-size:28px">SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression</span></b>
</p>
<p align="center">
📃 <a href="https://arxiv.org/abs/2509.25176" target="_blank">Paper</a> • 📝 <a href="https://api.wandb.ai/links/teamsiri/isge4elx" target="_blank">Wandb</a>
</p>
---
## 🔍 Overview
**SIRI (Scaling Iterative Reinforcement Learning with Interleaved Compression)** is a reinforcement-learning–based framework designed to improve the efficiency and accuracy of **Large Reasoning Models (LRMs)**.
Traditional RL training often causes **overthinking** and long, redundant reasoning traces. Prior methods that compress outputs (length penalties, pruning, or skipping thought tokens) improve efficiency but hurt accuracy.
SIRI solves this trade-off by **iteratively alternating between compression and expansion of the reasoning budget**, controlled by a cosine length scheduler. This approach dynamically balances concise reasoning with long-horizon exploration.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64ed568ccf6118a9379a61b8/SXow6xntEgrwhvWtzvrkE.png" alt="pareto_front" width="500"/>
</p>
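The cosine length scheduler is what drives this alternation: over training, the maximum rollout length is periodically shrunk (compression) and restored (expansion). The snippet below is a minimal sketch of such a schedule; the function name, the `min_len`/`max_len`/`period` values, and the exact phase ordering are illustrative assumptions, not the paper's implementation.
```python
import math

def cosine_length_budget(step: int, min_len: int = 2048,
                         max_len: int = 8192, period: int = 200) -> int:
    """Illustrative cosine schedule for the per-rollout length budget.

    The budget oscillates between max_len (expansion) and min_len
    (compression) as training proceeds, so the policy alternates
    between long-horizon exploration and concise reasoning.
    """
    phase = (step % period) / period                         # position in the current cycle, in [0, 1)
    scale = 0.5 * (1.0 + math.cos(2.0 * math.pi * phase))    # 1.0 -> 0.0 -> 1.0 over one cycle
    return int(min_len + scale * (max_len - min_len))

# Example: full budget at the start of a cycle, tightest budget at its midpoint.
assert cosine_length_budget(0) == 8192
assert cosine_length_budget(100) == 2048
```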
---
## 🚀 Key Features
- **Interleaved Compression–Expansion**:
  - *Compression phase*: forces concise, high-density reasoning by limiting rollout length.
  - *Expansion phase*: restores longer rollouts to encourage exploration and planning.
- **Token Efficiency without Accuracy Loss**: Unlike previous methods, SIRI improves accuracy *while reducing average token usage*.
- **Iterative RL Training**: Built on GRPO with modifications from DAPO (clip-high/low decoupling, KL removal); the resulting objective is written out after this list.
- **Generalization Across Model Sizes**: Validated on both **1.5B** and **7B** models.
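For reference, the objective these modifications yield, GRPO's group-relative surrogate with DAPO's decoupled clip ranges and no KL penalty, can be written as below. The notation follows common usage in the GRPO/DAPO papers and is a reference sketch, not a verbatim reproduction of SIRI's loss.

$$
\mathcal{J}(\theta) = \mathbb{E}_{q,\,\{o_i\}_{i=1}^{G}\sim\pi_{\theta_{\text{old}}}}\!\left[\frac{1}{\sum_{i=1}^{G}|o_i|}\sum_{i=1}^{G}\sum_{t=1}^{|o_i|}\min\!\Big(r_{i,t}(\theta)\,\hat{A}_{i,t},\;\operatorname{clip}\big(r_{i,t}(\theta),\,1-\varepsilon_{\text{low}},\,1+\varepsilon_{\text{high}}\big)\,\hat{A}_{i,t}\Big)\right]
$$

where \\(r_{i,t}(\theta)=\pi_\theta(o_{i,t}\mid q,o_{i,<t})/\pi_{\theta_{\text{old}}}(o_{i,t}\mid q,o_{i,<t})\\) is the token-level importance ratio and \\(\hat{A}_{i,t}\\) is the group-normalized advantage; no KL term is added.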
---
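## 🛠️ Quick Start
A minimal `transformers` inference sketch. The repository id below is a placeholder for this model's Hub id, the chat template is inherited from the DeepSeek-R1-Distill-Qwen-7B base model, and the generation settings are examples to adjust for your hardware and task.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder: use the Hub id of this model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models need a generous token budget; tune max_new_tokens as needed.
outputs = model.generate(input_ids, max_new_tokens=4096, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
---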
## 📊 Benchmarks

---
## 📝 Citation
```bibtex
@misc{wen2025siriscalingiterativereinforcement,
      title={SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression},
      author={Haoming Wen and Yushi Bai and Juanzi Li and Jie Tang},
      year={2025},
      eprint={2509.25176},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2509.25176},
}
```