Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs
Abstract
Reinforcement learning for large language models is enhanced by a rollout-level objective that rewards rare high-level reasoning strategies, improving the discovery of diverse solutions without sacrificing pass@1 performance.
Reinforcement learning (RL) has become a central paradigm for post-training large language models (LLMs), particularly for complex reasoning tasks, yet it often suffers from exploration collapse: policies prematurely concentrate on a small set of dominant reasoning patterns, improving pass@1 while limiting rollout-level diversity and gains in pass@k. We argue that this failure stems from objectives that regularize local token behavior rather than rewarding diversity over sets of solutions. To address this, we propose Uniqueness-Aware Reinforcement Learning, a rollout-level objective that explicitly rewards correct solutions that exhibit rare high-level strategies. Our method uses an LLM-based judge to cluster rollouts for the same problem according to their high-level solution strategies, ignoring superficial variations, and reweights policy advantages inversely with cluster size. As a result, correct but novel strategies receive higher rewards than redundant ones. Across mathematics, physics, and medical reasoning benchmarks, our approach consistently improves pass@k across large sampling budgets and increases the area under the pass@k curve (AUC@K) without sacrificing pass@1, while sustaining exploration and uncovering more diverse solution strategies at scale.
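To make the rollout-level objective concrete, below is a minimal Python sketch of the advantage reweighting the abstract describes, under stated assumptions: the strategy-cluster labels are taken as given (the LLM judge that produces them is not shown), the base signal is a GRPO-style group-normalized advantage, and the `alpha` exponent and the choice to count cluster sizes over correct rollouts only are hypothetical details rather than the paper's exact formulation. The standard unbiased pass@k estimator (Chen et al., 2021) used in such evaluations is included for reference.

```python
import math
from collections import Counter

def uniqueness_weighted_advantages(rewards, cluster_ids, alpha=1.0, eps=1e-8):
    """Sketch of a rollout-level, uniqueness-aware advantage.

    rewards:     per-rollout correctness for one problem, e.g. [1, 1, 0, 1].
    cluster_ids: strategy-cluster label per rollout, as assigned by an
                 external LLM judge (not implemented here).
    alpha:       hypothetical sharpness knob for the inverse-size weighting.
    """
    group = len(rewards)
    mean_r = sum(rewards) / group
    std_r = (sum((r - mean_r) ** 2 for r in rewards) / group) ** 0.5

    # Cluster sizes counted over *correct* rollouts only, so a rare-but-right
    # strategy is upweighted relative to a redundant one (assumed detail).
    sizes = Counter(c for c, r in zip(cluster_ids, rewards) if r > 0)

    advantages = []
    for r, c in zip(rewards, cluster_ids):
        # GRPO-style group-normalized advantage as the base signal.
        adv = (r - mean_r) / (std_r + eps)
        if r > 0:
            adv *= (1.0 / sizes[c]) ** alpha  # shrink redundant strategies
        advantages.append(adv)
    return advantages


def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: n samples drawn, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)


# Example: 4 rollouts, three correct; two share strategy "A", one is unique.
advs = uniqueness_weighted_advantages([1, 1, 0, 1], ["A", "A", "B", "C"])
print([round(a, 3) for a in advs])  # the lone "C" rollout gets the largest advantage
print(pass_at_k(n=4, c=3, k=2))     # 1 - C(1,2)/C(4,2) = 1.0
```

Note that scaling only the positive advantages leaves the penalty on incorrect rollouts unchanged, so rarity never turns a wrong answer into a rewarded one; AUC@K can then be read as an average of `pass_at_k` over a range of k (the exact averaging scheme is an assumption here).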
Community
Nice paper!
I'll take a closer look when I have time 罒▽罒
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Can LLMs Guide Their Own Exploration? Gradient-Guided Reinforcement Learning for LLM Reasoning (2025)
- Beyond High-Entropy Exploration: Correctness-Aware Low-Entropy Segment-Based Advantage Shaping for Reasoning LLMs (2025)
- Differential Smoothing Mitigates Sharpening and Improves LLM Reasoning (2025)
- SPINE: Token-Selective Test-Time Reinforcement Learning with Entropy-Band Regularization (2025)
- Orchestrating Tokens and Sequences: Dynamic Hybrid Policy Optimization for RLVR (2026)
- Diversity or Precision? A Deep Dive into Next Token Prediction (2025)
- Rethinking Sample Polarity in Reinforcement Learning with Verifiable Rewards (2025)