DeepPrune: Parallel Scaling without Inter-trace Redundancy

🖥️ Code • 📃 Paper • ✈️ Project Page

Abstract

Parallel scaling has emerged as a powerful paradigm to enhance reasoning capabilities in large language models (LLMs) by generating multiple Chain-of-Thought (CoT) traces simultaneously. However, this approach introduces significant computational inefficiency due to inter-trace redundancy -- our analysis reveals that over 80% of parallel reasoning traces yield identical final answers, representing substantial wasted computation. To address this critical efficiency bottleneck, we propose DeepPrune, a novel framework that enables efficient parallel scaling through dynamic pruning. Our method features a specialized judge model trained with focal loss and oversampling techniques to accurately predict answer equivalence from partial reasoning traces, achieving 0.87 AUROC on equivalence prediction, combined with an online greedy clustering algorithm that dynamically prunes redundant paths while preserving answer diversity. Comprehensive evaluations across three challenging benchmarks (AIME 2024, AIME 2025, and GPQA) and multiple reasoning models demonstrate that DeepPrune reduces tokens by over 80% compared to conventional consensus sampling in most cases, while maintaining accuracy within 3 percentage points. Our work establishes a new standard for efficient parallel reasoning, making high-performance reasoning substantially cheaper to run. Our code and data are here: this https URL
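
The abstract mentions that the judge is trained with focal loss and oversampling to handle the imbalance between "identical" and "not identical" pairs. Purely as an illustration of the loss family involved, here is a minimal PyTorch sketch of binary focal loss; the gamma/alpha values are common defaults, not the settings reported for DeepPrune-Judge-4B, and the exact training recipe is described in the paper.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits: torch.Tensor, targets: torch.Tensor,
                      gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """Binary focal loss (Lin et al., 2017): down-weights easy, well-classified
    pairs so training focuses on hard ones.

    logits:  raw scores for the "identical" class, shape (N,)
    targets: 0/1 labels, shape (N,)
    gamma/alpha are illustrative defaults, NOT DeepPrune's reported settings.
    """
    targets = targets.float()
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1.0 - p) * (1.0 - targets)          # prob. of the true class
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()
```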

DeepPrune-Judge-4B

This model is a fine-tuned version of Qwen3-4B-Instruct-2507 on the DeepPrune fine-tuning dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0438

Model description

To address the inter-trace redundancy problem in parallel scaling, we propose DeepPrune, a two-stage framework that includes offline training of a specialized judge model and online inference-time pruning. The core idea is that by accurately predicting whether two incomplete reasoning traces will yield identical final answers, we can efficiently prune redundant paths while preserving answer diversity.
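
To make the online stage concrete, below is a minimal sketch of greedy clustering-based pruning under stated assumptions: it presumes a `judge_same_answer(trace_a, trace_b)` callable that wraps the judge model and returns True when two partial traces are predicted to converge to the same final answer, and a simple "keep one representative per cluster" policy. The actual DeepPrune algorithm (thresholds, scheduling, and how pruned compute is handled) is specified in the paper.

```python
from typing import Callable, List

def greedy_prune(partial_traces: List[str],
                 judge_same_answer: Callable[[str, str], bool]) -> List[int]:
    """Greedily cluster partial reasoning traces by predicted answer
    equivalence and keep one representative index per cluster.

    Illustrative sketch only, not the exact DeepPrune procedure.
    """
    representatives: List[int] = []   # indices of traces kept alive
    for i, trace in enumerate(partial_traces):
        # Join the first cluster whose representative is predicted to
        # yield the same answer; otherwise start a new cluster.
        if any(judge_same_answer(partial_traces[r], trace) for r in representatives):
            continue                   # redundant: prune this trace
        representatives.append(i)      # new answer cluster: keep generating
    return representatives
```

Traces whose indices are not returned can be stopped early, which is where the token savings come from, while keeping one representative per predicted-answer cluster preserves answer diversity for the final aggregation step.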

Intended uses & limitations

The training data is in this format:

{
  "instruction": "The system prompt / task description for the judge",
  "input": "Two truncated reasoning traces to be checked for whether they will yield identical final answers",
  "output": "The expected judge response: identical / not identical"
}

We fine-tune Qwen/Qwen3-4B-Instruct-2507 into a judge model, DeepPrune-Judge-4B, that predicts whether two unfinished traces will yield the same final answer.

Our training data is collected exclusively from DeepSeek-R1-Distill-Llama-8B outputs, while traces from other models are reserved for testing cross-model generalization.
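
A hedged usage sketch with the standard transformers generation API is shown below. The exact prompt template (instruction wording, how the two truncated traces are concatenated, and the precise output strings) should follow the training format above; the specific wording used here is an assumption, not the official template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THU-KEG/DeepPrune-Judge-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hypothetical prompt construction -- mirrors the instruction/input/output
# schema above; the released training prompts may be worded differently.
instruction = ("Decide whether the following two partial reasoning traces "
               "will arrive at the same final answer. "
               "Answer 'identical' or 'not identical'.")
trace_a = "..."  # truncated CoT trace from one parallel sample
trace_b = "..."  # truncated CoT trace from another parallel sample

messages = [
    {"role": "system", "content": instruction},
    {"role": "user", "content": f"Trace A:\n{trace_a}\n\nTrace B:\n{trace_b}"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
output_ids = model.generate(inputs, max_new_tokens=8, do_sample=False)
verdict = tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True)
print(verdict)  # expected: "identical" or "not identical"
```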

Training and evaluation data

The model is trained on DeepPrune's fine-tuning dataset.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • total_eval_batch_size: 4
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3.0
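
The effective train batch size follows from the values above: 1 (per device) × 4 (GPUs) × 2 (gradient accumulation) = 8. For reference, here is a hedged transformers TrainingArguments sketch that mirrors these values; it is a reconstruction from the listed hyperparameters, not the authors' training script, and it does not reflect the focal loss and oversampling described in the paper.

```python
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters; not the official training script.
training_args = TrainingArguments(
    output_dir="deepprune-judge-4b",   # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,     # with 4 GPUs -> total train batch size 8
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch_fused",
    seed=42,
)
```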

Training results

You can check results here.

We also report the evaluation results in the Offline Experiment Results section (Section 5.2) of our paper.

Framework versions

  • Transformers 4.55.0
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1