PIPer: On-Device Environment Setup via Online Reinforcement Learning
Abstract
A specialized model combining supervised fine-tuning and Reinforcement Learning with Verifiable Rewards achieves competitive performance in automated environment setup tasks.
Environment setup, the process of configuring a system to work with a specific software project, remains a persistent challenge in Software Engineering (SE). Automated environment setup methods could assist developers by providing fully configured environments for arbitrary repositories without manual effort, and would also help SE researchers scale execution-based benchmarks. However, recent studies show that even state-of-the-art Large Language Models (LLMs) achieve limited success at automating this task. To address this limitation, we train a specialized model for environment setup: we combine supervised fine-tuning for generating correct Bash scripts with Reinforcement Learning with Verifiable Rewards (RLVR) to adapt the model to the environment setup task. On EnvBench-Python, our method enables Qwen3-8B, a model runnable on consumer hardware, to perform on par with larger models such as Qwen3-32B and GPT-4o. The training code and model checkpoints are available online: https://github.com/JetBrains-Research/PIPer.
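To make the reward design concrete, below is a minimal sketch, not the authors' implementation, of what a verifiable reward for environment setup could look like: execute the model-generated Bash script inside the repository, then score the resulting environment. All helper names (`run_setup_script`, `import_success_rate`, `reward`) are illustrative, and the `py_compile` check is a simplified proxy for whether the project's Python files work in the configured environment; EnvBench-Python's actual success criterion may differ.

```python
# Illustrative sketch of a verifiable reward for environment setup.
# Assumption: the generated Bash script is run inside the target repo,
# and the environment is scored by how many of the repo's Python files
# pass a compile check afterwards. Not the paper's implementation.

import subprocess
from pathlib import Path


def run_setup_script(script: str, repo_dir: Path, timeout: int = 1800) -> bool:
    """Execute the generated Bash setup script in the repo; True on exit 0."""
    result = subprocess.run(
        ["bash", "-c", script],
        cwd=repo_dir,
        capture_output=True,
        timeout=timeout,
    )
    return result.returncode == 0


def import_success_rate(repo_dir: Path, python: str = "python") -> float:
    """Fraction of the repo's Python files that compile without error
    under the configured interpreter (a cheap proxy for a working env)."""
    files = list(repo_dir.rglob("*.py"))
    if not files:
        return 0.0
    ok = 0
    for f in files:
        check = subprocess.run(
            [python, "-m", "py_compile", str(f)],
            cwd=repo_dir,
            capture_output=True,
        )
        ok += check.returncode == 0
    return ok / len(files)


def reward(script: str, repo_dir: Path) -> float:
    """Verifiable reward: 0 if the setup script itself fails or times out,
    otherwise the success rate of the environment it produced."""
    try:
        if not run_setup_script(script, repo_dir):
            return 0.0
    except subprocess.TimeoutExpired:
        return 0.0
    return import_success_rate(repo_dir)
```

A scalar reward of this shape can be plugged into standard online RL trainers that accept custom reward functions; the exact reward and training setup used for PIPer are in the linked repository.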
Community
💸 Environment setup is costly in time & resources.
🤖 Existing LLM-based approaches rely on large models and are expensive to run.
🛠️ Our approach fine-tunes a smaller model (Qwen3-8B) with SFT + RLVR.
⚡ Achieves on-par performance with bigger models (Qwen3-32B, GPT-4o) at a fraction of the cost.
Related papers recommended by the Semantic Scholar API:
- Training Long-Context, Multi-Turn Software Engineering Agents with Reinforcement Learning (2025)
- Training Language Model Agents to Find Vulnerabilities with CTF-Dojo (2025)
- Tool-integrated Reinforcement Learning for Repo Deep Search (2025)
- Generalizable End-to-End Tool-Use RL with Synthetic CodeGym (2025)
- Posterior-GRPO: Rewarding Reasoning Processes in Code Generation (2025)
- Agnostics: Learning to Code in Any Programming Language via Reinforcement with a Universal Learning Environment (2025)
- K2-Think: A Parameter-Efficient Reasoning System (2025)
Models citing this paper: 2
Datasets citing this paper: 2
Spaces citing this paper: 0