TableGPT-R1
Model details
We developed and released TableGPT-R1, a specialized large language model optimized for complex tabular reasoning and data analysis. Unlike traditional models that rely solely on Supervised Fine-Tuning (SFT), TableGPT-R1 is trained using a systematic Reinforcement Learning (RL) framework. It is designed to bridge the gap between natural language understanding and professional data science requirements, such as multi-step logic, robust code execution, and autonomous environment interaction.
Model Developers
Zhejiang University & Institute of Computing Innovation, Zhejiang University
Key Technical Breakthroughs
- Autonomous Agentic Reasoning: The model is trained to "think" before acting. It generates a visible reasoning chain within `<think>` tags, plans Python-based data manipulations, and refines its strategy based on environment feedback (Code Interpreter).
- Unified Reward System: We introduced a hybrid reward mechanism that combines rule-based verification (for deterministic SQL/Code tasks) with a Criteria-Injected Reward Model (for open-ended analytical questions), ensuring both accuracy and interpretability (see the sketch after this list).
- GRPO++ Framework: Utilizing an enhanced version of Group Relative Policy Optimization, the model optimizes its decision-making process across diverse table structures while maintaining its general-purpose reasoning capabilities.
- Cold-Start Data Engineering: Bootstrapped with high-quality, long-chain reasoning trajectories, allowing the model to handle extreme table heterogeneity and complex multi-table joins.
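To make the unified reward concrete, below is a minimal, illustrative sketch of how such a hybrid reward could be dispatched during RL training. All function names (`rule_based_reward`, `criteria_reward`, `hybrid_reward`, `score_fn`) and the sample/rollout fields are hypothetical, not the released training code.

```python
# Illustrative sketch of a hybrid reward (NOT the released training code):
# deterministic SQL/code tasks are verified by rules, while open-ended
# analytical answers are scored by a criteria-injected reward model.

def rule_based_reward(pred_result, gold_result) -> float:
    """Execute-and-compare style verification for SQL/code tasks."""
    return 1.0 if pred_result == gold_result else 0.0

def criteria_reward(answer: str, criteria: list[str], score_fn) -> float:
    """Average per-criterion scores from a reward model.

    `score_fn(answer, criterion)` is a hypothetical callable wrapping the
    criteria-injected reward model; it returns a score in [0, 1].
    """
    if not criteria:
        return 0.0
    return sum(score_fn(answer, c) for c in criteria) / len(criteria)

def hybrid_reward(sample: dict, rollout: dict, score_fn) -> float:
    """Dispatch on task type, mirroring the unified reward described above."""
    if sample["task_type"] in {"sql", "code"}:
        return rule_based_reward(rollout["exec_result"], sample["gold_result"])
    return criteria_reward(rollout["answer"], sample["criteria"], score_fn)
```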
Input
TableGPT-R1 accepts both natural language instructions and tabular data. It uniquely supports table-path inputs, enabling the model to autonomously load and retrieve information from files using a built-in code interpreter.
Output
TableGPT-R1 supports two output behaviors depending on the task:

For tasks requiring logical deduction, metadata explanation, or semantic understanding without external execution:
- Format: `<think> ... </think> [Answer]`
- Behavior: The model performs an internal "Chain-of-Thought" to verify its logic before presenting the final result.

For data-intensive tasks requiring precise calculation, visualization, or large-scale data processing:
- Format:
  - Plan: `<think> ... </think>` (analyze the goal and plan the code)
  - Act: `<tool_call> ... </tool_call>` (generate Python/SQL code)
  - Observe: `<observation> ... </observation>` (receive environment feedback)
  - Finalize: `<answer> ... </answer>` (summarize results)
- Behavior: The model operates as an autonomous agent, reacting to execution errors or intermediate data results to ensure accuracy.
Additionally, to enforce model thinking, the default chat template automatically inserts an opening `<think>` tag. It is therefore normal for the model's output to contain only a closing `</think>` without an explicit opening `<think>` tag.
Language
Our training data places a strong emphasis on Chinese corpora; queries in other languages may currently have limited support.
Model Architecture
TableGPT-R1 is built upon the Qwen3-8B transformer architecture, significantly enhanced for long-context tabular understanding and agentic workflows.
- Base Backbone: Qwen3-8B (Dense Transformer).
- Context Window: 128K tokens, optimized for processing large-scale table schemas, extensive metadata, and long execution logs.
- Specialized Tokenizer: Enhanced to handle structural delimiters, whitespace in tables, and code-specific syntax (Python/SQL) more efficiently.
- Agentic Loop Integration: The architecture is designed to support a seamless "Think-Act-Observe" cycle (see the sketch after this list). It treats the environment's feedback (Code Interpreter output) as a first-class sequence input, allowing for real-time error correction and iterative reasoning.
- Instruction Following: Optimized via RL to strictly adhere to formatting constraints, distinguishing between internal thought process and external tool calls.
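For intuition, the following is a minimal sketch of such a Think-Act-Observe loop wired to a code interpreter, using the tags described in the Output section. `generate` and `run_python` are hypothetical callables (a wrapper around the model or serving API, and a sandboxed Python executor), and the message role used for observations is an assumption, not the released agent runtime.

```python
# Minimal sketch of a Think-Act-Observe driver loop (illustrative only).
# `generate` and `run_python` are hypothetical callables: the first wraps the
# model / serving API, the second a sandboxed Python interpreter.
def agent_loop(generate, run_python, messages, max_turns=5):
    reply = ""
    for _ in range(max_turns):
        reply = generate(messages)  # may contain <think>, <tool_call>, or <answer>
        messages.append({"role": "assistant", "content": reply})
        if "<tool_call>" in reply:
            # Act: extract and execute the generated code.
            code = reply.split("<tool_call>", 1)[1].split("</tool_call>", 1)[0]
            result = run_python(code)
            # Observe: feed the interpreter output back to the model.
            messages.append(
                {"role": "user", "content": f"<observation>{result}</observation>"}
            )
        else:
            break  # Finalize: no tool call, the reply carries the final <answer>
    return reply
```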
Status
This model is static, trained on an offline dataset. Future versions may be released to enhance its performance on specialized tasks.
QuickStart
This code snippet demonstrates how to build a prompt with table information, and shows how to load the tokenizer, load the model, and generate content.
Note that `transformers>=4.51.0` is required to use TableGPT-R1:

pip install "transformers>=4.51.0"
from transformers import AutoModelForCausalLM, AutoTokenizer
# Using pandas to read some structured data
import pandas as pd
from io import StringIO
# single table
EXAMPLE_CSV_CONTENT = """
"Loss","Date","Score","Opponent","Record","Attendance"
"Hampton (14–12)","September 25","8–7","Padres","67–84","31,193"
"Speier (5–3)","September 26","3–1","Padres","67–85","30,711"
"Elarton (4–9)","September 22","3–1","@ Expos","65–83","9,707"
"Lundquist (0–1)","September 24","15–11","Padres","67–83","30,774"
"Hampton (13–11)","September 6","9–5","Dodgers","61–78","31,407"
"""
csv_file = StringIO(EXAMPLE_CSV_CONTENT)
df = pd.read_csv(csv_file)
model_name = "tablegpt/TableGPT-R1"
model = AutoModelForCausalLM.from_pretrained(
model_name, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
example_prompt_template = """Given access to several pandas dataframes, write the Python code to answer the user's question.
/*
"{var_name}.head(5).to_string(index=False)" as follows:
{df_info}
*/
Question: {user_question}
"""
# The question is in Chinese: "Which games had a record of 40 wins and 40 losses?"
question = "哪些比赛的战绩达到了40胜40负？"
prompt = example_prompt_template.format(
var_name="df",
df_info=df.head(5).to_string(index=False),
user_question=question,
)
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=8192)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
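Because the chat template pre-fills the opening `<think>` tag (see the Output section), the decoded response normally contains only a closing `</think>`. A minimal sketch for separating the reasoning trace from the final answer:

```python
# The opening <think> tag is pre-filled by the chat template, so the decoded
# output typically contains only the closing </think>; split on it to separate
# the reasoning trace from the final answer.
if "</think>" in response:
    reasoning, answer = response.split("</think>", 1)
else:
    reasoning, answer = "", response
print(answer.strip())
```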
Deployment
For deployment, you can use `sglang>=0.5.2` or `vllm>=0.10.2` to create an OpenAI-compatible API endpoint:

- SGLang:

  python -m sglang.launch_server --model-path tablegpt/TableGPT-R1 --port 8080 --served-model-name TableGPT-R1 --reasoning-parser qwen3

- vLLM:

  vllm serve tablegpt/TableGPT-R1 --port 8080 --served-model-name TableGPT-R1 --reasoning-parser deepseek_r1
Then you can access the Chat API by:
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "TableGPT-R1",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Give me a short introduction to large language model."}
]
}'
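Alternatively, you can call the same endpoint with the OpenAI Python client (a minimal sketch; the `api_key` value is a placeholder, since locally served SGLang/vLLM endpoints typically do not validate it):

```python
from openai import OpenAI

# Point the client at the locally served OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="TableGPT-R1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language model."},
    ],
)
print(completion.choices[0].message.content)
```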
License
TableGPT-R1 is released under the Apache-2.0 license.
Research Paper
TableGPT-R1 is introduced and validated in the paper "TableGPT-R1: Advancing Tabular Reasoning Through Reinforcement Learning" available on arXiv.
Where to send questions or comments about the model
Inquiries and feedback are welcome at [email protected].
Evaluation Results
TableGPT-R1 demonstrates substantial advancements over its predecessor, TableGPT2-7B, particularly in table comprehension and reasoning capabilities. Detailed comparisons are as follows:
- TableBench Benchmark: TableGPT-R1 demonstrates strong performance, achieving an average gain of 6.9% over Qwen3-8B across the four core sub-tasks. Compared to TableGPT2-7B, it records an average improvement of 3.12%, validating its enhanced reasoning capability despite a trade-off on the PoT sub-task.
- Natural Language to SQL: TableGPT-R1 exhibits superior generalization. It shows consistent improvements over Qwen3-8B on Spider 1.0 (+0.66%) and BIRD (+1.5%), and a significant leap over TableGPT2-7B, with gains of 12.35% and 13.89%, respectively.
- RealHitBench Test: On this highly challenging benchmark, TableGPT-R1 achieves outstanding results, notably surpassing the top closed-source baseline GPT-4o, which highlights its strength in hierarchical table reasoning. It matches or outperforms Qwen3-8B across all subtasks, with an average improvement of 11.81% and a peak gain of 31.17% on the Chart Generation task. Compared to TableGPT2-7B, it registers an average improvement of 19.85% across all subtasks.
- Internal Benchmark: Our internal evaluation further attests to the model's robustness. TableGPT-R1 surpasses Qwen3-8B by substantial margins: 10.8% on Table Info and 8.8% on Table Path.
| Benchmark | Task | Met. | Q3-8B | T-LLM | Llama | T-R1-Z | TGPT2 | TGPT-R1 | Q3-14B | Q3-32B | Q3-30B | QwQ | GPT-4o | DS-V3 | Q-Plus | vs. Q3-8B | vs. TGPT2 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Internal Bench** | | | | | | | | | | | | | | | | | |
| | Table Info | Acc | 69.20 | 0.97 | 37.26 | 15.97 | - | 80.00 | 66.10 | 72.58 | 51.10 | 69.68 | 67.26 | 66.00 | 76.90 | 10.80 | - |
| | Table Path | Acc | 73.90 | 0.65 | 31.77 | 9.19 | - | 82.70 | 74.70 | 78.55 | 60.50 | 75.00 | - | 72.90 | 81.50 | 8.80 | - |
| **NL2SQL** | | | | | | | | | | | | | | | | | |
| Spider | | EX | 86.07 | 65.30 | 73.59 | 82.63 | 74.38 | 86.73 | 87.61 | 87.80 | 61.71 | 85.33 | 87.98 | 88.54 | 89.19 | 0.66 | 12.35 |
| BIRD | | EX | 61.67 | 30.64 | 40.03 | 50.98 | 49.28 | 63.17 | 61.80 | 63.04 | 53.91 | 54.30 | 65.25 | 65.65 | 68.32 | 1.50 | 13.89 |
| **Holistic Table Evaluation** | | | | | | | | | | | | | | | | | |
| TableBench | DP | Rge | 42.10 | 3.63 | 18.04 | 39.40 | 42.10 | 48.35 | 47.41 | 52.18 | 48.61 | 49.33 | 40.91 | 36.56 | 31.01 | 6.25 | 6.25 |
| | PoT | Rge | 28.01 | 0.00 | 6.73 | 7.54 | 39.80 | 35.12 | 36.61 | 37.78 | 27.72 | 40.03 | 51.96 | 33.05 | 41.79 | 7.11 | -4.68 |
| | SCoT | Rge | 41.86 | 1.99 | 21.94 | 28.89 | 40.70 | 49.53 | 47.36 | 47.47 | 45.68 | 44.84 | 41.43 | 50.11 | 44.06 | 7.67 | 8.83 |
| | TCoT | Rge | 41.71 | 3.18 | 15.26 | 39.52 | 46.19 | 48.28 | 46.07 | 51.74 | 47.63 | 48.83 | 45.71 | 54.28 | 52.07 | 6.57 | 2.09 |
| RealHitBench | FC | EM | 58.83 | 33.44 | 30.32 | 0.00 | 43.06 | 63.85 | 62.36 | 65.00 | 60.23 | 66.31 | 55.22 | 65.08 | 56.53 | 5.01 | 20.79 |
| | NR | EM | 39.43 | 13.36 | 14.53 | 0.00 | 24.90 | 49.03 | 43.70 | 47.34 | 46.95 | 55.38 | 38.91 | 52.53 | 31.25 | 9.60 | 24.13 |
| | SC | EM | 64.12 | 53.28 | 35.90 | 28.50 | 34.86 | 64.12 | 73.02 | 71.76 | 69.47 | 76.08 | 61.83 | 71.25 | 62.85 | 0.00 | 29.26 |
| | DA | GPT | 53.28 | 47.86 | 60.12 | 36.24 | 53.16 | 66.53 | 63.03 | 66.67 | 53.27 | 64.99 | 55.54 | 66.29 | 62.04 | 13.25 | 13.37 |
| | CG | ECR | 24.67 | 22.73 | 13.64 | 16.00 | 44.16 | 55.84 | 23.38 | 25.00 | 20.78 | 20.13 | 34.42 | 18.18 | 48.05 | 31.17 | 11.68 |
| **Agent-based Data Analysis** | | | | | | | | | | | | | | | | | |
| InfiAgent-DA | | Acc | 56.81 | 11.67 | 55.08 | 70.82 | 73.15 | 80.54 | 59.92 | 54.86 | 41.63 | 37.74 | 87.10 | 77.43 | 67.32 | 23.73 | 7.39 |
Citation
If you find our work helpful, please cite us by
@misc{yang2025tablegptr1advancingtabularreasoning,
title={TableGPT-R1: Advancing Tabular Reasoning Through Reinforcement Learning},
author={Saisai Yang and Qingyi Huang and Jing Yuan and Liangyu Zha and Kai Tang and Yuhang Yang and Ning Wang and Yucheng Wei and Liyao Li and Wentao Ye and Hao Chen and Tao Zhang and Junlin Zhou and Haobo Wang and Gang Chen and Junbo Zhao},
year={2025},
eprint={2512.20312},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2512.20312},
}