Update README.md
README.md CHANGED
@@ -8,6 +8,10 @@ tags:
 - code
 ---
 
+# Paper Page
+
+[**Pruning the Unsurprising: Efficient Code Reasoning via First-Token Surprisal.**](https://arxiv.org/abs/2508.05988)
+
 # LogicCoder-7B
 
 **LogicCoder-7B** is a 7B-parameter language model fine-tuned for code generation tasks. It is based on the DeepSeek-R1-Distill-Qwen-7B model and trained on a Python subset of the open-r1/codeforces-cots dataset.
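For readers of the updated card, a minimal generation sketch is shown below. It assumes the model is published under a Hugging Face repository id (the placeholder `LogicCoder-7B` is hypothetical; the card does not state the actual id) and that it exposes the standard `transformers` causal-LM interface inherited from DeepSeek-R1-Distill-Qwen-7B.

```python
# Minimal usage sketch -- the repo id below is a placeholder, not confirmed by the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LogicCoder-7B"  # hypothetical; replace with the actual Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the `accelerate` package
)

# Ask the model for a short Python solution, mirroring its Codeforces-style training data.
prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```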