---
datasets:
- Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
library_name: transformers
tags:
- trl
- text-generation-inference
- llama
- distill
- experimental
---

![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/6IACMTfvjkw6sQI7swljn.png)

# **Regulus-Qwen3-R1-Llama-Distill-1.7B**

> **Regulus-Qwen3-R1-Llama-Distill-1.7B** is a **distilled reasoning model** built on **Qwen/Qwen3-1.7B** and fine-tuned on the **Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B** dataset.
> Training leverages **distilled traces from DeepSeek-R1-Llama-70B**, transferring advanced reasoning patterns into a lightweight 1.7B-parameter model.
> It specializes in **chain-of-thought reasoning across code, math, and science**, and is optimized for efficiency and mid-resource deployment.

> [!NOTE]
> GGUF: [https://huggingface.co/prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B-GGUF](https://huggingface.co/prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B-GGUF)

---

## **Key Features**

1. **Distilled Reasoning from Large-Scale Models**
   Trained with **distilled traces from DeepSeek-R1-Llama-70B**, preserving structured **chain-of-thought reasoning** in a smaller, faster model.

2. **Unified Code + Math + Science Reasoning**
   Strong performance across computational logic, programming tasks, and scientific problem solving.

3. **Structured Chain-of-Thought Generation**
   Produces clear, step-by-step explanations for algorithms, equations, and symbolic tasks.

4. **Optimized Lightweight Footprint**
   Maintains reasoning depth while being deployable on **mid-range GPUs**, **offline clusters**, and **edge AI systems**.

5. **Multi-Format Output Support**
   Generates responses in **LaTeX**, **Markdown**, **JSON**, and **tabular formats** for technical and research workflows (see the structured-output sketch after the Quickstart).

---

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B"

# Load the model and tokenizer; device_map="auto" places weights on the
# available device(s), and torch_dtype="auto" uses the checkpoint's native dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain step by step how to solve a system of linear equations using Gaussian elimination."

messages = [
    {"role": "system", "content": "You are a reasoning assistant skilled in math, code, and scientific logic."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and append the
# assistant turn marker so generation starts from the model's reply.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated reply is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
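To exercise the multi-format output support listed under Key Features, you can steer the model toward a specific format through the system prompt. The snippet below is a minimal sketch reusing the Quickstart setup to request a JSON answer; the prompt wording, the `steps`/`answer` schema, and the token budget are illustrative assumptions rather than settings shipped with the model, and the output is not guaranteed to be strictly valid JSON, so the example parses defensively.

```python
import json

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Ask for JSON explicitly in the system prompt; this schema is an
# illustrative assumption, not one the model was trained to follow.
messages = [
    {"role": "system", "content": "Answer with a single JSON object using the keys "
                                  "'steps' (list of strings) and 'answer' (string)."},
    {"role": "user", "content": "Solve 2x + 3 = 11 and show your steps."},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
reply = tokenizer.batch_decode(
    [output_ids[0][inputs.input_ids.shape[1]:]], skip_special_tokens=True
)[0]

# The model may wrap the JSON in prose or a code fence, so extract the
# outermost braces and fall back to raw text if parsing fails.
try:
    start, end = reply.index("{"), reply.rindex("}") + 1
    parsed = json.loads(reply[start:end])
    print(parsed["answer"])
except (ValueError, KeyError):
    print(reply)
```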

---

## **Intended Use**

* **Math and algorithm tutoring** with clear reasoning steps
* **Code reasoning and synthesis** for debugging and algorithm design
* **Scientific problem solving** in physics, chemistry, and biology
* **Structured educational assistant** for step-by-step learning
* **Efficient deployment** where distilled reasoning fidelity is required (see the quantized-loading sketch below)
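
For constrained GPUs, one common option is 4-bit quantization via `bitsandbytes`. The sketch below is an assumed deployment setup, not a configuration published with this model; it requires the `bitsandbytes` package and a CUDA GPU, and quantization can slightly degrade reasoning quality.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Regulus-Qwen3-R1-Llama-Distill-1.7B"

# NF4 4-bit quantization: weights are stored in 4 bits and dequantized to
# bfloat16 for compute, roughly quartering the memory footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

For CPU or edge deployment, the GGUF build linked above can be used with llama.cpp-compatible runtimes instead.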

## **Limitations**

* Trained on **distilled traces**, so reasoning may be simpler than that of the full-scale teacher model
* Not tuned for general-purpose conversation or creative writing
* Context length limits multi-document or long-codebase reasoning
* Optimized for structured reasoning, not emotional or casual dialogue