---
base_model: tencent/Hunyuan-A13B-Instruct
license: other
license_name: tencent-hunyuan-a13b
license_link: https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/LICENSE
library_name: transformers
---

<p align="center">
 <img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>

<p align="center">
 🤗&nbsp;<a href="https://huggingface.co/tencent/Hunyuan-A13B-Instruct"><b>Hugging Face</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 🖥️&nbsp;<a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 🕖&nbsp;<a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 🕹️&nbsp;<a href="https://hunyuan.tencent.com/?model=hunyuan-a13b"><b>Demo</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
 🤖&nbsp;<a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct"><b>ModelScope</b></a>
</p>

<p align="center">
 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/report/Hunyuan_A13B_Technical_Report.pdf"><b>Technical Report</b></a> |
 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B"><b>GITHUB</b></a> |
 <a href="https://cnb.cool/tencent/hunyuan/Hunyuan-A13B"><b>cnb.cool</b></a> |
 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/LICENSE"><b>LICENSE</b></a> |
 <a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
 <a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>

Welcome to the official repository of **Hunyuan-A13B**, an innovative and open-source large language model (LLM) built on a fine-grained Mixture-of-Experts (MoE) architecture. Designed for efficiency and scalability, Hunyuan-A13B delivers cutting-edge performance with minimal computational overhead, making it an ideal choice for advanced reasoning and general-purpose applications, especially in resource-constrained environments.

## Model Introduction

With the rapid advancement of artificial intelligence technology, large language models (LLMs) have achieved remarkable progress in natural language processing, computer vision, and scientific tasks. However, as model scales continue to expand, optimizing resource consumption while maintaining high performance has become a critical challenge. To address this, we have explored Mixture of Experts (MoE) architectures. The newly introduced Hunyuan-A13B model features a total of 80 billion parameters with 13 billion active parameters. It not only delivers high-performance results but also achieves optimal resource efficiency, successfully balancing computational power and resource utilization.

### Key Features and Advantages

- **Compact yet Powerful**: With only 13 billion active parameters (out of a total of 80 billion), the model delivers competitive performance on a wide range of benchmark tasks, rivaling much larger models.
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.

### Why Choose Hunyuan-A13B?

As a powerful yet computationally efficient large model, Hunyuan-A13B is an ideal choice for researchers and developers seeking high performance under resource constraints. Whether for academic research, cost-effective AI solution development, or innovative application exploration, this model provides a robust foundation for advancement.

&nbsp;

## Related News
* 2025.6.27 We have open-sourced **Hunyuan-A13B-Pretrain**, **Hunyuan-A13B-Instruct**, **Hunyuan-A13B-Instruct-FP8**, **Hunyuan-A13B-Instruct-GPTQ-Int4** on Hugging Face. In addition, we have released a <a href="report/Hunyuan_A13B_Technical_Report.pdf">technical report</a> and a training and inference operation manual, which provide detailed information about the model's capabilities as well as the operations for training and inference.

<br>

## Benchmark

Note: The following benchmarks are evaluated with the TRT-LLM backend on several **base models**.

| Model            | Hunyuan-Large | Qwen2.5-72B | Qwen3-A22B | Hunyuan-A13B |
|------------------|---------------|-------------|------------|--------------|
| MMLU             | 88.40         | 86.10       | 87.81      | 88.17        |
| MMLU-Pro         | 60.20         | 58.10       | 68.18      | 67.23        |
| MMLU-Redux       | 87.47         | 83.90       | 87.40      | 87.67        |
| BBH              | 86.30         | 85.80       | 88.87      | 87.56        |
| SuperGPQA        | 38.90         | 36.20       | 44.06      | 41.32        |
| EvalPlus         | 75.69         | 65.93       | 77.60      | 78.64        |
| MultiPL-E        | 59.13         | 60.50       | 65.94      | 69.33        |
| MBPP             | 72.60         | 76.00       | 81.40      | 83.86        |
| CRUX-I           | 57.00         | 57.63       | -          | 70.13        |
| CRUX-O           | 60.63         | 66.20       | 79.00      | 77.00        |
| MATH             | 69.80         | 62.12       | 71.84      | 72.35        |
| CMATH            | 91.30         | 84.80       | -          | 91.17        |
| GSM8k            | 92.80         | 91.50       | 94.39      | 91.83        |
| GPQA             | 25.18         | 45.90       | 47.47      | 49.12        |


Hunyuan-A13B-Instruct achieves highly competitive performance across multiple benchmarks, particularly in mathematics, science, and agent domains. We compared it with several powerful models, and the results are shown below.

| Topic | Bench | OpenAI-o1-1217 | DeepSeek R1 | Qwen3-A22B | Hunyuan-A13B-Instruct |
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 74.3<br>79.2<br>96.4 | 79.8<br>70<br>94.9 | 85.7<br>81.5<br>94.0 | 87.3<br>76.8<br>94.3 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 78<br>83.1 | 71.5<br>82.4 | 71.1<br>85.7 | 71.2<br>82.7 |
| **Coding** | Livecodebench<br>Fullstackbench<br>ArtifactsBench | 63.9<br>64.6<br>38.6 | 65.9<br>71.6<br>44.6 | 70.7<br>65.6<br>44.6 | 63.9<br>67.8<br>43 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 80.4<br>90.2<br>81 | 83.7<br>92.2<br>78.7 | 88.9<br>90.3<br>80.3 | 89.1<br>91.1<br>84.7 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 91.8<br>82.5 | 88.3<br>77.7 | 83.4<br>74.2 | 84.7<br>76.1 |
| **Text<br>Creation**| LengthCtrl<br>InsCtrl | 60.1<br>74.8 | 55.9<br>69 | 53.3<br>73.7 | 55.4<br>71.9 |
| **NLU** | ComplexNLU<br>Word-Task | 64.7<br>67.1 | 64.5<br>76.3 | 59.8<br>56.4 | 61.2<br>62.9 |
| **Agent** | BFCL v3<br>τ-Bench<br>ComplexFuncBench<br>C3-Bench | 67.8<br>60.4<br>47.6<br>58.8 | 56.9<br>43.8<br>41.1<br>55.3 | 70.8<br>44.6<br>40.6<br>51.7 | 78.3<br>54.7<br>61.2<br>63.5 |


&nbsp;

## Use with transformers

Our model defaults to slow-thinking reasoning, and there are two ways to disable CoT reasoning.
1. Pass "enable_thinking=False" when calling apply_chat_template.
2. Adding "/no_think" before the prompt will force the model not to perform CoT reasoning. Similarly, adding "/think" before the prompt will force the model to perform CoT reasoning.

The following code snippet shows how to use the transformers library to load and run the model.
It also demonstrates how to enable and disable the reasoning mode,
and how to parse the reasoning process along with the final output.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re

model_name_or_path = os.environ['MODEL_PATH']
# model_name_or_path = "tencent/Hunyuan-A13B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True)  # You may want to use bfloat16 and/or move to GPU here
messages = [
    {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=True  # slow-thinking (reasoning) mode, the default
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
model_inputs.pop("token_type_ids", None)
outputs = model.generate(**model_inputs, max_new_tokens=4096)

output_text = tokenizer.decode(outputs[0])

# The reasoning is wrapped in <think>...</think> and the final reply in <answer>...</answer>
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)

answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)

# Guard against missing tags (e.g. when thinking is disabled)
think_content = think_matches[0].strip() if think_matches else ""
answer_content = answer_matches[0].strip() if answer_matches else ""
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```

### Fast and slow thinking switch

This model supports two modes of operation:

- Slow Thinking Mode (Default): Enables detailed internal reasoning steps before producing the final answer.
- Fast Thinking Mode: Skips the internal reasoning process for faster inference, going straight to the final answer.

**Switching to Fast Thinking Mode:**

To disable the reasoning process, set `enable_thinking=False` in the apply_chat_template call:

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=False
)
```
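
As an alternative to `enable_thinking=False`, the `/no_think` prefix described earlier can be placed at the start of the prompt (and `/think` forces reasoning on). Below is a minimal sketch of this prompt-prefix switch, reusing the `tokenizer` and `model` from the example above; the generation settings are illustrative.

```python
# Minimal sketch: control thinking via the prompt prefix instead of the
# enable_thinking flag. Reuses `tokenizer` and `model` from the example above.
messages = [
    # "/no_think" at the start of the prompt disables CoT for this request;
    # use "/think" to force reasoning instead.
    {"role": "user", "content": "/no_think Write a short summary of the benefits of regular exercise"},
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=True  # the "/no_think" prefix takes precedence for this prompt
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
model_inputs.pop("token_type_ids", None)
outputs = model.generate(**model_inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0]))
```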

## Deployment

For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.

Pre-built Docker images are available on Docker Hub: https://hub.docker.com/r/hunyuaninfer/hunyuan-a13b/tags

### TensorRT-LLM

#### Docker Image

We provide a pre-built Docker image based on the latest version of TensorRT-LLM.

- To get started, download the Docker image:

**From Docker Hub:**
```
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
```

**From the China mirror (thanks to [CNB](https://cnb.cool/ "CNB.cool")):**

First, pull the image from CNB:
```
docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-trtllm
```

Then, rename the image to match the scripts below:
```
docker tag docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-trtllm hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
```

- Start the Docker container:

```
docker run --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
```

- Prepare the configuration file:

```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```

- Start the API server:

```
trtllm-serve \
  /path/to/HunYuan-moe-A13B \
  --host localhost \
  --port 8000 \
  --backend pytorch \
  --max_batch_size 32 \
  --max_num_tokens 16384 \
  --tp_size 2 \
  --kv_cache_free_gpu_memory_fraction 0.6 \
  --trust_remote_code \
  --extra_llm_api_options /path/to/extra-llm-api-config.yml
```

### vLLM

#### Inference from Docker Image
We provide a pre-built Docker image containing vLLM 0.8.5 with full support for this model. Official vLLM support is still in progress; **note: CUDA 12.4 is required for this Docker image**.

- To get started, download the Docker image:

**From Docker Hub:**
```
docker pull hunyuaninfer/hunyuan-infer-vllm-cuda12.4:v1
```

**From the China mirror (thanks to [CNB](https://cnb.cool/ "CNB.cool")):**

First, pull the image from CNB:
```
docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b/hunyuan-infer-vllm-cuda12.4:v1
```

Then, rename the image to match the scripts below:
```
docker tag docker.cnb.cool/tencent/hunyuan/hunyuan-a13b/hunyuan-infer-vllm-cuda12.4:v1 hunyuaninfer/hunyuan-infer-vllm-cuda12.4:v1
```

- Download the model files:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-A13B-Instruct`

- Start the API server:

Model downloaded from Hugging Face:
```
docker run --rm --ipc=host \
    -v ~/.cache:/root/.cache/ \
    --security-opt seccomp=unconfined \
    --net=host \
    --gpus=all \
    -it \
    --entrypoint python3 hunyuaninfer/hunyuan-infer-vllm-cuda12.4:v1 \
    -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --tensor-parallel-size 4 \
    --port 8000 \
    --model tencent/Hunyuan-A13B-Instruct \
    --trust_remote_code
```

Model downloaded from ModelScope:
```
docker run --rm --ipc=host \
    -v ~/.cache/modelscope:/root/.cache/modelscope \
    --security-opt seccomp=unconfined \
    --net=host \
    --gpus=all \
    -it \
    --entrypoint python3 hunyuaninfer/hunyuan-infer-vllm-cuda12.4:v1 \
    -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --tensor-parallel-size 4 \
    --port 8000 \
    --model /root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct/ \
    --trust_remote_code
```
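
Once the server is up, it can be queried like any OpenAI-compatible endpoint. The following is a minimal client sketch, assuming the `openai` Python package (v1+) is installed, the server above is reachable on `localhost:8000`, and the model name matches the `--model` value used at launch; adjust these as needed.

```python
# Minimal client sketch: query the OpenAI-compatible endpoint started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="tencent/Hunyuan-A13B-Instruct",  # must match the --model flag used when launching the server
    messages=[
        {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
    ],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```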

### Source Code
Support for this model was added via [PR 20114](https://github.com/vllm-project/vllm/pull/20114) in the vLLM project; the patch was merged by the community on July 1, 2025.

You can build and run vLLM from source using any commit after `ecad85`.

### Model Context Length Support

The Hunyuan A13B model supports a maximum context length of **256K tokens (262,144 tokens)**. However, due to GPU memory constraints on most hardware setups, the default configuration in `config.json` limits the context length to **32K tokens** to prevent out-of-memory (OOM) errors.

#### Extending Context Length to 256K

To enable full 256K context support, you can manually modify the `max_position_embeddings` field in the model's `config.json` file as follows:

```json
{
  ...
  "max_position_embeddings": 262144,
  ...
}
```

When serving the model using **vLLM**, you can also explicitly set the maximum model length by adding the following flag to your server launch command:

```bash
--max-model-len 262144
```

#### Recommended Configuration for 256K Context Length

The following configuration is recommended for deploying the model with 256K context length support on systems equipped with **NVIDIA H20 GPUs (96GB VRAM)**:

| Model DType | KV-Cache Dtype | Number of Devices | Model Length |
|-------------|----------------|-------------------|--------------|
| `bfloat16`  | `bfloat16`     | 4                 | 262,144      |

> ⚠️ **Note:** Using FP8 quantization for the KV-cache may impact generation quality. The above settings are suggested configurations for stable 256K-length service deployment.
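
For reference, the sketch below shows how the configuration from the table above might be expressed with vLLM's offline `LLM` API. It is an illustrative example under the stated assumptions (4 GPUs, bfloat16 weights, KV-cache left in the model dtype via `kv_cache_dtype="auto"`), not an official recipe; verify the arguments against the vLLM version inside the Docker image.

```python
# Illustrative sketch: offline vLLM inference with the recommended 256K setup.
from vllm import LLM, SamplingParams

llm = LLM(
    model="tencent/Hunyuan-A13B-Instruct",
    trust_remote_code=True,
    dtype="bfloat16",          # model dtype from the table above
    kv_cache_dtype="auto",     # "auto" keeps the KV-cache in the model dtype (bfloat16 here)
    tensor_parallel_size=4,    # 4 devices, per the recommended configuration
    max_model_len=262144,      # full 256K context
)

sampling = SamplingParams(max_tokens=1024, temperature=0.7)
outputs = llm.generate(["Summarize the key ideas of this very long document ..."], sampling)
print(outputs[0].outputs[0].text)
```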

#### Tool Calling with vLLM

To support agent-based workflows and function calling capabilities, this model includes specialized parsing mechanisms for handling tool calls and internal reasoning steps.

For a complete working example of how to implement and use these features in an agent setting, please refer to our full agent implementation on GitHub:
🔗 [Hunyuan A13B Agent Example](https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/agent/)

When deploying the model using **vLLM**, the following parameters can be used to configure the tool parsing behavior:

| Parameter | Value |
|--------------------------|-----------------------------------------------------------------------|
| `--tool-parser-plugin`   | [Local Hunyuan A13B Tool Parser File](https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/agent/hunyuan_tool_parser.py) |
| `--tool-call-parser`     | `hunyuan` |

These settings enable vLLM to correctly interpret and route tool calls generated by the model according to the expected format.
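
As an illustration, a client can exercise this setup with a standard OpenAI-style `tools` request once the server has been launched with the parser options above. The sketch below assumes the `openai` Python package and a server on `localhost:8000`; the `get_weather` tool schema is a made-up example, not part of the model release.

```python
# Illustrative sketch: OpenAI-style function calling against a vLLM server
# started with the Hunyuan tool-parser options above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Hypothetical tool definition used only for this example.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="tencent/Hunyuan-A13B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Shenzhen today?"}],
    tools=tools,
)
# If the model decides to call the tool, the parsed call shows up here.
print(response.choices[0].message.tool_calls)
```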

### Reasoning parser

vLLM reasoning parser support for the Hunyuan A13B model is under development.

### SGLang

#### Docker Image

We also provide a pre-built Docker image based on the latest version of SGLang.

To get started:

- Pull the Docker image:

```
docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-sglang
# or
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-sglang
```

- Start the API server:

```
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    --ipc=host \
    docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-sglang \
    -m sglang.launch_server --model-path hunyuan/huanyuan_A13B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```

## Contact Us

If you would like to leave a message for our R&D and product teams, feel free to contact our open-source team. You can also reach us via email ([email protected]).