Update README.md
README.md (CHANGED)
@@ -30,7 +30,7 @@ library_name: transformers
 
 # News 📢
 
-
+- 🔧`2025/10/29`: Added support for function calling on vLLM with Mi:dm 2.0 parser.
 - 📕`2025/08/08`: Published a technical blog article about Mi:dm 2.0 Model.
 - ⚡️`2025/07/04`: Released Mi:dm 2.0 Model collection on Hugging Face🤗.
 <br>
@@ -528,11 +528,33 @@ We provide a detailed description about running Mi:dm 2.0 on your local machine
 
 ## Deployment
 
+#### Basic Serving
+
 To serve Mi:dm 2.0 using [vLLM](https://github.com/vllm-project/vllm) (`>=0.8.0`) with an OpenAI-compatible API:
 ```bash
 vllm serve K-intelligence/Midm-2.0-Base-Instruct
 ```
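
Once the server is up, it exposes the standard OpenAI-compatible REST API. As a quick smoke test (a minimal sketch, assuming vLLM's default port `8000` and no API key configured):

```bash
# Query the OpenAI-compatible /v1/chat/completions route of the
# locally served model (vLLM listens on port 8000 by default).
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "K-intelligence/Midm-2.0-Base-Instruct",
        "messages": [{"role": "user", "content": "Introduce Mi:dm 2.0 in one sentence."}]
      }'
```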
 
+#### With Function Calling
+
+For advanced function calling tasks, you can serve Mi:dm 2.0 with our custom tool parser:
+1. Download the [Mi:dm 2.0 parser file](https://github.com/K-intelligence-Midm/Midm-2.0/blob/main/tutorial/03_open-webui/modelfile/midm_parser.py) and place it in your working directory.
+2. Run the following Docker command to launch the vLLM server with the custom parser:
+```bash
+docker run --rm -it --gpus all -p 8000:8000 \
+  -e HUGGING_FACE_HUB_TOKEN="<YOUR_HUGGINGFACE_TOKEN>" \
+  -v "$(pwd)/midm_parser.py:/custom/midm_parser.py" \
+  vllm/vllm-openai:v0.11.0 \
+  --model K-intelligence/Midm-2.0-Base-Instruct \
+  --enable-auto-tool-choice \
+  --tool-parser-plugin /custom/midm_parser.py \
+  --tool-call-parser midm-parser \
+  --host 0.0.0.0
+```
+
+> [!NOTE]
+> This setup is compatible with `vllm/vllm-openai:v0.8.0` and later, but we strongly recommend `v0.11.0` for optimal stability and compatibility with the parser.
+
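
To verify the parser end to end, send a request that includes a `tools` array; with `--enable-auto-tool-choice`, the server lets the model decide when to call a tool and the parser converts its output into structured `tool_calls`. A minimal sketch, assuming the Docker server above is listening on `localhost:8000` and using a hypothetical `get_weather` function:

```bash
# Tool-enabled chat request; the get_weather schema below is
# illustrative only, not part of the Mi:dm 2.0 release.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "K-intelligence/Midm-2.0-Base-Instruct",
        "messages": [{"role": "user", "content": "What is the weather like in Seoul right now?"}],
        "tools": [{
          "type": "function",
          "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given city",
            "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string", "description": "City name"}},
              "required": ["city"]
            }
          }
        }]
      }'
# A successful tool call appears in choices[0].message.tool_calls
# rather than as plain text content.
```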
 
 ## Tutorials
 To help end users get started with Mi:dm 2.0 easily, we provide comprehensive tutorials on [GitHub](https://github.com/K-intelligence-Midm/Midm-2.0).