Update README.md

README.md
| Chinese | 59.0% |
| Japanese | 59.0% |

## Usage

The model can be used with three different frameworks:

- [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`NeMo`](https://github.com/NVIDIA/NeMo): See [nvidia/Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct)

### Mistral Inference

#### Install

It is recommended to use `mistralai/Mistral-Nemo-Instruct-2407` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.

```
pip install mistral_inference
```

#### Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

# Download location used by the `mistral-chat` examples below.
mistral_models_path = Path.home().joinpath("mistral_models", "Nemo-Instruct")
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-Nemo-Instruct-2407", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

#### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using

```
mistral-chat $HOME/mistral_models/Nemo-Instruct --instruct --max_tokens 256 --temperature 0.35
```

For example, try a prompt like:

```
How expensive would it be to ask a window cleaner to clean all windows in Paris? Make a reasonable guess in US dollars.
```

#### Instruct following

```py
from mistral_inference.transformer import Transformer
# ...
result = tokenizer.decode(out_tokens[0])
print(result)
```
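
The diff view elides the body of this snippet. Below is a minimal sketch of the complete example, assuming the standard `mistral_common` chat-completion API and reusing `mistral_models_path` from the download step; the prompt and generation parameters are illustrative:

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Load the tokenizer (tekken.json) and the model weights downloaded above.
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

prompt = "How expensive would it be to ask a window cleaner to clean all windows in Paris? Make a reasonable guess in US dollars."

# Tokenize the chat request, generate, and decode the completion.
completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])
tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate(
    [tokens], model, max_tokens=128, temperature=0.35,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
result = tokenizer.decode(out_tokens[0])

print(result)
```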

#### Function calling

```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
# ...
result = tokenizer.decode(out_tokens[0])
print(result)
```
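
This snippet's body is also elided in the diff. A sketch of a tool-use request under the same API assumptions; the `get_current_weather` function and its JSON-schema parameters are purely illustrative:

```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

# Declare a hypothetical tool the model may call, described by a JSON schema.
weather_tool = Tool(
    function=Function(
        name="get_current_weather",
        description="Get the current weather",
        parameters={
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "format": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "The temperature unit to use.",
                },
            },
            "required": ["location", "format"],
        },
    )
)

# Pass the tool alongside the user message; the model may answer with a tool call.
completion_request = ChatCompletionRequest(
    tools=[weather_tool],
    messages=[UserMessage(content="What's the weather like today in Paris?")],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate(
    [tokens], model, max_tokens=256, temperature=0.35,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
result = tokenizer.decode(out_tokens[0])

print(result)
```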

### Transformers

> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
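
The command itself falls outside this hunk; the standard from-source install (an assumption, not shown in the diff) would be:

```
pip install git+https://github.com/huggingface/transformers.git
```

A minimal usage sketch with the `transformers` chat pipeline; the message content and `max_new_tokens` value are illustrative:

```py
from transformers import pipeline

# Chat-style text generation with the instruct checkpoint.
chatbot = pipeline(
    "text-generation",
    model="mistralai/Mistral-Nemo-Instruct-2407",
    max_new_tokens=128,
)
messages = [{"role": "user", "content": "Who are you?"}]
print(chatbot(messages))
```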