Update README.md
README.md CHANGED
@@ -9,10 +9,13 @@ tags:
 - 7b
 - llama
 - 4bit
+- quantization
 ---
 
 # Get Started
 This model should use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) so you need to use `auto-gptq`
+- `no-act-order` model
+- 4bit model quantization
 
 ```py
 from transformers import AutoTokenizer, pipeline, LlamaForCausalLM, LlamaTokenizer
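
The `py` block in the hunk above is cut off right after the imports. As a minimal sketch (not the model card's official example), loading a 4-bit GPTQ checkpoint with `auto-gptq` typically looks like the following; the repository id is a placeholder, and options such as `use_safetensors` depend on how this checkpoint was published:

```py
# Minimal sketch, assuming the checkpoint is a GPTQ 4-bit quantization loadable by auto-gptq.
# "path/to/this-4bit-llama" is a placeholder, not this model's real repo id.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "path/to/this-4bit-llama"  # placeholder: replace with the actual Hub id or a local path

# Tokenizer shipped alongside the quantized weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# from_quantized loads the pre-quantized 4-bit weights onto the first GPU.
# Pass use_safetensors=True if the checkpoint was saved in safetensors format.
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")

# Generate directly through the wrapped model.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The original example also imports `pipeline`, `LlamaForCausalLM`, and `LlamaTokenizer`; since the hunk ends before they are used, the sketch sticks to the documented `AutoGPTQForCausalLM.from_quantized` entry point rather than guessing at the card's full snippet.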