---
datasets:
- gsm8k
tags:
- deepsparse
---
# mpt-7b-gsm8k-pruned80-quant

**Paper**: [https://arxiv.org/pdf/xxxxxxx.pdf](https://arxiv.org/pdf/xxxxxxx.pdf)

**Code**: https://github.com/neuralmagic/deepsparse/tree/main/research/mpt

This model was produced from an [MPT-7B base model](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pt) finetuned on the GSM8k dataset, pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), and retrained for 4 epochs with L2 distillation. It was then exported for optimized inference with [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).

GSM8k zero-shot accuracy with [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness): 21.08% (the FP32 baseline is 28.2%).

### Usage

```python
from deepsparse import TextGeneration

model_path = "hf:neuralmagic/mpt-7b-gsm8k-pruned80-quant"  # or use a SparseZoo stub (zoo:mpt-7b-gsm8k_mpt_pretrain-pruned80_quantized)
model = TextGeneration(model=model_path)
model("There are twice as many boys as girls at Dr. Wertz's school. If there are 60 girls and 5 students to every teacher, how many teachers are there?", max_new_tokens=50)
```
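
The pipeline call above returns a structured result rather than a plain string. As a minimal sketch (assuming the output schema of recent DeepSparse releases, where completions are exposed under `generations[0].text`), the generated answer can be read back like this:

```python
from deepsparse import TextGeneration

# Same model as above; a SparseZoo stub works equally well here.
model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned80-quant")

prompt = (
    "There are twice as many boys as girls at Dr. Wertz's school. "
    "If there are 60 girls and 5 students to every teacher, how many teachers are there?"
)

# The call returns a pipeline output object; `generations[0].text` is assumed here
# and may differ across DeepSparse versions.
result = model(prompt, max_new_tokens=50)
print(result.generations[0].text)
```

If the attribute layout differs in your installed version, printing `result` directly shows the available fields.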

All MPT model weights are available on [SparseZoo](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true), and the CPU speedup for generative inference can be reproduced by following the instructions at [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).
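
The linked instructions are the authoritative way to reproduce the reported speedups; as a rough local sanity check, you can also time the pipeline call directly. The snippet below is an illustrative sketch only, and the tokens-per-second figure is approximated from `max_new_tokens` rather than the true generated length:

```python
import time

from deepsparse import TextGeneration

model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned80-quant")
prompt = (
    "There are twice as many boys as girls at Dr. Wertz's school. "
    "If there are 60 girls and 5 students to every teacher, how many teachers are there?"
)

max_new_tokens = 64
start = time.perf_counter()
model(prompt, max_new_tokens=max_new_tokens)
elapsed = time.perf_counter() - start

# Approximate decode throughput; assumes the full token budget was generated.
print(f"{elapsed:.2f}s elapsed, ~{max_new_tokens / elapsed:.1f} tokens/sec")
```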

| Model Links | Compression |
| --------------------------------------------------------------------------------------------------------- | --------------------------------- |
| [neuralmagic/mpt-7b-gsm8k-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-quant) | Quantization (W8A8) |
| [neuralmagic/mpt-7b-gsm8k-pruned40-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned40-quant) | Quantization (W8A8) & 40% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned50-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned50-quant) | Quantization (W8A8) & 50% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned60-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned60-quant) | Quantization (W8A8) & 60% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned70-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned70-quant) | Quantization (W8A8) & 70% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned75-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned75-quant) | Quantization (W8A8) & 75% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned80-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned80-quant) | Quantization (W8A8) & 80% Pruning |

For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).