Update README.md
README.md CHANGED
@@ -1,13 +1,13 @@
 ---
-license:
+license: apache-2.0
 inference: false
 ---
 
-# SLIM-SUMMARY
+# SLIM-SUMMARY-TINY
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-**slim-summary** is a small, specialized model finetuned for summarize function-calls, generating output consisting of a python list of distinct summary points.
+**slim-summary-tiny** is a small, specialized model fine-tuned for 'summarize' function calls, generating output consisting of a Python list of distinct summary points.
 
 As an experimental feature, an optional list size can be passed with the parameters when invoking the model, to guide it to a specific number of response elements.
 
@@ -15,9 +15,9 @@ Input is a text passage, and output is a list of the form:
 
 `['summary_point1', 'summary_point2', 'summary_point3']`
 
-This model is
+This model has 1.1B parameters, is small enough to run on a CPU, and is fine-tuned on top of a tiny-llama base.
 
-For fast inference use of this model, we would recommend using the 'quantized tool' version, e.g., [**'slim-summary-tool'**](https://huggingface.co/llmware/slim-summary-tool).
+For fast inference use of this model, we recommend the 'quantized tool' version, e.g., [**'slim-summary-tiny-tool'**](https://huggingface.co/llmware/slim-summary-tiny-tool).
 
 ## Usage Tips
 
@@ -41,8 +41,8 @@ For fast inference use of this model, we would recommend using the 'quantized to
 <details>
 <summary>Transformers Script</summary>
 
-model = AutoModelForCausalLM.from_pretrained("llmware/slim-summary")
-tokenizer = AutoTokenizer.from_pretrained("llmware/slim-summary")
+model = AutoModelForCausalLM.from_pretrained("llmware/slim-summary-tiny")
+tokenizer = AutoTokenizer.from_pretrained("llmware/slim-summary-tiny")
 
 function = "summarize"
 params = "key points (3)"
 
@@ -87,7 +87,7 @@ For fast inference use of this model, we would recommend using the 'quantized to
 <summary>Using as Function Call in LLMWare</summary>
 
 from llmware.models import ModelCatalog
-slim_model = ModelCatalog().load_model("llmware/slim-summary")
+slim_model = ModelCatalog().load_model("llmware/slim-summary-tiny")
 response = slim_model.function_call(text, params=["key points (3)"], function="summarize")
 
 print("llmware - llm_response: ", response)
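The README describes the model's answer as the text of a Python list (e.g. `['summary_point1', 'summary_point2', 'summary_point3']`), with an optional list size in the `params` string to request a specific number of points. A minimal sketch of handling that, assuming the model has already produced its string output — the helper names and the sample output below are illustrative, not part of the model card:

```python
import ast


def make_params(n: int) -> str:
    # Build the "key points (n)" parameter string from the README,
    # using the optional list size to request n response elements.
    return f"key points ({n})"


def parse_summary(raw: str) -> list:
    # The model emits its answer as the text of a Python list; parse it
    # safely with ast.literal_eval rather than eval, and fall back to
    # wrapping the raw text if the output is not a well-formed list.
    try:
        result = ast.literal_eval(raw.strip())
        return result if isinstance(result, list) else [raw.strip()]
    except (ValueError, SyntaxError):
        return [raw.strip()]


# Hypothetical model output, for illustration only
raw_output = "['revenue grew 10%', 'margins were flat', 'guidance was raised']"
points = parse_summary(raw_output)
print(make_params(3))  # key points (3)
print(points[0])       # revenue grew 10%
```

The fallback matters in practice: a small model occasionally emits malformed list text, and `ast.literal_eval` raises rather than executing arbitrary code, so the caller always gets a list back.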