update readme.md
README.md CHANGED
@@ -4,9 +4,9 @@ language:
 - en
 ---

-# RedPajama-Base-
+# RedPajama-INCITE-Base-7B-v0.1

-RedPajama-Base-
+RedPajama-INCITE-Base-7B-v0.1 is a large transformer-based language model developed by Together Computer and trained on the RedPajama-Data-1T dataset.

 ## Model Details
 - **Developed by**: Together Computer.
@@ -34,8 +34,8 @@ MIN_TRANSFORMERS_VERSION = '4.25.1'
 assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

 # init
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-Base-
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-Base-
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-7B-v0.1")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-7B-v0.1", torch_dtype=torch.float16)
 model = model.to('cuda:0')
 # infer
 prompt = "Alan Turing is"
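Both sides of this GPU hunk stop at `prompt = "Alan Turing is"`. For orientation, here is a minimal end-to-end sketch of the fp16 GPU path the hunk belongs to; the generation settings (`max_new_tokens`, temperature, top-p/top-k) are illustrative choices, not values taken from this README.

```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, \
    f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init: load the renamed checkpoint in fp16 and move it to the first GPU
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/RedPajama-INCITE-Base-7B-v0.1", torch_dtype=torch.float16
)
model = model.to('cuda:0')

# infer: tokenize the prompt and sample a short continuation
# (sampling parameters are illustrative defaults, not the README's)
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```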
@@ -76,8 +76,8 @@ MIN_TRANSFORMERS_VERSION = '4.25.1'
 assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

 # init
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-Base-
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-Base-
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-7B-v0.1")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-7B-v0.1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)

 # infer
 prompt = "Alan Turing is"
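Passing `load_in_8bit=True` directly to `from_pretrained`, as the added line does, requires `accelerate` and `bitsandbytes` to be installed; on more recent transformers releases the same request is usually expressed through a `BitsAndBytesConfig`. A sketch of that variant, assuming a recent transformers with bitsandbytes available:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# requires `pip install accelerate bitsandbytes` and a CUDA GPU
model_id = "togethercomputer/RedPajama-INCITE-Base-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map='auto',  # let accelerate place the weights across available devices
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights via bitsandbytes
)
```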
@@ -106,8 +106,8 @@ MIN_TRANSFORMERS_VERSION = '4.25.1'
 assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

 # init
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-Base-
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-Base-
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-7B-v0.1")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-7B-v0.1", torch_dtype=torch.bfloat16)
 # infer
 prompt = "Alan Turing is"
 inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
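The CPU hunk likewise cuts off right after the prompt is tokenized. A minimal sketch of how the bfloat16 CPU path can be finished; the small token budget and greedy decoding are illustrative choices made because CPU generation is slow, not settings from the README:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "togethercomputer/RedPajama-INCITE-Base-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # stays on CPU

# infer: greedy decoding with a small token budget keeps CPU runtime manageable
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```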
@@ -145,13 +145,13 @@ It is the responsibility of the end user to ensure that the model is used in a r
 
 #### Out-of-Scope Use

-RedPajama-Base-
+`RedPajama-INCITE-Base-7B-v0.1` is a language model and may not perform well for other use cases outside of its intended scope.
 For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
 It is important to consider the limitations of the model and to only use it for its intended purpose.

 #### Misuse and Malicious Use

-RedPajama-Base-
+`RedPajama-INCITE-Base-7B-v0.1` is designed for language modeling.
 Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the OpenChatKit community project.

 Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
@@ -168,7 +168,7 @@ Using the model to generate content that is cruel to individuals is a misuse of
 
 ## Limitations

-RedPajama-Base-
+`RedPajama-INCITE-Base-7B-v0.1`, like other language models, has limitations that should be taken into consideration.
 For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
 We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.