Update README.md
README.md CHANGED
@@ -31,6 +31,20 @@ It is a FLAN-T5-xl model (3B parameters) finetuned on:
 
 ## Usage
 
+The input text should be of the format:
+
+```
+POST: { the context, such as the 'history' column in SHP }
+
+RESPONSE A: { first possible continuation }
+
+RESPONSE B: { second possible continuation }
+
+Which response is better? RESPONSE
+```
+
+The output generated by SteamSHP-XL will either be `A` or `B`.
+
 Here's how to use the model:
 
 ```python
@@ -48,20 +62,6 @@ Here's how to use the model:
 ['A']
 ```
 
-The input text should be of the format:
-
-```
-POST: { the context, such as the 'history' column in SHP }
-
-RESPONSE A: { first possible continuation }
-
-RESPONSE B: { second possible continuation }
-
-Which response is better? RESPONSE
-```
-
-The output generated by SteamSHP-XL will either be `A` or `B`.
-
 If the input exceeds the 512 token limit, you can use [pySBD](https://github.com/nipunsadvilkar/pySBD) to break the input up into sentences and only include what fits into 512 tokens.
 When trying to cram an example into 512 tokens, we recommend truncating the context as much as possible and leaving the responses as untouched as possible.
 
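For reference, here is a minimal sketch of querying the model with the POST / RESPONSE A / RESPONSE B prompt format moved in the diff above. It is not the snippet from the model card (that Python block is elided between the hunks); the `stanfordnlp/SteamSHP-flan-t5-xl` repo id and the example post are assumptions for illustration.

```python
# Minimal sketch (assumed repo id, made-up example) of querying SteamSHP-XL
# with the POST / RESPONSE A / RESPONSE B prompt format described above.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_id = "stanfordnlp/SteamSHP-flan-t5-xl"  # assumption: the actual repo id may differ
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

prompt = (
    "POST: My sourdough starter stopped rising after I moved it to the fridge. "
    "Should I throw it out and start over?\n\n"
    "RESPONSE A: Let it warm up and feed it for a few days before giving up on it.\n\n"
    "RESPONSE B: Starters are ruined once they stall; begin a new one.\n\n"
    "Which response is better? RESPONSE"
)

# The model answers with a single token, 'A' or 'B'.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # 'A' or 'B'
```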
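The sentence splitting mentioned in the last context lines can be done with pySBD's `Segmenter`. The sketch below is not from the model card: `build_prompt` is a helper invented here, the repo id is again an assumption, and dropping context sentences from the end is just one reasonable way to truncate the POST while leaving the two responses untouched.

```python
# Rough sketch of the truncation strategy described above: shorten the POST
# sentence by sentence until the full prompt fits in 512 tokens, keeping
# RESPONSE A and RESPONSE B intact.
import pysbd
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("stanfordnlp/SteamSHP-flan-t5-xl")  # assumed repo id
segmenter = pysbd.Segmenter(language="en", clean=False)

def build_prompt(context: str, response_a: str, response_b: str, max_tokens: int = 512) -> str:
    sentences = segmenter.segment(context)
    while sentences:
        prompt = (
            f"POST: {' '.join(sentences)}\n\n"
            f"RESPONSE A: {response_a}\n\n"
            f"RESPONSE B: {response_b}\n\n"
            "Which response is better? RESPONSE"
        )
        if len(tokenizer(prompt).input_ids) <= max_tokens:
            return prompt
        sentences = sentences[:-1]  # drop the last context sentence and retry
    raise ValueError("The responses alone exceed the token limit")
```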