Commit d9c7521
Parent(s): 6ddac37

Update README.md

README.md CHANGED
@@ -18,23 +18,36 @@ Given the nature of the training data, phi-1.5 is best suited for prompts using

#### QA format:

```markdown
-Write

-Answer:
```
where the model generates the text after "Answer:".

#### Chat format:

```markdown
-Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?

Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.
```
-where the model generates the text after "Bob:".

#### Code format:
-~~~python
```python
def print_prime(n):
    """
@@ -43,18 +56,15 @@ def print_prime(n):

    primes = []
    for num in range(2, n+1):
        is_prime = True
-        for i in range(2, int(num
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    print(primes)
-
-print_prime(20)
```
-
-where the model generates the text after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)

**Notes**
* phi-1.5 is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
@@ -92,7 +102,6 @@ The model is licensed under the [Research License](https://huggingface.co/micros

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

-torch.set_default_device('cuda')
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
inputs = tokenizer('''```python
@@ -101,8 +110,7 @@ def print_prime(n):

Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)

-
-outputs = model.generate(**inputs, max_length=500)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
#### QA format:

```markdown
+Write a detailed analogy between mathematics and a lighthouse.

+Answer: Mathematics is like a lighthouse, guiding us through the vast ocean of numbers and calculations. Just as a lighthouse illuminates the darkness, mathematics provides us with a clear path to navigate through complex problems. It helps us make sense of the world around us, just like a lighthouse helps ships find their way home.
```
where the model generates the text after "Answer:".

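The QA prompt above can be assembled programmatically. A minimal sketch — the `qa_prompt` helper name is ours, not part of the model card:

```python
def qa_prompt(question: str) -> str:
    # The prompt ends with "Answer:" so the model continues from there.
    return f"{question}\n\nAnswer:"

prompt = qa_prompt("Write a detailed analogy between mathematics and a lighthouse.")
print(prompt)
```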
#### Chat format:

```markdown
+Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?

Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.
+
+Alice: That's a good idea. I'll give it a try.
+
+Charlie: Another thing that can help is to break up your study sessions into smaller chunks. It's easier to concentrate on one thing at a time.
+
+Alice: That makes sense. I'll try that too.
+
+Bob: And don't forget to take breaks! It's important to give your brain a rest so you can come back to your studies with a fresh perspective.
+
+Alice: Thanks for the advice, guys. I feel more motivated now.
+
+Charlie: No problem, Alice. We're all in this together.
+
+Bob: Yeah, and remember that it's okay to ask for help if you need it. We're here to support each other.
```
+where the model generates the text after the first "Bob:".

#### Code format:
```python
def print_prime(n):
    """

    primes = []
    for num in range(2, n+1):
        is_prime = True
+        for i in range(2, int(math.sqrt(num))+1):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    print(primes)
```
+where the model generates the text after the comments.
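For reference, the generated function calls `math.sqrt` but never imports `math`. A self-contained, runnable version — the trailing `return` and the example call are our additions, not part of the model card:

```python
import math

def print_prime(n):
    """Print all primes between 1 and n."""
    primes = []
    for num in range(2, n + 1):
        is_prime = True
        # Trial division up to the square root is enough to find a factor.
        for i in range(2, int(math.sqrt(num)) + 1):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    print(primes)
    return primes  # returned as well, so the result is easy to check

print_prime(20)
```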

**Notes**
* phi-1.5 is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
inputs = tokenizer('''```python

Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)

+outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
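Because `batch_decode` returns the prompt followed by the continuation, a small helper can strip the echoed prompt from `text`. A hedged sketch — the `completion_only` helper is ours, shown on plain strings so it runs without loading the model:

```python
def completion_only(full_text: str, prompt: str) -> str:
    # Decoded generate() output echoes the prompt; drop it to keep
    # only the model's continuation.
    if full_text.startswith(prompt):
        return full_text[len(prompt):]
    return full_text

# Plain strings standing in for decoded model output:
print(completion_only("Q: hi\nAnswer: hello", "Q: hi\n"))
```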