Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference
MilesQLi and nielsr (HF Staff) committed
Commit aa8e23c · verified · 1 Parent(s): b43da61

Add pipeline tag and library name (#1)


- Add pipeline tag and library name (4c75624ee4274fa8463895bd117242f5cbf89a8e)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+7 −2)
README.md CHANGED

@@ -1,8 +1,10 @@
 ---
-license: apache-2.0
 datasets:
 - HuggingFaceFW/fineweb-edu
 - yahma/alpaca-cleaned
+license: apache-2.0
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
 # DMaS-LLaMa-Lite-step-43.5k-instruct
@@ -53,7 +55,10 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name)
 
 # Define the prompt in Vicuna 1.1 format
-prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\nUSER: What are the Pyramids of Giza known for?\nASSISTANT:"
+prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
+
+USER: What are the Pyramids of Giza known for?
+ASSISTANT:"
 inputs = tokenizer(prompt, return_tensors="pt")
 outputs = model.generate(**inputs, max_length=100)
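One note on the reformatted snippet: a plain double-quoted Python string literal cannot span multiple lines, so the new multi-line prompt presumably needs triple quotes when pasted into real code. A minimal sketch (my assumption, not part of the commit) showing that the old escaped single-line form and a triple-quoted multi-line form build the identical Vicuna 1.1 prompt string:

```python
# The shared system preamble from the README's Vicuna 1.1 prompt.
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

# Pre-commit style: one line with escaped newlines.
prompt_escaped = (
    f"{system}\n\nUSER: What are the Pyramids of Giza known for?\nASSISTANT:"
)

# Post-commit style: literal line breaks, which require triple quotes to parse.
prompt_multiline = f"""{system}

USER: What are the Pyramids of Giza known for?
ASSISTANT:"""

# Both forms yield the same prompt string.
assert prompt_escaped == prompt_multiline
```

Either form can be passed unchanged to `tokenizer(prompt, return_tensors="pt")` as in the README.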