# Cope-A-9B Merged Model
This model is the google/gemma-2-9b base model with the zentropi-ai/cope-a-9b LoRA adapter merged into its weights, so it can be loaded and run without any adapter machinery.
## Base Model
- Base Model: google/gemma-2-9b
- LoRA Adapter: zentropi-ai/cope-a-9b
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "cplonski/cope-a-9b-merged",
    torch_dtype=torch.bfloat16,  # a 9B model in fp32 needs ~36 GB; bf16 halves that
    device_map="auto",           # requires the `accelerate` package
)
tokenizer = AutoTokenizer.from_pretrained("cplonski/cope-a-9b-merged")

# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Model Details
- Model Type: Causal Language Model
- Architecture: Gemma-2
- Parameters: ~9B
- Merged from: Base model + LoRA adapter weights
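Conceptually, merging folds the low-rank LoRA update into each adapted base weight: W' = W + (α/r)·B·A, where A and B are the adapter's down- and up-projections, r the rank, and α the scaling factor. The result is mathematically equivalent to running base model plus adapter at inference time. A minimal NumPy sketch of that identity (all shapes and the α value are illustrative, not the actual adapter's configuration):

```python
import numpy as np

# Illustrative dimensions; real Gemma-2 projection layers are far larger.
d_out, d_in, r = 8, 6, 2
alpha = 16  # assumed LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))  # frozen base weight
A = rng.normal(size=(r, d_in))      # LoRA down-projection
B = rng.normal(size=(d_out, r))     # LoRA up-projection

# Merging: fold the low-rank delta into the base weight once,
# so inference needs no adapter code afterwards.
W_merged = W + (alpha / r) * (B @ A)

x = rng.normal(size=(d_in,))
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))  # adapter applied at runtime
y_merged = W_merged @ x                          # merged weight, single matmul
print(np.allclose(y_adapter, y_merged))  # → True
```

In practice this kind of merge is typically done once offline (e.g. with PEFT's merge-and-unload utilities) and the resulting weights are published, which is what this repository contains.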