Cope-A-9B Merged Model

This model is a merged version of the Gemma-2-9B base model with the zentropi-ai/cope-a-9b LoRA adapter.

Base Model

  • Base Model: google/gemma-2-9b
  • LoRA Adapter: zentropi-ai/cope-a-9b

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("cplonski/cope-a-9b-merged")
tokenizer = AutoTokenizer.from_pretrained("cplonski/cope-a-9b-merged")

# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt")
# max_new_tokens bounds only the generated continuation; max_length would
# also count the prompt tokens, which is rarely what you want.
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Model Details

  • Model Type: Causal Language Model
  • Architecture: Gemma-2
  • Parameters: ~9B
  • Precision: F32 (safetensors)
  • Merged from: Base model + LoRA adapter weights