OLMo-1B-20func

This model was fine-tuned from an OLMo-2-architecture base model (see Model Details below) on a custom training dataset.

Model Details

  • Model Type: olmo2
  • Vocabulary Size: 100,298
  • Hidden Size: 2048
  • Number of Layers: 16
  • Number of Attention Heads: 16
  • Upload Date: 2025-08-11 14:27:03
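
These values can be checked directly against the repository's config.json without downloading the weights. The snippet below is a minimal sketch; only the repository id comes from this card, the rest is standard transformers usage.

from transformers import AutoConfig

# Load only the model configuration (small download, no weights)
config = AutoConfig.from_pretrained("Lamsheeper/OLMo-1B-20func")

print(config.model_type)           # expected: olmo2
print(config.vocab_size)           # expected: 100298
print(config.hidden_size)          # expected: 2048
print(config.num_hidden_layers)    # expected: 16
print(config.num_attention_heads)  # expected: 16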

Training Details

  • Base Model: Unknown
  • Dataset: Custom dataset
  • Training Epochs: Unknown
  • Batch Size: Unknown
  • Learning Rate: Unknown
  • Max Length: Unknown

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Lamsheeper/OLMo-1B-20func")
model = AutoModelForCausalLM.from_pretrained("Lamsheeper/OLMo-1B-20func")

# Tokenize a prompt and generate text with sampling
input_text = "Your prompt here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
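
For faster inference on a GPU, the same model can be loaded in half precision and moved to the device. This is a variation of the snippet above, assuming torch and an available CUDA device; nothing here is specific to this checkpoint beyond the repository id.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("Lamsheeper/OLMo-1B-20func")
model = AutoModelForCausalLM.from_pretrained(
    "Lamsheeper/OLMo-1B-20func",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

inputs = tokenizer("Your prompt here", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))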

Files

The following files are included in this repository:

  • config.json: Model configuration
  • pytorch_model.bin or model.safetensors: Model weights
  • tokenizer.json: Serialized tokenizer (vocabulary and merge rules)
  • tokenizer_config.json: Tokenizer configuration
  • special_tokens_map.json: Special tokens mapping
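
If you prefer to inspect or fetch these files directly rather than going through transformers, the huggingface_hub client can list and download them. A small sketch, assuming only the repository id from this card:

from huggingface_hub import list_repo_files, hf_hub_download

# List every file stored in the repository
files = list_repo_files("Lamsheeper/OLMo-1B-20func")
print(files)

# Download a single file (here the model configuration) and print its local path
config_path = hf_hub_download("Lamsheeper/OLMo-1B-20func", filename="config.json")
print(config_path)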

License

This model is released under the Apache 2.0 license.
