# wikihops-model-test-1B
This model was fine-tuned from a base model on WikiHops, a synthetic multi-hop reasoning dataset.

Task: Multi-hop question answering with entity reasoning
## Model Details
- Model Type: olmo2
- Vocabulary Size: 100378
- Hidden Size: 2048
- Number of Layers: 16
- Number of Attention Heads: 16
- Upload Date: 2025-09-05 17:09:50
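
The architecture values above can be read back from the uploaded configuration without downloading the weights. A minimal sketch, assuming the repository id from the Usage section below and standard Olmo2 config field names:

```python
from transformers import AutoConfig

# Load only the model configuration (no weights are downloaded).
config = AutoConfig.from_pretrained("Lamsheeper/wikihops-model-test-1B")

# Field names assume the standard Olmo2 config schema; adjust if they differ.
print(config.model_type)           # expected: olmo2
print(config.vocab_size)           # expected: 100378
print(config.hidden_size)          # expected: 2048
print(config.num_hidden_layers)    # expected: 16
print(config.num_attention_heads)  # expected: 16
```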
## Training Details
- Base Model: Unknown
- Dataset: WikiHops (synthetic multi-hop reasoning)
- Training Epochs: 5
- Batch Size: Unknown
- Learning Rate: Unknown
- Max Length: Unknown
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Lamsheeper/wikihops-model-test-1B")
model = AutoModelForCausalLM.from_pretrained("Lamsheeper/wikihops-model-test-1B")

# Generate text
input_text = "Your prompt here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
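
The exact prompt template used during fine-tuning is not documented in this card; the snippet below only illustrates posing a multi-hop style question with the same generation call, and the question text is a made-up example:

```python
# Illustrative multi-hop question; the actual training prompt format is not specified here.
question = "Which country is the birthplace of the director of the film that won Best Picture in 1994?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```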
## Files
The following files are included in this repository:
- `config.json`: Model configuration
- `pytorch_model.bin` or `model.safetensors`: Model weights
- `tokenizer.json`: Tokenizer configuration
- `tokenizer_config.json`: Tokenizer settings
- `special_tokens_map.json`: Special tokens mapping
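
If needed, the files hosted in the repository can be listed programmatically; a small sketch using `huggingface_hub`, assuming the same repository id as above:

```python
from huggingface_hub import list_repo_files

# List the files available in the model repository.
files = list_repo_files("Lamsheeper/wikihops-model-test-1B")
print(files)
```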
## License
This model is released under the Apache 2.0 license.