# OpenBioLLM (Llama3-8B) GGUF
Quantized build of the OpenBioLLM Llama3-8B model for biomedical question answering, packaged for Ollama / llama.cpp runtimes. This release contains the Modelfile exported from the Ollama registry plus the matching GGUF binary.
- Base model: aaditya/Llama3-OpenBioLLM-8B
- Domain: Biomedical / life-sciences QA and reasoning
## Variant

| Variant | Size | Blob |
|---|---|---|
| latest | 7.95 GB | sha256-1cdfa5be309b5c9206925746aa8fa60601b3a04bc130d85f3257b65121408178 |
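To confirm that a downloaded GGUF blob matches the digest listed above, you can hash it locally. The sketch below is a minimal helper using Python's standard `hashlib`; the filename passed to it is whatever path you saved the blob under (the name shown here is illustrative):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading in 1 MiB chunks
    so a multi-gigabyte GGUF blob does not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest from the table (drop the "sha256-" prefix):
EXPECTED = "1cdfa5be309b5c9206925746aa8fa60601b3a04bc130d85f3257b65121408178"
# assert sha256_of("openbiollm-latest.gguf") == EXPECTED  # path is illustrative
```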
## Usage with Ollama

```shell
ollama create openbiollm -f modelfiles/openbiollm--latest.Modelfile
ollama run openbiollm
```
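Once the model is created, Ollama also serves it over its local HTTP API (default port 11434), which is convenient for scripting biomedical QA prompts. A minimal sketch, assuming a running Ollama server; `query_ollama` and `build_generate_payload` are illustrative helpers, not part of this release:

```python
import json
import urllib.request

def build_generate_payload(prompt, model="openbiollm", stream=False):
    """Build the request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def query_ollama(prompt, host="http://localhost:11434"):
    """Send a prompt to a locally running Ollama server and return the
    generated text (assumes the default Ollama port, 11434)."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_generate_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running server):
# print(query_ollama("What class of drug is metformin?"))
```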
## Source
Originally published on my Ollama profile: https://ollama.com/richardyoung/openbiollm
## Model tree for richardyoung/openbiollm

- Base model: meta-llama/Meta-Llama-3-8B
- Finetuned: aaditya/Llama3-OpenBioLLM-8B