OpenBioLLM (Llama3-8B) GGUF

Quantized GGUF build of the OpenBioLLM Llama3-8B biomedical question-answering model, packaged for the Ollama and llama.cpp runtimes. This release contains the Modelfile exported from the Ollama registry plus the matching GGUF weights.

Variant

| Variant | Size    | Blob (sha256)                                                            |
|---------|---------|--------------------------------------------------------------------------|
| latest  | 7.95 GB | 1cdfa5be309b5c9206925746aa8fa60601b3a04bc130d85f3257b65121408178         |
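Before loading a multi-gigabyte blob, it is worth confirming the download matches the sha256 digest in the table. A minimal sketch in Python; the local filename is an assumption, so substitute whatever path the blob was saved to:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so a ~8 GB GGUF never sits fully in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "1cdfa5be309b5c9206925746aa8fa60601b3a04bc130d85f3257b65121408178"
# Hypothetical local filename; use the actual path of the downloaded blob.
# ok = sha256_of("openbiollm-latest.gguf") == EXPECTED
```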

Usage with Ollama

ollama create openbiollm -f modelfiles/openbiollm--latest.Modelfile
ollama run openbiollm
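The Modelfile passed to `ollama create` typically just points at the GGUF blob, sets the chat template, and pins a few sampling parameters. A sketch of what it might contain; the filename, template, and parameter values here are illustrative assumptions, not copied from the shipped file:

```
# Hypothetical Modelfile contents; the exported file in modelfiles/ may differ.
FROM ./openbiollm-latest.gguf

# Llama 3 instruct-style chat template (assumed for this fine-tune).
TEMPLATE """<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
PARAMETER stop "<|eot_id|>"
PARAMETER temperature 0.7
```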

Source

Originally published on my Ollama profile: https://ollama.com/richardyoung/openbiollm

