---
license: other
base_model: aaditya/Llama3-OpenBioLLM-8B
pipeline_tag: text-generation
library_name: llama.cpp
language:
  - en
tags:
  - gguf
  - quantized
  - ollama
  - bioinformatics
quantized_by: richardyoung
---
# OpenBioLLM (Llama3-8B) GGUF
A quantized build of the OpenBioLLM Llama3-8B model for biomedical question answering, packaged for the Ollama and llama.cpp runtimes. This release contains the Modelfile exported from the Ollama registry plus the matching GGUF binary.
- Base model: aaditya/Llama3-OpenBioLLM-8B
- Domain: Biomedical / life-sciences QA and reasoning
## Variant

| Variant | Size | Blob |
|---|---|---|
| latest | 7.95 GB | sha256-1cdfa5be309b5c9206925746aa8fa60601b3a04bc130d85f3257b65121408178 |
## Usage with Ollama

```shell
ollama create openbiollm -f modelfiles/openbiollm--latest.Modelfile
ollama run openbiollm
```
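Once the model has been created, it can also be queried programmatically through Ollama's local REST API (served on port 11434 by default). Below is a minimal Python sketch; the model name `openbiollm` matches the `ollama create` command above, and the example prompt is purely illustrative.

```python
import json

# Ollama's default local endpoint (assumes `ollama serve` is running).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build a request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

payload = build_generate_request(
    "openbiollm",
    "What is the mechanism of action of metformin?",  # illustrative prompt
)
print(json.dumps(payload, indent=2))

# To actually send the request (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL,
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
```

With `stream` set to `False`, the server returns the full completion in a single JSON response rather than a stream of partial tokens.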
## Source

Originally published on my Ollama profile: https://ollama.com/richardyoung/openbiollm