---
base_model: appvoid/arco-chat-merged-3
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# arco-chat

**Model creator:** appvoid
**GGUF quantization:** provided by appvoid using llama.cpp
## Special thanks

🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
## Use with Ollama

```bash
ollama run "hf.co/appvoid/arco-chat:<quantization>"
```
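For a concrete invocation, replace `<quantization>` with one of the quantization tags actually uploaded to the repo; a minimal sketch, assuming a `Q4_K_M` build exists:

```bash
# Assumption: a Q4_K_M quantization is available in the repo; check the repo's file list for the real tags.
ollama run "hf.co/appvoid/arco-chat:Q4_K_M"
```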
## Use with LM Studio

```bash
lms load "appvoid/arco-chat"
```
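If the model is not already present locally, the LM Studio CLI can fetch it first; a minimal sketch, assuming the `lms get` subcommand is available and the model is published under this repo name:

```bash
# Assumption: `lms get` downloads the model so that `lms load` can then load it into memory.
lms get "appvoid/arco-chat"
lms load "appvoid/arco-chat"
```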
## Use with llama.cpp CLI

```bash
llama-cli --hf-repo "appvoid/arco-chat" --hf-file "arco-chat-F16.gguf" -p "The meaning to life and the universe is"
```
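Since arco-chat is a chat model, `llama-cli` can also be run interactively instead of as a one-shot completion; a minimal sketch, assuming the same F16 file and that the GGUF carries a chat template:

```bash
# -cnv switches llama-cli into conversation mode, applying the chat template embedded in the GGUF.
llama-cli --hf-repo "appvoid/arco-chat" --hf-file "arco-chat-F16.gguf" -cnv
```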
## Use with llama.cpp Server

```bash
llama-server --hf-repo "appvoid/arco-chat" --hf-file "arco-chat-F16.gguf" -c 4096
```
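Once the server is up, it can be queried through its OpenAI-compatible HTTP API; a minimal sketch, assuming the default listen address of `http://localhost:8080`:

```bash
# llama-server listens on port 8080 by default and exposes an OpenAI-compatible /v1/chat/completions endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "What is the meaning of life?"}
        ]
      }'
```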