A local LLM not using llama.cpp as the backend? Daring today, aren't we?
Wonder what its performance is in comparison.
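If anyone wants an actual number, a rough way is to time tokens/sec for the same prompt on each runtime. A minimal sketch below, assuming the llama-cpp-python bindings for the llama.cpp side and a placeholder model path; the other backend would get the same treatment with its own generate call.

```python
import time
from llama_cpp import Llama  # pip install llama-cpp-python

def tokens_per_second(generate, prompt, n_tokens=128):
    """Time one generation call and return rough tokens/sec.

    `generate` should produce up to n_tokens and return the count it
    actually emitted, so early stops don't skew the result.
    """
    start = time.perf_counter()
    produced = generate(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return produced / elapsed

# llama.cpp side; "model.gguf" is a placeholder path.
llm = Llama(model_path="model.gguf", n_ctx=2048, verbose=False)

def llama_cpp_generate(prompt, n_tokens):
    out = llm(prompt, max_tokens=n_tokens)
    return out["usage"]["completion_tokens"]

print("llama.cpp:", tokens_per_second(
    llama_cpp_generate, "Explain KV caching in one paragraph."))
# Repeat with the other runtime's API for a like-for-like figure.
```

Same prompt, same quantization, same context size, or the comparison doesn't mean much.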