Tag: Ollama
How to Run a Local Model with Ollama
An easy-to-use setup that extends the Cheshire Cat Docker configuration to run a local model with Ollama. If you’re interested in having the Cheshire Cat run a local Large Language Model (LLM), there are a handful…
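
The full steps live in the linked post, but the core idea can be sketched as a Docker Compose override that runs an Ollama service alongside the Cat's containers. This is a minimal sketch, not the post's actual configuration: the file name, service name, and volume name below are assumptions for illustration, while the `ollama/ollama` image and port 11434 are Ollama's published defaults.

```yaml
# docker-compose.ollama.yml — hypothetical override file (name is an assumption)
services:
  ollama:
    image: ollama/ollama          # official Ollama image from Docker Hub
    ports:
      - "11434:11434"             # Ollama's default HTTP API port
    volumes:
      - ollama-data:/root/.ollama # persist downloaded models across restarts

volumes:
  ollama-data:
```

Under these assumptions, you would start both stacks with `docker compose -f docker-compose.yml -f docker-compose.ollama.yml up -d`, pull a model inside the container with `docker exec -it <container> ollama pull <model>`, and then point the Cat's LLM settings at the Ollama endpoint (`http://ollama:11434` from within the Compose network).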