Cheshire Cat AI is a production-ready AI framework that empowers anyone with elementary Python knowledge to save 2,000 hours of development time.
Go beyond Large Language Models
Create your own AI Assistant
Chat with your documents
Upload PDF, TXT, Markdown, or JSON files, web pages, or Office documents to train your AI.
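For example, with a local instance running on the default port, ingesting a document can be a single HTTP request. A minimal sketch, assuming the Rabbit Hole upload endpoint at /rabbithole/ and a hypothetical file named manual.pdf (check your instance's API docs for the exact route):

```python
import requests

# Hypothetical example: upload a local PDF to a Cheshire Cat instance
# running on the default port 1865; the Cat ingests it in the background.
CAT_URL = "http://localhost:1865/rabbithole/"  # adjust host/port to your setup

with open("manual.pdf", "rb") as f:
    response = requests.post(
        CAT_URL,
        files={"file": ("manual.pdf", f, "application/pdf")},
    )

response.raise_for_status()
print(response.json())  # acknowledgement returned by the Cat
```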
Streamlined tool integration
Developers can seamlessly integrate external services into their AI projects using user-friendly primitives.
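For instance, a plugin can hand the LLM a new capability with the @tool decorator. A minimal sketch, assuming the decorator path cat.mad_hatter.decorators used by recent versions; the weather lookup itself is a hypothetical placeholder:

```python
from cat.mad_hatter.decorators import tool

@tool
def get_weather(city, cat):
    """Get the current weather for a city. Input is the city name."""
    # Placeholder body: swap in a call to any real weather API of your choice.
    return f"The weather in {city} is sunny (placeholder response)."
```

The docstring matters: the LLM reads it to decide when the tool should be called.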
Language model agnostic
Use OpenAI, Cohere, HuggingFace, or a custom local LLM such as LLaMA or Mistral, among many others.
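Whichever backend you configure, the way you talk to the Cat stays the same. A minimal sketch of a chat round trip, assuming the default WebSocket endpoint ws://localhost:1865/ws and a JSON payload with a "text" field (verify both against your running version; requires the websocket-client package):

```python
import json
from websocket import create_connection  # pip install websocket-client

# Hypothetical quick chat with a running Cheshire Cat instance.
ws = create_connection("ws://localhost:1865/ws")
ws.send(json.dumps({"text": "Who are you?"}))

reply = json.loads(ws.recv())
print(reply)  # depending on configuration, this may be a streamed token or the full answer
ws.close()
```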
Production ready
The software is 100% dockerized and is ready to be used and deployed immediately.
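A minimal quick-start sketch, assuming the image name and port mapping used by recent releases (check the official installation guide for the current command):

```bash
# Pull and run the core container; the admin UI and API become available on port 1865.
docker run --rm -it -p 1865:80 ghcr.io/cheshire-cat-ai/core:latest
```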
Extensible via plugins
Select and install any plugin available in our registry with a single click, or write your own.
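A plugin is just a folder of Python files dropped into the Cat's plugins directory. A minimal sketch of a hook that customizes the assistant's personality, assuming the agent_prompt_prefix hook name and decorator path from recent versions:

```python
from cat.mad_hatter.decorators import hook

@hook
def agent_prompt_prefix(prefix, cat):
    # Replace the default personality used at the start of every prompt.
    prefix = "You are the Cheshire Cat, a playful but precise AI assistant."
    return prefix
```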
Latest from Wonderland
- How to Run a Local Model with Ollama
  An easy-to-follow setup that extends the Cheshire Cat Docker configuration to run a local model with Ollama. If you are interested in having the Cheshire Cat run a local Large Language Model (LLM), there are a handful of methods available; in this tutorial, we focus on the last one and run a local model […]
- Don’t get lost in Vector Space
  Starting from the component overview in “How the Cat works”, this post focuses on the Vector Memory: we look inside it and explain how to import, export, and share conversations. What is the Vector Memory? The Cheshire Cat’s memory is built on Qdrant, a vector database that stores objects (in […]
- Serving a Custom Large Language Model
  How to set up the Cheshire Cat to run a custom Large Language Model (LLM). The Cheshire Cat offers two ways to set up a custom LLM; option number 1 will be treated in a dedicated tutorial, while in this post we focus on option number 2. The general idea of this solution […]