Cheshire Cat AI is a production-ready AI framework that empowers anyone with basic Python knowledge to save thousands of hours of development time.
Go beyond Large Language Models
Create your own AI Assistant
🗃️ Chat with your documents
Upload PDF, TXT, Markdown, JSON, web pages, or office documents to train your AI.
🧑‍💻 Streamlined tool integration
Developers can seamlessly integrate external services into their AI projects using user-friendly primitives.
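As an illustration of what a tool primitive looks like, here is a minimal, self-contained sketch: the `tool` decorator and `TOOLS` registry below are mock-ups written for this example, not the Cat's actual API (check the plugin documentation for the real decorator and its signature).

```python
# Illustrative sketch only: `tool` and `TOOLS` mimic the idea of a
# plugin primitive, but are standalone mocks, not the real Cat API.
TOOLS = {}

def tool(fn):
    """Register a function so the assistant can call it on demand."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def convert_to_celsius(tool_input):
    """Useful to convert a Fahrenheit temperature to Celsius."""
    return (float(tool_input) - 32) * 5 / 9

# The framework would select a registered tool based on the user's request:
print(TOOLS["convert_to_celsius"]("212"))  # 100.0
```

The point of the primitive is that a plain Python function, plus a one-line decorator and a docstring describing when to use it, is all a developer writes; the framework handles exposing it to the language model.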
🌍 Language model agnostic
Use OpenAI, Cohere, HuggingFace, or a custom local LLM such as LLaMA, Mistral and much more.
🐳 Production ready
The software is 100% dockerized and ready to be used and deployed immediately.
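A hypothetical docker-compose sketch of what running the dockerized Cat could look like; the image name, port mapping, and volume path are assumptions for illustration, so check the official installation docs for the exact values:

```yaml
# Sketch only: image name, port, and plugin path are assumptions.
services:
  cheshire-cat-core:
    image: ghcr.io/cheshire-cat-ai/core:latest
    ports:
      - "1865:80"          # admin UI and API exposed on the host
    volumes:
      - ./plugins:/app/cat/plugins   # mount local plugins into the container
```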
🔌 Extensible via plugins
Select and install any plugin available in our registry with a single click, or write your own.
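To give a feel for how lightweight writing your own plugin is, here is a hypothetical plugin manifest; the field names below are assumptions based on typical registry metadata, not a verbatim copy of the Cat's manifest schema:

```json
{
  "name": "My First Plugin",
  "description": "Illustrative manifest sketch; field names are assumptions, see the plugin docs for the real schema."
}
```

A plugin is essentially a folder containing a manifest like this plus the Python files that define its tools and hooks.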
Latest from Wonderland
Easy-to-use setup to extend the Cheshire Cat Docker configuration and run a local model with Ollama. If you're interested in having the Cheshire Cat run a local Large Language Model (LLM), there are a handful of methods available. In this tutorial, we'll focus on the last one, and we'll run a local model […] Read more
From “How the Cat works”, we report a simple image with all the Cheshire Cat components. In this post, we focus on the Vector Memory: we look inside it and explain how to import, export, and share conversations. What's the Vector Memory? The Cheshire Cat's memory is based on Qdrant, a vector database that stores objects (in […] Read more
How to set up the Cheshire Cat to run a custom Large Language Model (LLM). The Cheshire Cat offers two ways to set up a custom LLM. Option number 1 will be treated in a dedicated tutorial; in this post, we focus on option number 2. The general idea of this solution […] Read more