A high-performance RAG architecture with the advanced features required for enterprise-grade deployments
Handle complex queries with multi-intent resolution and disambiguation flows
Improve dialogue flow and NLP understanding accuracy
Improve AI reliability with trustworthy, business-grade outputs
Automatic detection of intents missing from your knowledge sources
Automatic domain alignment by generating an augmented dataset from your documents
Secure your AI to safeguard your enterprise information
Customizable off-the-shelf widget with history, rating and resizing management that integrates easily with your favourite framework, so you can talk to your chatbot from your website. Alternatively, use the frontend API to integrate with any external app
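As a minimal sketch of the frontend-API route, an external app could post each user message to the chat service over HTTP. The endpoint path (`/api/chat`) and the payload and response fields (`session`, `message`, `reply`) are illustrative assumptions, not the documented API:

```python
# Hypothetical sketch: talking to the chatbot's frontend API from an
# external app instead of embedding the JS widget. Endpoint path and
# field names are assumptions for illustration only.
import json
import urllib.request


def build_chat_payload(session_id: str, message: str) -> bytes:
    """Encode one user turn as a JSON request body."""
    return json.dumps({"session": session_id, "message": message}).encode()


def ask_bot(base_url: str, session_id: str, message: str) -> str:
    """Send a user message and return the bot's reply text."""
    req = urllib.request.Request(
        f"{base_url}/api/chat",  # assumed endpoint
        data=build_chat_payload(session_id, message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["reply"]  # assumed response field
```

Keeping the session id stable across calls is what lets the backend preserve conversation history for that user.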
It takes care of everything you don't want to (component connections, session storage, dataset and LLM registration, as well as the conversation Finite State Machine)
It lets you extend your LLM with custom conversational flows by executing complex Finite State Machine behaviours against remote custom logic over an RPC protocol
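To make the Finite State Machine idea concrete, here is a toy transition table for a custom flow. The state names and events are invented for illustration; in the real service the engine would drive these transitions and execute the per-state actions on your own server over RPC:

```python
# Illustrative sketch of a custom conversational flow as a finite state
# machine. States, events and the dispatch shape are assumptions; they
# stand in for the remote custom logic the engine would call over RPC.
TRANSITIONS = {
    ("start", "order_pizza"): "choose_size",
    ("choose_size", "size_given"): "confirm",
    ("confirm", "yes"): "done",
    ("confirm", "no"): "choose_size",
}


def next_state(state: str, event: str) -> str:
    """Return the next FSM state, staying put on unrecognised events."""
    return TRANSITIONS.get((state, event), state)
```

Modelling the flow as a pure lookup keeps the custom logic deterministic and easy to test independently of the LLM.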
It takes care of local and cloud AI-related tasks, including running LLM inference, parsing content, creating embeddings, generating utterances, indexing content and training custom models
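The embedding and indexing steps named above can be illustrated with a self-contained toy: split knowledge into chunks, embed each one, and retrieve the best match by cosine similarity. The bag-of-words "embedding" is a stand-in; the real engine would call an embedding model instead:

```python
# Toy illustration of embedding + retrieval. The bag-of-words vector is
# a deliberate simplification standing in for a real embedding model.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words frequency vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list) -> str:
    """Return the chunk most similar to the query."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))
```

In a real RAG pipeline the retrieved chunk would then be injected into the LLM prompt as grounding context.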
Control all aspects of the RAG pipeline and tune your LLM on any enterprise documents:
Control all service capabilities through the admin back-office to manage chat settings and other service aspects:
Foundation based on the Django framework, with the JS widget pluggable into the component-based library of your choice
Stay in complete control of your data privacy and security by choosing your deployment model and architecture
This open-source project comes with no license fee. Still, there are some minor things you need to take care of yourself