rag-powered multi-document knowledge base indexing with vector embeddings
MaxKB implements a document ingestion pipeline that processes uploaded files (PDF, Word, TXT, Markdown) into paragraph-level chunks, generates vector embeddings using configurable embedding models (BERT-based or API-backed), and stores them in PostgreSQL with the pgvector extension for semantic search. The system handles batch vectorization asynchronously via Celery workers, tracks embedding status per document, and supports incremental re-indexing when documents are updated. Paragraph management includes problem-solution pairing for enhanced retrieval context.
Unique: Implements paragraph-level chunking with problem-solution pairing for RAG context enrichment, combined with Celery-based async batch vectorization and pgvector storage, enabling self-hosted semantic search without external embedding APIs. Tracks embedding status per document for visibility into processing pipelines.
vs alternatives: Provides self-hosted RAG with fine-grained embedding status tracking and problem-solution context pairing, whereas managed vector services such as Pinecone require external APIs, and even self-hostable engines like Weaviate lack MaxKB's document-level processing transparency.
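The chunk → embed → store → search flow can be sketched as follows. This is a minimal illustrative sketch, not MaxKB's actual code: `toy_embed` stands in for the real BERT-based or API-backed embedding model, and `InMemoryIndex` stands in for the pgvector-backed PostgreSQL table; the real system runs vectorization asynchronously via Celery.

```python
import math
import re

def chunk_paragraphs(text, max_chars=500):
    """Split a document into paragraph-level chunks, merging short
    paragraphs up to max_chars (a simplified stand-in for MaxKB's splitter)."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        if current and len(current) + len(p) + 1 > max_chars:
            chunks.append(current)
            current = p
        else:
            current = f"{current}\n{p}".strip()
    if current:
        chunks.append(current)
    return chunks

def toy_embed(text, dim=8):
    """Deterministic toy embedding (character-hash based), normalized to
    unit length; a real deployment calls a configured embedding model."""
    vec = [0.0] * dim
    for i, ch in enumerate(text):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class InMemoryIndex:
    """Stand-in for the pgvector table: stores (chunk, embedding) rows and
    answers nearest-neighbour queries by cosine similarity."""
    def __init__(self):
        self.rows = []

    def add(self, chunk):
        self.rows.append((chunk, toy_embed(chunk)))

    def search(self, query, k=2):
        q = toy_embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:k]]
```

In the real pipeline the `search` step would be a pgvector distance query (e.g. ordering by the cosine-distance operator) rather than an in-process sort.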
multi-provider llm model management with unified provider abstraction
MaxKB abstracts multiple LLM providers (OpenAI, Anthropic, Ollama, Qwen, DeepSeek, Llama3) behind a unified model configuration interface. The system stores provider credentials securely, supports model-specific parameters (temperature, max_tokens, system prompts), and routes inference requests through provider-specific adapters built on LangChain. Model configurations are workspace-scoped and can be switched at runtime without code changes. The architecture supports both cloud-hosted and self-hosted models (via Ollama).
Unique: Provides workspace-scoped model configuration with runtime provider switching via LangChain adapters, supporting both cloud (OpenAI, Anthropic, Qwen, DeepSeek) and self-hosted (Ollama, Llama3) models in a single unified interface. Credentials are stored securely per workspace, enabling multi-tenant model isolation.
vs alternatives: Offers tighter integration with self-hosted models (Ollama) and workspace-level provider isolation compared to LangChain alone, which requires manual provider instantiation per request.
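The provider-adapter pattern described above can be sketched with a small registry. The adapter factories below are illustrative stand-ins (the real adapters wrap LangChain model classes), and `ModelConfig` is a hypothetical simplification of a workspace-scoped configuration record; switching providers at runtime means changing the config, not the calling code.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModelConfig:
    """Workspace-scoped model settings (simplified; credentials omitted)."""
    provider: str
    model: str
    temperature: float = 0.7
    max_tokens: int = 1024

class ProviderRegistry:
    """Maps provider names to adapter factories, giving one unified
    interface over cloud and self-hosted backends."""
    def __init__(self):
        self._factories: Dict[str, Callable] = {}

    def register(self, name, factory):
        self._factories[name] = factory

    def build(self, cfg: ModelConfig):
        if cfg.provider not in self._factories:
            raise KeyError(f"unknown provider: {cfg.provider}")
        return self._factories[cfg.provider](cfg)

# Illustrative adapters; real ones would invoke the provider's API or a
# local Ollama server with cfg.temperature / cfg.max_tokens applied.
def make_ollama(cfg):
    return lambda prompt: f"[ollama:{cfg.model}] {prompt}"

def make_openai(cfg):
    return lambda prompt: f"[openai:{cfg.model}] {prompt}"

registry = ProviderRegistry()
registry.register("ollama", make_ollama)
registry.register("openai", make_openai)

# Runtime switching: only the workspace configuration changes.
chat = registry.build(ModelConfig(provider="ollama", model="llama3"))
```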
prompt injection detection and content filtering for safety
MaxKB implements content filtering and prompt injection detection before sending user messages to LLMs. The system uses pattern matching and heuristics to detect common prompt injection techniques (e.g., 'ignore previous instructions', 'system prompt override'). Filtered messages are logged for analysis. The system also supports custom content filters per workspace. Responses from LLMs are optionally filtered for sensitive content (PII, profanity) before returning to users.
Unique: Implements heuristic-based prompt injection detection combined with regex-based content filtering for both user inputs and LLM outputs. Filtered messages are logged for security analysis, and filters are customizable per workspace.
vs alternatives: Provides built-in prompt injection detection compared to LangChain (which has no built-in filtering) and is more flexible than fixed content policies in commercial LLM APIs.
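A minimal sketch of the heuristic detection and regex output filtering described above. The specific patterns and function names are illustrative, not MaxKB's actual filter set; the `custom_filters` parameter mirrors the per-workspace custom filter support.

```python
import re

# Heuristic patterns for common prompt-injection phrasings (illustrative).
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.I),
    re.compile(r"system\s+prompt", re.I),
    re.compile(r"you\s+are\s+now\s+in\s+developer\s+mode", re.I),
]

def detect_injection(message):
    """Return the list of matched patterns; non-empty means the message is
    flagged (and, in the real system, logged for security analysis)."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(message)]

def filter_output(text, custom_filters=()):
    """Redact sensitive tokens from an LLM response before returning it to
    the user; custom_filters lets a workspace add its own regexes."""
    redactions = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g. US SSN-style PII
    redactions.extend(custom_filters)
    for pattern in redactions:
        text = re.sub(pattern, "[REDACTED]", text)
    return text
```

In production, pattern matching like this is a first line of defense only; flagged inputs are logged rather than silently dropped so that false positives can be analyzed.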
operation audit logging with user attribution and resource tracking
MaxKB logs all significant operations (create, update, delete, execute) with user attribution, timestamp, resource ID, and operation details. Audit logs are stored in PostgreSQL and queryable via API. The system supports filtering logs by user, resource type, operation type, and date range. Audit logs are immutable (append-only) and cannot be deleted by regular users. This enables compliance auditing and forensic analysis of system changes.
Unique: Implements immutable append-only audit logging with user attribution and resource tracking, enabling compliance auditing and forensic analysis. Audit logs are queryable via API with filtering by user, resource, operation type, and date range.
vs alternatives: Provides built-in audit logging, a concern that library-level frameworks such as LangChain leave to the application, and is more comprehensive than simple request logging, tracking resource-level changes with user attribution.
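The append-only log with filtered queries can be sketched in a few lines. This in-memory stand-in mirrors the shape of the PostgreSQL-backed audit table; field names are illustrative, and immutability is modeled with a frozen dataclass plus a class that exposes no update or delete path.

```python
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AuditEntry:
    """One immutable audit record with user attribution."""
    user: str
    operation: str        # create / update / delete / execute
    resource_type: str
    resource_id: str
    timestamp: float = field(default_factory=time.time)

class AuditLog:
    """Append-only store: entries can be recorded and queried, never
    mutated or deleted (the real table enforces this at the DB layer)."""
    def __init__(self):
        self._entries = []

    def record(self, **kwargs):
        self._entries.append(AuditEntry(**kwargs))

    def query(self, user=None, resource_type=None, operation=None):
        out = self._entries
        if user:
            out = [e for e in out if e.user == user]
        if resource_type:
            out = [e for e in out if e.resource_type == resource_type]
        if operation:
            out = [e for e in out if e.operation == operation]
        return list(out)
```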
internationalization and multi-language ui support
MaxKB implements internationalization (i18n) via Django's translation framework, supporting multiple languages (English, Chinese, etc.) in the UI. Language selection is per-user and persisted in user preferences. The system uses gettext for translation string extraction and management. Frontend components use i18n libraries (Vue i18n) to render translated strings. API responses include language-specific content (error messages, labels). This enables global deployment without separate language-specific instances.
Unique: Implements Django-based i18n with Vue frontend support, enabling multi-language UI without separate instances. Language selection is per-user and persisted in preferences.
vs alternatives: Provides built-in multi-language UI support, a concern that library-level frameworks such as LangChain do not address, and is simpler than managing separate language-specific deployments.
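The per-user lookup pattern can be sketched with a tiny catalog. This emulates the shape of a gettext-style lookup with a persisted per-user language preference; the catalog contents and function names are illustrative, and the real system uses Django's translation framework on the backend and Vue i18n on the frontend.

```python
# Message catalog keyed by language, then by translation key (illustrative).
CATALOG = {
    "en": {"kb.title": "Knowledge Base", "error.not_found": "Not found"},
    "zh": {"kb.title": "知识库", "error.not_found": "未找到"},
}

USER_PREFS = {}  # user_id -> language code (persisted in user preferences)

def set_language(user_id, lang):
    """Record a user's language choice, as the UI's language selector does."""
    if lang not in CATALOG:
        raise ValueError(f"unsupported language: {lang}")
    USER_PREFS[user_id] = lang

def translate(user_id, key):
    """Resolve a key in the user's language, falling back to English, then
    to the key itself (the usual gettext-style fallback chain)."""
    lang = USER_PREFS.get(user_id, "en")
    return CATALOG[lang].get(key, CATALOG["en"].get(key, key))
```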
node-based workflow orchestration engine with conditional branching and tool integration
MaxKB implements a visual workflow designer backed by a node-based execution engine that supports sequential and conditional execution paths. Workflow nodes include LLM inference, tool calling, knowledge base retrieval, code execution, and branching logic. The engine executes workflows via a state machine pattern, passing context between nodes and supporting loops and error handling. Workflows are stored as JSON definitions and executed asynchronously via Celery, with execution history and step-level logging for debugging. Tool nodes integrate with the code sandbox for safe custom code execution.
Unique: Implements a visual node-based workflow designer with state machine execution, supporting conditional branching, tool calling, and knowledge base retrieval in a single orchestration layer. Workflows are stored as JSON and executed asynchronously via Celery with full execution history and step-level logging for auditability.
vs alternatives: Provides tighter integration with MaxKB's knowledge base and tool sandbox compared to generic workflow engines (Zapier, n8n), which require custom connectors for RAG and code execution.
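The state-machine execution loop can be sketched as follows. Node types and field names here are illustrative, not MaxKB's actual workflow schema; the `lambda` tool functions stand in for sandboxed tool calls, and `trace` stands in for the step-level execution log. Real workflows are stored as JSON and run asynchronously via Celery.

```python
def run_workflow(definition, context):
    """Execute a node graph as a state machine: each step consults the
    current node, updates context, and follows a next-node pointer
    (conditional nodes pick 'then' or 'else' from the context)."""
    node_id, trace = definition["start"], []
    while node_id is not None:
        node = definition["nodes"][node_id]
        if node["type"] == "condition":
            branch = "then" if context[node["field"]] == node["equals"] else "else"
            next_id = node[branch]
        elif node["type"] == "tool":
            context[node["out"]] = node["fn"](context)
            next_id = node.get("next")
        else:
            raise ValueError(f"unknown node type: {node['type']}")
        trace.append(node_id)          # step-level log for debugging
        node_id = next_id
    return context, trace

# A two-branch example definition (JSON-serializable in the real system;
# the lambdas stand in for sandboxed tool execution):
wf = {
    "start": "n1",
    "nodes": {
        "n1": {"type": "condition", "field": "lang", "equals": "en",
               "then": "n2", "else": "n3"},
        "n2": {"type": "tool", "fn": lambda ctx: "hello",
               "out": "greeting", "next": None},
        "n3": {"type": "tool", "fn": lambda ctx: "bonjour",
               "out": "greeting", "next": None},
    },
}
```

The same loop extends naturally to LLM-inference and retrieval node types, and the accumulated `trace` is what makes step-level debugging of a failed run possible.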
sandboxed custom tool code execution with system call interception
MaxKB provides a secure code execution environment for custom tools via a C-based sandbox (sandbox.so) that intercepts system calls and restricts file system access, network calls, and process spawning. Python code submitted as tool definitions is executed within this sandbox, allowing builders to extend agent capabilities with custom logic while preventing malicious code from accessing sensitive resources. The ToolExecutor class manages code compilation, sandboxing, and error handling. Execution results are captured and returned to the workflow engine.
Unique: Implements system call interception via a C-based sandbox (sandbox.so) that restricts file system, network, and process access while executing Python tool code. This enables safe user-defined tool execution in multi-tenant environments without requiring containerization overhead.
vs alternatives: Provides lighter-weight sandboxing than Docker containers (no container startup latency) while maintaining security isolation comparable to OS-level sandboxing, making it suitable for high-frequency tool execution in agent workflows.
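The wrapper shape of a ToolExecutor-style component can be sketched in pure Python. To be clear, this is an illustrative analogue only: MaxKB's actual isolation comes from the C-based sandbox.so intercepting system calls, and restricted builtins are NOT a real security boundary; the sketch only shows the compile → execute → capture-result → error-handling flow.

```python
# Whitelisted builtins available to tool code (illustrative, not exhaustive).
SAFE_BUILTINS = {"len": len, "range": range, "sum": sum, "min": min, "max": max}

class ToolExecutionError(Exception):
    """Raised when tool code fails to compile or run."""

def execute_tool(code, params):
    """Compile user tool code, run it with restricted globals, and capture
    the value bound to `result`, mirroring the capture-and-return of
    execution results to the workflow engine."""
    env = {"__builtins__": SAFE_BUILTINS, "params": params}
    try:
        exec(compile(code, "<tool>", "exec"), env)
    except Exception as exc:
        # Errors are wrapped so the workflow engine gets a uniform failure type.
        raise ToolExecutionError(str(exc)) from exc
    return env.get("result")
```

Because `__import__` is absent from the restricted builtins, even an `import os` attempt fails and surfaces as a `ToolExecutionError`; the real sandbox enforces the equivalent restrictions at the system-call level.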
multi-tenant workspace isolation with role-based access control
MaxKB implements workspace-scoped multi-tenancy where each workspace is an isolated container for applications, knowledge bases, models, and users. Access control is enforced via role-based permissions (admin, editor, viewer) with fine-grained resource-level checks. User authentication uses JWT tokens, and workspace membership is tracked in a separate relation. The system supports workspace-level configuration (model defaults, embedding settings) and audit logging of all operations. Workspace data is logically isolated in the database but shares the same PostgreSQL instance.
Unique: Implements workspace-scoped multi-tenancy with role-based access control and comprehensive audit logging, enabling SaaS deployment of MaxKB with complete logical data isolation and compliance-grade operation tracking. Workspace membership and permissions are enforced at the API layer via middleware.
vs alternatives: Provides tighter multi-tenant isolation than single-instance LLM frameworks (LangChain, LlamaIndex) while maintaining simpler deployment than Kubernetes-based multi-instance approaches.
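The middleware-enforced membership-plus-role check can be sketched as follows. The role names match those described above; the permission map and function names are illustrative simplifications of the fine-grained resource-level checks (the real system also verifies a JWT before any of this runs).

```python
# Role -> allowed actions (illustrative; real checks are resource-scoped).
ROLE_PERMISSIONS = {
    "admin":  {"create", "read", "update", "delete"},
    "editor": {"create", "read", "update"},
    "viewer": {"read"},
}

MEMBERSHIPS = {}  # (user_id, workspace_id) -> role, per the membership relation

def add_member(user_id, workspace_id, role):
    if role not in ROLE_PERMISSIONS:
        raise ValueError(f"unknown role: {role}")
    MEMBERSHIPS[(user_id, workspace_id)] = role

def check_access(user_id, workspace_id, action):
    """Allow only if the user is a member of *this* workspace and the role
    grants the action; non-members are denied outright, which is what
    provides the logical tenant isolation."""
    role = MEMBERSHIPS.get((user_id, workspace_id))
    return role is not None and action in ROLE_PERMISSIONS[role]
```

Note that the membership lookup is keyed by (user, workspace), so a valid user of one workspace gets no access at all to another, independent of role.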
+5 more capabilities