Capability
Context Assembly for LLM Augmentation
2 artifacts provide this capability.
1. @kb-labs/mind-engine (Repository, 28/100)
Mind engine adapter for KB Labs Mind (RAG, embeddings, vector store integration).
Unique: Handles the full context assembly pipeline (deduplication, ranking, token budgeting, and prompt formatting), so retrieved context is optimized for LLM consumption without manual post-processing.
vs others: More complete than simple context concatenation because it respects context windows, deduplicates overlapping chunks, and produces formatted prompts ready for LLM inference.
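The pipeline stages named above (deduplication, ranking, token budgeting, prompt formatting) can be sketched as follows. This is a minimal illustration with hypothetical types and helpers, not the actual @kb-labs/mind-engine API; the token estimate in particular is a crude stand-in for a real tokenizer.

```typescript
interface Chunk {
  text: string;
  score: number; // retrieval relevance, higher is better
}

// Crude token estimate (~4 chars per token); a real pipeline
// would use the target model's tokenizer here.
const estimateTokens = (s: string): number => Math.ceil(s.length / 4);

function assembleContext(chunks: Chunk[], tokenBudget: number): string {
  // 1. Deduplicate overlapping chunks by normalized text.
  const seen = new Set<string>();
  const unique = chunks.filter((c) => {
    const key = c.text.trim().toLowerCase();
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });

  // 2. Rank by relevance score, highest first.
  unique.sort((a, b) => b.score - a.score);

  // 3. Greedily pack chunks until the token budget is exhausted.
  const selected: string[] = [];
  let used = 0;
  for (const c of unique) {
    const cost = estimateTokens(c.text);
    if (used + cost > tokenBudget) continue;
    selected.push(c.text);
    used += cost;
  }

  // 4. Format the surviving chunks into a numbered prompt block.
  return selected.map((t, i) => `[${i + 1}] ${t}`).join("\n");
}
```

Each stage is what distinguishes this from naive concatenation: duplicates never reach the prompt, low-scoring chunks are dropped first when the budget is tight, and the output is already shaped for insertion into an LLM prompt.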