# MCP Server Setup
YantrikDB MCP gives any MCP-compatible AI agent persistent cognitive memory across sessions. Install once, add three lines of config, and your agent auto-recalls context, auto-remembers decisions, and auto-detects contradictions, with no prompting needed.
## Installation

```shell
pip install yantrikdb-mcp
```

## Configuration

Add to your MCP client's configuration:
### Claude Code (`~/.claude/mcp.json`)

```json
{
  "mcpServers": {
    "yantrikdb": {
      "command": "yantrikdb-mcp"
    }
  }
}
```

### Cursor / Windsurf / Copilot / Kilo Code

Same format: add the `yantrikdb` server to your MCP settings. The server communicates over stdio, so it is compatible with any MCP client.
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `YANTRIKDB_DB_PATH` | `~/.yantrikdb/memory.db` | Database file path |
| `YANTRIKDB_EMBEDDING_MODEL` | `all-MiniLM-L6-v2` | Sentence-transformers model |
| `YANTRIKDB_EMBEDDING_DIM` | `384` | Embedding dimensions |
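These can be set in your shell before launching the server, or, with many MCP clients, via an `env` block in the server entry. A sketch, assuming your client supports `env` and using an example path:

```json
{
  "mcpServers": {
    "yantrikdb": {
      "command": "yantrikdb-mcp",
      "env": {
        "YANTRIKDB_DB_PATH": "/home/me/projects/myapp/.yantrikdb/memory.db"
      }
    }
  }
}
```

A per-project database path like this keeps one project's memories from leaking into another's.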
## Available Tools

The MCP server exposes 17 cognitive memory tools:
### Core Memory

| Tool | Description |
|---|---|
| `remember` | Store a memory with importance, domain, valence, and certainty |
| `recall` | Search memories by semantic similarity, with filters for domain, source, and type |
| `recall_refine` | Refine a low-confidence recall with a follow-up query |
| `bulk_remember` | Store multiple memories at once (efficient for summaries) |
| `get_memory` | Retrieve a specific memory by ID |
| `forget` | Tombstone a memory permanently |
| `correct` | Fix an incorrect memory (preserves history, transfers relationships) |
| `update_importance` | Adjust a memory's importance score |
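Agents invoke these through the standard MCP `tools/call` JSON-RPC request over stdio. A hypothetical `remember` call might look like this (the argument names are illustrative, inferred from the descriptions above, not YantrikDB's documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "content": "We chose Postgres over MySQL for the analytics service",
      "importance": 0.8,
      "domain": "project"
    }
  }
}
```

In practice your MCP client constructs these requests for you; the agent only chooses the tool and its arguments.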
### Knowledge Graph

| Tool | Description |
|---|---|
| `relate` | Create entity relationships (e.g., "Alice manages backend team") |
| `entity_edges` | Get all relationships for an entity |
| `search_entities` | Find entities by name pattern |
### Cognition

| Tool | Description |
|---|---|
| `think` | Run consolidation, conflict detection, and pattern mining |
| `conflicts` | List detected contradictions |
| `conflict_resolve` | Resolve a contradiction (`keep_a`, `keep_b`, `merge`, `keep_both`) |
| `recall_feedback` | Improve retrieval quality over time |
| `triggers` | Get proactive insights, warnings, and suggestions |
### System

| Tool | Description |
|---|---|
| `health_check` | Verify the server is operational |
| `stats` | Get memory engine statistics |
## How It Works

The server includes built-in instructions that teach the agent when and how to use memory:

- **Auto-recall**: at conversation start, the agent searches memory for relevant context
- **Auto-remember**: decisions, preferences, people, and project context are stored automatically
- **Auto-relate**: entity relationships are created as they are discovered
- **Consolidation**: `think()` merges redundant memories, detects contradictions, and mines patterns
- **Correction**: when the user corrects a fact, the old memory is tombstoned and a corrected version is created
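The correction flow can be sketched in a few lines. This is a toy model of the behavior described above, not YantrikDB's implementation; the `Memory` class, `correct` function, and field names are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    id: int
    text: str
    tombstoned: bool = False          # tombstoned memories are hidden, not deleted
    related_to: list = field(default_factory=list)

def correct(store: dict, old_id: int, new_text: str) -> Memory:
    """Tombstone the old memory and create a corrected one that inherits its links."""
    old = store[old_id]
    old.tombstoned = True                              # history is preserved
    new = Memory(id=max(store) + 1, text=new_text,
                 related_to=list(old.related_to))      # relationships transfer
    store[new.id] = new
    return new

store = {1: Memory(1, "Alice is on the frontend team", related_to=[7])}
fixed = correct(store, 1, "Alice is on the backend team")
```

The key property is that correction is additive: the wrong fact stays queryable in history while the graph edges move to the corrected version.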
## Why Not File-Based Memory?

File-based approaches (CLAUDE.md, memory files) load every memory into context on every conversation. YantrikDB recalls only what is relevant.
| Memories | File-Based | YantrikDB | Savings |
|---|---|---|---|
| 100 | 1,770 tokens | 69 tokens | 96% |
| 500 | 9,807 tokens | 72 tokens | 99.3% |
| 1,000 | 19,988 tokens | 72 tokens | 99.6% |
| 5,000 | 101,739 tokens | 53 tokens | 99.9% |
Selective recall cost is O(1). File-based is O(n). At 500 memories, file-based exceeds 32K context windows. At 5,000, it doesn’t fit anywhere. YantrikDB stays at ~70 tokens with precision that improves as you add more memories.
Run the benchmark:

```shell
python benchmarks/bench_token_savings.py
```
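The scaling argument above can be illustrated with a toy cost model. The constants are made up for illustration (they are not YantrikDB's measurements): assume roughly 20 tokens per stored memory and a fixed top-3 selective recall:

```python
TOKENS_PER_MEMORY = 20   # hypothetical average size of one memory
TOP_K = 3                # selective recall returns a fixed number of hits

def file_based_cost(n_memories: int) -> int:
    # Every memory is loaded into context on every conversation: O(n)
    return n_memories * TOKENS_PER_MEMORY

def selective_recall_cost(n_memories: int) -> int:
    # Only the top-k relevant memories are returned: O(1) in n
    return TOP_K * TOKENS_PER_MEMORY

for n in (100, 500, 1000, 5000):
    print(f"{n:>5} memories: file-based {file_based_cost(n):>7} tokens, "
          f"selective {selective_recall_cost(n)} tokens")
```

The file-based line grows without bound while the selective line is flat, which is the shape the benchmark table reports.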