# Relevance-Conditioned Scoring
## The Problem with Additive Scoring

Traditional retrieval systems combine signals additively:

```
score = w₁·similarity + w₂·recency + w₃·importance + w₄·graph_proximity
```

This creates a fundamental problem: an irrelevant but important memory can outscore a relevant but less important one.
Example: If a user asks “What’s for dinner?”, the system might retrieve “User got promoted at work” because it has high importance — even though it has nothing to do with dinner.
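The dinner example can be sketched in Python. The weights and the per-memory signal values below are illustrative assumptions, not YantrikDB defaults:

```python
# Sketch of additive scoring. The weights (w) and signal values are
# illustrative assumptions chosen to show the failure mode.
def additive_score(similarity, recency, importance, graph_proximity,
                   w=(0.4, 0.2, 0.3, 0.1)):
    w1, w2, w3, w4 = w
    return w1*similarity + w2*recency + w3*importance + w4*graph_proximity

# Query: "What's for dinner?"
pasta = additive_score(similarity=0.75, recency=0.5,
                       importance=0.3, graph_proximity=0.2)   # relevant
promotion = additive_score(similarity=0.15, recency=0.9,
                           importance=1.0, graph_proximity=0.5)  # irrelevant

print(promotion > pasta)  # True: the irrelevant memory wins on importance alone
```

No single additive weight setting fixes this: lowering the importance weight enough to suppress the promotion memory also suppresses importance everywhere it is genuinely useful.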
## Relevance-Conditioned Scoring

YantrikDB uses a multiplicative gate:

```
gate = σ((similarity - τ) / temperature)
score = gate × (w₁·decay + w₂·recency + α·importance)
```

Where:

- `σ` is the sigmoid function
- `τ` (tau) is the relevance threshold (default: 0.25)
- `temperature` controls gate sharpness
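The gate can be sketched as follows. The threshold τ = 0.25 is the documented default; the temperature and the weights `w1`, `w2`, `alpha` are illustrative assumptions, since this page does not state their defaults:

```python
import math

TAU = 0.25          # relevance threshold (documented default)
TEMPERATURE = 0.1   # gate sharpness -- illustrative, not a documented default

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_score(similarity, decay, recency, importance,
                w1=0.5, w2=0.3, alpha=0.2):
    # Gate collapses toward 0 as similarity falls below tau,
    # and saturates toward 1 as similarity rises above it.
    gate = sigmoid((similarity - TAU) / TEMPERATURE)
    return gate * (w1 * decay + w2 * recency + alpha * importance)
```

Because the gate multiplies the whole weighted sum, a low-similarity memory scores near zero even when every other signal is maximal.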
## How the Gate Works

| Similarity | Gate Value | Effect |
|---|---|---|
| 0.8 (high relevance) | ~1.0 | Full score passes through |
| 0.5 (moderate) | ~0.7 | Partial dampening |
| 0.2 (low relevance) | ~0.1 | Score nearly zeroed |
| 0.0 (irrelevant) | ~0.0 | Completely blocked |
The key insight: when relevance is low, the gate collapses the entire score to near-zero — regardless of how important, recent, or graph-connected the memory is.
## Comparison

| Query: “What’s for dinner?” | Additive Score | YantrikDB Score |
|---|---|---|
| “User likes pasta carbonara” (sim=0.75) | 0.62 | 0.58 |
| “User got promoted at work” (sim=0.15, imp=1.0) | 0.71 | 0.04 |
| “User is vegetarian” (sim=0.60) | 0.55 | 0.48 |
With additive scoring, the promotion memory dominates despite being irrelevant. With relevance-conditioned scoring, it’s properly suppressed.
## Adaptive Weights

YantrikDB learns optimal weights over time through feedback:
- When users access a recalled memory → positive signal
- When users ignore a recalled memory → negative signal
- Weights update via gradient-free optimization
The learned weights are stored per-database and persist across sessions.
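The page does not specify which gradient-free optimizer YantrikDB uses; one common choice for this kind of feedback-driven tuning is a SPSA-style update, sketched here as an illustration:

```python
import random

def spsa_update(weights, reward_fn, step=0.05, perturb=0.02):
    """One SPSA-style gradient-free weight update (illustrative sketch,
    not YantrikDB's actual optimizer).

    reward_fn(weights) would aggregate recent feedback: positive when a
    recalled memory was accessed, negative when it was ignored.
    """
    # Perturb all weights simultaneously in a random +/-1 direction.
    delta = [random.choice((-1.0, 1.0)) for _ in weights]
    plus = [w + perturb * d for w, d in zip(weights, delta)]
    minus = [w - perturb * d for w, d in zip(weights, delta)]
    # Two reward evaluations estimate the directional gradient.
    g = (reward_fn(plus) - reward_fn(minus)) / (2 * perturb)
    # Step each weight along the estimated gradient.
    return [w + step * g * d for w, d in zip(weights, delta)]
```

Only two reward evaluations are needed per update regardless of how many weights are tuned, which is why gradient-free schemes like this suit online feedback loops.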
## Patent Coverage

This scoring method is covered by Claim 1 of U.S. Patent Application No. 19/573,392.