Relevance-Conditioned Scoring

Traditional retrieval systems combine signals additively:

score = w₁·similarity + w₂·recency + w₃·importance + w₄·graph_proximity

This creates a fundamental problem: an irrelevant but important memory can outscore a relevant but less important one.

Example: If a user asks “What’s for dinner?”, the system might retrieve “User got promoted at work” because it has high importance — even though it has nothing to do with dinner.

YantrikDB uses a multiplicative gate:

gate = σ((similarity - τ) / temperature)
score = gate × (w₁·decay + w₂·recency + α·importance)

Where:

  • σ is the sigmoid function
  • τ (tau) is the relevance threshold (default: 0.25)
  • temperature controls gate sharpness
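The gate formula above can be sketched in a few lines of Python. Only the default τ = 0.25 is documented; the temperature value and the weights in `score` below are illustrative assumptions, not YantrikDB's actual defaults:

```python
import math

TAU = 0.25          # relevance threshold (documented default)
TEMPERATURE = 0.1   # assumed value: smaller temperatures make the gate sharper

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gate(similarity, tau=TAU, temperature=TEMPERATURE):
    """Relevance gate: ~1 when similarity >> tau, ~0 when similarity << tau."""
    return sigmoid((similarity - tau) / temperature)

def score(similarity, decay, recency, importance,
          w1=0.4, w2=0.3, alpha=0.3):  # assumed weights, for illustration only
    # The gate multiplies the whole weighted sum, so low similarity
    # suppresses the score no matter how large the other signals are.
    return gate(similarity) * (w1 * decay + w2 * recency + alpha * importance)
```

With these assumed values, `gate(0.8)` is close to 1 while `gate(0.0)` is close to 0; the exact intermediate values in the table depend on the temperature chosen.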

Similarity             Gate Value   Effect
0.8 (high relevance)   ~1.0         Full score passes through
0.5 (moderate)         ~0.7         Partial dampening
0.2 (low relevance)    ~0.1         Score nearly zeroed
0.0 (irrelevant)       ~0.0         Completely blocked

The key insight: when relevance is low, the gate collapses the entire score to near-zero — regardless of how important, recent, or graph-connected the memory is.

Memory (query: “What’s for dinner?”)              Additive Score   YantrikDB Score
“User likes pasta carbonara” (sim=0.75)           0.62             0.58
“User got promoted at work” (sim=0.15, imp=1.0)   0.71             0.04
“User is vegetarian” (sim=0.60)                   0.55             0.48

With additive scoring, the promotion memory dominates despite being irrelevant. With relevance-conditioned scoring, it’s properly suppressed.
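The suppression effect can be reproduced end to end. The feature values, weights, and temperature below are assumptions chosen for illustration (graph proximity is omitted for brevity), so the exact scores differ from the table above; what carries over is the ordering: the promotion memory ranks first under additive scoring and last under gating:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Assumed feature values; only the sim/imp figures come from the table above.
memories = {
    "pasta carbonara (sim=0.75)": dict(sim=0.75, decay=0.8, recency=0.5, importance=0.4),
    "promotion (sim=0.15)":       dict(sim=0.15, decay=0.8, recency=0.5, importance=1.0),
    "vegetarian (sim=0.60)":      dict(sim=0.60, decay=0.8, recency=0.5, importance=0.5),
}

def additive(m, w_sim=0.4, w_rec=0.1, w_imp=0.5):
    # Additive combination: a large importance can outvote low similarity.
    return w_sim * m["sim"] + w_rec * m["recency"] + w_imp * m["importance"]

def gated(m, tau=0.25, temp=0.05, w1=0.2, w2=0.2, alpha=0.6):
    # Multiplicative gate: low similarity collapses the whole score.
    g = sigmoid((m["sim"] - tau) / temp)
    return g * (w1 * m["decay"] + w2 * m["recency"] + alpha * m["importance"])

add_scores = {k: additive(m) for k, m in memories.items()}
gated_scores = {k: gated(m) for k, m in memories.items()}
```

Under these assumed weights, the promotion memory wins the additive ranking on importance alone, but its gate value of roughly σ(−2) ≈ 0.12 pushes it to the bottom of the gated ranking.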

The weights inside the gated score are not fixed, either: YantrikDB learns them over time through feedback:

  • When users access a recalled memory → positive signal
  • When users ignore a recalled memory → negative signal
  • Weights update via gradient-free optimization

The learned weights are stored per-database and persist across sessions.
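The document does not specify which gradient-free optimizer is used, so the following is a hypothetical sketch of the feedback loop using simple random-perturbation hill climbing: accessed memories define a reward, and a perturbed weight vector is kept only if it ranks accessed memories at least as well:

```python
import random

def feedback_reward(weights, events):
    """Fraction of events where the top-ranked memory was the one accessed.

    `events` is a list of (feature_rows, accessed_index) pairs, where each
    feature row is (decay, recency, importance) for one candidate memory,
    with the relevance gate already applied.
    """
    hits = 0
    for rows, accessed in events:
        scores = [sum(w * f for w, f in zip(weights, row)) for row in rows]
        if scores.index(max(scores)) == accessed:
            hits += 1
    return hits / len(events)

def update_weights(weights, events, step=0.05, rng=random):
    """One gradient-free update: keep a random perturbation only if it helps."""
    candidate = [max(0.0, w + rng.uniform(-step, step)) for w in weights]
    if feedback_reward(candidate, events) >= feedback_reward(weights, events):
        return candidate
    return weights
```

Ignored recalls enter this loop as events whose accessed memory is some other candidate, so weights that over-rank ignored memories lose reward. All names here (`feedback_reward`, `update_weights`, the event shape) are invented for illustration, not YantrikDB's API.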

This scoring method is covered by Claim 1 of U.S. Patent Application No. 19/573,392.