epistemic_store

Store a natural-language claim as a validated, scored, and classified memory.

POST epistemic_store

Description

Accepts a raw text claim and processes it through the full six-layer pipeline (L0–L5): policy check → sentence classification & normalization → confidence scoring → conflict detection → embedding & storage → tier routing.
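The stages above can be sketched end to end. Everything below is an illustrative placeholder (the toy policy rule, the fixed confidence, the tier choice), not the actual implementation:

```python
def store_claim(claim: str) -> dict:
    """Minimal sketch of the six-layer pipeline (L0-L5).

    All rules here are stand-ins for the real logic.
    """
    # L0: policy check -- a toy rule standing in for the PII/policy filter
    if "password" in claim.lower():
        return {"success": False, "error": "POLICY_BLOCKED"}

    # L1: sentence classification & normalization (toy triple extraction)
    subject, _, remainder = claim.partition(" ")
    triple = {"subject": subject, "predicate_object": remainder}

    # L2: confidence scoring (placeholder constant)
    confidence = 0.741

    # L3: conflict detection (no existing store to compare against here)
    conflict = False

    # L4: embedding & storage (placeholder vector)
    vector = [0.0, 0.0, 0.0]

    # L5: tier routing (placeholder; real routing would use score/kind/source)
    tier = "WORKING"

    return {
        "success": True,
        "memory": {**triple, "confidence": confidence, "tier": tier},
        "conflict": conflict,
        "dims": len(vector),
    }
```

A blocked claim short-circuits at L0 and never reaches scoring or storage, which is why POLICY_BLOCKED responses carry no memory object.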

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| claim | string | Yes | The natural-language claim to store (e.g., "I work at Google"). |
| source | string | No | Override the source. One of: user_explicit, agent_inferred, third_party, system. Default: auto-detected. |
| decayClass | string | No | Override the decay class: PERMANENT, STABLE, MODERATE, EPHEMERAL, VOLATILE. Default: auto-classified. |
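As an illustration, a request that bypasses both auto-detection steps by supplying the optional fields might look like this (the field values are examples only):

```json
{
  "tool": "epistemic_store",
  "input": {
    "claim": "I work at Google",
    "source": "user_explicit",
    "decayClass": "STABLE"
  }
}
```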

Example

Request
```json
{
  "tool": "epistemic_store",
  "input": {
    "claim": "Tôi thích ăn phở bò Hà Nội"
  }
}
```
Response
```json
{
  "success": true,
  "memory": {
    "id": "mem_f8a2b1c3",
    "claim": "Tôi thích ăn phở bò Hà Nội",
    "subject": "user",
    "predicate": "likes eating",
    "object": "Hanoi-style beef pho",
    "confidence": 0.741,
    "tier": "WORKING",
    "kind": "preference",
    "decayClass": "STABLE",
    "source": "user_explicit"
  },
  "pipeline": {
    "l0": "PASS",
    "l1": "normalized",
    "l2": "scored:0.741",
    "l3": "no_conflict",
    "l4": "embedded",
    "l5": "tier:WORKING"
  }
}
```

Errors

| Error | Cause |
| --- | --- |
| POLICY_BLOCKED | Claim contains PII or violates policy rules |
| NOT_STORABLE | The sentence classifier determined the content is not a factual claim |
| ENTROPY_HALT | System entropy is too high; resolve conflicts first |
| EMBEDDING_FAILED | OpenAI API error during vector generation |
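Callers may want to distinguish errors that can succeed on retry from those that cannot. The retry/drop split below is a client-side assumption, not documented server behavior:

```python
# Hypothetical client-side triage of epistemic_store failures.
# ENTROPY_HALT and EMBEDDING_FAILED are transient conditions; the
# other two reflect the claim itself and will not succeed later.
RETRIABLE = {"ENTROPY_HALT", "EMBEDDING_FAILED"}

def next_action(result: dict) -> str:
    """Return 'stored', 'retry', or 'drop' for a tool response."""
    if result.get("success"):
        return "stored"
    error = result.get("error", "UNKNOWN")
    if error in RETRIABLE:
        return "retry"  # e.g. resolve conflicts or wait, then call again
    return "drop"       # POLICY_BLOCKED / NOT_STORABLE are permanent
```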

Notes

  • The pipeline typically processes a claim in under 200 ms
  • Both Vietnamese and English are supported natively at L1
  • If a conflict is detected at L3, both the old and new claims are preserved, with appropriate tier changes