FAQ

Common questions about setup, behavior, and troubleshooting.

General

What makes Epistemic Memory different from simple memory plugins?

Most memory plugins store facts as key-value pairs without validation. Epistemic treats memory as a knowledge system — every claim is scored, validated for conflicts, assigned a decay class, and routed to an appropriate trust tier. It also includes audit logging, health monitoring, and cross-channel identity unification.
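The pipeline above can be sketched as a small routing function. The field names, tier names beyond FACT, and thresholds here are illustrative assumptions, not the plugin's actual schema:

```typescript
// Hypothetical shape of a validated claim. Only the FACT tier is named in
// the docs; the other tiers and all thresholds here are assumptions.
type DecayClass = "stable" | "seasonal" | "volatile";
type TrustTier = "FACT" | "BELIEF" | "RUMOR";

interface Claim {
  text: string;
  confidence: number;  // validator score in [0, 1]
  conflicts: number;   // unresolved contradictions with stored claims
  decayClass: DecayClass;
}

// Route a claim: high-confidence, conflict-free claims earn the FACT tier;
// contested claims are demoted so retrieval can weight them accordingly.
function routeTier(claim: Claim): TrustTier {
  if (claim.conflicts > 0) return "RUMOR";
  return claim.confidence >= 0.8 ? "FACT" : "BELIEF";
}
```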

What LLM does it require?

Epistemic requires an OpenAI-compatible embedding API (default: text-embedding-3-small). The host OpenClaw instance provides the main LLM for conversation — any model supported by OpenClaw works.
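A minimal embedding configuration might look like the fragment below. The embedding.apiKey setting appears in the troubleshooting section; the exact nesting and the model key are assumptions based on the stated default:

```json
{
  "embedding": {
    "apiKey": "sk-...",
    "model": "text-embedding-3-small"
  }
}
```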

How many memories can it store?

Epistemic stores memories in LanceDB, a columnar storage engine optimized for vector search that handles hundreds of thousands of records efficiently. Practical limits come from disk space and embedding costs rather than query performance.
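For a rough sense of the disk-space side, here is a back-of-envelope estimate assuming float32 vectors from text-embedding-3-small (1536 dimensions); it ignores metadata and index overhead:

```typescript
// Raw vector storage in MB, assuming 4-byte floats per dimension.
function vectorStorageMB(records: number, dims = 1536, bytesPerFloat = 4): number {
  return (records * dims * bytesPerFloat) / (1024 * 1024);
}

// 100,000 memories is roughly 586 MB of raw vector data.
```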

Configuration

How do I find my Telegram user ID?

Send /start to @userinfobot on Telegram. It will reply with your numeric user ID.

Can I have multiple owners?

Currently, the system supports a single owner with aliases. Multi-user memory isolation is planned for a future release.

Troubleshooting

Embedding API errors

Check that your embedding.apiKey is valid and that the key has sufficient quota. The plugin logs embedding errors with the [epistemic] prefix in your container logs.
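Transient failures such as rate limits often resolve on retry. A generic retry wrapper with exponential backoff is sketched below; the function name and parameters are illustrative, not part of the plugin's API:

```typescript
// Retry a flaky async call (e.g. an embedding request) with exponential
// backoff: delays of base, 2x base, 4x base, ... between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Persistent 401/403 errors, by contrast, indicate an invalid key and will not be fixed by retrying.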

High entropy warning

The memory store has accumulated too many unresolved conflicts. Run epistemic_health to list challenged memories, then epistemic_resolve to clear them.

Memories seem to disappear

This is likely temporal decay working as intended. Check the memory's decay class with epistemic_explain. If important memories are fading too fast, either promote them to FACT tier or increase the half-life for their decay class.
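Half-life decay can be sketched as a simple exponential: a memory's relevance halves every half-life period. The function and parameter names below are illustrative; the plugin presumably maps each decay class to its own half-life value.

```typescript
// Score after ageHours, given a half-life in hours:
// score = initial * 0.5 ^ (age / halfLife).
function decayedScore(initial: number, ageHours: number, halfLifeHours: number): number {
  return initial * Math.pow(0.5, ageHours / halfLifeHours);
}

// Doubling the half-life makes a memory fade half as fast at any given age.
```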