Varun_shodh's comments

Hey HN, I built this because every AI memory system I tried (mem0, Cognee, Zep) makes 2-3 LLM API calls just to store a single memory. All of them need OpenAI keys.

I thought: why does storing a memory need a language model?

Shodh-Memory is a single Rust binary (~17MB) with zero LLM calls. Memories you use often become easier to find (Hebbian learning). Old irrelevant context fades automatically (decay). Recalling one thing brings back related things (spreading activation over a knowledge graph).
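The three mechanics above can be sketched in a few lines. This is an illustrative toy model, not Shodh-Memory's actual implementation; the class, constants, and update rules are all hypothetical:

```python
import math
import time

class Memory:
    """Toy memory node: strength grows with use, fades when idle,
    and recall leaks activation to linked neighbors."""

    def __init__(self, text):
        self.text = text
        self.strength = 1.0
        self.last_access = time.time()
        self.links = {}  # neighbor Memory -> edge weight (hypothetical)

    def recall(self, now=None):
        now = now if now is not None else time.time()
        # Decay: strength fades exponentially with idle time.
        idle = now - self.last_access
        self.strength *= math.exp(-0.01 * idle)
        # Hebbian learning: each recall makes this memory easier to find.
        self.strength += 0.5
        self.last_access = now
        # Spreading activation: recalling this node boosts its neighbors.
        for neighbor, weight in self.links.items():
            neighbor.strength += 0.5 * weight
        return self.text
```

A memory linked to the one you recall gets a strength bump proportional to the edge weight, so related context surfaces together without any LLM call.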

55ms to store a memory. Runs fully offline. No external databases, no API keys, no cloud.

Works as an MCP server for Claude/Cursor, has Python bindings and a REST API. I use it daily with Claude Code — it's the memory system powering its own development.
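Wiring an MCP server into Claude Desktop typically looks like the snippet below. The binary name and flag here are placeholders, not the project's documented invocation:

```json
{
  "mcpServers": {
    "shodh-memory": {
      "command": "shodh-memory",
      "args": ["--mcp"]
    }
  }
}
```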

Caveats: solo project, no formal retrieval-quality benchmarks yet, and ~400ms warm latency for the full context pipeline.

Paper: https://doi.org/10.5281/zenodo.18816615

Would love feedback on the architecture.


