Author here. I built this because I kept rebuilding the same agent personality every time I switched frameworks. Discord bot one week, Slack bot the next, Claude Code after that. Each time I'd lose everything the agent had learned.
The psychology stack came from frustration with RAG-based memory. Vector similarity treats a casual "nice weather" the same as "my mom passed away." Human memory doesn't work that way. We model significance gating (LIDA), emotional salience (Damasio), and activation decay (ACT-R) so the agent remembers what actually matters.
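(The post doesn't show soul-protocol's internal formula, so here's a minimal sketch of the classic ACT-R base-level activation it cites; the function name and the decay parameter d=0.5 are textbook defaults, not taken from the library.)

```python
import math
import time

def base_level_activation(use_timestamps, now=None, d=0.5):
    """ACT-R base-level activation: B = ln(sum_j t_j^-d),
    where t_j is the age of each past use of the memory.
    Frequent, recent recalls keep activation high; unused
    memories decay toward irrelevance."""
    now = time.time() if now is None else now
    return math.log(sum((now - t) ** -d for t in use_timestamps))
```

A memory recalled one second ago and once 100 seconds ago scores ln(1 + 0.1) ≈ 0.095, while a single 100-second-old trace scores ln(0.1) ≈ -2.3, which is the "remembers what actually matters" effect in one line of math.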
Quick start if you want to try it:
pip install soul-protocol
soul init "MyAgent"
soul observe "I love building open source tools"
soul recall "what do I enjoy"
soul status
The .soul file is just a ZIP. Rename it, read the JSON inside. No magic.
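Since it's just a ZIP of JSON, you can poke at one with the standard library; a quick sketch (the inner filename layout is whatever your .soul happens to contain, not a spec guarantee):

```python
import json
import zipfile

def peek_soul(path):
    """Read every JSON entry out of a .soul archive (a plain ZIP)
    and return {entry_name: parsed_json}. No special tooling needed."""
    with zipfile.ZipFile(path) as z:
        return {
            name: json.loads(z.read(name))
            for name in z.namelist()
            if name.endswith(".json")
        }
```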
Happy to answer questions about the memory architecture, the eval methodology, or the spec design.