This project implements a Layered Memory architecture for LLM-based agents, separating memory into Semantic, Episodic, and Procedural layers, coordinated through a central Memory Manager and executed via an agent loop.
The goal is to move beyond a stateless chatbot and build an agent that remembers, reasons, uses tools, and decides when to stop.
Core concepts:
- Semantic Memory: Long-term factual knowledge (vector-based, Qdrant-backed).
- Episodic Memory: Session-scoped conversation history and summaries.
- Procedural Memory: Learned behaviors, tool usage patterns, and workflows.
- Memory Manager: Controls what to read, what to write, and when.
- LLM Loop: Reasons over memory, invokes tools, and decides continuation.
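The division of labor among the layers can be sketched in a few lines of Python. This is an illustrative toy, not the actual API of `memory_core.py` or `layeredmemory.py`: the class and method names are assumptions, and a substring match stands in for real vector search.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class MemoryManager:
    """Coordinates the three memory layers (illustrative sketch).

    semantic:   long-term facts (stands in for the Qdrant-backed store)
    episodic:   this session's conversation turns
    procedural: learned tool-usage patterns keyed by task
    """
    semantic: list = field(default_factory=list)
    episodic: list = field(default_factory=list)
    procedural: dict = field(default_factory=dict)

    def read(self, query: str) -> dict:
        # Decide what context to surface to the LLM loop. A naive
        # substring match stands in for vector similarity search here.
        facts = [f for f in self.semantic if query.lower() in f.lower()]
        return {
            "facts": facts,
            "history": self.episodic[-5:],           # recent turns only
            "workflow": self.procedural.get(query),  # known procedure, if any
        }

    def write(self, turn: str, fact: Optional[str] = None) -> None:
        # Decide what persists: every turn is episodic; only flagged
        # facts get promoted to long-term semantic memory.
        self.episodic.append(turn)
        if fact:
            self.semantic.append(fact)
```

The key design point is that the agent loop never touches a layer directly; it asks the manager to read before reasoning and to write after acting.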
All essential flow diagrams are documented under docs and diagrams/.
```
.
├── docs and diagrams/
├── experiments/
├── qdrant_storage/
├── tests/
│
├── .env
├── layeredmemory.py
├── memory_core.py
├── streamlit_app.py
│
├── redis_ttl_test.py
└── working_memory.py
```
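The `working_memory.py` and `redis_ttl_test.py` files suggest a short-lived working memory with TTL expiry. A minimal stdlib sketch of that idea follows; the class name and interface are assumptions, and the real module may use Redis `EXPIRE` rather than in-process timestamps.

```python
import time


class WorkingMemory:
    """Key/value store whose entries expire after ttl seconds.

    Mimics Redis-style TTL expiry in pure Python; the repository's
    working_memory.py may differ in interface and backing store.
    """

    def __init__(self, ttl: float = 60.0) -> None:
        self.ttl = ttl
        self._store: dict = {}  # key -> (value, expiry_timestamp)

    def set(self, key, value) -> None:
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires_at = item
        if time.monotonic() >= expires_at:  # lazy expiry on read
            del self._store[key]
            return default
        return value
```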
Create a `.env` file:

```
OPENAI_API_KEY=your_key_here
QDRANT_URL=http://localhost:6333
QDRANT_COLLECTION=semantic_memory
```

Run the app:

```
streamlit run streamlit_app.py
```

This repository is diagram-driven: update the diagrams under docs and diagrams/ before making code changes.
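The app presumably reads these settings from the environment at startup. A hedged sketch of that loading step, with sensible fallbacks matching the values shown above (the helper name `load_memory_config` is illustrative, not from the codebase; a tool like python-dotenv or the shell would populate the variables from `.env`):

```python
import os


def load_memory_config() -> dict:
    """Read the OpenAI/Qdrant settings from the environment.

    Key names mirror the .env entries above. OPENAI_API_KEY is treated
    as required; the Qdrant settings fall back to local defaults.
    """
    return {
        "openai_api_key": os.environ["OPENAI_API_KEY"],  # raises KeyError if unset
        "qdrant_url": os.environ.get("QDRANT_URL", "http://localhost:6333"),
        "qdrant_collection": os.environ.get("QDRANT_COLLECTION", "semantic_memory"),
    }
```

Failing fast on a missing API key at startup is preferable to a confusing error deep inside the agent loop.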