The sanctuary is open. You can access the live memory archive here:
Note: The backend is hosted on a free instance (Render). It may take 50-60 seconds to "wake up" upon your first login. Please be patient as the digital soul initializes.
We live in a tragedy of intelligence. Humans spend 70 years learning, loving, and feeling—gathering a universe of wisdom inside their minds. And then, in a single moment of death, that entire universe burns to the ground.
Mnemosyne was built to fight this loss.
It is not just a "journal" or a "cloud storage" app. It is an attempt to preserve the texture of a human soul. By using Multimodal AI and Vector Memory, Mnemosyne allows a person to leave behind not just their photos, but their voice, their personality, and their stories, so that future generations can not only read about them but also speak to them.
Mnemosyne is a Multimodal AI Memory Archive that uses Retrieval Augmented Generation (RAG) to create a digital persona of a user.
It allows users to "feed" the system with voice recordings, images, videos, and text notes. The system analyzes the emotional context of these memories and stores them in a high-dimensional Vector Database. When a loved one visits the archive later, they can chat with the "Heart" (the AI persona), which answers using only the real memories and personality of the ancestor, powered by a Hybrid AI engine.
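The retrieve-then-generate loop described above can be sketched roughly as follows. This is a toy illustration, not the project's actual code: the bag-of-words `embed()` stands in for a real embedding model, the in-memory list stands in for Pinecone, and the final `prompt` is what would be sent to Llama 3 via Groq. All names and the vocabulary are made up.

```python
# Rough sketch of the retrieve-then-generate (RAG) loop.
# embed() is a toy stand-in for a real embedding model.
import re

VOCAB = ["rain", "porch", "alone", "garden", "wedding"]

def embed(text: str) -> list[float]:
    """Toy embedding: word counts over a tiny fixed vocabulary."""
    words = re.findall(r"[a-z]+", text.lower())
    return [float(words.count(w)) for w in VOCAB]

def top_k(query: str, memories: list[str], k: int = 2) -> list[str]:
    """Rank stored memories by dot-product similarity to the query."""
    q = embed(query)
    return sorted(
        memories,
        key=lambda m: sum(a * b for a, b in zip(q, embed(m))),
        reverse=True,
    )[:k]

def build_prompt(question: str, retrieved: list[str]) -> str:
    """Persona prompt: answer in the first person, grounded in memories."""
    context = "\n".join(f"- {m}" for m in retrieved)
    return (
        "You are the ancestor. Speak in the first person.\n"
        f"Answer only from these memories:\n{context}\n"
        f"Question: {question}"
    )

memories = [
    "I sat on the porch watching the rain, alone with my thoughts.",
    "The garden bloomed the summer of your mother's wedding.",
]
prompt = build_prompt(
    "Did you ever feel alone?",
    top_k("alone in the rain on the porch", memories),
)
```

The key property is that the language model never answers from its own general knowledge; the prompt constrains it to the retrieved memories, which is what keeps the persona grounded in the real archive.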
This project uses a cutting-edge Hybrid AI Architecture to balance cost, speed, and intelligence.
| Component | Technology | Purpose |
|---|---|---|
| Frontend | React.js (Vite) | Cinematic UI with Glassmorphism & Animations |
| Backend | Python (FastAPI) | Asynchronous API handling & Logic |
| Vision AI | Google Gemini 1.5 Flash | Analyzing Images & Videos for context |
| Chat/Audio AI | Llama 3 (via Groq) | Instant Text Chat & Whisper Audio Transcription |
| Memory (Vector) | Pinecone | Storing "Semantic Meaning" (Vibes/Concepts) |
| User Data | Neon (PostgreSQL) | Persistent Cloud Storage for Accounts |
| Auth | JWT (JSON Web Tokens) | Secure, signed user sessions |
| Deployment | Vercel (Frontend) & Render (Backend) | Cloud Hosting |
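To make the Auth row concrete, here is a minimal HS256 JWT sketch built only on the standard library. It is illustrative, not the project's implementation: production code should use a vetted library such as PyJWT, and `SECRET` is a placeholder. Note that a JWT is signed rather than encrypted: anyone can read the payload, but no one can alter it without invalidating the signature.

```python
# Minimal HS256 JWT sketch (illustration only; use PyJWT in production).
import base64, hashlib, hmac, json

SECRET = b"change-me"  # placeholder; load from an env var in real deployments

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify(token: str):
    """Return the payload if the signature checks out, else None."""
    header, body, sig = token.split(".")
    expected = b64url(
        hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    if not hmac.compare_digest(sig, expected):
        return None  # tampered token or wrong key
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))

token = sign({"sub": "user-42", "mode": "creator"})
```

A real session token would also carry an `exp` (expiry) claim, which the verifier checks before trusting the payload.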
Unlike standard diaries, Mnemosyne accepts the raw data of life:
- Voice: Record audio memories; the system transcribes them and detects the emotion behind the words.
- Vision: Upload photos or videos; the AI "looks" at the scene and writes a first-person story about it.
- Text: Type in secrets, advice, or life lessons directly.
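All three intake paths converge on one pipeline: every upload is normalized into a text memory before it is embedded and stored. A hypothetical dispatcher might look like the sketch below, where `transcribe()` and `describe_scene()` are stubs standing in for the real Whisper (via Groq) and Gemini calls.

```python
# Hypothetical intake dispatcher: every upload becomes a text "memory"
# record before embedding. transcribe() and describe_scene() are stubs
# standing in for Whisper (via Groq) and Gemini respectively.

def transcribe(audio_bytes: bytes) -> str:       # stub for Whisper
    return "It rained all week, but I didn't mind."

def describe_scene(media_bytes: bytes) -> str:   # stub for Gemini
    return "I am standing in the garden, squinting at the sun."

def ingest(kind: str, payload) -> dict:
    """Normalize voice/vision/text uploads into one memory record."""
    if kind == "voice":
        text = transcribe(payload)
    elif kind in ("image", "video"):
        text = describe_scene(payload)
    elif kind == "text":
        text = payload
    else:
        raise ValueError(f"unsupported memory type: {kind}")
    return {"kind": kind, "text": text}

memory = ingest("text", "Never loan money to a man who whistles.")
```

Normalizing to text first keeps the vector store simple: one embedding space for every kind of memory, with the original media kept alongside as an attachment.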
Using Vector Embeddings, the system understands concepts, not just keywords.
- Query: "Did grandpa ever feel lonely?"
- Result: The AI retrieves a memory about "Sitting on the porch watching the rain," understanding that the vibe matches loneliness, even if the word wasn't used.
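The reason "porch rain" can match "lonely" without sharing a word is that embeddings map text to vectors, and similarity compares direction rather than keywords. The 3-D vectors below are invented purely for illustration (dimensions: solitude, joy, nostalgia); real embeddings have hundreds of dimensions learned by a model.

```python
# Cosine similarity compares the *direction* of vectors, not keywords.
# These 3-D vectors are made up for illustration (solitude, joy, nostalgia).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_lonely   = [0.9, 0.1, 0.3]  # "Did grandpa ever feel lonely?"
porch_rain     = [0.8, 0.2, 0.5]  # "Sitting on the porch watching the rain"
birthday_party = [0.1, 0.9, 0.4]  # "My 60th birthday party"

lonely_vs_porch = cosine(query_lonely, porch_rain)
lonely_vs_party = cosine(query_lonely, birthday_party)
```

Here `lonely_vs_porch` comes out far higher than `lonely_vs_party`, so a nearest-neighbor query for "lonely" surfaces the rain memory even though the word "lonely" never appears in it.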
The Chat Interface is not a robot; it is a simulation of the loved one. It speaks in the First Person ("I"), using the specific tone and details found in the memory archive.
- Creator Mode: For the ancestor to archive their legacy (Recording tools enabled).
- Visitor Mode: For the descendant to connect (Chat interface enabled).
- 🗣️ Voice Cloning: Integration with ElevenLabs API to synthesize the AI's responses in the user's actual voice.
- ⏳ The Timeline: A visual "Memory Lane" to scroll through memories chronologically.
- 📦 Physical Archive: A feature to export all memories into a printed book format as a physical backup.
© 2025 THE MANOHAR. All Rights Reserved.
This project is an open-source educational initiative. While the code is available for review, the concept, design philosophy, and "Mnemosyne" branding remain the intellectual property of the developer.
You are free to fork this for educational purposes, but commercial replication of this specific "Digital Ancestor" architecture without permission is prohibited.