Inspiration
A big spark for Echoes came from a TikTok trend: people imagining conversations with their younger selves and sharing what they would say now. Those videos are emotional because they compress growth, regret, compassion, and hope into one moment. We wanted to turn that feeling into a product experience.
Instead of a one-off post, Echoes makes this reflection interactive and grounded in real memories. The idea is not to "bring someone back," but to create a memory-based companion that helps you talk across time with more empathy and context.
What it does
Echoes lets users:
- Write diary memories (text + optional video/audio)
- Organize memories by age and date
- Generate an age-specific persona from past entries
- Chat with that persona in first-person voice
- Hear replies with text-to-speech
- See trust metadata in responses: `confidence`, `basedOnMemories`, `reasoningNote`
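The trust metadata above can be pictured as a small typed object attached to each reply. This is an illustrative sketch, not the actual Echoes code: the field names come from the writeup, but `TrustMetadata`, the `0..1` confidence scale, and `confidenceLabel` are assumptions.

```typescript
// Hypothetical shape of the trust metadata attached to each persona reply.
// Field names follow the writeup; the interface itself is an illustration.
interface TrustMetadata {
  confidence: number;        // assumed 0..1 score of how grounded the reply is
  basedOnMemories: string[]; // IDs of the diary entries that grounded the reply
  reasoningNote: string;     // short human-readable note shown with the reply
}

// Example: turn the numeric score into a label the chat UI could display.
function confidenceLabel(meta: TrustMetadata): string {
  if (meta.confidence >= 0.8) return "grounded in memory";
  if (meta.confidence >= 0.5) return "partly inferred";
  return "speculative";
}
```

Surfacing a label like this, rather than a raw number, is one simple way to make uncertainty legible to users.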
At a high level:
$$ \text{Persona at age } t = g\left(\{\, m_i \mid \text{age}_i \le t \,\}\right) $$
where $m_i$ are dated memory entries.
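The cumulative rule above can be sketched in a few lines: the persona at age $t$ only sees memories dated at or before that age. The `Memory` shape and `memoriesUpToAge` helper are hypothetical names for illustration, not the real API.

```typescript
// Illustrative memory entry: the author's age when it was written, plus text.
interface Memory {
  age: number;
  text: string;
}

// Select the cumulative diary history available to the persona at age t,
// mirroring the set { m_i | age_i <= t } from the formula above.
function memoriesUpToAge(memories: Memory[], t: number): Memory[] {
  return memories.filter((m) => m.age <= t);
}
```

Everything the persona generator sees flows through a filter like this, which is what keeps an age-15 persona from "knowing" things written at age 25.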
How we built it
We built Echoes as a local-first web app:
- Frontend: Next.js (App Router), React, TypeScript, Tailwind, Framer Motion
- AI: OpenAI GPT-4o for persona generation + chat
- Voice: ElevenLabs for natural reply playback
- Data: Local JSON storage (`data/db.json`) and local media uploads
We separated product logic into reusable modules:
- Persona engine (memory -> structured persona)
- Inference logic (direct memory vs inferred response)
- Safety logic (sensitive topic detection + in-character redirects)
- Response formatter (display text + speech text)
This made the system easier to improve without overengineering backend infrastructure.
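As one example of the layering above, the response formatter's job (display text + speech text) can be sketched as a single pure function. The names `FormattedReply` and `formatReply` are assumptions for illustration; the stripping rules shown are a plausible minimal version, not the project's actual ones.

```typescript
// Illustrative "response formatter" layer: one reply carries both a display
// string for the chat UI and a speech string cleaned up for TTS playback.
interface FormattedReply {
  displayText: string; // rendered in the chat UI, markdown allowed
  speechText: string;  // sent to the TTS voice, markdown stripped
}

function formatReply(raw: string): FormattedReply {
  // Remove markdown emphasis/heading/code characters and list markers so the
  // synthesized voice does not read out "asterisk asterisk".
  const speechText = raw
    .replace(/[*_`#]/g, "")
    .replace(/^\s*-\s+/gm, "")
    .trim();
  return { displayText: raw, speechText };
}
```

Keeping this as its own pure layer means the persona and safety logic never need to know how replies are rendered or voiced.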
Challenges we ran into
- Balancing emotional realism with strict grounding in memory data
- Answering questions not covered by any memory without the reply sounding fabricated
- Expanding safety behavior while staying gentle and in-character
- Keeping UI quality consistent across Home, Memories, Persona, and Chat
- Fixing chat continuity so sessions are not lost when users navigate routes
Accomplishments that we're proud of
- End-to-end memory -> persona -> chat -> voice flow working locally
- Date-aware persona grounding using cumulative diary history
- Confidence metadata on responses to improve trust
- Better safety handling with dynamic, varied in-character redirects
- Speech-friendly formatting for natural TTS playback
- Cohesive, animated UI across core pages
- Chat session persistence per age group
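Per-age-group session persistence can be done with little more than a keyed store in the browser. This is a minimal sketch assuming `localStorage` in a browser context; the key scheme (`echoes-chat-<age>`) and helper names are hypothetical.

```typescript
// Illustrative chat message shape for a persisted session.
interface ChatMessage {
  role: "user" | "persona";
  text: string;
}

// One storage key per age group, so navigating between personas
// never clobbers another age's conversation.
function sessionKey(age: number): string {
  return `echoes-chat-${age}`;
}

function saveSession(age: number, messages: ChatMessage[]): void {
  localStorage.setItem(sessionKey(age), JSON.stringify(messages));
}

function loadSession(age: number): ChatMessage[] {
  const raw = localStorage.getItem(sessionKey(age));
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}
```

Because the key encodes the age group, a user can leave the Chat route, browse Memories, and return to find the same session restored.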
What we learned
- People trust AI reflection more when uncertainty is explicit
- Product depth comes from response quality, not just model calls
- Separation of logic layers speeds up iteration and debugging
- Good motion and continuity dramatically improve perceived intelligence
- Local-first architecture can still deliver a strong demo narrative
What's next for Echoes
Planned next steps:
- Optional social-memory import (e.g., timeline-style ingestion with consent)
- Richer voice and visual life-stage representation
- Realtime conversation mode (streaming speech in/out)
- Auto-generate current diary entries from conversation sessions
- Share/export diary timelines with privacy controls
- Multi-user accounts and cloud sync after hackathon
A future objective can be framed as:
$$ \max U = \alpha \cdot \text{authenticity} + \beta \cdot \text{grounding} + \gamma \cdot \text{safety} $$
subject to latency, privacy, and consent constraints.
Built With
- elevenlabs
- nextjs
- openai
- react
- tailwind
- typescript