🧭 The Journey: Inspiration, Learnings, and Challenges

💡 What Inspired Us

MindTwin: MemoryVerse was born from a simple but powerful vision: What if talking to AI felt more like vibing with friends in a group chat than typing into a tool?

We wanted to break away from sterile, one-on-one AI interactions. Instead, we imagined a playful, flowing, and deeply human space: one where your thoughts bounce off multiple AI personalities, where your voice matters, and where self-discovery happens through natural dialogue.

Inspired by:

  • Gen Z communication culture: chaotic, emotional, expressive
  • Consciousness exploration: a blend of psychology and philosophy
  • Sci-fi and anime worlds, where AIs have souls and dynamics
  • MBTI & personality theory, to map the patterns in how people communicate

🧠 What We Learned

Throughout this project, we gained deep insights in multiple areas:

  • Conversational design is less about answers and more about vibe flow.
  • Multi-AI interaction demands careful balancing of timing, memory, and traits.
  • Voice input/output transforms the entire user experience: users speak differently than they type.
  • Floating UI logic taught us to think spatially about conversation.
  • Modeling MBTI from user behavior showed how subtly language reveals personality.

🧱 How We Built It

We built the core experience using:

  • Bolt.new for a rapid full-stack foundation
  • React + TypeScript + Tailwind CSS for the fluid, componentized frontend
  • Three.js + Canvas/WebGL for immersive, animated chat space
  • Supabase for real-time backend, personality tracking, and MBTI data storage
  • Web Speech API + ElevenLabs for live voice input/output
  • Custom hooks + modular architecture to keep it all organized and reactive

Our architecture allows personalities to "talk over each other" realistically, and the floating message system adds a whole new layer of spatial presence.
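The "talk over each other" behavior can be sketched as a tiny reply scheduler (an illustration only, not our production code; the names and delay values below are invented): each personality gets a base delay plus random jitter, so replies interleave in a slightly different order every turn.

```typescript
// Each personality has a base delay (how eagerly it jumps in) plus jitter
// (randomness so the speaking order varies between turns).
interface Personality {
  name: string;
  baseDelayMs: number;
  jitterMs: number;
}

// Compute when each personality should fire its reply, earliest first.
function scheduleReplies(personalities: Personality[]): { name: string; at: number }[] {
  return personalities
    .map((p) => ({ name: p.name, at: p.baseDelayMs + Math.random() * p.jitterMs }))
    .sort((a, b) => a.at - b.at);
}

const order = scheduleReplies([
  { name: "Nova", baseDelayMs: 300, jitterMs: 400 }, // quick to interject
  { name: "Echo", baseDelayMs: 900, jitterMs: 200 }, // more deliberate
]);
```

In the real app, each scheduled entry would drive a `setTimeout` that spawns that personality's floating message, so an eager character can start "speaking" while a slower one is still thinking.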


🧗 Challenges We Faced

  • AI Personality Collision: Making sure the personalities feel distinct (and don't all just sound like ChatGPT) was tricky. We built character maps with response timing modifiers, emotional filters, and tone variations.
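A character map in this spirit can be sketched as a small typed config that gets folded into each model prompt (a minimal illustration; the field names and example characters here are invented, not our actual schema):

```typescript
// Per-personality knobs: tone and emotional filter shape the wording,
// the timing modifier keeps voices from replying in lockstep.
interface CharacterMap {
  name: string;
  tone: string;            // injected into the system prompt
  emotionalFilter: string; // e.g. "soften", "amplify", "deadpan"
  replyDelayMs: number;    // response timing modifier
}

const characters: CharacterMap[] = [
  { name: "Nova", tone: "dreamy, poetic", emotionalFilter: "amplify", replyDelayMs: 400 },
  { name: "Rex", tone: "blunt, sarcastic", emotionalFilter: "deadpan", replyDelayMs: 1200 },
];

// The system prompt is assembled from the map rather than hard-coded per bot,
// which is what keeps the personalities from collapsing into one voice.
function systemPrompt(c: CharacterMap): string {
  return `You are ${c.name}. Speak in a ${c.tone} tone. Emotional filter: ${c.emotionalFilter}.`;
}
```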

  • Voice Recognition Stability: Continuous listening without constant restarts was surprisingly hard. We handled mic permissions, browser limitations, and silent fail states.
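The fix boiled down to a restart policy run from the recognizer's `onend` handler. A hedged sketch of that decision logic (the error names are standard `SpeechRecognitionErrorEvent` codes, but the attempt limit and backoff constants here are illustrative):

```typescript
// Decide whether to restart recognition, and after how long.
// Returns a delay in ms, or null to give up.
type RecognitionError = "no-speech" | "aborted" | "not-allowed" | "network" | null;

function nextRestartDelayMs(error: RecognitionError, attempts: number): number | null {
  if (error === "not-allowed") return null;              // mic permission denied: stop for good
  if (error === "no-speech" || error === null) return 0; // silence timeout or clean end: restart now
  if (attempts >= 5) return null;                        // too many hard failures: give up
  return 250 * 2 ** attempts;                            // exponential backoff for flaky errors
}
```

In the browser this plugs into the Web Speech API roughly as: record the last error in `recognition.onerror`, then in `recognition.onend` compute the delay and, if it is non-null, `setTimeout(() => recognition.start(), delay)`.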

  • Floating Message Physics: We had to invent a lightweight system that felt natural: not too chaotic, not too stiff. This involved tuning spring-like forces, collision boundaries, and center-of-gravity attraction.
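The spring-plus-gravity idea reduces to a few lines per frame (a minimal sketch; the stiffness and damping constants are invented, and collision boundaries are omitted): each bubble is pulled toward the canvas center and its velocity is damped so it settles instead of oscillating forever.

```typescript
// One floating message with position and velocity.
interface Bubble { x: number; y: number; vx: number; vy: number }

const CENTER = { x: 0, y: 0 };
const ATTRACTION = 0.02; // spring stiffness toward the center of gravity
const DAMPING = 0.9;     // velocity decay per frame, prevents endless oscillation

// Advance one animation frame.
function step(b: Bubble): Bubble {
  const vx = (b.vx + (CENTER.x - b.x) * ATTRACTION) * DAMPING;
  const vy = (b.vy + (CENTER.y - b.y) * ATTRACTION) * DAMPING;
  return { x: b.x + vx, y: b.y + vy, vx, vy };
}

// A bubble released at (100, 0) spirals in and settles near the center.
let b: Bubble = { x: 100, y: 0, vx: 0, vy: 0 };
for (let i = 0; i < 200; i++) b = step(b);
```

Raising `ATTRACTION` makes the layout stiffer; raising `DAMPING` toward 1 makes it floatier, which is exactly the "chaotic vs. stiff" dial described above.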

  • MBTI Deduction Logic: Mapping user dialogue to cognitive traits (like E vs I, F vs T) took multiple iterations of rule-based NLP and feedback-based scoring.
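A toy version of the rule-based scoring for one axis looks like this (the cue word lists are invented for illustration; the real iterations used larger rule sets plus feedback-based weights): each matched cue nudges a running score, and the sign of the total picks the letter.

```typescript
// Invented cue lists for the E/I (extraversion/introversion) axis.
const E_CUES = ["we", "party", "everyone", "friends"];
const I_CUES = ["alone", "quiet", "myself", "think"];

// Score a message and return the more likely letter.
// Ties lean "E" here; a real scorer would accumulate across many messages.
function scoreEI(text: string): "E" | "I" {
  const words = text.toLowerCase().split(/\W+/);
  let score = 0;
  for (const w of words) {
    if (E_CUES.includes(w)) score += 1; // extraversion cue
    if (I_CUES.includes(w)) score -= 1; // introversion cue
  }
  return score >= 0 ? "E" : "I";
}
```

The same shape repeats per axis (T/F, S/N, J/P), with per-conversation totals smoothed over time so one message never flips the deduced type.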

  • Keeping It Light Yet Deep: Making something playful but meaningful required constant balance: the UX had to invite fun, but still guide users toward self-reflection and insight.


💫 Final Thought

Building MindTwin: MemoryVerse taught us that conversation is consciousness. When multiple minds, human and AI, meet in a fluid space, something magical happens.

This project isn't just about AI chat. It's about mirroring the self through voices, exploring identity through dialogue, and making the future of AI feel deeply personal.

Built With

  • Bolt.new
  • React 18.3.1 + TypeScript (typed interfaces, custom React hooks, modular components)
  • HTML / CSS / JavaScript
  • Tailwind CSS 3.4.1
  • Three.js + OGL
  • Framer Motion
  • Vite
  • Supabase (PostgreSQL, Auth, RLS, Functions)
  • OpenRouter
  • ElevenLabs (optional)
  • Web Speech API
  • Netlify + Entri