Inspiration

We aimed to eliminate the friction of multitasking by replacing constant switching between separate fashion and food apps with a unified, voice-driven "human-first" interface.
What it does

Glasco is an AI super-app that combines fashion browsing and food delivery into one experience. It features natural voice commands, virtual try-ons, and a "persistent brain" that remembers user preferences across both stores.
How we built it

The stack uses React 18 and TypeScript for the frontend, with an Express.js and Supabase backend. We integrated ElevenLabs for voice, Mem0 and Memgraph for long-term memory, and Fashn.ai for virtual try-on technology.
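To make the voice-driven mode handling concrete, here is a minimal sketch (not the actual Glasco code) of how a transcript might be routed to a store mode while shared preferences carry across the switch. The keyword lists and names are illustrative assumptions; in practice an LLM would handle intent detection.

```typescript
// Illustrative sketch of routing a voice transcript to a store mode
// while keeping shared user context intact across the switch.
type Mode = "fashion" | "food";

interface SessionContext {
  mode: Mode;
  preferences: string[]; // the "persistent brain": remembered across modes
}

// Naive keyword router; a real implementation would classify intent with an LLM.
function routeTranscript(ctx: SessionContext, transcript: string): SessionContext {
  const text = transcript.toLowerCase();
  const foodWords = ["order", "hungry", "pizza", "deliver"];
  const fashionWords = ["wear", "outfit", "try on", "jacket"];

  let mode = ctx.mode;
  if (foodWords.some((w) => text.includes(w))) mode = "food";
  else if (fashionWords.some((w) => text.includes(w))) mode = "fashion";

  // Preferences persist across the switch, so context is not lost.
  return { ...ctx, mode };
}
```

The key design point is that the mode is the only field that changes; everything the user has taught the app survives the transition.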
Challenges we ran into

Maintaining seamless context when switching between "Fashion" and "Food" modes was our biggest hurdle. We also had to implement Circuit Breaker patterns to keep the app functional if external AI services experienced downtime.
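The circuit-breaker idea mentioned above can be sketched as follows. This is a minimal illustration, not the production implementation; the thresholds, names, and fallback strategy are assumptions.

```typescript
// Minimal circuit breaker: after `threshold` consecutive failures the
// breaker opens and rejects calls immediately until `cooldownMs` elapses,
// returning a fallback instead of hammering a downed AI service.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly threshold = 3,
    private readonly cooldownMs = 30_000,
  ) {}

  get isOpen(): boolean {
    return (
      this.failures >= this.threshold &&
      Date.now() - this.openedAt < this.cooldownMs
    );
  }

  async call<T>(fn: () => Promise<T>, fallback: T): Promise<T> {
    if (this.isOpen) return fallback; // fail fast while open
    try {
      const result = await fn();
      this.failures = 0; // a success closes the breaker
      return result;
    } catch {
      this.failures += 1;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      return fallback; // degrade gracefully instead of crashing
    }
  }
}
```

Wrapping each external AI call in a breaker like this is what lets the rest of the app keep working when one provider goes down.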
Accomplishments that we're proud of

We built a functional dual-store architecture with intelligent mode switching and a unified cart system. Our memory system maps complex user relationships to provide truly personalized recommendations.
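One way to picture the unified cart is as a single collection whose items are tagged by store, so one total spans both experiences while fulfillment can still be split per store. A hypothetical sketch (field names and prices are illustrative, not Glasco's actual schema):

```typescript
// Sketch of a unified cart: items from either store share one shape,
// so a single checkout total can span fashion and food orders.
type Store = "fashion" | "food";

interface CartItem {
  store: Store;
  name: string;
  priceCents: number;
  quantity: number;
}

class UnifiedCart {
  private items: CartItem[] = [];

  add(item: CartItem): void {
    this.items.push(item);
  }

  // Items grouped per store, e.g. to dispatch separate fulfillment orders.
  byStore(store: Store): CartItem[] {
    return this.items.filter((i) => i.store === store);
  }

  // One total across both stores: the "single checkout" experience.
  totalCents(): number {
    return this.items.reduce((sum, i) => sum + i.priceCents * i.quantity, 0);
  }
}
```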
What we learned

We learned how to manage complex AI states and the importance of system resilience. Integrating graph databases taught us how to transform raw user history into actionable, cross-domain intelligence.
What's next for Glasco

We plan to expand with multi-language support, AR integration for immersive try-ons, and complete voice-driven shopping workflows.
Built With
- elevenlabs
- gemini
- mem0
- supabase