Inspiration
We built EchoLingo to make multilingual conversation feel more natural, personal, and practical. Most translation tools are useful for text, but live spoken communication still often feels awkward, delayed, and impersonal. We wanted to create something that goes beyond simple translation by helping two people communicate across languages in a smoother and more human way.
What it does
EchoLingo is a multilingual communication platform with four connected experiences:
- Translate for quick one-off text translation
- Live Conversation for two-person real-time bilingual interaction
- Voices for managing personalized voice profiles
- History for revisiting saved conversation sessions
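The four experiences above share a few core data shapes. A minimal sketch in TypeScript follows; every type and field name here is an assumption for illustration, not EchoLingo's actual schema (in the real app these would likely be Prisma models backed by PostgreSQL).

```typescript
// Hypothetical data shapes for the four experiences.
// All names are illustrative assumptions, not EchoLingo's real schema.

interface VoiceProfile {        // "Voices": a personalized voice profile
  id: string;
  label: string;
  providerVoiceId: string;      // assumed ID of the synthesis voice (e.g. an ElevenLabs voice)
}

interface Speaker {             // one side of a Live Conversation
  name: string;
  language: string;             // language tag, e.g. "es" or "en"
  voiceProfileId?: string;      // optional link to a VoiceProfile
}

interface Utterance {           // one translated turn in a session
  speaker: 0 | 1;               // which of the two speakers spoke
  sourceText: string;
  translatedText: string;
  at: Date;
}

interface ConversationSession { // what "History" would list and replay
  id: string;
  speakers: [Speaker, Speaker];
  utterances: Utterance[];
}
```

Modeling a session as an ordered list of utterances, each tagged with its speaker, is what lets History replay a conversation with the correct language direction per turn.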
Our goal was to build a workflow that supports both quick translation and a more complete live communication experience.
How we built it
We designed EchoLingo as a modern web application with a clean multi-page mobile-first interface focused on usability. The main user journey centers around live conversation, where each speaker can have their own language settings and a more personalized voice experience. We also built supporting pages for quick translation, voice profile management, and conversation history so the project feels like a full product rather than a single demo feature.
At a high level, the system captures spoken input, transcribes it to text, translates that text into the listener's language, and then generates spoken output using AI voice technology. This lets the platform support multilingual communication in a way that feels more interactive than standard text-only translators.
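The three-stage flow above can be sketched as a small TypeScript pipeline. The stage names and signatures are assumptions for illustration, not EchoLingo's actual code; in the real app the stages would presumably wrap the OpenAI API (transcription and translation) and the ElevenLabs API (voice synthesis), but here they are injected as dependencies so the composition is clear.

```typescript
// Minimal sketch of the transcribe -> translate -> synthesize pipeline.
// Stage signatures are illustrative assumptions, not EchoLingo's real code.

type Stage<In, Out> = (input: In) => Promise<Out>;

interface PipelineDeps {
  transcribe: Stage<ArrayBuffer, string>; // audio in -> source-language text
  translate: Stage<{ text: string; from: string; to: string }, string>;
  synthesize: Stage<{ text: string; voiceId: string }, ArrayBuffer>; // text -> audio out
}

// Runs one utterance through the three stages for a given language pair,
// using the voice profile chosen for the listening speaker.
async function runUtterance(
  deps: PipelineDeps,
  audio: ArrayBuffer,
  from: string,
  to: string,
  voiceId: string,
): Promise<{ sourceText: string; translatedText: string; audioOut: ArrayBuffer }> {
  const sourceText = await deps.transcribe(audio);
  const translatedText = await deps.translate({ text: sourceText, from, to });
  const audioOut = await deps.synthesize({ text: translatedText, voiceId });
  return { sourceText, translatedText, audioOut };
}
```

Keeping each stage behind an interface also makes the latency trade-offs discussed below easier to attack later, since any single stage can be swapped for a streaming variant without touching the others.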
Challenges we ran into
One of the biggest challenges was balancing technical ambition with hackathon speed. Real-time multilingual conversation involves multiple steps, including input handling, translation, voice output, and clean user interaction. Another challenge was designing the product in a way that felt coherent across several pages instead of looking like disconnected features. We also had to think carefully about how to present voice personalization in a way that adds meaningful value to the overall experience.
What we learned
This project taught us a lot about designing around user flow rather than just individual features. We learned that for a communication product, the experience matters just as much as the underlying AI. We also learned how important it is to clearly separate the core live conversation experience from the supporting tools around it, such as voice management and history.
What's next for EchoLingo
In the future, we would like to improve real-time performance, make speaker switching even smoother, and expand personalization features for voice and conversation settings. We also see potential for use cases in education, travel, accessibility, and everyday multilingual communication.
Built With
- auth.js
- elevenlabs-api
- next.js-(app-router)
- nextauth
- openai-api
- postgresql
- prisma
- react
- tailwind-css
- typescript