Inspiration
Struggling in interviews, pitching startup ideas that never work out, fearing public speaking... Talking to the mirror feels weird, and you never know if you're actually improving.
What it does
It gives you a real-time score on how you present: it understands how you speak, the timing of your words, your eye contact, your body language, and more.
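As a rough sketch of how per-dimension feedback could roll up into a single presentation score, here is a minimal TypeScript example. The dimension names and weights are illustrative assumptions, not EasySpeech's actual scoring model:

```typescript
// Hypothetical scoring sketch — dimensions and weights are assumptions,
// not the real EasySpeech model.
type Scores = {
  pacing: number;       // 0-100, timing of your words
  eyeContact: number;   // 0-100
  bodyLanguage: number; // 0-100
};

// Illustrative weights; must sum to 1.
const WEIGHTS: Record<keyof Scores, number> = {
  pacing: 0.4,
  eyeContact: 0.3,
  bodyLanguage: 0.3,
};

// Weighted average of the sub-scores, rounded and clamped to [0, 100].
export function overallScore(s: Scores): number {
  const raw =
    s.pacing * WEIGHTS.pacing +
    s.eyeContact * WEIGHTS.eyeContact +
    s.bodyLanguage * WEIGHTS.bodyLanguage;
  return Math.min(100, Math.max(0, Math.round(raw)));
}
```

In a real-time loop, a function like this could run on each frame's analysis results so the score updates live as you speak.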
How we built it
With Claude Code, ChatGPT, and Gemini, plus APIs from Overshoot for body-language understanding, ElevenLabs for text-to-speech, and Whispr for speech recognition. Built with Node.js and Next.js, and deployed on Vercel.
Challenges we ran into
Handling errors from API calls, managing credits, polishing the UI/UX to make it easier to use, and getting all the pieces to make sense together and work reliably.
Accomplishments that we're proud of
It's deployed on Vercel and works on your laptop right now at easyspeech.vercel.app.
What we learned
Vibe coding really helps a lot, but we still had to manually debug a lot of the small details.
What's next for EasySpeech
Selling it to executives at corporations and firms so they can present well, or to anyone who just wants to become a better presenter.
Built With
- elevenlabs
- gemini
- next.js
- node.js
- overshoot
- typescript
- vercel
- whispr