Inspiration
Just a few years ago, I was very introverted and constantly found myself hesitant to speak in front of anyone outside my close circle of friends. That changed when I joined speech and debate, which forced me to practice and develop my communication skills. Even today, I see many talented people on teams who struggle with similar challenges—speaking publicly, delivering speeches, and expressing themselves confidently. This personal experience inspired me to create VQ, a platform that could help others overcome the same communication barriers I once faced.
What it does
VQ is an AI-powered communication coaching platform that analyzes and enhances users' speaking skills through real-time feedback. Using Google MediaPipe's pose landmark detection, the system tracks body language, gestures, and posture, while the Web Speech API processes vocal elements like pace, clarity, and tone. Users receive actionable insights on their delivery along with personalized improvement suggestions. VQ also features exclusive masterclasses from world-renowned public speakers like Vinh Giang, Tony Robbins, and Simon Sinek, giving users professional techniques and inspiration to elevate their communication abilities.
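As a rough sketch of how a vocal-pace metric like this could work: the Web Speech API's `SpeechRecognition` emits a running transcript, and pace can be derived from word count over elapsed time. The function names and thresholds below are illustrative assumptions, not VQ's actual code:

```typescript
// Compute speaking pace in words per minute from a transcript chunk.
// `transcript` is text accumulated from SpeechRecognition results;
// `elapsedSeconds` is how long the user has been speaking.
function wordsPerMinute(transcript: string, elapsedSeconds: number): number {
  const words = transcript.trim().split(/\s+/).filter(Boolean).length;
  return elapsedSeconds > 0 ? (words / elapsedSeconds) * 60 : 0;
}

// Bucket the pace into coaching feedback; the WPM cutoffs here are
// hypothetical, chosen around a typical conversational range.
function paceFeedback(wpm: number): string {
  if (wpm < 110) return "Try speaking a little faster";
  if (wpm > 170) return "Slow down so listeners can follow";
  return "Good conversational pace";
}
```

In the browser this would be fed by a `SpeechRecognition` `onresult` handler; the pure functions above keep the scoring logic testable outside it.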
How we built it
I developed VQ using a modern tech stack with React, Vite, and TypeScript powering our responsive front-end interface. For the computer vision component, I implemented Google MediaPipe to track and analyze users' physical presence and gestures during speech. Voice analysis was accomplished through the Web Speech API, which converts speech to text.
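To illustrate the posture-analysis side: MediaPipe Pose returns normalized landmarks (indices 11 and 12 are the left and right shoulders), and a simple posture signal can be derived from their relative positions. This is a minimal sketch under that assumption; the helper names and the tilt threshold are hypothetical, not taken from VQ:

```typescript
// A MediaPipe pose landmark, using its normalized x/y coordinates.
interface Landmark {
  x: number;
  y: number;
}

// Estimate how far off-level the shoulders are, in degrees (0 = level).
// Absolute values make the result independent of camera mirroring.
function shoulderTiltDegrees(left: Landmark, right: Landmark): number {
  const dy = Math.abs(right.y - left.y);
  const dx = Math.abs(right.x - left.x);
  return (Math.atan2(dy, dx) * 180) / Math.PI;
}

// Flag posture once shoulders tilt past an (illustrative) ~8-degree cutoff.
function isSlouching(left: Landmark, right: Landmark): boolean {
  return shoulderTiltDegrees(left, right) > 8;
}
```

In practice a frame-by-frame signal like this would be smoothed over time before being surfaced as feedback, so a single noisy detection doesn't trigger a warning.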
Challenges we ran into
The biggest challenge was finding the right balance between feature implementation and the hackathon's time constraints. I had ambitious plans for advanced analytics and comprehensive feedback mechanisms, but needed to prioritize core functionalities to deliver a prototype.
Accomplishments that we're proud of
I was proud to implement computer vision technology for the first time; seeing the pose tracking accurately identify and analyze body language in real time was an exciting milestone. I was also happy with the intuitive user experience, which makes complex feedback accessible and actionable.
I also sent out a survey to validate the idea, and the positive initial responses from test users confirmed that VQ addresses a genuine need for communication skills development.
What we learned
I learned that building a business requires a customer-focused approach from day one. By sending out a quick survey to validate the idea before diving into development, I gained valuable insights that shaped my feature priorities. I also discovered the importance of balancing technical capabilities with user experience.
What's next for VQ - Vocal Intelligence
I believe VQ has real potential for growth. Post-hackathon, I plan to continue developing it as a side project, adding features incrementally based on user feedback. Future enhancements will include deeper sentiment analysis, customizable practice scenarios, and richer evaluation functionality. I also envision a social component where users can practice with peers and receive community feedback. Ultimately, I hope to develop VQ into a comprehensive communication skills platform that helps people overcome public speaking anxiety and unlock their full potential.
Built With
- css
- git
- github
- google-mediapipe
- html
- javascript
- react
- tailwindcss
- typescript
- vite
- web-speech-api