Inspiration

"Please fill out this survey." We all hate this sentence. Static forms are boring, intrusive, and have a terrible completion rate (<2%). We wanted to kill the form and bring humanity back to feedback.

What it does

VibeCheck is an intelligent voice kiosk: instead of clicking checkboxes, the user simply speaks.

Listens: captures raw voice input, bilingual (English/French).
Analyzes: uses Llama 3 on Groq to extract a Sentiment score (0-10), a Category, and a Summary in under 500 ms.
Responds: the AI answers the user vocally and with empathy (text-to-speech), creating a real conversation.
Integrates: sends structured JSON to a backend (simulating SurveyMonkey ingestion) for immediate business intelligence.
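The "Analyzes" step boils down to turning a free-form transcript into a fixed schema. A minimal sketch of what the extracted payload might look like and how to validate it before sending it downstream (the type and function names are our illustration, not the project's actual code; LLM replies can be malformed, so the parser fails closed):

```typescript
// Shape of the structured payload sent to the backend (illustrative names).
interface VibeResult {
  sentiment: number; // 0-10 score extracted by the LLM
  category: string;  // e.g. "service", "product", "pricing"
  summary: string;   // one-sentence recap of the feedback
}

// Parse and validate the model's JSON reply. Non-deterministic models
// occasionally return broken JSON or out-of-range scores, so return null
// instead of forwarding garbage to the backend.
function parseVibeResult(raw: string): VibeResult | null {
  try {
    const data = JSON.parse(raw);
    if (
      typeof data.sentiment !== "number" ||
      data.sentiment < 0 ||
      data.sentiment > 10 ||
      typeof data.category !== "string" ||
      typeof data.summary !== "string"
    ) {
      return null;
    }
    return {
      sentiment: data.sentiment,
      category: data.category,
      summary: data.summary,
    };
  } catch {
    return null;
  }
}
```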

How we built it

Frontend: Next.js 14, React, Tailwind CSS, Framer Motion.
AI Engine: Groq API (Llama-3.3-70b) for ultra-low-latency inference.
Voice: Web Speech API (recognition & synthesis).
Monitoring: Sentry for full AI observability.
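Because the kiosk is bilingual, the recognizer and synthesizer both need matching BCP-47 locale tags. A hedged sketch of that wiring (the helper names are ours; `SpeechRecognition.lang` and `SpeechSynthesisUtterance.lang` are the real Web Speech API properties, accessed dynamically here so the sketch also compiles outside a browser):

```typescript
type KioskLanguage = "en" | "fr";

// Map the kiosk's language toggle to BCP-47 tags accepted by
// SpeechRecognition.lang and SpeechSynthesisUtterance.lang.
function localeFor(lang: KioskLanguage): string {
  return lang === "fr" ? "fr-FR" : "en-US";
}

// Browser-only wiring (illustrative); guarded so it no-ops in Node.
function speak(text: string, lang: KioskLanguage): void {
  const w = globalThis as any;
  if (!w.speechSynthesis || !w.SpeechSynthesisUtterance) return;
  const utterance = new w.SpeechSynthesisUtterance(text);
  utterance.lang = localeFor(lang); // voice selection follows the locale
  w.speechSynthesis.speak(utterance);
}
```

The same `localeFor` value would be assigned to the recognizer before listening, so recognition and the spoken reply always agree on language.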

Challenges we ran into

Managing AI latency was key: the "conversation" had to feel instant, or the illusion broke. We also struggled to debug AI hallucinations, which is why we implemented advanced Sentry tracking.
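One common way to keep a voice exchange feeling instant (sketched here under our own names, not necessarily how the project does it) is to race the inference call against a latency budget and fall back to a canned reply when the budget is blown:

```typescript
// Race a promise against a deadline; resolve with a fallback value if
// the model has not answered within budgetMs (the sub-500 ms target).
async function withLatencyBudget<T>(
  work: Promise<T>,
  budgetMs: number,
  fallback: T,
): Promise<T> {
  let timer: any;
  const deadline = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), budgetMs);
  });
  try {
    return await Promise.race([work, deadline]);
  } finally {
    clearTimeout(timer); // don't leave the timer pending on the fast path
  }
}
```

The fallback line ("Let me think about that for a second...") keeps the conversation alive while the real answer is still in flight.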

Accomplishments that we're proud of

Building a fully bilingual (FR/EN) voice interface that adapts its personality to the language it hears, and implementing custom Sentry context to debug AI failures.
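Attaching the model's inputs and outputs as structured context is what makes hallucination debugging tractable. A sketch of the kind of context object one might pass to Sentry's real `setContext` API (the builder function and field names are our illustration):

```typescript
// Build the structured context attached to Sentry events, so a bad model
// reply can be traced back to its exact prompt, language, and latency.
function buildAiContext(params: {
  model: string;
  language: "en" | "fr";
  transcript: string;
  rawReply: string;
  latencyMs: number;
}): Record<string, unknown> {
  return {
    model: params.model,
    language: params.language,
    // Truncate long strings so the event payload stays small.
    transcript: params.transcript.slice(0, 500),
    raw_reply: params.rawReply.slice(0, 500),
    latency_ms: params.latencyMs,
  };
}

// With the real Sentry SDK this would be attached before capturing:
// import * as Sentry from "@sentry/nextjs";
// Sentry.setContext("ai", buildAiContext({ model: "llama-3.3-70b", language: "en",
//   transcript, rawReply, latencyMs }));
```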

What we learned

Voice UI is the future of data collection, and proper observability (Sentry) is mandatory when working with non-deterministic AI models.

What's next for VibeCheck

Real integration with SurveyMonkey OAuth and sentiment trend analysis over time.

Built With

next.js, react, tailwind-css, framer-motion, groq, sentry, web-speech-api
