TasteBuds
The fastest way from hangry to happy.
Inspiration
We've all been there: trapped in the endless loop of "What do you want to eat?" It's a simple question that can lead to decision fatigue, wasted time, and the one emotion that ruins an evening: hanger. We realized that the time between feeling hungry and actually eating is often filled with unnecessary friction—scrolling through endless menus, debating with friends, and trying to find a restaurant that fits everyone's cravings.
Our inspiration for TasteBuds was to create a direct line from desire to dinner. We wanted to build an intelligent, conversational tool that acts as your personal food concierge, making a decision for you when you can't, and cutting down the time it takes to get food from point A to point B (your mouth).
What it does
TasteBuds is a voice-powered AI agent designed to be the ultimate cure for indecisiveness. Users interact with a natural-language interface that settles their food decisions in seconds.
The platform has two core functions:
Going Out: The user tells TasteBuds their cravings, location, and any dietary restrictions (e.g., "I'm near the University of Toronto and I want something spicy and vegan"). The agent processes this, finds suitable local restaurants using real-time data, and can even book a reservation for you, eliminating the need to browse multiple apps and websites.
Staying In: If the user prefers to cook, they can tell TasteBuds what they're in the mood for or what ingredients they have on hand. The agent will then find and recommend a suitable recipe, providing a quick and easy solution for a home-cooked meal.
In short, TasteBuds is an emergency response system for your hunger.
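The first step of that flow is turning a spoken request into structured fields the agent can act on. A minimal sketch of that parsing step might look like the following; the `parseFoodIntent` function, the `FoodIntent` shape, and the keyword lists are illustrative assumptions, not the actual TasteBuds implementation (which would lean on the conversational agent's own NLU rather than keyword spotting):

```typescript
// Illustrative sketch: turn an utterance into a structured intent.
// A real agent would use an LLM/NLU service instead of keyword spotting.

interface FoodIntent {
  mode: "going_out" | "staying_in";
  cravings: string[];
  dietary: string[];
  location?: string;
}

const DIETARY_TERMS = ["vegan", "vegetarian", "halal", "kosher", "gluten-free"];
const CRAVING_TERMS = ["spicy", "sweet", "savory", "noodles", "comfort food"];

function parseFoodIntent(utterance: string): FoodIntent {
  const text = utterance.toLowerCase();
  const dietary = DIETARY_TERMS.filter((t) => text.includes(t));
  const cravings = CRAVING_TERMS.filter((t) => text.includes(t));
  // Naive location capture: grab the phrase after "near"/"in".
  const locMatch = text.match(/(?:near|in)\s+([\w\s]+?)(?:\s+and\b|[,.!?]|$)/);
  // Mentions of cooking route the user to the "staying in" path.
  const wantsToCook = /\b(cook|recipe|ingredients)\b/.test(text);
  return {
    mode: wantsToCook ? "staying_in" : "going_out",
    cravings,
    dietary,
    location: locMatch ? locMatch[1].trim() : undefined,
  };
}
```

Even a crude parser like this is enough to branch between the two core functions; the hard part, as the Challenges section describes, is everything the keywords miss.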
How we built it
Given the extreme 7-hour time constraint, we opted for a stack that would allow for rapid development and integration.
- Voice AI & Core Logic: We used ElevenLabs for its realistic text-to-speech capabilities to create a natural and engaging voice agent. The core conversational logic was built to parse user intent and manage the flow between different states (e.g., asking for cravings vs. confirming a reservation).
- Backend & APIs: We used serverless functions to handle API requests to third-party services. This included integrating with a mapping/restaurant API (like Google Maps Platform or Yelp Fusion) to fetch real-time data on local restaurants and a recipe API (like Spoonacular) to source cooking instructions.
- Frontend: A lightweight frontend framework was used to build the user interface, focusing on a clean, simple design that allows the voice agent to be the primary mode of interaction.
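To make the backend step concrete, here is a rough sketch of how a serverless function might assemble a restaurant search from a parsed request. The URL shape follows Yelp Fusion's `/v3/businesses/search` endpoint; the `SearchRequest` type and `buildYelpQuery` helper are our own illustrative names, and real handler code would add auth headers and error handling:

```typescript
// Sketch: build a Yelp Fusion-style search URL from a parsed food request.
// Names here are illustrative, not the production TasteBuds code.

interface SearchRequest {
  cravings: string[];
  dietary: string[];
  location: string;
}

function buildYelpQuery(req: SearchRequest): string {
  const params = new URLSearchParams({
    term: [...req.cravings, ...req.dietary].join(" "),
    location: req.location,
    categories: "restaurants",
    limit: "5",
  });
  return `https://api.yelp.com/v3/businesses/search?${params.toString()}`;
}

// Inside the serverless handler, the actual call would look roughly like:
// const res = await fetch(url, { headers: { Authorization: `Bearer ${KEY}` } });
```

Keeping the query construction in a small pure function like this made it easy to test without hitting the API (and its rate limits) on every change.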
Challenges we ran into
Building an AI assistant in 7 hours was, predictably, ambitious. We ran into several key challenges:
- Understanding Nuance: Getting the agent to understand the subtlety of human language was difficult. A user saying "I want something spicy" can mean a wide range of things. Differentiating between "a little kick" and "I want to regret my decisions tomorrow" proved to be a significant NLP challenge.
- Data Filtering: External APIs provide a wealth of data, but it's often messy. We had a memorable moment during testing where our agent, tasked with finding a vegan-friendly option, confidently recommended a top-rated steakhouse. Filtering results accurately based on complex dietary needs required careful logic.
- Time Constraint: The 7-hour hacking period was our biggest opponent. Every feature had to be ruthlessly prioritized. We had to scale back initial ambitions for deeper personalization and multi-turn conversational memory to ensure we could deliver a functional end-to-end prototype.
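The vegan-steakhouse incident pushed us toward stricter filtering: a result needs a positive dietary signal and the absence of conflicting categories, and a high rating alone never qualifies it. A sketch of that logic, with illustrative restaurant shapes and category names (real API payloads differ):

```typescript
// Sketch of the stricter dietary filter; types and category names are
// illustrative, not the exact shape returned by the restaurant API.

interface Restaurant {
  name: string;
  categories: string[];
  rating: number;
}

// Categories that contradict a dietary need, regardless of rating.
const CONFLICTS: Record<string, string[]> = {
  vegan: ["steakhouse", "bbq", "seafood"],
  vegetarian: ["steakhouse", "bbq"],
};

function filterByDiet(results: Restaurant[], diet: string): Restaurant[] {
  const banned = CONFLICTS[diet] ?? [];
  return results.filter((r) => {
    const cats = r.categories.map((c) => c.toLowerCase());
    // Require a positive signal AND no conflicting category;
    // a 4.8-star steakhouse still fails the vegan check.
    const hasSignal = cats.includes(diet) || cats.includes(`${diet}-friendly`);
    const hasConflict = cats.some((c) => banned.includes(c));
    return hasSignal && !hasConflict;
  });
}
```

The key design choice is the deny-list: messy third-party data means you cannot trust positive tags alone, so known-incompatible categories veto a result outright.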
Accomplishments that we're proud of
Our biggest accomplishment was getting a functional prototype working within the time limit. Seeing the core loop—a user stating a craving, and the agent returning a valid, real-world restaurant recommendation—was incredibly rewarding. Integrating the ElevenLabs voice component successfully brought the project to life and made the experience feel genuinely interactive.
What we learned
- The 80/20 of NLP: A small amount of intent parsing and entity recognition can go a long way, but the last 20% of understanding true human nuance is exponentially harder.
- Scope is Everything: In a hackathon, you can't build everything. We learned the importance of identifying a single, critical user path and making it work, rather than building three half-broken features.
- APIs are Your Best Friend (and Worst Enemy): Leveraging third-party APIs is essential for rapid development, but you have to be prepared to handle their quirks, rate limits, and inconsistent data structures.
What's next for TasteBuds
TasteBuds is a prototype with a lot of potential. Our next steps would be to:
- Integrate Food Delivery: Add integrations with services like Uber Eats or DoorDash to complete the "going out" experience.
- Deepen Personalization: Allow users to create profiles to save their dietary preferences, favorite cuisines, and blacklisted ingredients for smarter recommendations over time.
- Expand Conversational Memory: Improve the AI's ability to handle more complex, multi-turn conversations and remember context from previous interactions.
Built With
- elevenlabs
- nextjs
- tailwindcss
- typescript
- vercel