Palate: A Smarter Way to Dine

Inspiration

Food is more than just sustenance—it is a gateway to culture, history, and community. However, many people stick to familiar dishes, missing opportunities to explore new flavors. At the same time, food waste remains a major issue, with meals often discarded due to dietary restrictions or a lack of awareness of menu options.

Palate was created to encourage adventurous eating, reduce food waste, and help people understand different cultures through food. By intelligently recommending meals and flagging allergens or other dietary sensitivities, Palate makes dining both exciting and accessible.

What It Does

Palate transforms restaurant menus into personalized dining guides. Users can upload a menu photo or link, and Palate:

  • Extracts dish names from photos using optical character recognition (OCR) or from links via web scraping.
  • Generates detailed descriptions, including ingredients and cultural origins.
  • Suggests meals tailored to user preferences using AI-driven recommendations.
  • Flags potential allergens based on individual dietary restrictions.
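
The allergen-flagging step can be sketched as a simple match between a dish's ingredient list and the user's stated restrictions. This is an illustrative sketch only; in Palate the ingredient lists come from AI-generated dish descriptions, and the function and sample data below are hypothetical.

```python
def flag_allergens(ingredients: list[str], user_allergens: set[str]) -> list[str]:
    """Return the subset of a user's allergens found in a dish's ingredients."""
    found = []
    for allergen in user_allergens:
        # Case-insensitive substring match against every ingredient.
        if any(allergen.lower() in ing.lower() for ing in ingredients):
            found.append(allergen)
    return sorted(found)

# Example: pad thai commonly contains peanuts and egg.
pad_thai = ["rice noodles", "peanuts", "egg", "bean sprouts", "tamarind"]
print(flag_allergens(pad_thai, {"peanuts", "shellfish", "egg"}))
# → ['egg', 'peanuts']
```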

Palate encourages users to try something new while helping restaurants sell a wider variety of meals, ultimately reducing food waste.

How We Designed It

Before building Palate, we designed the user flow and created a Figma prototype to improve user experience. Our goals were to:

  • Allow users to conveniently enter their preferences and potential allergens.
  • Provide clear allergen/dietary sensitivity warnings without overwhelming the user.
  • Create a seamless menu-scanning and dish-recommendation process.
  • Implement snap-scroll navigation for fast and efficient browsing.

By mapping out user interactions first, we ensured Palate is both functional and easy to use. We also researched the user interfaces of established food-related apps to understand what makes an optimal user experience.

How We Built It

  • Google Cloud Vision API extracts text from menu images, allowing Palate to process handwritten or printed menus.
  • Puppeteer (JavaScript) scrapes menu data from restaurant websites; this data feeds the recommendation engine alongside user preferences.
  • Google Gemini AI generates:
    • Dish descriptions, including common ingredients, origin, and dietary sensitivities.
    • Allergen detection by analyzing dish names and ingredients.
    • Personalized recommendations based on user flavor profiles.
  • Python NLP filters and refines dish names, removing duplicates and irrelevant text.
  • MongoDB stores user preferences, enabling accurate recommendations.
  • Node.js and Express power the backend, handling AI requests and menu data.
  • React Native provides a cross-platform mobile front-end, allowing users to:
    • Navigate pages smoothly.
    • Upload a menu link or photo to receive recommendations.
    • Interact with AI-generated insights for informed dining decisions.
  • Snap-scroll UI enhances mobile navigation, making food discovery seamless.
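
The dish-name cleanup stage above can be sketched as follows. OCR and scraped text arrive as noisy lines mixed with prices, section headers, and duplicates; the regex and heuristics here are simplified, hypothetical stand-ins for Palate's Python NLP filtering, not the actual implementation.

```python
import re

# Matches a trailing price such as "12.99" or "$12.99".
PRICE_RE = re.compile(r"\$?\d+(\.\d{2})?$")

def clean_dish_names(raw_lines: list[str]) -> list[str]:
    """Strip prices, skip section headers, and deduplicate dish names."""
    seen = set()
    dishes = []
    for line in raw_lines:
        name = PRICE_RE.sub("", line).strip(" .-")
        # Skip empty lines and all-caps section headers like "APPETIZERS".
        if not name or name.isupper():
            continue
        key = name.lower()
        if key not in seen:  # drop duplicates case-insensitively
            seen.add(key)
            dishes.append(name)
    return dishes

menu = ["APPETIZERS", "Spring Rolls $6.50", "Pad Thai 12.99",
        "pad thai", "Green Curry ... 14.00"]
print(clean_dish_names(menu))
# → ['Spring Rolls', 'Pad Thai', 'Green Curry']
```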

Challenges We Ran Into

  • Web Scraping Limitations – Some restaurants did not provide structured data, requiring us to pivot to OCR-based image input.
  • Time Management – Balancing backend AI development, UI/UX design, and web scraping within a limited timeframe was challenging.
  • Snap-Scroll UI Issues – Ensuring smooth scrolling and page transitions in a mobile setting took multiple iterations.
  • AI Misinterpretations – Gemini sometimes generated vague or inaccurate allergen warnings, requiring extensive prompt tuning.
  • Building an App for the First Time – Integrating React Native with AI models and databases was a new challenge for the team.

Accomplishments That We Are Proud Of

  • Successfully integrating AI-powered dish recommendations that encourage users to explore new flavors.
  • Implementing real-time allergen detection, ensuring food safety while promoting culinary discovery.
  • Making global cuisines more accessible by helping users discover dishes from different cultures.
  • Creating a fully functional AI-driven backend that combines OCR, NLP, web scraping, and recommendation systems.
  • Overcoming OCR challenges to ensure reliable text extraction from diverse menu styles.

What We Learned

  • Food exploration is personal – Different users have unique preferences, requiring more nuanced AI-driven recommendations.
  • Prompt engineering is crucial – Small changes in Gemini API requests significantly impact recommendation quality.
  • Balancing automation and user input is key – Web scraping works when menus are structured, but OCR provides flexibility for unstructured menus.
  • Sustainability is a tech challenge – By reducing food waste and helping restaurants sell a wider variety of dishes, Palate supports a more sustainable dining experience.
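
The prompt-engineering lesson can be illustrated with the kind of change that improved our allergen warnings: constraining the model to a fixed output format with an explicit uncertainty rule. The exact wording Palate sends to Gemini differs; this function and its phrasing are an illustrative sketch.

```python
def build_allergen_prompt(dish: str, user_allergens: list[str]) -> str:
    """Build a constrained allergen-check prompt for a single dish."""
    allergen_list = ", ".join(user_allergens)
    return (
        f"Dish: {dish}\n"
        f"User allergens: {allergen_list}\n"
        "List ONLY allergens from the user's list that this dish commonly "
        "contains, one per line. If an ingredient is uncertain, prefix the "
        "line with 'MAY CONTAIN:'. If none apply, answer exactly 'NONE'. "
        "Do not add explanations."
    )

print(build_allergen_prompt("Pad Thai", ["peanuts", "shellfish"]))
```

Pinning the output to a strict format like this made responses far easier to parse and cut down on the vague warnings we saw with open-ended prompts.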

What’s Next for Palate

  • Expanding flavor profiles – Allowing users to specify preferred flavors, textures, and cuisines for deeper personalization.
  • Enhancing allergen detection – Improving ingredient-level analysis to ensure safety for users with rare allergies.
  • Interactive restaurant partnerships – Working with restaurants to highlight under-ordered dishes and reduce food waste.
  • Community-driven recommendations – Enabling users to review and share their favorite discoveries.
  • Mobile app launch – Bringing Palate to iOS and Android for on-the-go dining recommendations.

By blending technology, sustainability, and cultural exploration, Palate transforms every menu into an opportunity to experience the world, try new flavors, and build a stronger connection with others.
