CRAVE — You Crave It. We Book It.

Hackathon Tracks

Hook 'Em Hacks @ UT Austin — 24-hour sprint. Tracks: Multimodal Search & Generation · Best Use of Supabase · Best Use of AWS · Most Startup Ready


What is CRAVE?

CRAVE is an AI-powered dining concierge that figures out where your group should eat — by actually understanding who's in the group, what they each like, and what the vibe is tonight.

You say: "I'm going out with the boys tonight." CRAVE resolves your group from contacts, reconciles everyone's preferences, and speaks back the top 3 picks and why — all via a full-duplex voice agent named Maple.

There are two sides to the product:

  • Consumer mobile app (React Native + Expo) — voice-first group dining assistant. Speak naturally, get personalized recommendations, book in-app, split the bill, and give instant feedback — all in one flow.
  • Restaurant B2B dashboard (Next.js) — AI chatbot for analytics, real-time bookings feed, menu management, KPI tracking, and a one-prompt AI ad campaign generator.

The Six Core Features

1. Maple — Conversational Voice Agent

Full-duplex voice powered by ElevenLabs Conversational AI with a custom LLM backend (Bedrock Claude Sonnet 4 via an OpenAI-compatible proxy). Maple handles the entire dining journey: onboarding, recommendations, menu clarification, order placement, and booking confirmation — all through natural speech.
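The proxy's core job is translation: ElevenLabs speaks OpenAI's chat-completions format, while Bedrock's Converse API returns its own response shape. A minimal sketch of that mapping — the type and function names here are illustrative, not the actual bedrock-proxy code:

```typescript
// Shape of a (simplified) Bedrock Converse API response.
interface ConverseOutput {
  output: { message: { role: string; content: { text: string }[] } };
  usage: { inputTokens: number; outputTokens: number };
}

// Map a Converse response onto the OpenAI chat-completions shape
// that ElevenLabs' Custom LLM integration expects.
function toOpenAIChatCompletion(resp: ConverseOutput, model: string) {
  // Bedrock returns content as an array of blocks; OpenAI expects one string.
  const text = resp.output.message.content.map((b) => b.text).join("");
  return {
    object: "chat.completion",
    model,
    choices: [
      {
        index: 0,
        message: { role: "assistant", content: text },
        finish_reason: "stop",
      },
    ],
    usage: {
      prompt_tokens: resp.usage.inputTokens,
      completion_tokens: resp.usage.outputTokens,
      total_tokens: resp.usage.inputTokens + resp.usage.outputTokens,
    },
  };
}
```

The streaming path follows the same idea, chunking Converse deltas into OpenAI-style SSE events.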

The headline moment: say "dinner with the boys" → Maple resolves your group, fans out to the recommendation engine, and speaks back ranked picks with reasons.

2. Group Preference Reconciliation

The defensible moat. Each user carries a 1536-d preference embedding (Amazon Bedrock Titan Text Embeddings). The resolve-group Edge Function aggregates the group's embeddings into a weighted centroid, filters on hard dietary constraints, and biases the result by context tag ("date night" vs "with the boys" vs "family dinner"). pgvector HNSW cosine search ranks candidate restaurants against the group vector.
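A minimal sketch of the centroid step, assuming plain `number[]` embeddings (function names are illustrative — in production resolve-group runs in Deno and pgvector does the cosine ranking in Postgres):

```typescript
// Rescale a vector to unit length so cosine comparisons stay consistent.
function l2normalize(v: number[]): number[] {
  const n = Math.hypot(...v);
  return n === 0 ? v : v.map((x) => x / n);
}

// Weighted average of member embeddings, renormalized into a group vector.
function groupCentroid(members: { emb: number[]; weight: number }[]): number[] {
  const dim = members[0].emb.length;
  const acc = new Array(dim).fill(0);
  let total = 0;
  for (const { emb, weight } of members) {
    emb.forEach((x, i) => (acc[i] += weight * x));
    total += weight;
  }
  return l2normalize(acc.map((x) => x / total));
}

// Cosine similarity — what pgvector's HNSW index computes at scale.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  return dot / (Math.hypot(...a) * Math.hypot(...b));
}
```

Context tags ("date night" etc.) bias the member weights before averaging; hard dietary constraints are applied as SQL filters, never as soft vector signals.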

3. B2B Chatbot — "Ask Crave!"

Multimodal natural-language Q&A over restaurant analytics. Accepts text, images, and PDFs. "How did my margherita pizza do last week?" → Bedrock Claude Sonnet 4 answers with data pulled from the restaurant's analytics tables. Streaming responses. Embedded directly in the dashboard sidebar.

4. AI Ad Campaign Studio

One prompt → 3 distinct ad designs, each with:

  • A hero image generated by Bedrock (Nova Canvas or Stability SD3.5)
  • An SVG overlay (Satori → Resvg) with caption + hashtags composited by Sharp
  • Up to 3 image variants per design

All rendered server-side in the ad-generate Lambda and returned as PNGs.

5. Voice-Driven In-App Ordering

At CRAVE partner restaurants, Maple runs as an overlay on the visual menu. Tell it what you want → it confirms → the place-order Edge Function creates the order → the item appears live on the restaurant's B2B dashboard via Supabase Realtime.

6. Bill Splitting + Feedback Loop

Snap the receipt → items are parsed, matched, and split in seconds. Each swipe updates your preference embedding.

  • Receipt photo → presigned S3 PUT → Lambda S3 trigger
  • Bedrock Claude vision parses receipt to structured JSON (merchant, line items, totals)
  • 3-stage item matching: exact text match → trigram similarity (pg_trgm) → embedding cosine similarity
  • Drag-and-drop assignment UI: items → group member avatars
  • Subtotal + pro-rata tax + tip slider → Venmo / CashApp deep links sent per member
  • Swipe feedback cards ("was this a hit?") → item_feedback INSERT
  • Supabase trigger recomputes the user's preference embedding with a weighted blend (likes: +0.15, dislikes: −0.10, L2-normalised)
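The trigger's blend step, sketched in TypeScript rather than SQL (weights as stated above; names are illustrative):

```typescript
// Feedback weights: likes pull the user's embedding toward the item,
// dislikes push it away, both gently.
const LIKE_WEIGHT = 0.15;
const DISLIKE_WEIGHT = -0.10;

// Blend the item's embedding into the user's preference embedding
// at the appropriate weight, then L2-normalize the result.
function updatePreference(
  userEmb: number[],
  itemEmb: number[],
  liked: boolean,
): number[] {
  const w = liked ? LIKE_WEIGHT : DISLIKE_WEIGHT;
  const blended = userEmb.map((x, i) => x + w * itemEmb[i]);
  const norm = Math.hypot(...blended);
  return norm === 0 ? blended : blended.map((x) => x / norm);
}
```

Because the result is renormalized every time, repeated swipes keep the embedding on the unit sphere and comparable under pgvector cosine search.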

Repository Structure

crave/
├── crave-app/          # Consumer mobile app (Expo / React Native)
├── crave-b2b/          # Restaurant dashboard (Next.js 16)
├── infra/aws/
│   └── lambdas/
│       ├── bedrock-proxy/    # Bedrock + OpenAI shims for ElevenLabs
│       ├── receipt-ocr/      # S3-triggered OCR + item matching
│       ├── b2b-chat/         # Multimodal B2B chat endpoint
│       └── ad-generate/      # Ad design pipeline
├── supabase/
│   ├── migrations/           # 35 ordered SQL migrations
│   └── functions/            # Deno Edge Functions
│       ├── place-order/
│       ├── confirm-booking/
│       ├── recommend/
│       ├── resolve-group/
│       └── match-receipt-items/
├── maple-voice-agent/        # Maple system prompt
├── scripts/                  # Deploy helpers
└── docs/                     # Architecture notes

How a Recommendation Is Produced



Bill Split + Feedback Pipeline


Full System Architecture



Supabase Schema (Key Tables)



Tech Stack

Layer                 Technology
Mobile app            React Native 0.81 + Expo 54
B2B dashboard         Next.js 16 + Tailwind CSS 4 + shadcn/ui
Voice assistant       ElevenLabs Conversational AI (Custom LLM — "Maple")
LLM reasoning         Amazon Bedrock — Claude Sonnet 4 (Converse API)
Receipt / doc OCR     Amazon Bedrock — Claude Sonnet 4 (multimodal vision)
Text embeddings       Amazon Bedrock — Titan Text Embeddings v1 (1536-d)
Image generation      Amazon Bedrock — Nova Canvas / Stability SD3.5
Ad compositing        Satori → Resvg → Sharp (server-side PNG rendering)
Database              Supabase Postgres + pgvector (HNSW indexes)
Auth                  Supabase Auth — phone OTP (consumer) + email/password (B2B)
Realtime              Supabase Realtime (bookings · orders · menu_items · item_feedback)
Edge compute          Supabase Edge Functions (Deno) — 5 functions
Server compute        AWS Lambda (Node.js 20 ES modules) + API Gateway
Item matching         pg_trgm trigram + pgvector cosine (3-stage pipeline)
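For intuition on the middle matching stage: a TypeScript approximation of pg_trgm-style trigram similarity. Production uses the Postgres extension itself; this sketch just mirrors its padding and set-overlap behavior.

```typescript
// Extract 3-grams the way pg_trgm does: lowercase, split into words,
// pad each word with two leading spaces and one trailing space.
function trigrams(s: string): Set<string> {
  const grams = new Set<string>();
  for (const word of s.toLowerCase().split(/\W+/).filter(Boolean)) {
    const padded = `  ${word} `;
    for (let i = 0; i <= padded.length - 3; i++) {
      grams.add(padded.slice(i, i + 3));
    }
  }
  return grams;
}

// Similarity = |shared trigrams| / |union of trigrams|, in [0, 1].
function similarity(a: string, b: string): number {
  const ta = trigrams(a);
  const tb = trigrams(b);
  let shared = 0;
  for (const g of ta) if (tb.has(g)) shared++;
  const union = ta.size + tb.size - shared;
  return union === 0 ? 0 : shared / union;
}
```

This is why "margherita" still matches a receipt line reading "margarita pzza" when exact text matching fails, before the pipeline falls back to embedding cosine similarity for truly fuzzy cases.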

Tracks We're Targeting

Multimodal Search & Generation — four load-bearing modalities in production:

  1. Voice in → text → voice out (ElevenLabs ↔ Bedrock Claude via OpenAI-shim proxy)
  2. Text query → 1536-d vector → pgvector HNSW cosine restaurant search
  3. Receipt image → Bedrock vision → structured JSON → item matching
  4. Text prompt → Bedrock image generation → SVG overlay → composited PNG ad

Best Use of Supabase — Postgres + pgvector + Auth + Realtime + Edge Functions + RLS + pg_trgm, all load-bearing. The live booking demo (consumer books → B2B dashboard pings in <500ms via Realtime WS) is the centrepiece. Preference embeddings live in Postgres and update via a trigger on every feedback swipe.

Best Use of AWS — Bedrock anchors the entire AI layer: Claude Sonnet 4 for reasoning, vision OCR, chat, and ad copy; Titan for all text embeddings; Nova Canvas / SD3.5 for ad image generation. Lambda + API Gateway host all server-side logic; S3 + CloudFront serve receipts and ad assets.

Most Startup Ready — two-sided network (consumers + restaurants), real monetisation path (restaurant SaaS + sponsored placements + data licensing), a preference graph moat that compounds with every swipe, and a demo that lands.


The Demo Sequence (3 minutes)

  1. User opens app, says "Dinner with the boys tonight"
  2. Maple speaks back the top 3 picks with reasons — Supabase resolve-group + recommend running live
  3. User books at a CRAVE partner restaurant → B2B dashboard pings live on the adjacent laptop (Realtime)
  4. Post-meal: snap a receipt → OCR parses it → drag items to avatars → tip slider → Send → Venmo deep link fires on a second phone
  5. Swipe feedback cards appear → cut to Supabase showing user.pref_embedding numerically shift
  6. Switch to B2B: "How did my margherita pizza do last week?" → Ask Crave! streams a response inline
  7. "Make me an Instagram ad for our Wagyu burger" → Ad Campaign Studio returns 3 composited PNGs in ~10 seconds

The close: "The app just solved a real problem. The system just learned."
