🧠 ASL Express – Touchless AI Food Ordering Assistant
💡 Inspiration
For many deaf and mute individuals, a simple trip through a drive-thru can be frustrating, stressful, or even impossible. In fast-paced and noisy environments, communication barriers often turn everyday tasks—like ordering food—into overwhelming experiences.
We wanted to change that. ASL Express gives users a voice through vision. Using AI and computer vision, our system translates hand signs into accurate, real-time food orders. It’s not just about convenience—it’s about inclusion, independence, and dignity.
But our vision extends beyond drive-thrus. ASL Express can empower people in restaurants, hospitals, schools, airports, and public kiosks, creating a truly universal, touchless ordering experience. What started as a hackathon project could easily grow into a scalable startup solution, helping millions communicate effortlessly every day.
⚙️ What it does
ASL Express is a touchless AI-powered ordering system that uses a laptop camera to recognize hand gestures and convert them into food orders.
Each sign corresponds to a menu item:
- A → Burger 🍔
- B → Fries 🍟
- C → Drink 🥤
The number of repetitions (1–3) represents quantity, and a thumbs-up gesture finalizes the order.
Once confirmed, the system sends the order to an ESP32 microcontroller, which displays it on an LCD screen, activates LEDs for visual feedback, and triggers a buzzer for confirmation—creating an intuitive, multi-sensory interaction loop.
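Here is a minimal Python sketch of that ordering flow (the menu mapping, message format, serial port, and helper names are illustrative assumptions, not our exact code):

```python
# Minimal sketch of the ordering logic: sign letter -> menu item,
# quantity from repetitions, and a confirmed order sent to the ESP32.
import serial

MENU = {"A": "Burger", "B": "Fries", "C": "Drink"}  # sign letter -> menu item

def send_order(port: serial.Serial, sign: str, quantity: int) -> None:
    """Send a confirmed order to the ESP32 as a simple 'ITEM:QTY' line."""
    message = f"{MENU[sign]}:{quantity}\n"      # e.g. "Burger:2\n"
    port.write(message.encode("utf-8"))         # firmware parses it, updates LCD/LEDs/buzzer

if __name__ == "__main__":
    esp32 = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # port name is an assumption
    send_order(esp32, "A", 2)   # two burgers; the thumbs-up gesture would trigger this call
```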
🛠️ How we built it
- Developed the gesture recognition pipeline using Python, MediaPipe, and the Google Gemini API for classification and intent mapping (a rough sketch of this pipeline follows the list below).
- Designed and programmed ESP32 hardware integration including LCD, LEDs, ultrasonic sensor, and buzzer.
- Established serial communication between Python and ESP32 to send recognized commands in real time.
- Built a feedback system using light and sound cues to confirm recognition and ensure accessibility for all users.
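A rough sketch of the recognition loop, assuming MediaPipe's Hands solution and the google-generativeai client; the model name, prompt wording, and landmark formatting are illustrative rather than our exact implementation:

```python
# Sketch: webcam frames -> MediaPipe hand landmarks -> Gemini label.
import cv2
import mediapipe as mp
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")             # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")   # model choice is an assumption

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = hands.process(rgb)                    # MediaPipe hand landmark detection
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        coords = [(round(p.x, 3), round(p.y, 3)) for p in lm]   # 21 (x, y) points
        prompt = (
            "These are 21 normalized hand landmarks from MediaPipe: "
            f"{coords}. Classify the gesture as one of: A, B, C, THUMBS_UP, NONE. "
            "Reply with the label only."
        )
        label = model.generate_content(prompt).text.strip()
        print("Gesture:", label)                    # downstream code maps label -> order
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
hands.close()
```

In practice the Gemini call would be throttled (for example, only when the landmarks change meaningfully) rather than run on every frame, to keep latency and API usage manageable.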
🚧 Challenges we ran into
- Synchronizing real-time gesture recognition with hardware response.
- Calibrating Gemini’s AI outputs with MediaPipe hand landmarks for consistent accuracy.
- Handling serial latency and preventing false gesture detections (see the debouncing sketch after this list).
- Managing power and pin limitations on the ESP32 while connecting multiple peripherals.
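One way to suppress false detections is a simple frame-persistence filter; the sketch below (threshold and names are illustrative assumptions) accepts a gesture only after it is held for several consecutive frames:

```python
# Frame-persistence debouncer: a label must repeat N frames in a row to count.
from collections import deque
from typing import Optional

class GestureDebouncer:
    """Accept a gesture only after it is seen in N consecutive frames."""
    def __init__(self, required_frames: int = 8):
        self.required = required_frames
        self.history = deque(maxlen=required_frames)

    def update(self, label: str) -> Optional[str]:
        self.history.append(label)
        if len(self.history) == self.required and len(set(self.history)) == 1:
            self.history.clear()        # reset so the same sign must be re-held
            return label                # stable gesture -> safe to act on
        return None                     # still unstable, ignore

# usage: deb = GestureDebouncer(); confirmed = deb.update("A")
```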
🏆 Accomplishments we’re proud of
- Built a fully functional prototype that bridges AI vision and embedded hardware.
- Achieved reliable, consistent recognition of four gestures plus a “Done” signal.
- Designed a beginner-friendly, inclusive interface that works in real time.
- Combined software, AI, and hardware engineering seamlessly under hackathon constraints.
📚 What we learned
- Integrating AI APIs (Gemini) into embedded IoT systems.
- Deep understanding of gesture tracking and hand landmark detection.
- Building serial communication protocols for synchronized multi-device interaction.
- Enhancing teamwork, adaptability, and rapid prototyping within tight deadlines.
🚀 What’s next for ASL Express
- Expand gesture support to full ASL alphabet recognition for a richer vocabulary.
- Add voice feedback and confirmations using the ElevenLabs API.
- Deploy touchless AI kiosks in restaurants, hospitals, and schools.
- Explore AI camera modules for on-device gesture recognition, reducing dependency on PCs.
Built With
- Languages: Python, C++ (for ESP32 firmware)
- Frameworks & Libraries: MediaPipe, OpenCV, PySerial
- APIs & AI Models: Google Gemini API (for gesture analysis and reasoning), ElevenLabs API
- Hardware: ESP32, LCD display, LEDs, buzzer, ultrasonic sensor
- Platforms & Tools: Arduino IDE, Visual Studio Code
- Protocols: Serial communication (USB/UART)



