Inspiration

Drive-thrus are designed for interaction via voice and sight. Many places let customers order through an app, but even then the customer must speak to the cashier, if only to give an order number or name. Drive-thru ordering always involves some element of voice interaction, which makes it inaccessible to the deaf and hard of hearing.

In the US alone, 1 in 8 people above the age of 12 suffer from some form of hearing loss. Worldwide, that number jumps to a billion.

What it does

SeeMeNU allows customers to place an order using a combination of ASL and SMS, eliminating the need for voice interaction entirely.

The Kiosk

The kiosk allows users to select items from the menu using gestures. When they're done, a thumbs-up gesture generates a QR code they can scan to confirm their order via SMS, which brings us to the next component...
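The kiosk flow above can be sketched as a small state reducer. This is a hypothetical sketch, not the actual SeeMeNU implementation: the gesture labels (`point_next`, `pinch_select`), the phone number, and the QR payload format (an `sms:` URI that opens the customer's messaging app pre-filled) are all assumptions layered on top of whatever labels the Tensorflow.js classifier actually emits.

```javascript
// Hypothetical sketch: map recognized gesture labels to kiosk order state.
// Gesture names and the QR payload format are assumptions for illustration.
function kioskReducer(state, gesture) {
  switch (gesture) {
    case "point_next": // move the highlight to the next menu item
      return { ...state, cursor: (state.cursor + 1) % state.menu.length };
    case "pinch_select": // add the highlighted item to the order
      return { ...state, order: [...state.order, state.menu[state.cursor]] };
    case "thumbs_up": // finish: emit a payload to encode as a QR code
      return {
        ...state,
        qrPayload: `sms:+15550100?body=ORDER ${state.order.join(",")}`,
      };
    default:
      return state; // ignore unrecognized gestures
  }
}

// Example: select the first item, then give a thumbs up.
const initial = { menu: ["burger", "fries", "shake"], cursor: 0, order: [], qrPayload: null };
let s = kioskReducer(initial, "pinch_select");
s = kioskReducer(s, "thumbs_up");
console.log(s.qrPayload); // sms:+15550100?body=ORDER burger
```

Keeping the gesture-to-action mapping as a pure function like this makes it easy to unit-test without a camera or a model in the loop.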

SMS Bot

The second component is a Twilio-based SMS bot that lets users further customize their order and/or confirm it. They can also ask the bot to connect them to a human agent for further assistance.
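The bot's routing logic might look something like the sketch below. This is an assumption-laden illustration, not SeeMeNU's actual code: the keywords (`ADD`, `CONFIRM`, `AGENT`) and reply text are invented, and in a real deployment this function would sit inside a Twilio messaging webhook that wraps the returned reply in a TwiML `<Message>` response.

```javascript
// Hypothetical sketch of the SMS bot's keyword routing, kept separate
// from the Twilio plumbing so it can run and be tested anywhere.
// Keywords and reply copy are assumptions, not SeeMeNU's actual bot.
function handleInbound(body, order) {
  const text = body.trim().toUpperCase();
  if (text === "CONFIRM") {
    return { reply: `Order confirmed: ${order.items.join(", ")}. Pull forward!`, done: true };
  }
  if (text === "AGENT") {
    // hand the conversation off to a human for further assistance
    return { reply: "Connecting you to an agent...", agent: true };
  }
  if (text.startsWith("ADD ")) {
    const item = text.slice(4).toLowerCase();
    order.items.push(item);
    return { reply: `Added ${item}. Reply CONFIRM when ready.` };
  }
  return { reply: "Reply ADD <item>, CONFIRM, or AGENT." };
}

// Example session:
const order = { items: ["burger"] };
handleInbound("add fries", order);           // order.items is now ["burger", "fries"]
handleInbound("confirm", order).reply;       // "Order confirmed: burger, fries. Pull forward!"
```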

How we built it

  • React.js for the kiosk
  • Tensorflow.js for gesture recognition
  • Google Cloud for Serverless functions
  • MongoDB for data store

Challenges we ran into

Training the gesture model for different sign languages; at this time the model only supports American Sign Language (ASL).

Accomplishments that we're proud of

Building a functional product

What's next for SeeMeNU

  • Pilot launch with fast-food chains
  • Train the model to support other sign languages
  • Expand globally

Pitch Deck

https://docs.google.com/presentation/d/19F1fcTseOJ9A5vYW7m4yJJTvqS8hUZiYMAKruN9sx9I/edit?usp=sharing
