Inspiration

It has come to our attention that deciding where and what to eat are two major concerns for people traveling from other countries. Unfamiliar restaurant and dish names confuse them when ordering meals. It can be a challenge even for native-born Americans at exotic restaurants with unusual dish names.

We ran a small survey of people facing this dilemma. Rachel Wang, a visitor from China, complained about an unreadable Mexican menu she was handed at a restaurant; the food she was served was nothing like what she had imagined. Rosie Chen, a visiting student who has spent several months in America, is still unable to understand menus and order what she actually wants. We concluded that a visual way of ordering food is needed at the many restaurants that cannot offer fully pictured menus.

Inspired by our team's experiences, and those of other visitors from China, during our stay in America, we decided to build something that helps foreigners find where and what to eat more easily.

What it does

  1. Exploring on the road: Users see specific kinds of restaurants highlighted in augmented reality as embedded visualizations. The size of each pin indicates how far away the restaurant is, and users can head toward a restaurant they like by following the AR tags.
  2. Exploring in the restaurant: After users arrive at the restaurant, they can take a snapshot of the menu and get back a fully pictured version, a list of dish names with clear pictures, which helps them decide what to order.

How we built it

We developed an iOS app using several technologies. ARKit and the CoreLocation framework are the two major technologies behind the first feature, together with place data from the Google Places API and Apple's MapKit. We set up keywords for specific kinds of restaurants, and users get the three or more nearest restaurants of that kind marked in AR with their locations.
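
As a rough illustration of this first feature, here is a minimal Swift sketch that converts a restaurant's GPS coordinate (as returned by a Places lookup) into a SceneKit pin positioned by bearing and distance from the user. It assumes the AR session runs with `.gravityAndHeading` world alignment; the `Restaurant` type, pin geometry, and scaling constants are illustrative, not the app's actual code.

```swift
import Foundation
import ARKit
import CoreLocation

// Minimal sketch (not the app's actual code): place one AR pin for a
// restaurant, assuming the session was configured with
// ARWorldTrackingConfiguration and worldAlignment = .gravityAndHeading,
// so +X points east and -Z points true north.
struct Restaurant {
    let name: String
    let coordinate: CLLocationCoordinate2D
}

func addPin(for restaurant: Restaurant,
            userLocation: CLLocation,
            to sceneView: ARSCNView) {
    let target = CLLocation(latitude: restaurant.coordinate.latitude,
                            longitude: restaurant.coordinate.longitude)
    let distance = userLocation.distance(from: target)          // meters

    // Bearing from the user to the restaurant, in radians from true north.
    let lat1 = userLocation.coordinate.latitude * .pi / 180
    let lat2 = restaurant.coordinate.latitude * .pi / 180
    let dLon = (restaurant.coordinate.longitude - userLocation.coordinate.longitude) * .pi / 180
    let bearing = atan2(sin(dLon) * cos(lat2),
                        cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon))

    // Render far restaurants at a capped distance, and shrink the pin
    // as the real distance grows so its size conveys how far away it is.
    let rendered = Float(min(distance, 50))
    let scale = Float(max(0.3, 1.0 - distance / 1000.0))

    let pin = SCNNode(geometry: SCNSphere(radius: 0.5))
    pin.position = SCNVector3(x: rendered * Float(sin(bearing)),
                              y: 0,
                              z: -rendered * Float(cos(bearing)))
    pin.scale = SCNVector3(x: scale, y: scale, z: scale)
    sceneView.scene.rootNode.addChildNode(pin)
}
```

Using `.gravityAndHeading` keeps the math simple because the AR world axes are already aligned with the compass, at the cost of depending on the device's heading accuracy.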

For the second feature, we built the whole back end on AWS with the Django framework. Using the image-to-text, Computer Vision, and Text Analytics APIs from Microsoft Azure, we turn a single picture of the menu into machine-readable text. Then, with a Google Custom Search integration we implemented ourselves, we pick a suitable picture for each dish name. Finally, we send the result back to the app and present it as a list so users can browse the dishes.
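
On the app side, this round trip is just an HTTP request. Below is a hedged Swift sketch of how the menu snapshot might be uploaded and the resulting dish list decoded; the endpoint URL and JSON field names are placeholders for illustration, not the project's actual API.

```swift
import UIKit

// Illustrative client-side sketch: upload the menu snapshot and decode the
// list of dish names and picture URLs the back end returns. The URL and
// field names below are assumptions, not the real API.
struct MenuItem: Decodable {
    let name: String
    let imageURL: URL
}

func uploadMenuPhoto(_ image: UIImage,
                     completion: @escaping ([MenuItem]) -> Void) {
    guard let jpeg = image.jpegData(compressionQuality: 0.6),
          let url = URL(string: "https://menupedia.example.com/api/menu") else { return }

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")

    // Send the photo, then hand the decoded dish list back on the main queue.
    URLSession.shared.uploadTask(with: request, from: jpeg) { data, _, _ in
        guard let data = data,
              let items = try? JSONDecoder().decode([MenuItem].self, from: data)
        else { return }
        DispatchQueue.main.async { completion(items) }
    }.resume()
}
```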

Challenges we ran into

  1. Google did not offer an API that fit our search needs, but we needed pictures matching the dish names extracted from the menu image. As a workaround, we wrote a spider script to download suitable, specific images.
  2. Microsoft Azure's image-to-text API was not accurate enough for our needs, so we had to write extra code to improve its accuracy on the menus we care about.
  3. The variety of AR tools made it hard to pick the right one, so we compared the pros and cons of each AR SDK. ARKit from Apple and ARCore from Google have clear advantages over the other tools; we finally chose ARKit because of the devices we had on hand.
  4. The pictures users take can be too large to transmit from mobile devices to AWS, and noise in the pictures made processing take much longer than expected. After heavy optimization, we can now process a menu in 4-5 seconds on average (a sketch of the idea follows this list).
  5. Writing Swift code for ARKit, CoreLocation, and HTTP requests was also a challenge for us, because we had to build things for which few references exist.
  6. Though the SLAM technology in ARKit makes AR objects more stable, they still jitter occasionally, which is beyond our current ability to fix. We plan to solve this problem in the future.
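
For challenge 4, one common way to cut transfer and processing time is to downscale and re-encode the photo on-device before upload. Here is a minimal Swift sketch of that idea; the 1280-pixel cap and JPEG quality are example values, not the exact parameters we tuned.

```swift
import UIKit

// Illustrative approach to challenge 4: downscale and re-encode the menu
// photo on-device so it transfers to AWS quickly. The size cap and JPEG
// quality are example values only.
func compressedMenuPhoto(_ image: UIImage, maxDimension: CGFloat = 1280) -> Data? {
    let longestSide = max(image.size.width, image.size.height)
    let scale = min(1, maxDimension / longestSide)
    let newSize = CGSize(width: image.size.width * scale,
                         height: image.size.height * scale)

    // Redraw the image at the reduced size, then encode it as JPEG.
    let renderer = UIGraphicsImageRenderer(size: newSize)
    let resized = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
    return resized.jpegData(compressionQuality: 0.5)
}
```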

Accomplishments that we're proud of

  1. We solved a real-life problem for many people. Our survey shows that plenty of people struggle with obscure restaurant menus.
  2. For the first time, we built a fully functional app spanning the front end, back end, and cloud services, using many of the technologies Hacktech provided, which is really inspiring for our future work.
  3. We overcame every obstacle we encountered.

What we learned

  1. We learned a lot about full-stack development.
  2. We learned to divide the whole problem into smaller ones and conquer them all.
  3. We formed an amazing team!

What's next for Menupedia

  • Add more info (reviews, ratings, etc.) to the AR restaurant map
  • A knowledge card for each dish
  • Personalized ordering
  • A recommendation system tuned to your tastes
  • A real-time AR smart menu