Inspiration

  • Have you ever wondered what _in the world_ that thing on the menu is?
  • Have you ever excitedly ordered something, only to receive enough food for a squirrel?
  • Are you worried about ordering the wrong item at a restaurant you have never been to?

We developed Food.ar to tackle all of these issues, and more. Food.ar is more than an app that lets you see your dish: it is an app that strives to educate its users about the cultural and physical backgrounds of various cuisines.

What it does

Food.ar is an augmented-reality mobile application that displays three-dimensional models of the food items on a menu and integrates with autonomous ordering devices to automate the ordering process.

Users can view the 3D models of the food items on the menu, helping them make a more informed decision about what to order. Once a user sees something they like, they can simply say "Order this from Food.ar", and our Amazon Alexa integration will pick up the order, respond with a confirmation, and ping the server to inform the restaurant that the order has been placed.

Food.ar is a horizontally integrated restaurant experience: its implications for the restaurant industry are endless.

How I built it

We built our app in Swift 4 from the ground up with ARKit in mind. We wanted a new restaurant experience, one centered around these augmented projections. The Swift app interfaces with our Microsoft Azure server through a custom API built on a Flask web app. This API tells the server what the user is currently looking at, so when the user speaks to our conversational interface, the interface already has context from the app. We trained an Alexa skill through Amazon and integrated it with an AWS Lambda function that communicates with our server. The communication between all of these parts is what makes Food.ar so unique: it allows every facet of the restaurant experience to be integrated and streamlined.
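
To give a flavor of the app-to-server link, here is a rough sketch of the Swift side (the host, endpoint path, and JSON field are illustrative placeholders, not our exact API): whenever the dish in view changes, the app fires off a small JSON POST so the conversational interface has context.

```swift
import Foundation

/// Tell the backend which dish the user is currently looking at, so the
/// Alexa skill already has context when the user says "Order this from Food.ar".
func reportCurrentDish(id: String) {
    // Hypothetical endpoint; the real server is a Flask app hosted on Azure.
    guard let url = URL(string: "https://foodar.example.com/api/current-dish"),
          let body = try? JSONSerialization.data(withJSONObject: ["dishId": id])
    else { return }

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = body

    URLSession.shared.dataTask(with: request) { _, _, error in
        if let error = error {
            print("Failed to report current dish: \(error)")
        }
    }.resume()
}
```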

We wanted to display the model in AR because AR lets the user visualize what their dish will look like in the real world, and at real scale. This means the user can see not only _what_ they are going to receive (which is especially helpful for international students) but also _how much_ they should expect to get.
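
Here is a minimal sketch of the placement logic (the asset name padThai.scn is a hypothetical placeholder, and in practice the asset would be chosen from the menu item in view): ARKit detects a horizontal plane, such as the table, and we attach the dish model to the resulting anchor.

```swift
import UIKit
import ARKit
import SceneKit

class MenuARViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Look for horizontal surfaces (e.g. the restaurant table).
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called when ARKit anchors a newly detected plane; attach the dish there.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor,
              // Hypothetical asset name; selected from the menu item in view.
              let dishScene = SCNScene(named: "art.scnassets/padThai.scn") else { return }
        for child in dishScene.rootNode.childNodes {
            node.addChildNode(child)
        }
    }
}
```

SceneKit treats one unit as one meter, which is what makes true-to-scale rendering possible -- and why a mis-exported model can end up 100 meters wide.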

Challenges I ran into

We ran into many, many, many issues while creating this app.

  • We spent the first 17.5 hours of this 36-hour hackathon trying to use ARKit by building an iOS app in React Native: big mistake :D -- we didn't know React Native and had never used ARKit before, so we finally pivoted to good old-fashioned native development.
  • It took us 12 hours to build our first usable 3D model for our AR ambitions -- this entailed figuring out which tools we needed to make a 3D model, how to use those tools, and then how to use the model we created.
  • ARKit, Apple's augmented reality framework, is central to what we came here to do, and we had no prior experience with AR or ARKit -- the struggle was real, and very much worth it.
  • Our food models were 100 meters wide and 100 meters tall -- it took us more than 6 hours to fix this :D
  • We learnt AWS Lambda here and managed to put it to use.

We persevered. We pushed through all of this to present a successful demo of an idea that excited us so much we devoted 36 consecutive hours of our lives to it, and that idea is now a fully functioning iOS app.

Accomplishments that I'm proud of

We are extremely proud of our extensive horizontal integration:

  • we learnt how to build usable 3D models for AR applications
  • we learnt how to use Apple's native Augmented Reality framework
  • the team member responsible for building the AR part of the iOS app had no prior experience with AR or ARKit
  • the team member responsible for building half of the iOS app had never written a line of Swift before and had never made an iOS app before -- despite this he delivered
  • the team member responsible for implementing our server had never built a backend solution before -- he used a Microsoft Azure server, AWS Lambda, Digital Ocean, and Flask to deliver his part
  • although we couldn't integrate his work, the team member who used Microsoft Cognitive Services to translate foreign menus is damn proud of his work, and of ARKit

What I learned

ARKit, 3D modelling, Swift, AWS Lambda integration with Alexa, the Microsoft Cognitive Services API, building and hosting a Flask web app in Python, and, most of all, persistence.

What's next for food.ar

We wanted our users to be able to scan their own food models and upload them to create a vibrant community centered around many people's favorite thing: food. Unfortunately, we were unable to get a good-looking 3D model using the Autodesk Forge API and our own stitching, and were forced to rely on a third-party 3D scanning application to generate our models. Given more time, we would implement a way for users to accurately scan their own models, which would allow our app to expand to new restaurants many times quicker.

We also wanted to do more with the Microsoft translation services; in our limited time, we were unable to accurately handle some of the more difficult languages, such as Chinese. If our app were language-independent, we could deploy it across the globe and create an unparalleled database of models for our users to peruse.

Built With

Swift, ARKit, Flask, Microsoft Azure, AWS Lambda, Amazon Alexa, Microsoft Cognitive Services, Digital Ocean, and Autodesk Forge