Inspiration

One of our team members, Tony, has long wanted to build an app that can scan a dish for potential allergens. The idea first came to him while eating out with his family in the States. As tourists, his family members weren't confident in English, and one of them, not realizing that a Chicken Caesar Salad's dressing contains anchovies, had a serious allergic reaction to the food. That's when he thought of building an app that simplifies the error-prone process of looking up unfamiliar food names and helps more people avoid allergic reactions.

What it does

Basically, the app lets anyone with a phone and a working camera scan their food for potential allergens, and, based on the scanned records, it also provides recommendations for a healthier meal. We used the Google Cloud Vision APIs for the label, logo, and text recognition. We also implemented internationalization, so travelers can see this information in their own language and understand a menu even if they don't speak English.
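
The allergen check that runs on the recognized text can be sketched as simple keyword matching (the function names and the keyword table below are illustrative, not the app's real data -- in the app, the input text comes from the Cloud Vision API):

```javascript
// Sketch of the allergen-matching step that runs on OCR output.
// The keyword table is illustrative, not the app's actual dataset.
const ALLERGEN_KEYWORDS = {
  fish: ["anchovy", "anchovies", "tuna", "salmon"],
  dairy: ["milk", "cheese", "parmesan", "butter", "cream"],
  nuts: ["peanut", "almond", "walnut", "cashew"],
};

// Given raw text from OCR and the user's allergen profile,
// return the allergens whose keywords appear in the text.
function findAllergens(ocrText, userAllergens) {
  const text = ocrText.toLowerCase();
  return userAllergens.filter((allergen) =>
    (ALLERGEN_KEYWORDS[allergen] || []).some((kw) => text.includes(kw))
  );
}

// A Caesar salad dressing typically lists anchovies and parmesan:
const menuItem = "Chicken Caesar Salad: romaine, parmesan, anchovy dressing";
console.log(findAllergens(menuItem, ["fish", "dairy", "nuts"]));
// → ["fish", "dairy"]
```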

How we built it (with Docker, Azure, and Google Cloud Vision APIs)

We created two repositories on Azure DevOps, one for the front end and one for the back end, each with two branches: dev and master. The front end uses Ionic, so we can build both the iOS and Android apps with only slight modifications to our code. We first built an Express.js server locally in a Docker container so the whole team could share the same development environment, wired the Google Cloud Vision APIs into the back end, and tested everything locally. We then ran it on an Azure Container Instance and automated the process: a pipeline in Azure DevOps builds the Docker image (fast, since it's based on Alpine) and pushes it to a private repository on Azure Container Registry.
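
A minimal sketch of what that Alpine-based backend Dockerfile might look like (file names, ports, and the Node version are assumptions, not our exact setup):

```dockerfile
# Illustrative sketch -- file names, port, and Node version are assumptions.
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the Express.js app and start it.
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```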

Meanwhile, our UI designer finished the first draft and went over it with the front-end developer. We also built a Docker container with nginx to serve the static web content from Ionic. We didn't use a standard web service because we also needed nginx as a reverse proxy, to add an SSL layer in front of the Express.js back end mentioned above and to avoid CORS issues. With two images in play, we switched to Docker Compose and automated building and pushing both images. Finally, we moved to Kubernetes with Azure Kubernetes Service so we can manage builds and deployments efficiently, with room to scale.
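
A Compose file for that two-container setup might look roughly like this (service names, build paths, ports, and certificate paths are illustrative, not our actual configuration):

```yaml
# Illustrative sketch -- names, paths, and ports are assumptions.
version: "3"
services:
  backend:
    build: ./backend          # the dockerized Express.js API
    expose:
      - "3000"                # reachable only inside the Compose network

  web:
    build: ./frontend         # nginx serving the static Ionic build
    ports:
      - "443:443"             # nginx terminates SSL at the edge
    volumes:
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - backend               # nginx reverse-proxies /api to backend:3000
```

Keeping the API on the internal network and exposing only nginx is what lets one origin serve both the static app and the proxied API, which is how the CORS problem goes away.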

Accomplishments that we're proud of

In the end, we're all very happy that we built a working app that can recognize allergens from a menu (text-based) or by scanning the food itself (image-based). We also created a cart feature that stores all the items in a meal and, based on what's in the cart, gives the user a basic recommendation on how healthy the meal is.
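
The cart-based recommendation can be sketched as a simple score aggregated over per-item tags (the tags, weights, and messages below are illustrative, not the app's actual model):

```javascript
// Illustrative sketch of the cart's health recommendation.
// Tags and scoring weights are assumptions, not the app's real model.
const TAG_SCORES = { vegetable: 2, lean_protein: 1, fried: -2, sugary: -2 };

// Sum the tag scores across every item in the cart.
function healthScore(cart) {
  return cart.reduce(
    (sum, item) =>
      sum + item.tags.reduce((s, t) => s + (TAG_SCORES[t] || 0), 0),
    0
  );
}

// Turn the numeric score into a short recommendation.
function recommend(cart) {
  const score = healthScore(cart);
  if (score >= 2) return "Looks like a balanced meal!";
  if (score >= 0) return "Okay, but consider adding vegetables.";
  return "This meal is heavy -- maybe swap a fried item for a salad.";
}

const meal = [
  { name: "Caesar Salad", tags: ["vegetable"] },
  { name: "Fried Chicken", tags: ["fried"] },
];
console.log(recommend(meal)); // → "Okay, but consider adding vegetables."
```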

What we learned

We learned a lot from this experience, from basic UI design in Sketch to configuring our services in the cloud. On the front end, we started with Tesseract.js and then moved to Google Cloud's Vision API; on the back end, we worked with Docker, Azure, and more.

What's next for Food Scanner

We hope to expand the app's scope: to make it a general-purpose scanner rather than one that only recognizes food and gives recommendations. Down the line, we'd like to add many more features: automatic OCR and sharing, AR-projected weather information, even clothing recommendations!
