What2Cook

Introduction

Terminator of food decidophobia

Inspiration

What needs or challenges do we have? During the pandemic, we usually buy lots of food and ingredients from the supermarket and stack them in the refrigerator. Sometimes, when we open the refrigerator door, we have no idea what to cook that day. Our idea is to develop an application that recommends what users can cook based on the ingredients they have, simply by uploading images captured with their phones or saved in their photo library.

What it does

  1. Accept pictures from the photo gallery or captured by the camera.
  2. Pictures are sent to GCP, where Google AutoML classifies the ingredients in the image.
  3. Users check whether each ingredient is correctly classified.
  4. Hit the "What 2 Cook" button, and the application comes up with food ideas!

How we built it

For the iOS client, we used the Swift programming language and the SwiftUI framework. We adopted many modern techniques, such as functional, declarative, and reactive programming, for the client application. To keep the app as simple as possible, we used mostly vanilla iOS UI elements.
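
With SwiftUI, the UI is described declaratively and re-rendered reactively as state changes. Below is a minimal sketch of that style, for the step where the user confirms each detected ingredient; the `Ingredient` type and the toggle flow are illustrative, not our actual code:

```swift
import SwiftUI

// Illustrative model for a detected ingredient (not our production code).
struct Ingredient: Identifiable {
    let id = UUID()
    let name: String
    var confirmed: Bool = true
}

struct IngredientListView: View {
    // @State makes the view re-render whenever this data changes.
    @State private var ingredients: [Ingredient] = []

    var body: some View {
        NavigationView {
            List($ingredients) { $ingredient in
                // Vanilla iOS toggle: the user confirms or rejects
                // each ingredient the classifier detected.
                Toggle(ingredient.name, isOn: $ingredient.confirmed)
            }
            .navigationTitle("What2Cook")
        }
    }
}
```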

For the database, we first designed a reliable schema. The knowledge we gained at university, combined with CockroachDB, helped us build a solid database. We prepared two deployment plans: on Google Compute Engine or on a local machine.
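
As a rough illustration of the shape of that data from the client's point of view (the type and field names below are assumptions for the sketch, not our exact schema), the app could model recipes like this:

```swift
import Foundation

// Illustrative sketch of the recipe data the client decodes.
// Field names and types are assumptions, not our exact schema.
struct Recipe: Codable, Identifiable {
    let id: Int
    let name: String
    let ingredients: [RecipeIngredient]
}

struct RecipeIngredient: Codable {
    let name: String      // e.g. "jumbo shrimp"
    let amount: String    // e.g. "1 pound"
}
```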

For the server, we used Node.js to build a serverless backend on GCP Cloud Functions, which accepts requests from the client and returns the best recommendations to the user.
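
A minimal sketch of how the client side might call such a function; the endpoint URL and the JSON request/response shapes here are hypothetical, and `Recipe` is the Codable type from the sketch above:

```swift
import Foundation

// Sketch of the client calling the serverless backend.
// The URL and payload shapes are hypothetical placeholders.
func fetchRecommendations(for ingredients: [String],
                          completion: @escaping ([Recipe]) -> Void) {
    let url = URL(string: "https://REGION-PROJECT.cloudfunctions.net/what2cook")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    // Send the confirmed ingredient names as JSON.
    request.httpBody = try? JSONEncoder().encode(["ingredients": ingredients])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let recipes = try? JSONDecoder().decode([Recipe].self, from: data)
        else { return }
        completion(recipes)
    }.resume()
}
```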

For ML, we used Google AutoML Vision V1, which supports building and training multi-label classification models. The model is deployed on GCP; our server fetches the returned predictions and delivers them to the phone.
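
On the client side, the multi-label predictions boil down to label/confidence pairs. Here is a minimal sketch of filtering them by confidence before showing them to the user, assuming a simplified payload shape (not the raw AutoML response format):

```swift
import Foundation

// Simplified assumption of what the server forwards from AutoML:
// one (label, score) pair per detected ingredient.
struct Prediction: Codable {
    let label: String   // e.g. "broccoli"
    let score: Double   // confidence in [0, 1]
}

// Keep only labels the model is reasonably confident about,
// then let the user confirm or reject each one in the UI.
func detectedIngredients(from predictions: [Prediction],
                         threshold: Double = 0.5) -> [String] {
    predictions
        .filter { $0.score >= threshold }
        .map { $0.label }
}
```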

Challenges we ran into

This was our first time using the Swift programming language and the SwiftUI framework to develop an iOS application, so we had to build and learn at the same time. SwiftUI is a relatively new framework, and there are limited resources and tutorials about it online, which made things harder.

It was also our first time using CockroachDB, though luckily we had prior experience with SQL. During development, the CockroachDB cloud free tier shut down, which heavily slowed our progress. We prepared two alternatives for the server and database deployment: one was running the CockroachDB instance and the server on a local machine; the other was moving both to Google Cloud Platform. In the end, we deployed the CockroachDB instance as a Docker image on a VM instance on Google Compute Engine.

As for the machine learning task, we trained our multi-label classifier on Google AutoML. The biggest challenge was finding an appropriate dataset. Some open-source datasets are relevant to our classification task, but none of them represent the distribution of our input data, namely ingredient photos, well. We decided to use a dataset from Kaggle and extend it with our own images. Capturing the images and labeling them was extremely time-consuming, but with our own data added, the final model reaches a good accuracy of 86%.

What we learned

  1. iOS + Swift development.
  2. Distributed SQL database.
  3. DevOps skills.
  4. Machine learning experience.
  5. Teamwork.

Next steps for What2Cook

What do we plan to do after Hack the North?

  1. Voice input (cross-platform via the Voiceflow API on Alexa).
  2. Contributing food ideas by uploading a photo and classifying the ingredients in it.
  3. Showing the amount of each ingredient needed for each dish (e.g., jumbo shrimp: 1 pound + broccoli: 1 pound).
  4. Calculating and displaying the approximate calories of each dish; recommendations would be based on values pre-set by the user.
  5. Authentication on the server and client connections (e.g., OAuth 2).

Built With

Swift, SwiftUI, Node.js, CockroachDB, Google Cloud Platform (Compute Engine, Cloud Functions), Google AutoML Vision