What it does

Healthful uses machine-learning-based image recognition to identify which fruit or vegetable is in a picture uploaded by the user. The user then enters their dietary restrictions (e.g., vegan, vegetarian), and a web scraping algorithm takes the identified fruit or vegetable and finds recipes that include that ingredient and satisfy those restrictions. Since the user can pick from multiple recipes tailored specifically to their preferences, they can quickly see whether an ingredient is useful to them in real time, which is helpful when shopping or when cooking on the spot.
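
To make the recipe-search step concrete, here is a minimal sketch of scraping recipes for an ingredient plus a diet tag. The search URL, CSS selectors, and function name are hypothetical placeholders, not the sites or code Healthful actually uses.

```python
# A minimal sketch of the recipe-search step, assuming a recipe site whose
# search URL and HTML structure are purely hypothetical. The real project
# scrapes a different set of links per dietary restriction.
import requests
from bs4 import BeautifulSoup

def find_recipes(ingredient: str, diet: str) -> list[dict]:
    """Return recipe titles and links matching an ingredient and a diet tag."""
    # Hypothetical search endpoint; Healthful uses its own set of links.
    url = f"https://example-recipes.com/search?q={ingredient}&diet={diet}"
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    recipes = []
    for card in soup.select(".recipe-card"):  # hypothetical CSS class
        title = card.select_one(".title")
        link = card.select_one("a")
        if title and link:
            recipes.append({"title": title.get_text(strip=True),
                            "url": link["href"]})
    return recipes

# Example: recipes for a tomato that fit a vegan diet.
print(find_recipes("tomato", "vegan"))
```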

How does Healthful “Stand on the Shoulders of Giants”?

Currently, the food recipe industry has large datasets of different recipes, but it is hard for a user to find recipes that fit their own criteria (e.g., specific ingredients, allergies). We realized this information could be put to use: by building on the existing data with machine learning and web scraping, we let the user select an ingredient and dietary restrictions and get recipes tailored specifically to them. Hence, we are “standing” on the data of the “giants” of the food recipe industry.

Inspiration

Due to COVID-19 restrictions, people are not able to go outside as much and may be stuck eating the same food. They also may not know what is right for them when shopping, leaving them with a dilemma. We wanted to help by letting people find recipes directly suited to their needs: they input their dietary restrictions and identify a healthy ingredient. Not only does this help the user choose an ingredient that is actually appropriate for them when shopping, but the vast options our service provides also let them eat diverse, healthy meals.

How we built it

The AI model was built with the TensorFlow and Keras libraries in a Google Colab notebook. It combined an ImageNet-pretrained base model with several additional layers and was trained on a dataset of numerous fruit and vegetable images. After training and evaluating its accuracy, the model was exported as a TFLite file to be used by the website. Once image recognition identified which fruit was in the uploaded photo, a form we built with React.js appeared. The answer was stored in a variable and passed to the web scraping algorithm, which scraped different links based on the dietary restriction and the identified ingredient. Finally, recipes containing the title, image, ingredients, instructions, and prep time were printed on the product page.
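
The following is a minimal sketch of the kind of transfer-learning setup described above: an ImageNet-pretrained base with a small classification head, trained on an image-folder dataset and exported to TFLite. The base architecture (MobileNetV2), image size, class count, dataset path, and epoch count are assumptions for illustration, not the project's exact values.

```python
# Transfer-learning sketch, under the assumptions stated above.
import tensorflow as tf
from tensorflow import keras

IMG_SIZE = (224, 224)
NUM_CLASSES = 36  # hypothetical number of fruit/vegetable classes

train_ds = keras.utils.image_dataset_from_directory(
    "fruits_and_vegetables/train",  # hypothetical dataset directory
    image_size=IMG_SIZE,
    batch_size=32,
)

# ImageNet-pretrained base model with a small classification head on top.
base = keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
)
base.trainable = False  # keep the pretrained weights frozen

model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 preprocessing
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Export to a .tflite file so the web back end can run inference on it.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```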

Challenges we ran into

- After exporting the model, the saved TFLite file needed to be imported and used by the website, which we had trouble with.
- We were working with React and Python, so we had to convert the image from JPG to JSON to send it to the model, and then back from JSON to JPG so the model could use it, which proved to be difficult (one way to wire this up is sketched below).
- Incorporating the form for dietary restrictions was tricky: since we were using Next.js for the front end and Flask and Node.js for the back end, we could not build the form with WTForms and ended up creating it with React instead.
- We struggled to integrate the machine learning algorithm and the form into Flask, and to break the code apart and keep it in sync across each route on the site.
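
As a sketch of how the JPG-to-JSON challenge above can be handled, the front end can post the photo as a base64 string inside JSON, and a Flask route can decode it, run the TFLite interpreter, and return the predicted label. The route name, JSON field, label list, and preprocessing here are hypothetical illustrations, not our exact code.

```python
# Flask + TFLite inference sketch, under the assumptions stated above.
import base64
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
import tensorflow as tf

app = Flask(__name__)

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

LABELS = ["apple", "banana", "carrot"]  # hypothetical class labels

@app.route("/classify", methods=["POST"])
def classify():
    # The front end sends {"image": "<base64-encoded JPG>"}.
    image_b64 = request.get_json()["image"]
    image = Image.open(io.BytesIO(base64.b64decode(image_b64))).convert("RGB")

    # Resize to the model's expected input shape and scale as in training.
    _, height, width, _ = input_details[0]["shape"]
    pixels = np.asarray(image.resize((width, height)), dtype=np.float32)
    pixels = pixels / 127.5 - 1.0  # must match training-time preprocessing
    interpreter.set_tensor(input_details[0]["index"], pixels[np.newaxis, ...])
    interpreter.invoke()

    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return jsonify({"ingredient": LABELS[int(np.argmax(scores))]})
```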

Accomplishments that we're proud of

- Training and incorporating a machine learning model that classifies images, which our team hadn’t worked with before
- Building a clean, organized web app
- Incorporating image recognition alongside a much more heavily developed web scraping algorithm than we had built in the past; our previous projects were based on a single application, not multiple
- Working with new tools (React.js) for the first time
- Creating a product that could actually be commercially viable
- Working together as a team for the first time

What we learned

We learned that we have to be more organized when coding and make sure we communicate well with the rest of our team. Being more organized would help us solve small issues quickly so we could move on to more pressing ones and keep adding features to our project. We also learned that we should make a more formal plan at the beginning of the hackathon so we can execute the important features more smoothly. Overall, we should have come in with a simpler idea and built on it as we went, since cutting scope down proved to be hard for us.

What's next for Healthful

The next step for Healthful is building a mobile app so that users can take a picture directly from their phone and receive recipes in the app, turning our product into something that can quickly and conveniently recommend recipes. We could also expand the set of dietary restrictions to include common allergies, and ask the user what type of meal they want (e.g., main course, side dish). Finally, we want to retrain the model to detect multiple ingredients in a single image so we can combine several elements to better suit the user.

Built With

flask, keras, next.js, node.js, python, react, tensorflow
