FoodSnap
FoodSnap is a mobile application that uses image recognition to automatically log and store your food, generate recipes from it, and remind you when food is about to go bad, in order to reduce food waste.
FoodSnap is powered by:
- Python: the core programming language used for the image recognition algorithms and backend server
- PyTorch: an open-source machine learning framework used for training the image recognition models
- Flask: a lightweight web framework used for building the backend server (see the sketch after this list)
- React Native: a popular JavaScript framework used for building the frontend user interface
- Redux: a predictable state container for managing the frontend application state
- AWS EC2: the cloud platform used for hosting the FoodSnap servers
- Docker: a platform for developing, shipping, and running applications in containers, giving us a consistent, portable environment across machines
- AWS S3: object storage for users' food photos and profile pictures
- NGINX: a reverse proxy that funnels incoming client requests and routes them to the correct specialized backend service
- MongoDB: the NoSQL database we use to store user data, recipes, passwords, etc.
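As a rough illustration of how these pieces fit together, here is a minimal sketch of an upload endpoint in the style of our stack. The route, bucket name, and collection names are hypothetical stand-ins, not our exact production code.

```python
import uuid

import boto3
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
s3 = boto3.client("s3")
db = MongoClient("mongodb://localhost:27017")["foodsnap"]  # placeholder URI


@app.route("/api/upload", methods=["POST"])
def upload_photo():
    # Photo sent as multipart form data from the React Native client
    image = request.files["image"]
    key = f"uploads/{uuid.uuid4()}.jpg"
    # Persist the raw photo in S3; the bucket name is a placeholder
    s3.upload_fileobj(image, "foodsnap-images", key)
    # Record the upload so generated recipes can be linked back to it
    db.uploads.insert_one({"user": request.form.get("user"), "s3_key": key})
    return jsonify({"s3_key": key})
```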
Inspiration
Our inspiration for this project was to create an application that supports the effort to reduce food waste by simplifying the process of coming up with recipes from leftovers and whatever food you have lying around. Nearly 40% of food across the U.S. ends up in landfills. Food waste has harmful implications: wasted food also means wasted natural resources, so reducing it contributes to reducing our carbon footprint. This app makes it easier for people who struggle to know what to do with the leftover ingredients in their fridge after a week of cooking, and it offers a way to be environmentally conscious without overcomplicating the process.
What it does
FoodSnap uses machine learning to generate recipes from a user's photos of the food they want to cook with. When users open the app, they can tap a button to open the camera and take a picture of their food. The app identifies the food in the picture and generates recipes, which the user swipes through to choose one they like: swipe left to dismiss a recipe, swipe right to keep it. Chosen recipes are saved to the dashboard so you can cook them later.
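As a simplified sketch of the matching step (our actual recipe generation is more involved), recipes that use the most photographed ingredients could be ranked first. The function and sample data below are illustrative only.

```python
def rank_recipes(detected, recipes):
    """Order candidate recipes by how many detected ingredients they use."""
    detected = set(detected)
    scored = [(len(detected & set(r["ingredients"])), r) for r in recipes]
    # Surface recipes that use the most photographed ingredients first
    return [r for score, r in sorted(scored, key=lambda p: -p[0]) if score > 0]


recipes = [
    {"name": "Tomato omelette", "ingredients": ["egg", "tomato", "onion"]},
    {"name": "Fruit salad", "ingredients": ["apple", "banana"]},
]
print(rank_recipes(["egg", "tomato"], recipes))  # Tomato omelette ranks first
```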
How we built it
We used React Native to build a dynamic, cross-platform frontend that delivers a consistent user experience across devices. On the backend, MongoDB handles data storage and retrieval; PyTorch powers the machine learning, specifically real-time object detection with YOLOv8; and Flask provides the application logic and APIs. Docker lets us package and distribute the application consistently, while NGINX sits in front as a reverse proxy, handling content delivery and load balancing. Images are stored in Amazon S3, and the backend runs on Amazon EC2 for reliable, scalable hosting.
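For a flavor of the detection step, here is a minimal sketch using the Ultralytics YOLOv8 API. The pretrained checkpoint and image file name are placeholders; our trained weights and food classes differ.

```python
from ultralytics import YOLO

# Pretrained checkpoint as a stand-in; we fine-tuned our own model on food images
model = YOLO("yolov8n.pt")
results = model("fridge_photo.jpg")  # run inference on one user photo

# Collect the distinct class labels detected in the image
found = {results[0].names[int(box.cls)] for box in results[0].boxes}
print(found)  # e.g. {'apple', 'banana', 'broccoli'}
```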
Challenges we ran into
One of the core pieces of our application is a machine learning model that recognizes food in the user's photos. We knew it would be tricky to train a model that accurately detects ingredients across all kinds of pictures, so finishing it in time, and getting it accurate enough to produce recipes from the photographed ingredients, was a real challenge. Integrating the application was also difficult: the frontend was developed without database access because the database was being built in parallel, so wiring the two together and making the components dynamic was stressful.
Accomplishments that we're proud of
We successfully trained an image recognition model to recognize certain foods in a picture and used the detected foods to generate recipes for the user. We also worked well together, had a lot of fun, and enjoyed the time we spent as a team :)
What we learned
We gained valuable experience deploying our backend code on Amazon EC2, marking our first foray into cloud-based production deployment. During this process, we learned about configuring security rules to protect our resources and effectively serving clients, enabling us to achieve high throughput and availability.
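As an example of the kind of security rule involved, opening HTTPS to the world while keeping everything else closed can be expressed with boto3 as below. This is an illustration of the concept, not our exact setup; the group ID and CIDR are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {
            # Allow HTTPS traffic from anywhere to reach NGINX on the instance
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```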
What's next for Food Snap
There were elements of the application we wanted to include but didn't get the chance to. First, we wanted touch-free swiping: the camera would recognize hand movements and swipe through the recipe instructions and ingredients in response (a rough sketch of the idea appears below). This would be genuinely user-friendly, since people using the app while cooking will often have dirty hands, and it would save them from touching their phone. We also wanted to put more emphasis on food-waste reduction itself, tracking each user's progress as they continue to enjoy recipes from the application and giving them insight into their growing contribution to reducing their carbon footprint.
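A rough sketch of the touch-free swipe idea, assuming MediaPipe Hands and OpenCV (neither is part of the current app): track the wrist's horizontal position between camera frames and treat a large jump as a swipe.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)  # front-facing camera
prev_x = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB frames; OpenCV captures BGR
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # Landmark 0 is the wrist; x is normalized to [0, 1]
        x = result.multi_hand_landmarks[0].landmark[0].x
        if prev_x is not None and abs(x - prev_x) > 0.25:
            print("swipe right" if x > prev_x else "swipe left")
        prev_x = x
cap.release()
```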