Inspiration

We were inspired by how much food college students waste and by how underutilized pantry produce is. When students receive items like eggs or green onions, it isn't always obvious what they can make with them. Without clear ideas or guidance, fresh ingredients often go unused. We wanted to bridge the gap between food access and food utilization by using technology to help students turn simple pantry items into practical, nutritious meals.

What it does

PicAPlate is a computer vision-powered web app that scans produce and instantly identifies the items. It generates simple, affordable recipes using the ingredients available and highlights any missing ingredients needed to complete the dish. This helps students cook confidently while reducing food waste.
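The matching step described above can be sketched in plain JavaScript: given the produce the model detected, rank recipes by how complete they already are and list what is still missing. The recipe data and function name here are hypothetical, not PicAPlate's actual schema.

```javascript
// Hypothetical recipe data standing in for the app's recipe database.
const recipes = [
  { name: 'Egg Fried Rice', ingredients: ['eggs', 'rice', 'green onions', 'soy sauce'] },
  { name: 'Veggie Omelette', ingredients: ['eggs', 'bell pepper', 'onion'] },
];

// Given detected produce, return each recipe with its missing ingredients,
// ordered so the most-complete recipes (fewest missing items) come first.
function suggestRecipes(detected, recipeList = recipes) {
  const have = new Set(detected.map((i) => i.toLowerCase()));
  return recipeList
    .map((r) => ({
      name: r.name,
      missing: r.ingredients.filter((i) => !have.has(i)),
    }))
    .sort((a, b) => a.missing.length - b.missing.length);
}

// Example: scanning eggs, rice, green onions, and soy sauce would surface
// Egg Fried Rice first, with nothing missing.
const suggestions = suggestRecipes(['eggs', 'green onions', 'rice', 'soy sauce']);
```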

How we built it

- Frontend: Built with React and JavaScript, with CSS for a clean, responsive user interface that allows users to scan produce and view recipe suggestions easily.
- Backend: Node.js and Express for API handling, with CORS enabled for cross-origin communication between frontend and backend services.
- Database: MongoDB for storing recipes, ingredient data, and user interactions.
- AI Model Training: Trained using Google Colab with labeled image datasets to accurately detect and classify different types of produce.
- Computer Vision Layer: Image recognition model that identifies scanned produce and matches detected ingredients with relevant recipes.
- Deployment: Web-based application designed for easy access across devices.

Challenges we ran into

One of the biggest challenges we faced was training the AI model, especially since it was our first time using Google Colab. There was a learning curve in understanding how to properly label data, train the model, and improve its accuracy. Another major challenge came during integration. After connecting the model to our website, the scanning feature initially did not work properly. We had to troubleshoot issues related to image processing, model deployment, and frontend-backend communication before we could get real-time detection functioning smoothly.

Accomplishments that we're proud of

Despite it being our first time training an AI model, we were able to develop a working produce detection system and integrate it into a functional web application. We also successfully connected our frontend, backend, and database to generate relevant recipes based on scanned ingredients.

What we learned

Through building PicAPlate, we learned how to train and fine-tune a computer vision model using Google Colab, as well as how to properly label and structure datasets. We gained hands-on experience integrating AI into a full-stack web application and troubleshooting real-world deployment issues.

What's next for PicAPlate

Next, we plan to improve our model’s accuracy and expand our produce recognition dataset. We also want to grow our recipe database, add personalization features, and implement dietary preference filters. In the future, we hope to partner directly with the UC Davis Pantry to pilot PicAPlate on campus and scale the platform to support other universities facing similar food insecurity challenges.
