Inspiration
From food to skincare to cleaning supplies, labels are often confusing and inconsistent. For people with allergies, one mistake can lead to a serious reaction, or worse. We wanted to change that. So we asked ourselves: What if there were a way to instantly know whether something is safe for you, anytime, anywhere?
What it does
Our app helps people with allergies shop, cook, and connect safely by offering:
- Personalized profiles where users can save their allergens.
- Barcode scanning (or manual entry) for food, cosmetics, and household products.
- Custom shopping lists to organize safe products.
- Recipe generation based on ingredients from your list.
- An AI chatbot to answer questions about foods that may trigger your allergens.
- A supportive community where users can explore and share threads, TikToks, and Instagram posts tailored to their specific allergens.
How we built it
Our initial goal was to clean our data and build an XGBoost model that could predict allergens for instances in our dataset that were missing allergen information, based on the list of ingredients. Since the dataset was massive (4 million rows), we connected to its API to reference data in real time without hitting performance issues. We filtered entries to only include those from the U.S. and translated non-English ingredients (French, Spanish, German) using Deep Translator’s built-in Google Translator. To improve coverage, we merged this dataset with another Kaggle dataset that included a wider range of allergens.
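The filter-and-merge step above can be sketched in plain Python. This is a minimal illustration with toy records, not the actual pipeline: the real data came from the dataset's API, and the field names (`countries`, `code`, `allergens`) are assumptions.

```python
# Sketch of the preprocessing step: keep U.S. entries, then fill in
# missing allergen labels from a second dataset. Field names are
# assumed for illustration; the real pipeline read from the dataset's API.

def preprocess(primary, secondary):
    """Filter to U.S. products and merge allergen info from a second dataset."""
    us_entries = [r for r in primary if "United States" in r.get("countries", "")]

    # Index the secondary dataset by product code for a fast lookup merge.
    extra = {r["code"]: r for r in secondary}

    for record in us_entries:
        if not record.get("allergens"):
            match = extra.get(record["code"])
            if match:
                record["allergens"] = match.get("allergens", [])
    return us_entries

primary = [
    {"code": "001", "countries": "United States",
     "ingredients": "wheat flour, milk", "allergens": []},
    {"code": "002", "countries": "France",
     "ingredients": "farine de blé", "allergens": []},
]
secondary = [{"code": "001", "allergens": ["gluten", "milk"]}]

merged = preprocess(primary, secondary)
```

Non-U.S. rows are dropped, and the merge only fills gaps: entries that already carry allergen labels are left untouched.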
After preprocessing, we built the model. We vectorized the ingredients feature with TfidfVectorizer so the model could interpret the raw ingredient strings. Because allergens are multi-label rather than binary, we binarized them with MultiLabelBinarizer and wrapped the XGBoost classifier in a MultiOutputClassifier. Instead of a traditional train-test split, we trained on entries with known allergens and used the model to fill in entries where allergens were missing. For hyperparameter tuning, we ran a small grid (n_estimators, max_depth, learning_rate, subsample, colsample_bytree) with 3-fold stratified cross-validation via GridSearchCV. The model reached a macro-F1 of 0.76 on our validation set, which met our project goals. We saved the trained model, vectorizer, and label binarizer with joblib.
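The modeling steps above can be sketched as follows. This is a toy-sized illustration, and LogisticRegression stands in for the project's XGBClassifier so the sketch stays dependency-light; the vectorize/binarize/wrap structure is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy training data: ingredient strings and their known allergen labels.
ingredients = [
    "wheat flour, sugar, milk, eggs",
    "peanuts, salt, vegetable oil",
    "milk, cream, sugar",
    "wheat flour, water, yeast",
]
allergens = [["gluten", "milk", "eggs"], ["peanuts"], ["milk"], ["gluten"]]

# Vectorize ingredient text so the classifier can consume it.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(ingredients)

# Binarize the multi-label targets (one column per allergen).
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(allergens)

# LogisticRegression is a stand-in here for the XGBClassifier used in the project.
model = MultiOutputClassifier(LogisticRegression(max_iter=1000))
model.fit(X, Y)

# Predict allergens for an entry that is missing labels.
pred = model.predict(vectorizer.transform(["milk, wheat flour, sugar"]))
predicted_allergens = mlb.inverse_transform(pred)
```

MultiOutputClassifier fits one classifier per allergen column, which is what lets a single-output learner handle the multi-label target.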
Finally, we integrated the Gemini API into a Retrieval-Augmented Generation (RAG) app to answer queries about recipes, ingredients, and allergens. The RAG app fetched and vectorized data from the dataset's API, storing embeddings in ChromaDB. When a user asks a question, the app embeds the query, retrieves the most relevant entries using vector similarity and a cross-encoder reranker, and passes them into Gemini with a prompt template designed to reduce hallucinations. If a datapoint in ChromaDB lacks allergen labels, the app calls our XGBoost model to predict allergens from the ingredients, logging the fallback to the terminal for confirmation. We connected the RAG app to our website using FastAPI and CORS middleware, powering the "Chatbot" tab for real-time interactions.
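The retrieval step at the core of this flow can be sketched in pure Python. The toy vectors and documents below are placeholders: in the real app the embeddings come from an embedding model and live in ChromaDB, and the retrieved entries are then reranked and handed to Gemini.

```python
import math

# Toy stand-in for the embedding store; real vectors would come from an
# embedding model and be stored in ChromaDB.
documents = {
    "granola bar: oats, peanuts, honey": [0.9, 0.1, 0.2],
    "sorbet: strawberries, sugar, water": [0.1, 0.8, 0.3],
    "bread: wheat flour, water, yeast": [0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=2):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(documents, key=lambda d: cosine(query_vec, documents[d]),
                    reverse=True)
    return ranked[:k]

# A query embedding close to the "peanuts" document.
top = retrieve([0.85, 0.15, 0.25])
# In the full pipeline, `top` would next pass through the cross-encoder
# reranker and be injected into the Gemini prompt template.
```

This nearest-neighbor-by-cosine lookup is what a vector store like ChromaDB does at scale, with indexing so it doesn't have to scan every document.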
For the front end, we built the website in VS Code, writing most of it by hand in HTML. We designed all of our visuals from scratch in Canva. To power the chatbot, we integrated the Gemini API so users can ask questions about foods and allergens directly through the site.
Challenges we ran into
Connecting Frontend to Backend: Establishing the initial API connection and solving CORS errors was our first major hurdle.
API Key Security: Managing the Gemini API key on the client-side securely was a significant consideration.
Data Persistence: Evolving from temporary browser storage (localStorage) to a proper backend-driven data solution.
Live ML Inference: Ensuring our Python model could load and provide predictions quickly without slowing down the server.
Accomplishments that we're proud of
Built a Full-Stack App: We successfully created a complete application connecting a frontend, backend, and ML pipeline.
Deployed a Live ML Model: We are proud of integrating our machine learning model into a web service for real-time predictions.
Created High-Value AI Features: We rapidly developed a useful AI Chatbot and Recipe Generator using the Gemini API.
Achieved a Great User Experience: We built a smooth, responsive interface that provides instant, intelligent feedback to the user.
What we learned
Full-Stack Integration: We learned how to connect a frontend (HTML/JS) to a backend (Python/Flask), shifting from a simple webpage to a complete client-server application.
APIs are Key: We discovered that a well-defined API is critical for the frontend and backend to communicate reliably and prevent system failures.
Separating Responsibilities: This project taught us the clear difference between the frontend's role (user interaction) and the backend's role (security, databases, and running ML models).
Rapid AI Prototyping: We learned how to use the Gemini API to quickly build powerful features, like the recipe generator and chatbot, that would be impossible to create from scratch in a hackathon.
End-to-End Debugging: We developed skills in troubleshooting the entire system, from inspecting frontend network requests in the browser to checking backend server logs, to find and fix errors.
What's next for Action-Reaction
Mobile-first launch → iOS & Android apps with barcode + photo scanning
Smart partnerships → retailers, pharmacies, and brands for real-time safe product data
Beyond allergies → AI-powered wellness insights and healthcare integration
Scale globally → cover more products, languages, and regions
Built With
- bootstrap
- copilot
- css
- git
- github
- gradient-boosting
- html
- intellij-idea
- javascript
- llm
- machine-learning
- open-food-fact
- python
- rag
- vs-code
