Inspiration
Mealchemy started from a very real problem in my day-to-day life: cooking with my fiancé and ending up with leftover ingredients we did not know how to use. Grocery items often come in larger quantities than a single recipe needs, so we would use part of something for one meal and then let the rest sit in the fridge while we figured out what to do with it. I thought it would be fun and genuinely useful to take a picture of those leftover ingredients and generate new recipes from them, especially across different cuisines, instead of being locked into the ingredient's original purpose.
That idea became the heart of the project and the name: Mealchemy. It is about transforming random leftovers into something new, creative, and practical. Like alchemy, the goal is to turn something overlooked into something valuable: better meals, less waste, and more inspiration in the kitchen.
What it does
Mealchemy is an app that helps people reduce food waste by turning a fridge photo into personalized recipe ideas. A user uploads a photo of their fridge or leftover ingredients, and the app identifies what is visible, converts it into an editable pantry list, and generates cuisine-aware recipes based on those ingredients. The user can adjust ingredients manually, choose cuisine preferences, add a health angle, and get recipe suggestions that prioritize using what they already have. The app also includes shopping and waste-reduction tips to help stretch ingredients further.
How I built it
Mealchemy was built as a full-stack hackathon project using a shared TypeScript codebase. The frontend is an Expo app built with React Native so the same app can run on Android and web. On the backend, I used a Cloudflare Worker to handle API requests securely and keep the OpenAI API key off the client.
The app flow starts with image upload. On the client side, I added image compression so photos stay small enough to send efficiently without breaking requests. Once the image is submitted, the backend sends it to OpenAI for ingredient detection. The response is validated and shaped into a structured pantry format using shared schemas, so the app can reliably display editable ingredients instead of dealing with messy raw model output.
From there, the user can correct quantities, remove bad guesses, or add anything the model missed. After that, they choose cuisine preferences and health goals, and the app sends the pantry data back to the backend for recipe generation. Prompts guide the model to prioritize perishable ingredients, keep missing ingredients modest, and return responses in a format the app can ingest cleanly.
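That prompt assembly can be sketched as a small function over the pantry data. The wording, function name, and fields below are hypothetical, not the exact prompts the app sends:

```typescript
// Hypothetical prompt builder; the production prompts differ.
interface PantryItem {
  name: string;
  perishable?: boolean;
}

export function buildRecipePrompt(
  items: PantryItem[],
  cuisines: string[],
  healthGoal?: string
): string {
  const perishable = items.filter((i) => i.perishable).map((i) => i.name);
  const all = items.map((i) => i.name);
  return [
    `You are a recipe assistant. Ingredients on hand: ${all.join(", ")}.`,
    perishable.length
      ? `Prioritize using these perishable items first: ${perishable.join(", ")}.`
      : "",
    `Preferred cuisines: ${cuisines.join(", ") || "any"}.`,
    healthGoal ? `Health goal: ${healthGoal}.` : "",
    "Keep additional (missing) ingredients to a minimum.",
    "Respond ONLY with JSON matching the agreed recipe schema.",
  ]
    .filter(Boolean)
    .join("\n");
}
```

Pairing an explicit output-format instruction like the last line with schema validation on the response is what keeps the round trip predictable.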
To keep the project production-minded even as an MVP, I also added API hardening measures like rate limiting, restricted CORS, and payload checks. I registered a domain through Porkbun (ty MLH!), used Wrangler with Cloudflare Workers to manage deployment and secrets, and worked through the process of moving domain and DNS management into Cloudflare.
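One of those hardening measures, rate limiting, can be sketched as a fixed-window limiter keyed by client. This is an illustrative standalone version with made-up limits, not the Worker's actual middleware:

```typescript
// Illustrative fixed-window rate limiter; limit and window values are made up.
export class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(
    private limit = 10, // max requests per window
    private windowMs = 60_000 // window length in milliseconds
  ) {}

  // Returns true if the request is allowed, false if rate-limited.
  allow(clientKey: string, now = Date.now()): boolean {
    const entry = this.hits.get(clientKey);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New client or expired window: start a fresh window.
      this.hits.set(clientKey, { count: 1, windowStart: now });
      return true;
    }
    if (entry.count >= this.limit) return false;
    entry.count += 1;
    return true;
  }
}
```

In a Worker, the key would typically come from the `CF-Connecting-IP` header, and a shared store would stand in for the in-memory map, since Worker isolates do not share state.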
Challenges I ran into
The biggest challenges were not just in building features, but in getting all the moving pieces to work together smoothly.
A lot of friction came from setup and integration: configuring Expo for Android, dealing with emulator and prebuild issues, keeping image uploads within safe size limits, and making sure the app could talk to the backend without CORS problems. Another challenge was making AI output predictable enough for the UI, which meant iterating on prompts and validating responses carefully so they could be parsed into a reliable structure.
Deployment and infrastructure also took time. Managing secrets safely, setting up Cloudflare Workers, connecting a custom domain, and sorting out DNS and domain migration details added a layer of complexity beyond the core product itself. Some Android-specific setup work was started but mostly scrapped so I could focus on the core experience on the web.
Accomplishments that I'm proud of and What I learned
This project taught me a lot across both frontend and backend development.
I got hands-on experience with Zod and schema validation, especially how useful strict schemas are when working with AI-generated responses. I spent time learning more about Expo and React Native, and how a single codebase can support both Android and web. I also got deeper into TypeScript and React quirks, including state handling, component flow, and the little syntax gotchas that show up when moving quickly.
On the backend side, I learned more about API hardening, including rate limiting and restricted CORS, along with client-side image compression strategies. I also learned the deployment side of things: registering a domain with Porkbun, using Cloudflare Workers and Wrangler to hide API keys, and moving domain configuration into Cloudflare. A big learning area was prompt engineering too, especially writing prompts that produce structured, useful outputs from OpenAI instead of vague text blobs.
I also explored getting everything working on Android, although parts of that were ultimately deprioritized because of time constraints.
What's next for Mealchemy
- Finishing and polishing the Android experience
- Allowing multiple image upload
- Improving ingredient detection accuracy
- Adding user accounts and saved pantry history
- Tracking ingredient freshness and prioritizing items closer to expiring
Built With
- cloudflare
- cloudflareworkers
- expo-router
- expo.io
- hono
- openai
- pnpm
- porkbun
- react-native
- typescript
- wrangler
- zod