Inspiration
Since childhood, I have felt a deep-seated resistance to discarding items deemed "waste." To me, scrap isn't just garbage; it's dormant potential. I've always believed that turning something decaying into something new signifies rebirth and transformation. This philosophy of sustainability and creative engineering is my core driver; I find beauty and utility where others see only clutter. Beyond the environmental impact, I wanted to create something truly fun—a way to bridge the gap between scrapped parts and family-friendly activities that anyone can enjoy.
What it does
JunkGenie transforms random household items into imaginative DIY crafts. By simply taking a photo, the app scans the user's environment to identify materials and suggests a curated list of possible upcycling projects. Once a user selects their favorite "magic" transformation, JunkGenie generates a comprehensive, step-by-step instruction manual complete with AI-generated visual aids to guide the creation process.
How we built it
We developed JunkGenie as a bridge between computer vision and generative creativity. Our technical architecture follows a precise pipeline (a simplified sketch follows the list):
Frontend: Built with React and Tailwind CSS to create a responsive, "magical" user experience.
Backend: A Python-based server orchestrates the AI logic and API communication.
Visual Perception: We integrated the Google Cloud Vision API to act as the "eyes" of the Genie, identifying textures, shapes, and materials.
Creative Computation: Using the high-order reasoning of Gemma 4, we developed a logic engine that converts raw object metadata into feasible DIY blueprints.
Image Generation: We utilized Imagen to provide visual context for each assembly step.
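A minimal sketch of the scan-to-suggestion flow, assuming the google-cloud-vision client library; the `detect_materials` and `build_blueprint_prompt` helpers and the prompt wording are illustrative, not our exact backend code:

```python
from google.cloud import vision


def detect_materials(photo_path: str) -> list[str]:
    """Run Cloud Vision label detection and return the detected material names."""
    client = vision.ImageAnnotatorClient()
    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]


def build_blueprint_prompt(materials: list[str]) -> str:
    """Turn raw Vision labels into the prompt our logic engine sends to the model."""
    return (
        "You are a DIY upcycling assistant. Given these household materials: "
        + ", ".join(materials)
        + ", propose 3 family-friendly craft projects as numbered, step-by-step blueprints."
    )


if __name__ == "__main__":
    materials = detect_materials("scrap_photo.jpg")
    prompt = build_blueprint_prompt(materials)
    # The backend then sends this prompt to the reasoning model and forwards
    # each resulting step to Imagen for a visual aid.
    print(prompt)
```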
Challenges we ran into
We ran into a lot of trouble fine-tuning the image generation, trying different methods to get the right output. We first tried running all image generation in parallel, but it proved too difficult to have the model output related step-by-step images. We tried giving Imagen the final result, a style guide (text and image), only one step, and all of the steps, but in the end we were simply adding too much context.
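To make the "too much context" point concrete, here is roughly the direction we trimmed toward: a per-step prompt that carries only a one-line project summary and the current step. The helper name and wording are illustrative, not the exact prompt we shipped.

```python
def step_image_prompt(project_summary: str, step_text: str) -> str:
    """Build a minimal per-step image prompt; extra context (the full step list,
    style-guide images) tended to make outputs drift between steps."""
    return (
        f"Simple illustrated craft instruction. Project: {project_summary}. "
        f"Show only this step: {step_text}. Plain background, consistent style."
    )
```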
Accomplishments that we're proud of
Successful Integration: Seamlessly bridging the gap between raw computer vision data and structured, creative storytelling.
The "Bubble UI": Creating a visually appealing interface that symbolizes ideas being "summoned" from the scrap.
Family-Centric Design: Making a complex AI pipeline feel like a friendly, accessible tool for users of all ages.
What we learned
Beyond the code, this project taught us the power of Generative Sustainability. We learned that with the right AI orchestration, we can democratize engineering. We discovered that sometimes "less is more" in prompt engineering and that the best innovation happens when technology serves a personal, environmental mission.
What's next for JunkGenie
For the future of JunkGenie, we plan to develop:
The "Genie-Chat" Assistant: We plan to integrate a real-time AI chatbot. This will allow users to ask specific questions mid-build—like "What can I use if I don't have a hot glue gun?"—making the DIY process more interactive and accessible.
Parallel Computing for Speed: To solve our current latency hurdles, we want to explore parallel computing architectures. By distributing the generation of instruction steps and image assets across multiple nodes, we can deliver "Instant Blueprints" without the wait (see the sketch after this list).
Native Mobile Build: While the web app is a great start, the ultimate goal is a cross-platform mobile app (iOS/Android). This will allow for a more seamless "Scan-and-Build" experience, utilizing native camera features and AR overlays to show you exactly where to place each piece of scrap.
Community Marketplace: A space for families to share their finished masterpieces, fostering a global culture of upcycling and creative rebirth.
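As a rough illustration of the parallel generation we want to explore, the sketch below fans per-step image requests out concurrently with asyncio; `generate_step_image` is a hypothetical stand-in for the real Imagen call.

```python
import asyncio


async def generate_step_image(step: str) -> bytes:
    """Placeholder for the real Imagen request; simulated here with a short sleep."""
    await asyncio.sleep(0.1)
    return f"image-bytes-for:{step}".encode()


async def generate_all_steps(steps: list[str]) -> list[bytes]:
    """Fan the per-step image requests out concurrently instead of serially."""
    return await asyncio.gather(*(generate_step_image(s) for s in steps))


if __name__ == "__main__":
    steps = ["Cut the bottle in half", "Paint the base", "Glue on the cap wheels"]
    images = asyncio.run(generate_all_steps(steps))
    print(len(images), "step images generated")
```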