Inspiration
When something breaks, understanding why it failed is often harder than fixing it. Logs, screenshots, and error messages live in different places, forcing people to context-switch and manually re-explain failures to external tools. This project was inspired by the idea of reducing that friction by letting AI reason over all failure signals at once.
What we built
We built a lightweight demo that accepts application error logs and a screenshot of the failing UI. Using Gemini’s multimodal reasoning, the system analyzes both inputs together to infer the most likely root cause, explain it in plain language, and suggest actionable next steps.
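As a rough sketch of how the two inputs could reach the model together, the snippet below builds the multimodal request parts in the shape the `@google/generative-ai` SDK expects. The `buildFailureParts` helper is a hypothetical illustration, not the project's actual code; the SDK call itself is shown in comments since it needs an API key.

```typescript
// Shapes matching the content-part types used by the @google/generative-ai SDK.
type TextPart = { text: string };
type ImagePart = { inlineData: { data: string; mimeType: string } };

// Hypothetical helper: pair the raw error log with the UI screenshot
// so the model can reason over both failure signals in a single request.
function buildFailureParts(
  log: string,
  screenshotBase64: string
): Array<TextPart | ImagePart> {
  return [
    {
      text:
        "You are a debugging assistant. Given the error log and a screenshot " +
        "of the failing UI, infer the most likely root cause.\n\nLog:\n" + log,
    },
    { inlineData: { data: screenshotBase64, mimeType: "image/png" } },
  ];
}

// Actual call (requires the SDK installed and GEMINI_API_KEY set):
// const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
// const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
// const result = await model.generateContent(buildFailureParts(log, b64));
```

Sending both signals in one request is what lets the model cross-reference the stack trace against what the user actually saw on screen.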
How It Was Built
The project uses a minimal web interface connected to the Gemini API, with the model acting as the core reasoning engine rather than a conversational chatbot. Outputs are structured around cause, explanation, and action to keep responses clear and practical.
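One way to keep responses in that cause / explanation / action structure is to ask the model for JSON and validate it before rendering. The sketch below is an assumption about how that might look, not the project's actual code; the `Diagnosis` type and `parseDiagnosis` helper are hypothetical names.

```typescript
// Hypothetical structured-output shape the prompt asks the model to return.
interface Diagnosis {
  cause: string;       // one-line root-cause hypothesis
  explanation: string; // plain-language reasoning
  action: string;      // suggested next step
}

// Models often wrap JSON replies in markdown fences; strip them before
// parsing, then check that every required field is a non-empty string.
function parseDiagnosis(reply: string): Diagnosis {
  const stripped = reply.replace(/^```(?:json)?\s*|\s*```$/g, "").trim();
  const obj = JSON.parse(stripped);
  for (const key of ["cause", "explanation", "action"] as const) {
    if (typeof obj[key] !== "string" || obj[key].length === 0) {
      throw new Error(`model reply missing field: ${key}`);
    }
  }
  return obj as Diagnosis;
}
```

Validating up front means the UI can render three fixed sections instead of dumping free-form chat text on the user.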
Challenges
The main challenge was avoiding a generic prompt-wrapper design: the model's output had to be shaped into a consistent diagnostic format rather than free-form chat text.
Accomplishments that we're proud of
By embedding reasoning directly at the moment of failure and combining text and visual context, the tool turns raw errors into immediate understanding.
What we learned
This project reinforced that the real value of AI comes from reducing workflow friction, not just generating text.
What's next for Explain my failure
- Full Gemini 3 integration: once wider access is available, enable advanced multimodal reasoning to handle more complex logs and UI inputs.
- Expanded input types: support video snippets, system metrics, and configuration files to give Gemini richer context.
Built With
- Languages: TypeScript, CSS
- Framework: React (v19)
- Styling: Tailwind CSS (v4)
- Animation/icons: Framer Motion, lucide-react
- Utilities: clsx, tailwind-merge
- Runtime: Node.js
- Build tool: Vite (v7)
- AI/APIs: Google Gemini API (gemini-1.5-flash) via google/generative-ai-sdk