What inspired me
As a developer who thrives on deep, focused work, I've always found the chaotic energy of offline hackathons to be a challenge. My inspiration for this project came from a desire to compete on a global stage where the code could speak for itself. I wanted to build a tool that solved a problem I knew intimately: the slow and often confusing process of understanding a new or complex React codebase.
I imagined a tool that could act as an intelligent co-pilot, instantly providing clarity and saving developers countless hours of manual code tracing. This project was my answer—an attempt to build something that would not only showcase my skills but also genuinely help the developer community that I'm a part of.
What I learned
This hackathon was an intense, transformative learning experience. I went from being a complete beginner in applied AI to deploying a full-stack application. The key things I learned were:
Professional API Integration: I learned how to securely integrate with two different major AI platforms (Groq and Hugging Face). This included the critical practice of managing secret keys using .env files and protecting them from version control with .gitignore.
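The pattern described above can be sketched in a few lines. This is a minimal, hypothetical helper, assuming python-dotenv's load_dotenv() has already been called at startup to copy .env entries into the process environment; the variable name HF_TOKEN is illustrative:

```python
import os

def get_secret(name: str) -> str:
    """Fetch a secret from the environment.

    python-dotenv's load_dotenv() (called once at startup, not shown)
    reads the local .env file into os.environ before this lookup runs.
    The .env file itself is listed in .gitignore so it never reaches
    version control.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing secret: add {name} to your .env file")
    return value
```

Failing loudly here is deliberate: a missing key surfaces immediately at startup instead of as a cryptic 401 deep inside an API call.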
Advanced Web Concepts: I discovered the difference between blocking and streaming responses. When my app froze, I had to learn and implement real-time streaming to the frontend using Streamlit's st.write_stream, which dramatically improved the user experience and made the app feel alive.
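The difference is easiest to see with a generator: instead of returning one big string after the whole analysis finishes (blocking), the backend yields chunks as they arrive and the UI renders them immediately. A minimal sketch of that pattern, with the word-by-word split and delay standing in for real network chunks:

```python
import time
from typing import Iterator

def stream_tokens(text: str, delay: float = 0.0) -> Iterator[str]:
    """Yield the response piece by piece instead of all at once."""
    for word in text.split():
        time.sleep(delay)  # simulates per-chunk network latency
        yield word + " "

# In the Streamlit app, the generator is handed straight to the UI,
# which renders each chunk as soon as it is yielded:
#   st.write_stream(stream_tokens(ai_response))
```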
Systematic Debugging: I faced a barrage of bugs, from a RecursionError in my code parser to cryptic ImportErrors and persistent API failures. I learned how to isolate problems methodically, create minimal test scripts, and read error messages to find the exact root cause.
Strategic Pivoting: My initial plan was completely derailed by a hardware limitation. I learned that being a good developer isn't just about coding, but about adapting and making smart strategic pivots—in my case, from a local model to a cloud-based API—to get the project over the finish line.
How I built this project
This project was a multi-day sprint built in distinct phases:
The Parser (The "Eyes"): The foundation of the project was building a system to understand React code. I created a hybrid solution where a Python script uses a subprocess to call a Node.js script. The Node.js script leverages the powerful @babel/parser library to convert JSX/TSX code into a machine-readable format called an Abstract Syntax Tree (AST). The Python backend then recursively walks this AST to extract key information like component names, children, props, and state.
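The two halves of that hybrid pipeline can be sketched from the Python side. The Node helper script name (parse.js) and the component-detection rule (a FunctionDeclaration whose identifier starts with a capital letter) are illustrative assumptions, not the project's exact logic:

```python
import json
import subprocess

def parse_with_babel(source_path: str) -> dict:
    """Shell out to a Node helper (hypothetical parse.js) that runs
    @babel/parser on a JSX/TSX file and prints the AST as JSON."""
    result = subprocess.run(
        ["node", "parse.js", source_path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def collect_components(node, found=None):
    """Recursively walk a Babel-style AST dict, collecting names of
    function declarations that look like React components (capitalized)."""
    if found is None:
        found = []
    if isinstance(node, dict):
        if node.get("type") == "FunctionDeclaration":
            name = (node.get("id") or {}).get("name", "")
            if name[:1].isupper():
                found.append(name)
        for value in node.values():
            collect_components(value, found)
    elif isinstance(node, list):
        for item in node:
            collect_components(item, found)
    return found
```

The same recursive walk extends naturally to props, state hooks, and parent-child edges by matching other Babel node types.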
The Brain (The AI): My initial goal was to run a gpt-oss-20b model locally. Due to a critical lack of disk space, I pivoted to a cloud-based approach. I first integrated the Groq API, but after hitting several model deprecation issues, I made a final, successful switch to the Hugging Face Inference API. This involved rewriting the connection logic to use their InferenceClient, formatting the prompt for a chat-based model, and implementing streaming.
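The rewritten connection logic roughly follows the shape below. The model name is an illustrative placeholder (the source doesn't name the final model), and the network call itself is a sketch, not the project's exact code; the key points are the chat-message formatting and stream=True:

```python
def build_messages(component_summary: str) -> list:
    """Format the parsed component data as chat messages, since the
    model expects the chat-completion format, not raw text generation."""
    return [
        {"role": "system",
         "content": "You explain React codebases to developers."},
        {"role": "user",
         "content": f"Explain this component tree:\n{component_summary}"},
    ]

def stream_explanation(component_summary: str, token: str):
    """Yield the model's reply chunk by chunk via the Hugging Face
    Inference API."""
    from huggingface_hub import InferenceClient  # imported lazily

    client = InferenceClient(token=token)
    stream = client.chat_completion(
        messages=build_messages(component_summary),
        model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
        max_tokens=512,
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta
```

Because stream_explanation is a generator, it plugs directly into st.write_stream on the frontend.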
The Face (The UI): I used Streamlit to rapidly build an interactive and user-friendly web interface. The UI features a simple file uploader for a project's .zip file, and it displays the final AI-generated explanation and the raw JSON structure side-by-side. The final version uses st.write_stream to display the AI's response in real-time.
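The one genuinely tricky part of the upload flow is turning the uploaded .zip bytes into analyzable sources. A minimal sketch, assuming the project skips node_modules and keeps only React-flavored extensions (both plausible choices, not confirmed by the source):

```python
import io
import zipfile

REACT_EXTENSIONS = (".jsx", ".tsx", ".js", ".ts")

def extract_react_sources(zip_bytes: bytes) -> dict:
    """Pull React source files out of an uploaded .zip (as Streamlit's
    st.file_uploader delivers it) into a {path: source} mapping."""
    sources = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            if name.endswith(REACT_EXTENSIONS) and "node_modules" not in name:
                sources[name] = archive.read(name).decode(
                    "utf-8", errors="replace")
    return sources

# In the app:  uploaded = st.file_uploader("Project zip", type="zip")
#              files = extract_react_sources(uploaded.getvalue())
```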
Deployment: To make the project publicly accessible, I prepared the repository with a requirements.txt file and deployed it for free using Streamlit Community Cloud, including securely adding the secret Hugging Face token to the deployment environment.
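For reference, the requirements.txt for a stack like this would look roughly as follows (the exact package set and version pins from the project aren't shown in the source, so this is an approximation):

```
streamlit
huggingface_hub
python-dotenv
```

On Streamlit Community Cloud, the Hugging Face token then goes into the app's Secrets settings rather than into the repository.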
Challenges I faced
My motto for this hackathon quickly became "I will not give up," because I faced a series of significant challenges:
The API Gauntlet: My biggest challenge was getting a live AI connection. I initially integrated the Groq API, only to be met with a series of rapidly decommissioned models. After switching to the Hugging Face API, I spent hours debugging a persistent authentication error which, after methodical testing, turned out to be caused by using the wrong API method (text_generation instead of the required chat_completion).
The Blocking Bug: At one point, my app was working in the terminal but would freeze in the browser. I diagnosed this as a blocking process where the UI would time out while waiting for the slow analysis step. This forced me to refactor my entire data flow to support real-time streaming, which was a concept I had to learn and implement under extreme time pressure.
The Parser's Infinite Loop: My AST parsing function had a bug where it would get stuck in an infinite loop due to a circular reference I had created. I had to debug the recursive logic to find and fix the RecursionError.
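The standard fix for that class of RecursionError is to track which container nodes have already been visited, so a circular reference (e.g. a child node holding a pointer back to its parent) terminates the walk instead of restarting it. A sketch of that guard, not the project's exact code:

```python
def walk_ast(node, visit, seen=None):
    """Recursively visit dict/list AST nodes, remembering the id() of
    every container already walked so a circular reference cannot send
    the recursion into an infinite loop."""
    if seen is None:
        seen = set()
    if isinstance(node, (dict, list)):
        if id(node) in seen:  # already walked: a cycle, so stop here
            return
        seen.add(id(node))
    if isinstance(node, dict):
        visit(node)
        for value in node.values():
            walk_ast(value, visit, seen)
    elif isinstance(node, list):
        for item in node:
            walk_ast(item, visit, seen)
```

The alternative fix is to avoid creating the back-reference in the first place and keep the tree strictly parent-to-child.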
Overcoming these challenges was the most rewarding part of the entire experience.
Tech Stack
- Languages: Python, JavaScript (for the parser)
- Main Framework: Streamlit
- AI / APIs: Hugging Face Inference API
- Key Libraries: python-dotenv, huggingface_hub, @babel/parser
- Platforms: GitHub, Streamlit Community Cloud
Built With
- huggingfaceapi
- javascript
- python
- streamlit