Inspiration
Non-technical vibe coders are building real apps with Lovable. But because AI writes all the code, they often can't explain what they built, can't tell a developer what to change, and have no idea what their data pipeline looks like.
What it does
FlowLens ingests any project, extracts a structural skeleton of every file, and sends it to an LLM that maps the architecture: what's frontend, what's backend, and what talks to what. It also runs a security and code quality pass, so that when a vibe coder hands off to a developer, both sides know what they're working with.
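The "structural skeleton" step can be sketched in a few lines of TypeScript. This is an illustrative version, not FlowLens's actual implementation: the `extractSkeleton` name and the regex are assumptions, showing the idea of keeping only imports, exports, and signatures before the LLM call.

```typescript
// Illustrative sketch: strip a source file down to its imports, exports,
// and top-level signatures, discarding function bodies and comments.
// The regex below is a simplification chosen for this example.
function extractSkeleton(source: string): string {
  const keep =
    /^\s*(import\s|export\s|function\s|class\s|interface\s|type\s)/;
  return source
    .split("\n")
    .filter((line) => keep.test(line))
    .join("\n");
}

const file = `
import express from "express";

export function createServer(port: number) {
  const app = express();
  return app.listen(port);
}

function helper() {
  // internal detail the LLM doesn't need
}
`;

console.log(extractSkeleton(file));
```

A real extractor would use a proper parser per language, but even a rough filter like this dramatically shrinks what gets sent to the model.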
How we built it
We used Claude and IBM Bob alongside Cursor and GitHub, and also attempted to use Trae (though we were geoblocked).
Challenges we ran into
- The hardest problem wasn't the scanning, it was the output register: getting an LLM to reason about code structure is a solved problem; getting it to explain that structure to someone without technical vocabulary is not.
- Arbitrary AI-generated codebases from Lovable and Bolt don't follow conventions, which makes structural extraction messier than it looks.
- The gap between what the repo does today and what the target user actually needs forced us to be honest about what we built versus what we're building. That's a hard conversation mid-hackathon.
- Switching from a local LLM to the Claude API mid-build while keeping the pipeline intact.
Accomplishments that we're proud of
- Built a working end-to-end pipeline in hackathon time: upload a project, extract its structure, get an AI-generated map of how the pieces connect.
- Skeleton extraction before the LLM call (stripping files down to imports, exports, and signatures) means we're sending meaningful signal to the model, not noise.
- Security and code quality analysis on top of the architecture pass, so the handoff to a developer comes with context both sides can use.
- Named a real product gap, the semantic translation layer, and articulated exactly what it is and how to build it.
What we learned
We learned the value of frequent git commits, especially after a mistake, and how to integrate AI features and Structured Outputs into a program's normal flow to produce a full-stack result.
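The Structured Outputs lesson comes down to validating the model's JSON before it enters the pipeline. Here is a hedged sketch of that idea; the `ArchNode` shape and `parseArchMap` name are assumptions for illustration, not FlowLens's actual schema.

```typescript
// Expected shape of one node in the LLM's architecture map (illustrative).
interface ArchNode {
  file: string;
  layer: "frontend" | "backend";
  talksTo: string[];
}

// Parse and validate the raw LLM response instead of trusting it blindly.
function parseArchMap(raw: string): ArchNode[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error("expected an array of nodes");
  for (const node of data) {
    if (typeof node.file !== "string" || !Array.isArray(node.talksTo)) {
      throw new Error(`malformed node: ${JSON.stringify(node)}`);
    }
  }
  return data as ArchNode[];
}

// A well-formed response parses; a malformed one fails loudly instead of
// silently corrupting the map downstream.
const nodes = parseArchMap(
  '[{"file":"src/App.tsx","layer":"frontend","talksTo":["src/api.ts"]}]'
);
console.log(nodes[0].file); // → src/App.tsx
```

Failing fast on malformed output is what makes it safe to swap the model underneath (as we did when moving from a local LLM to the Claude API).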
What's next for FlowLens
- The semantic translation layer: rewriting the LLM prompt to output screens, data tables, and user flows in plain English instead of architecture nodes.
- The three-tab UI from an earlier concept (Screens, Your Data, User Flows), each rendered as plain cards a non-technical user can read and repeat.
- Zip file upload targeting Lovable and Bolt exports specifically, not just directory upload.
- Rescan on update, so every time you ship a new feature your understanding stays current.
- A shareable output, so a vibe coder can send their FlowLens map to a developer before a handoff conversation.
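To make the semantic translation layer concrete, here is one possible shape for its output: plain-English cards instead of architecture nodes. Every name here is an assumption for illustration, not a committed FlowLens schema.

```typescript
// Hypothetical card types for the three tabs (Screens, Your Data, User Flows).
interface ScreenCard {
  name: string;    // e.g. "Login page"
  purpose: string; // e.g. "Where users sign in"
}

interface DataTableCard {
  name: string;           // e.g. "Customers"
  storedFields: string[]; // e.g. ["name", "email", "signup date"]
}

interface UserFlowCard {
  name: string;
  steps: string[]; // each step written in plain English
}

interface FlowLensMap {
  screens: ScreenCard[];
  data: DataTableCard[];
  flows: UserFlowCard[];
}

// A map a non-technical user can read and repeat back to a developer.
const sample: FlowLensMap = {
  screens: [{ name: "Login page", purpose: "Where users sign in" }],
  data: [{ name: "Customers", storedFields: ["name", "email"] }],
  flows: [
    {
      name: "Sign up",
      steps: ["User clicks Sign up", "App saves their email"],
    },
  ],
};

console.log(sample.screens[0].name); // → Login page
```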
Built With
- lmstudio
- typescript