Inspiration
As developers, we often deploy applications without a clear, end-to-end understanding of how they behave in the real world. Functional bugs, exposed secrets, and security misconfigurations are usually discovered late—during demos, reviews, or even after release. We wanted a way to evaluate deployed web apps the same way real users and attackers would, without needing access to the source code. This led to the idea of LaunchLens.
While building LaunchLens, we intentionally used it to audit a real deployed application we had already built. This helped us validate whether the tool's findings were realistic and whether its recommendations could meaningfully improve a production app.
What it does
LaunchLens is an AI-powered auditing platform for deployed web applications. Given a live application URL, it simulates real user interactions to analyze functionality, security, and reliability. It generates a structured audit report including functional test results, security and privacy findings, a data breach risk scorecard, and actionable improvement suggestions.
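A report with those sections could be modeled roughly as below. This is a minimal sketch: the interface names, fields, and severity weights are our illustrative assumptions, not LaunchLens's actual schema.

```typescript
// Hypothetical shape of a LaunchLens-style audit report (names are illustrative).
type Severity = "low" | "medium" | "high";

interface Finding {
  area: "functional" | "security" | "privacy";
  title: string;
  severity: Severity;
  recommendation: string; // actionable improvement suggestion
}

interface AuditReport {
  url: string;
  findings: Finding[];
  riskScore: number; // scorecard summary: 0 (low risk) to 100 (high risk)
}

// Assumed weighting: a simple severity-weighted sum, capped at 100.
const WEIGHTS: Record<Severity, number> = { low: 5, medium: 15, high: 30 };

function buildReport(url: string, findings: Finding[]): AuditReport {
  const raw = findings.reduce((sum, f) => sum + WEIGHTS[f.severity], 0);
  return { url, findings, riskScore: Math.min(100, raw) };
}

const report = buildReport("https://example.com", [
  {
    area: "security",
    title: "Missing Content-Security-Policy header",
    severity: "medium",
    recommendation: "Add a restrictive CSP.",
  },
  {
    area: "functional",
    title: "Signup form accepts empty email",
    severity: "high",
    recommendation: "Validate email server-side.",
  },
]);

console.log(report.riskScore); // 45
```

Collapsing findings into a single capped score keeps the scorecard readable even when an audit surfaces many low-severity issues.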
LaunchLens also produces an automated demo blueprint and an AI-generated voiceover script, helping developers review and present their applications more clearly and professionally.
How we built it
LaunchLens is built as a modular web platform that treats deployed applications as black boxes. The backend orchestrates URL-based crawling and simulated user flows, while Gemini 3 is used as a reasoning engine to interpret application behavior, generate dynamic test cases, explain security risks, and structure demo narratives.
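In a black-box setup like this, one observable signal source is the target's HTTP response headers. The sketch below shows the idea under our own assumptions: the function names and the specific header checks are illustrative, and the model call is only described in a comment rather than implemented.

```typescript
// Illustrative black-box signal extraction. In the real pipeline, simulated
// user flows produce additional signals, and the combined evidence is
// summarized in a prompt for the reasoning model.
interface Signal {
  check: string;
  passed: boolean;
  note: string;
}

// Security headers whose absence is a common misconfiguration signal.
const EXPECTED_HEADERS = [
  "content-security-policy",
  "strict-transport-security",
  "x-content-type-options",
];

function headerSignals(headers: Record<string, string>): Signal[] {
  const present = new Set(Object.keys(headers).map((k) => k.toLowerCase()));
  return EXPECTED_HEADERS.map((h) => ({
    check: `header:${h}`,
    passed: present.has(h),
    note: present.has(h) ? "present" : `missing ${h} (indicative, not definitive)`,
  }));
}

// In production these headers would come from fetching the live URL, e.g.
//   const res = await fetch(targetUrl);
//   headerSignals(Object.fromEntries(res.headers));
const signals = headerSignals({
  "Content-Security-Policy": "default-src 'self'",
  "X-Powered-By": "Express",
});

console.log(signals.filter((s) => !s.passed).length); // 2
```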
We also used LaunchLens as part of its own development loop. A real deployed application was audited with LaunchLens, and the generated QA findings and security recommendations were then applied back to that app through Google AI Studio. This iterative feedback process helped refine both the audited application and the clarity and usefulness of LaunchLens's outputs.
Challenges we ran into
A major challenge was maintaining accuracy without access to source code. Since LaunchLens relies only on observable behavior and configuration signals, findings needed to be framed as indicative rather than definitive. Another challenge was balancing depth of analysis with clarity, ensuring the results were useful to developers without overwhelming them with raw security data.
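One way to enforce that indicative-not-definitive framing programmatically is to attach hedged wording based on how directly each behavior was observed. The evidence tiers below are an assumption for illustration, not LaunchLens's actual taxonomy.

```typescript
// Hypothetical evidence tiers for black-box findings (illustrative).
type Evidence = "observed" | "inferred" | "heuristic";

const FRAMING: Record<Evidence, string> = {
  observed: "Observed directly in application responses",
  inferred: "Inferred from configuration signals; verify against source",
  heuristic: "Heuristic indication only; may be a false positive",
};

// Attach the hedge to the finding title so the report can never
// present a weak signal as a confirmed vulnerability.
function frameFinding(title: string, evidence: Evidence): string {
  return `${title} — ${FRAMING[evidence]}.`;
}

console.log(frameFinding("Verbose stack trace on 500 errors", "observed"));
```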
Avoiding feature overclaiming—especially around security—while still providing meaningful insights was an important design consideration.
Accomplishments that we're proud of
- Auditing a real deployed application using only its public URL
- Generating explainable QA and security findings instead of opaque flags
- Building a clear risk scorecard that summarizes complex issues
- Creating a demo blueprint and voiceover system that turns audits into pitch-ready narratives
- Using LaunchLens in a real feedback loop to audit, improve, and re-evaluate a deployed application using AI-generated recommendations
What we learned
We learned that large language models are most powerful when used as reasoning and interpretation engines rather than simple text generators. Using Gemini 3 to explain risks, structure insights, and guide improvements made the outputs significantly more useful. We also learned that AI tools are most effective when used iteratively—as reviewers and collaborators—rather than as one-time generators.
What's next for LaunchLens
Next, we plan to expand LaunchLens with automated browser-based demo recording, CI/CD integration for continuous audits, deeper security checks, and exportable reports. We also aim to support multiple audited applications per user and provide longitudinal risk tracking across deployments.
Built With
- aistudio
- gemini3
- gemini3api
- next.js
- node.js
- react
- typescript