Inspiration
Webloom AI grew out of a desire to democratize web development by removing the need to write code. With the rapid evolution of generative AI and natural language models, we saw the potential to bridge the gap between human intent and software creation. The goal was to let anyone — regardless of technical background — describe a website in plain language and have it fully generated, editable, and deployable within minutes.
What it does
Webloom AI is a full-stack SaaS platform that enables users to build, preview, edit, and deploy fully functional websites using only natural language prompts. Users describe the type of site they want, and the platform generates production-ready code using LLM agents. It features a real-time code editor, live preview, one-click deployment, code export, and a pay-as-you-go token-based billing system.
How we built it
We used React and TypeScript for the frontend, creating a dynamic and responsive UI with TailwindCSS. The backend was developed using Python and FastAPI, where we built APIs to process user prompts and interact with LangChain-powered agents and the OpenAI API. Convex was used as a serverless backend database, while Firebase handled authentication. Stripe was integrated to support token-based payments. The entire platform was deployed using Vercel and Render, with GitHub Actions enabling automated CI/CD.
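At its core, the backend turns a plain-language prompt into site code and tracks usage for billing. A minimal sketch of that pipeline, with the LLM call stubbed so it runs offline (the function names, the `GenerationResult` shape, and the word-count token estimate are illustrative, not Webloom AI's actual API):

```python
from dataclasses import dataclass


@dataclass
class GenerationResult:
    html: str
    tokens_used: int


def call_llm(prompt: str) -> str:
    # Stub standing in for the real LangChain/OpenAI call; returns a
    # trivial page so the pipeline can be exercised without API keys.
    return f"<html><body><h1>{prompt}</h1></body></html>"


def generate_site(prompt: str) -> GenerationResult:
    """Turn a plain-language prompt into site code (illustrative)."""
    prompt = prompt.strip()
    if not prompt:
        raise ValueError("empty prompt")
    html = call_llm(prompt)
    # Crude token estimate; real billing would use the provider's count.
    return GenerationResult(html=html, tokens_used=len(prompt.split()))
```

In the real service this logic sits behind an async FastAPI route, with Convex persisting the result and the token charge.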
Challenges we ran into
- Ensuring LLM outputs were safe, consistent, and syntactically correct for frontend code rendering
- Handling unpredictable prompt responses and building fallback mechanisms
- Managing real-time performance in the embedded code editor and preview environment
- Integrating secure, rate-limited access to external APIs while keeping latency low
- Designing a modular system that could scale for future use cases
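The fallback mechanism mentioned above can be sketched as a well-formedness check on generated markup, retrying the model and finally serving a safe static page. This is a stdlib-only approximation (the checker, retry count, and fallback page are hypothetical, not our production implementation):

```python
from html.parser import HTMLParser


class _TagChecker(HTMLParser):
    """Tracks open tags to detect unbalanced generated markup."""
    VOID = {"br", "img", "hr", "meta", "input", "link"}

    def __init__(self):
        super().__init__()
        self.stack = []
        self.balanced = True

    def handle_starttag(self, tag, attrs):
        if tag not in self.VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if not self.stack or self.stack.pop() != tag:
            self.balanced = False


def is_well_formed(html: str) -> bool:
    checker = _TagChecker()
    checker.feed(html)
    return checker.balanced and not checker.stack


FALLBACK_PAGE = "<html><body><p>Generation failed.</p></body></html>"


def generate_with_fallback(generate, prompt: str, retries: int = 2) -> str:
    """Call an LLM-backed generate(prompt); fall back on bad output."""
    for _ in range(retries + 1):
        html = generate(prompt)
        if is_well_formed(html):
            return html
    return FALLBACK_PAGE
```

A real validator would also sanitize scripts and attributes before anything reaches the live preview; balance-checking alone only catches structural breakage.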
Accomplishments that we're proud of
- Successfully built a fully functional AI website generator from scratch
- Integrated live editing, preview, and deployment features in one seamless workflow
- Designed and deployed a scalable, stateless backend API that reduced response latency by 35%
- Enabled non-technical users to create websites entirely through natural language
- Developed a token-based billing and usage system with secure API authentication
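The pay-as-you-go token system boils down to a per-user ledger: Stripe payments credit tokens, and each generation request debits them. A simplest-possible in-memory sketch (class and method names are illustrative; production code would persist balances and verify Stripe webhook signatures):

```python
class InsufficientTokens(Exception):
    pass


class TokenLedger:
    """In-memory pay-as-you-go token accounting (illustrative only)."""

    def __init__(self):
        self._balances: dict[str, int] = {}

    def top_up(self, user_id: str, tokens: int) -> None:
        # In production this is driven by a verified Stripe payment event.
        self._balances[user_id] = self._balances.get(user_id, 0) + tokens

    def charge(self, user_id: str, tokens: int) -> int:
        """Deduct tokens for a generation request; return remaining balance."""
        balance = self._balances.get(user_id, 0)
        if balance < tokens:
            raise InsufficientTokens(f"{user_id}: have {balance}, need {tokens}")
        self._balances[user_id] = balance - tokens
        return self._balances[user_id]
```

Raising on insufficient balance (rather than allowing negative balances) keeps the check-and-debit atomic from the caller's point of view.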
What we learned
- Building real-world LLM-based tools requires deep control over prompts, state management, and fallback logic
- FastAPI’s asynchronous capabilities are powerful when combined with modern frontend frameworks for real-time use cases
- LangChain agents can be orchestrated effectively for structured output generation, but tuning is critical
- System architecture needs to account for unpredictable AI behavior, especially in a user-facing application
- User experience in AI-generated platforms depends heavily on performance, fallback logic, and clarity of interface
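The point about async comes down to awaiting slow model calls concurrently instead of serially, which is what keeps an async FastAPI route responsive under load. A stdlib-only illustration, with a short sleep standing in for the model call (timings are for the sketch, not measured production numbers):

```python
import asyncio
import time


async def fake_llm_call(prompt: str) -> str:
    # Stands in for an awaited OpenAI/LangChain request.
    await asyncio.sleep(0.2)
    return f"response to {prompt!r}"


async def handle_batch(prompts: list[str]) -> list[str]:
    # gather() runs all calls concurrently; while they wait on I/O,
    # the event loop is free to serve other requests.
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))


start = time.perf_counter()
results = asyncio.run(handle_batch(["a", "b", "c"]))
elapsed = time.perf_counter() - start  # ~0.2s, not 3 x 0.2s
```

Running the three calls serially would take roughly the sum of their latencies; gathered, the batch takes about as long as the slowest call.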
What's next for Webloom AI
- Adding support for multi-page and component-level generation
- Integrating more advanced AI models like Gemini 1.5 for better reasoning and code structuring
- Enhancing customization with theme selectors, content control, and plugin-based features
- Launching a template marketplace for community-contributed website types
- Introducing team collaboration features and usage analytics for power users
Built With
- Convex (serverless DB)
- Docker
- FastAPI
- Firebase (Auth)
- GitHub Actions (CI/CD)
- Google Gemini
- JavaScript
- JWT
- LangChain
- OpenAI API
- Python
- React
- Render
- REST APIs
- Stripe (Payments)
- TailwindCSS
- TypeScript
- Vercel
- WebSockets