Inspiration
PromptTester wasn’t born from a roadmap — it started as a curiosity. I opened up Bolt.new to play with ideas, and five days later, I had a working IDE. I had no coding background, no intention to go all the way. But once I started designing how prompt engineering should feel — modular, testable, visual — I couldn’t stop until it was real.
As AI systems evolve rapidly, prompts are becoming the new software. Yet, tools for serious prompt work remain either too simple or too technical. I wanted to change that — to create the VS Code for prompt engineers.
What It Does
PromptTester helps prompt engineers build smarter and faster:
- Modular prompt building with reusable text/data blocks
- Variant testing with dynamic datasets
- Visual interface to compare outputs across models and runs
- Secure API key handling using encrypted Supabase Edge Functions
- Full support for both text and image models (OpenAI, Replicate, Anthropic, Google)
- Real-time previewing and fast iteration for creative and technical workflows
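To make the "modular blocks + variant testing" idea concrete, here is a minimal sketch of how reusable blocks could be composed and run against a dataset. All names (`PromptBlock`, `renderVariants`, etc.) are illustrative assumptions, not PromptTester's actual internals:

```typescript
// Hypothetical data model: a prompt is a sequence of reusable blocks,
// and variants come from substituting dataset rows into variable blocks.
type PromptBlock =
  | { kind: "text"; content: string }
  | { kind: "variable"; name: string };

// Render one prompt by filling variable blocks from a data row.
function renderPrompt(
  blocks: PromptBlock[],
  data: Record<string, string>
): string {
  return blocks
    .map((b) => (b.kind === "text" ? b.content : data[b.name] ?? ""))
    .join("");
}

// Variant testing: render the same blocks against every row of a dataset.
function renderVariants(
  blocks: PromptBlock[],
  rows: Record<string, string>[]
): string[] {
  return rows.map((row) => renderPrompt(blocks, row));
}

const blocks: PromptBlock[] = [
  { kind: "text", content: "Summarize this review: " },
  { kind: "variable", name: "review" },
];

const variants = renderVariants(blocks, [
  { review: "Great battery life." },
  { review: "Screen cracked on day one." },
]);
console.log(variants[0]); // "Summarize this review: Great battery life."
```

The point of the block structure is that the same text blocks can be reused across prompts, while swapping the dataset gives you a batch of variants to compare in one run.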
How It Was Built
- Frontend: Vite + React + Tailwind + Shadcn/UI for sleek performance
- State: Zustand for lightweight, reactive state logic
- Drag-and-Drop: Powered by Dnd-kit for intuitive block handling
- Backend: Supabase for auth, database, and secure server-side Edge Functions
- Model Integration: Flexible OpenAPI-based routing to connect any LLM, including image models
- Security: API keys never touch the client; all traffic goes through encrypted, authenticated Edge Functions
And all of this was built through prompts — I didn’t write a single line of code from scratch.
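The "flexible routing to connect any LLM" piece can be sketched as a small adapter registry: one common request shape, with each provider supplying its own payload mapping. This is an assumed design for illustration (the adapter names and shapes are mine), though the endpoints shown are the providers' real ones:

```typescript
// One common request shape for every model call.
type CommonRequest = { model: string; prompt: string; maxTokens: number };

// Each provider registers an endpoint plus a payload builder that maps
// the common shape onto its own wire format.
type ProviderAdapter = {
  endpoint: string;
  buildPayload: (req: CommonRequest) => Record<string, unknown>;
};

const providers: Record<string, ProviderAdapter> = {
  openai: {
    endpoint: "https://api.openai.com/v1/chat/completions",
    buildPayload: (r) => ({
      model: r.model,
      messages: [{ role: "user", content: r.prompt }],
      max_tokens: r.maxTokens,
    }),
  },
  anthropic: {
    endpoint: "https://api.anthropic.com/v1/messages",
    buildPayload: (r) => ({
      model: r.model,
      messages: [{ role: "user", content: r.prompt }],
      max_tokens: r.maxTokens,
    }),
  },
};

// Resolve a provider name to a ready-to-send endpoint + payload.
function route(provider: string, req: CommonRequest) {
  const adapter = providers[provider];
  if (!adapter) throw new Error(`Unknown provider: ${provider}`);
  return { endpoint: adapter.endpoint, payload: adapter.buildPayload(req) };
}
```

Adding a new provider is then a registry entry rather than a code change to the caller, which is what makes it practical to support text and image models side by side.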
Challenges We Overcame
- Making sure no API keys ever touch the client — encryption and Row Level Security in Supabase were crucial
- Integrating with inconsistent API specs from multiple model providers
- Designing a prompt builder UI that felt both powerful and intuitive
- Handling real-time completion comparisons, asynchronous workflows, and responsive updates
- Adapting the system for image models, which often break the mold in terms of payload and pricing
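The key-isolation pattern from the first challenge can be sketched like this: the browser sends only the prompt and a provider name, and the server-side function attaches the API key from its own environment before forwarding upstream. The function and variable names here are hypothetical, not the actual Edge Function code:

```typescript
// What the client is allowed to send: no credentials, ever.
type ClientRequest = { provider: string; prompt: string };

// Server-side only: build the outbound request, injecting the key from
// the server's environment so it never appears in client-visible traffic.
function buildUpstreamRequest(
  req: ClientRequest,
  serverEnv: Record<string, string>
): { headers: Record<string, string>; body: string } {
  const key = serverEnv[`${req.provider.toUpperCase()}_API_KEY`];
  if (!key) {
    throw new Error(`No server-side key configured for ${req.provider}`);
  }
  return {
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${key}`, // attached server-side only
    },
    body: JSON.stringify({ prompt: req.prompt }),
  };
}
```

Because the key lives only in the server environment (and, per the list above, encrypted at rest in Supabase), even a fully compromised frontend has nothing to leak.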
Accomplishments We're Proud Of
- API keys are safe. Full stop — encrypted at rest, never exposed
- Prompt blocks are reusable, testable, and visual — a productivity booster for anyone working with LLMs
- Image support looks incredible and works across providers
- Users can visually compare prompt results side-by-side in seconds — it’s fast and satisfying
- All of this was built in days, by someone who just got obsessed with getting it right
What We Learned
- API integration is never “just plug it in.” Every LLM is its own beast
- Security isn’t optional — users deserve trust by default
- State management matters more than you think when UIs get dynamic
- Bolt.new + AI is ridiculously powerful when you pair it with just enough stubbornness and taste
- The best products often start as accidents
What’s Next for PromptTester
- Prompt Version Control – track changes, revert, collaborate
- Advanced Evaluation – A/B testing, auto-metrics, custom scoring
- Team Features – shared projects, comments, role permissions
- Production Deployment – export prompts to pipelines, track usage
- More Models – expand into coding agents, niche tools, and domain-specific LLMs
Built With
- anthropic-api
- authentication
- dnd-kit
- edge-functions
- google-gemini-api
- lucide
- mistral-ai-api
- netlify
- openai
- react
- react-hook-form
- replicate
- shadcn/ui
- sql-(postgresql)
- supabase
- tailwind-css
- typescript
- vite
- zod
- zustand