Inspiration
Large Language Models (LLMs) are powerful tools, but they're only as good as the prompts they receive. Many users struggle to write effective prompts, leading to vague or irrelevant responses. Others simply don't know how to prompt efficiently or are too busy to fine-tune their queries. We wanted to make this process simple, accessible, and impactful.
Promptly was created to help users generate better, clearer, and more effective prompts, improving response quality and reducing unnecessary compute usage (which means a smaller carbon footprint).
Why It's Important
- Bridges the Gap Between Users and AI: Most users don't know how to communicate effectively with LLMs. Promptly empowers everyone, not just tech experts, to get high-quality results.
- Improves Output Quality: Well-structured prompts lead to more accurate, relevant, and creative AI responses, saving users time and frustration.
- Reduces Computational Footprint: Every misfired or repeated query to an LLM consumes energy and tokens. By helping users get it right the first time, Promptly promotes sustainable AI usage.
- Saves Time and Resources: Reduces the need for trial-and-error prompting, cutting down on API costs and unnecessary iterations.
- Universal Use Cases: From students and content creators to developers and businesses, anyone using AI tools benefits from smarter prompts.
- Scalable Impact: As AI usage grows, tools that optimize how we interact with LLMs will play a key role in responsible and efficient AI adoption.
What It Does
Promptly helps users turn vague ideas into powerful, structured prompts.
Key Features:
- Prompt Refinement: Takes an initial rough prompt and enhances it using prompt engineering best practices.
- Guided Prompt Builder: Interactive UI that helps users describe what they want step-by-step.
- AI-Powered Optimization: Uses an LLM to evaluate and improve prompts for clarity, specificity, and tone.
- Sustainability Impact: Reduces token usage and compute cost by helping users get better answers the first time.
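The refinement step boils down to wrapping the user's rough prompt in a meta-prompt that instructs the LLM to improve it. The sketch below illustrates the idea; the template wording, field names, and `buildRefinementPrompt` helper are assumptions for illustration, not Promptly's actual code.

```typescript
// Illustrative meta-prompt builder: wraps a rough user prompt in
// prompt-engineering guidance before it is sent to the LLM.
// The template text is a hypothetical example, not Promptly's real prompt.
interface RefinementRequest {
  roughPrompt: string;
  tone?: string;     // e.g. "formal", "casual"
  audience?: string; // e.g. "developers"
}

function buildRefinementPrompt(req: RefinementRequest): string {
  // Optional constraints become explicit lines in the meta-prompt.
  const constraints = [
    req.tone ? `Target tone: ${req.tone}.` : null,
    req.audience ? `Intended audience: ${req.audience}.` : null,
  ].filter((c): c is string => c !== null);

  return [
    "Rewrite the following rough prompt so it is clear, specific,",
    "and self-contained. Keep the user's intent; mark missing context",
    "with explicit placeholders rather than inventing facts.",
    ...constraints,
    "",
    `Rough prompt: """${req.roughPrompt}"""`,
    "Return only the improved prompt.",
  ].join("\n");
}
```

The returned string would then be sent to the Gemini model, and the model's reply is shown to the user as the improved prompt.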
How We Built It
Tech Stack:
- Frontend: Next.js 15 (React, TypeScript) with TailwindCSS and Radix UI for modern, responsive components
- Backend: Node.js server to handle API requests and connect the frontend with external services
- Database: None required for the MVP; expandable to Firebase or MongoDB for saving prompts and user data
- APIs / Integrations: Google AI Studio (Gemini) for AI-powered prompt optimization, ElevenLabs API for text-to-speech
- Other Tools: GitHub (version control), Figma (design and prototyping)
Challenges We Ran Into
- Figuring out ElevenLabs WebSocket integration
- Connecting backend and frontend smoothly
- Setting up and configuring the frontend environment with new tools
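Much of the ElevenLabs WebSocket difficulty came down to getting the message framing right: an opening message, text chunks, then an empty-text close. The helpers below sketch that framing; the field names and conventions follow the streaming TTS docs as we understood them at the time and may have changed, so treat them as assumptions.

```typescript
// Hedged sketch of the ElevenLabs streaming-TTS message framing.
// Field names (text, voice_settings) are assumptions based on the
// streaming docs at the time of the hackathon.
type WsMessage = {
  text: string;
  voice_settings?: { stability: number; similarity_boost: number };
};

function openingMessage(): WsMessage {
  // The protocol expects a single space as the first text payload,
  // alongside any voice settings for the session.
  return { text: " ", voice_settings: { stability: 0.5, similarity_boost: 0.8 } };
}

function textChunk(chunk: string): WsMessage {
  // Chunks should end with a space so the server can flush on word boundaries.
  return { text: chunk.endsWith(" ") ? chunk : chunk + " " };
}

function closingMessage(): WsMessage {
  // An empty string signals the end of the text stream.
  return { text: "" };
}
```

In the app, each of these objects would be JSON-serialized and sent over the open WebSocket, with audio chunks arriving as server messages.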
Accomplishments
- First-Try Results: Guides users to create strong prompts the first time, with no endless tweaking
- Lower Environmental Cost: Fewer retries means less compute and a smaller carbon footprint
- No Prompt Engineering Needed: Anyone can get high-quality results without specialized training
- Better Outcomes: Context-aware prompts that deliver accurate, goal-aligned results
What We Learned
- How to integrate and troubleshoot unfamiliar APIs
- How to work with AI beyond Python, this time experimenting in Java and JavaScript/TypeScript
- Debugging and problem-solving with tools and frameworks we hadn't used before
What's Next
Future Plans for Promptly:
- AI Agent Integration: Connect Promptly to Slack, Gmail, and other workflows for real-time, context-aware prompting
- VS Code Extension: Help developers optimize coding prompts directly in their IDE
- Browser Extension: Bring Promptly into everyday browsing for instant context-building
Demo & Resources
- Demo:
- GitHub: https://github.com/bshiribaiev?tab=repositories

