Inspiration

Bad AI prompts lead to poor code, context-switching, and wasted API tokens. Developers shouldn't have to break their flow state to learn "Prompt Engineering". We built a "Grammarly for Prompts"—a silent mentor that prevents bad prompts from ever being sent.

What it does

PreFlight is a JetBrains plugin that analyzes and optimizes AI prompts in real-time.

  • 📊 Real-time Dashboard: As you type, four dynamic progress bars score the prompt's health against strict metrics (Introduction, Context, Input Data, Output Indicator).
  • The "God-Tier" Refactor: With one click, our magic button rewrites weak, ambiguous text into a perfectly structured, token-optimized prompt.
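To make the dashboard concrete, here is a minimal sketch of what a per-prompt health payload might look like; the field names and the averaging rule are illustrative assumptions, not PreFlight's actual API.

```typescript
// Hypothetical shape of the payload driving the four progress bars.
// Each metric is a 0–100 score; names mirror the dashboard's four bars.
interface PromptHealth {
  introduction: number;
  context: number;
  inputData: number;
  outputIndicator: number;
}

// Illustrative overall health: the mean of the four metric scores.
function overallHealth(h: PromptHealth): number {
  const scores = [h.introduction, h.context, h.inputData, h.outputIndicator];
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```

A UI layer can then map each score, and the overall mean, directly onto progress-bar widths.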

How we built it

We engineered a multi-agent AI architecture in TypeScript, integrated with a proprietary backend:

  • Quantitative Evaluator Agent: Instead of vague feedback, it uses a strict subtractive math system to penalize antipatterns, outputting deterministic JSON.
  • Refactoring Agent: Transforms weak inputs into highly contextualized structures.
  • Native Integration: The IDE UI parses the JSON payload instantly to render the gamified dashboards without leaving the editor.
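The subtractive scoring idea can be sketched as follows: start each prompt at 100 and subtract a fixed penalty per detected antipattern, so the same input always yields the same JSON. The specific antipatterns, penalties, and regexes below are illustrative assumptions, not the evaluator's real rule set.

```typescript
// Illustrative antipattern table: each rule has a deterministic check
// and a fixed penalty, so scoring is pure arithmetic, not vibes.
type Antipattern = { name: string; penalty: number; test: (p: string) => boolean };

const ANTIPATTERNS: Antipattern[] = [
  { name: "no-output-format", penalty: 30, test: p => !/\b(json|list|table|markdown)\b/i.test(p) },
  { name: "vague-verb",       penalty: 20, test: p => /\b(do something|help me|fix it)\b/i.test(p) },
  { name: "too-short",        penalty: 25, test: p => p.trim().split(/\s+/).length < 8 },
];

// Subtractive scoring: start at 100, subtract each triggered penalty,
// clamp at 0, and return a JSON-serializable result for the IDE to render.
function scorePrompt(prompt: string): { score: number; violations: string[] } {
  let score = 100;
  const violations: string[] = [];
  for (const a of ANTIPATTERNS) {
    if (a.test(prompt)) {
      score -= a.penalty;
      violations.push(a.name);
    }
  }
  return { score: Math.max(0, score), violations };
}
```

Because every rule is a pure function of the prompt text, the output is deterministic and trivially serializable for the dashboard.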

Challenges we ran into

  1. Taming the LLM: Forcing a non-deterministic AI to reliably output strict, math-based JSON—without breaking the UI parser with conversational filler—required rigorous prompt engineering.
  2. Latency: Balancing API response times so the IDE wouldn't freeze or interrupt the developer's typing flow.
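One way to keep the UI parser from breaking on conversational filler is defensive extraction: strip markdown fences and pull out the outermost JSON object before parsing. This is a generic sketch of that guard, with an assumed `EvalResult` shape, not PreFlight's actual parsing code.

```typescript
// Assumed result shape; the real payload may differ.
interface EvalResult { score: number; violations: string[] }

// Tolerate filler like "Sure! Here is the result:" and ```json fences:
// locate the outermost {...} and parse only that, returning null on failure
// so the IDE can fall back to a neutral state instead of crashing.
function extractJson(raw: string): EvalResult | null {
  const cleaned = raw.replace(/```(?:json)?/g, "");
  const start = cleaned.indexOf("{");
  const end = cleaned.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    return JSON.parse(cleaned.slice(start, end + 1)) as EvalResult;
  } catch {
    return null;
  }
}
```

Pairing a guard like this with debounced evaluation calls (only firing after the developer pauses typing) addresses both challenges at once: malformed output degrades gracefully, and the editor never blocks on a round-trip.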

Accomplishments that we're proud of

We successfully bridged the gap between abstract AI text evaluation and concrete, visual metrics. We are incredibly proud of building an ergonomic, gamified UI that feels native to the JetBrains ecosystem.

What we learned

We mastered JetBrains plugin architecture and discovered how to constrain LLMs to act as strict code validators rather than chatbots. Crucially, we learned that real-time, visual feedback teaches developers best practices far better than reading documentation.

What's next

  • Local SLMs: Migrating the evaluation engine to a local Small Language Model for zero latency and zero token costs.
  • 🧠 Workspace Context Awareness: Automatically analyzing open files to suggest missing dependencies in the prompt.
  • 🏢 Enterprise Templates: Allowing tech leads to enforce strict prompting standards across entire engineering teams.
