Inspiration

My regular day-to-day use of LLMs created a need for local-first prompt engineering: I needed to test prompts, store the successful ones, and restore and share them. The availability of the necessary building blocks, plus this Hackathon, inspired me to author Prompt-Laboratory.

What it does

Prompt-Laboratory is a local-first prompt engineering and testing environment that provides a complete, iterative workflow for designing, refining, testing, and managing a library of high-quality system prompts. The system integrates a JavaScript frontend with a Python backend and SQLite database, utilizing a local Ollama instance for AI interactions.
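The integration described above could be sketched roughly as follows. This is a minimal illustration, not the actual Prompt-Laboratory code: the table schema and function names are assumptions, though the Ollama endpoint (`/api/generate` on the default local port 11434) is the real API.

```python
import json
import sqlite3
import urllib.request

def init_db(path: str = "prompts.db") -> sqlite3.Connection:
    # Hypothetical schema: one table holding saved prompts and the
    # model each was tested against.
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS prompts (
               id INTEGER PRIMARY KEY,
               name TEXT UNIQUE,
               system_prompt TEXT,
               model TEXT
           )"""
    )
    return conn

def save_prompt(conn: sqlite3.Connection, name: str,
                system_prompt: str, model: str) -> None:
    # Store the model alongside the prompt so a saved prompt can be
    # restored with the exact model it was tested on.
    conn.execute(
        "INSERT OR REPLACE INTO prompts (name, system_prompt, model) "
        "VALUES (?, ?, ?)",
        (name, system_prompt, model),
    )
    conn.commit()

def run_prompt(system_prompt: str, user_input: str,
               model: str = "llama3") -> str:
    # Ollama's local server listens on port 11434 by default;
    # /api/generate takes a JSON body and returns the completion.
    body = json.dumps({
        "model": model,
        "system": system_prompt,
        "prompt": user_input,
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In this sketch the frontend would call the Python backend, which persists prompts via `save_prompt` and forwards test runs to the local Ollama instance via `run_prompt`.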

WARNING: The "Kiroween effect" option flashes the screen and emits a startling sound. Please disable it if those effects cause discomfort. It also dances the Kiro monster ghost across the top of the screen, but that shouldn't cause discomfort ;)

How I built it

I had been envisioning this application for the last 6 months. I refined my idea notes before getting Kiro working on the details. I submitted the idea plan to Kiro, and Kiro produced its trademark Requirements.md document, which I then fed to other AI chat tools to ask whether anything was missing from the requirements. I added 7 more requirements that I then took back to Kiro. Kiro updated the Requirements.md document and then moved to the design phase, producing the Design.md document. I reviewed that and it looked great, so we moved on to the implementation phase. I iterated through implementation, approving file edits and task tests, watching as Kiro did the work of a small dev team.

Challenges I ran into

I set all tasks to required to make sure I did full testing as I went. Errors came up at each task, but Kiro was able to conquer them without my interference. Tasks 1 through 6 went quickly, in about an hour, and when errors did arise, Kiro troubleshot and corrected them promptly.

Accomplishments that I'm proud of

Working with Spec mode through the Kiro IDE to get to an MVP, then utilizing Vibe mode to tweak little problems like saving the generated model along with the prompt. It took several iterations to get the waiting spinner to loop in the center of the buttons, and we (Kiro and I) ended up optimizing the code to remove repetitive declarations of the spinner.

What I learned

This is my first hackathon, so I learned some of everything. Most notably, that Kiro can take on almost all of the coding tasks itself, and what's left can be done with Kiro's hand-holding.

What's next for PromptLab

  1. Connection to LMStudio, Lemonade and other local model endpoints.
  2. Export to ByteStash for long term prompt storage
  3. API connection to ByteStash for automatic prompt storage
