Inspiration
We tried building several LLM apps over the last few months. Building a demo was easy, but productionizing remained a challenge even after rounds of prompt engineering. We weren't happy with the reliability we saw.
What it does
Prompt engineering is non-deterministic. So, rather than having a human pick the best prompt, we let AI decide the best way to call the LLM, based on the given task requirements.
How we built it
In the background, our AI generates prompt variations by applying different techniques to user-provided examples. The developer picks the best variations, and from those choices the AI learns the developer's style. Based on this, the AI figures out the best way to call the LLM for any future user query.
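The generate-then-select loop described above could be sketched roughly as follows. This is a minimal illustration, not Hightime AI's actual implementation: `call_llm` is a stub standing in for a real LLM API, and the technique names and scoring function are assumptions for the example.

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call an LLM provider here.
    return prompt.upper()

# Hypothetical prompt-engineering techniques used to generate variations.
TECHNIQUES = {
    "plain": lambda task: task,
    "role": lambda task: f"You are an expert assistant. {task}",
    "step_by_step": lambda task: f"{task} Think step by step.",
}

def generate_variations(task: str) -> dict:
    """Produce one prompt variation per technique."""
    return {name: fn(task) for name, fn in TECHNIQUES.items()}

def pick_best(variations: dict, score) -> str:
    """Return the technique whose LLM output scores highest."""
    return max(variations, key=lambda name: score(call_llm(variations[name])))

task = "Summarize the user's message."
variations = generate_variations(task)
# Toy stand-in for learned developer preference: favor concise outputs.
best = pick_best(variations, score=lambda out: -len(out))
print(best)  # → plain
```

In a real system, the developer's picks would replace the toy `score` function, so the selection reflects their style rather than a fixed heuristic.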
Challenges we ran into
Getting diverse enough test cases to verify that our platform generates more reliable outputs than humans can achieve with manual prompt engineering.
Accomplishments that we're proud of
We saw better reliability on the test cases we tried.
What we learned
What's next for Hightime AI
An early-access launch where developers can try it out for themselves.