Devpost won't let me upload a Loom video directly, so here are the links:
Demo Video: https://www.loom.com/share/c55bce55ffc340a9ab371f1dcdd6c73e?sid=174b87b3-a353-4670-8bf6-1eaa29e8d777
Pitch Deck: https://docs.google.com/presentation/d/1NaHMBz0-LuA7-igoKEsvlGLLPepe5VgbdEhMHHMkZxg/edit?usp=sharing
Inspiration
As engineers, we've experienced the challenges of prompting LLMs effectively. Crafting the perfect prompt is time-consuming and requires trial and error, especially when working with different models. We were inspired to create a solution that streamlines this process and helps engineers focus on the desired behavior rather than the intricacies of prompting.
What it does
L'invite parfaite is a tool that allows engineers to specify the desired behavior of an LLM through a series of tests. Instead of manually writing prompts, users define the expected output, and our system iteratively rewrites the prompt to better fit the desired behavior. The tool also adapts the prompts to work optimally with the user's chosen model.
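The test-driven loop described above can be sketched roughly as follows. This is a minimal illustration, not the actual L'invite parfaite API: the function names, the dict-based test format, and the stubbed model and rewriter are all hypothetical stand-ins (in the real tool, the rewrite step would call an LLM such as Mistral-large).

```python
# Hypothetical sketch: iteratively rewrite a system prompt until the
# user-defined test suite passes. All names here are illustrative.

def optimize_prompt(prompt, tests, model, rewrite, score,
                    threshold=0.9, max_iters=5):
    """Rewrite `prompt` until `score` over `tests` reaches `threshold`."""
    for _ in range(max_iters):
        results = [(t, model(prompt, t["input"])) for t in tests]
        if score(results) >= threshold:
            break
        prompt = rewrite(prompt, results)  # real tool: ask an LLM to revise
    return prompt

# A behavior test pairs an input with the expected output.
tests = [{"input": "2+2", "expected": "4"}]

def score(results):
    # Fraction of tests whose output exactly matches the expectation.
    return sum(out == t["expected"] for t, out in results) / len(results)

# Stub model: only obeys brevity if the prompt asks for it.
def model(prompt, question):
    return "4" if "concise" in prompt else "The answer to 2+2 is 4."

# Stub rewriter: appends an instruction based on the failing results.
def rewrite(prompt, results):
    return prompt + " Be concise: output only the answer."

best = optimize_prompt("You are a math assistant.", tests, model,
                       rewrite=rewrite, score=score)
```

After one rewrite the stub model passes the suite, so the loop exits with the revised prompt; the real system would repeat this with actual LLM calls and a softer similarity metric instead of exact match.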
How we built it
We built L'invite parfaite using a combination of technologies:
- Mistral-large, a powerful language model, for rewriting system prompts based on test results
- Semantic similarity algorithms to evaluate the performance of the generated prompts against the desired output
- A user-friendly interface for defining tests and selecting the target LLM model
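The similarity-based scoring step can be sketched with a simple cosine similarity. This is a self-contained stand-in that compares bag-of-words count vectors; the actual system would compare embedding vectors from a model, but the scoring logic is the same shape:

```python
import math
from collections import Counter

# Illustrative stand-in for embedding-based similarity: cosine similarity
# over bag-of-words count vectors (assumption: whitespace tokenization).
def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Same words in a different order score 1.0 under this metric,
# which is why real embeddings are needed to capture meaning.
sim = cosine_similarity("the capital of France is Paris",
                        "Paris is the capital of France")
```

A threshold on this score is what lets the tool accept creative phrasings while still rejecting outputs that miss the required content.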
Challenges we ran into
One of the main challenges we encountered was ensuring that the generated prompts were tailored to the specific requirements of different LLM models. Each model has its own quirks and best practices for prompting, which required extensive research and experimentation to address.
Another challenge was developing a robust semantic similarity algorithm that could accurately evaluate the performance of the generated prompts against the desired output. We had to strike a balance between strictness and flexibility to allow for creative solutions while maintaining the core requirements.
Accomplishments that we're proud of
We are proud of creating a tool that has the potential to significantly streamline the prompting process for engineers working with LLMs. By abstracting away the complexities of prompt engineering, L'invite parfaite allows users to focus on the desired behavior and outcomes rather than the technical details.
Additionally, we are pleased with the adaptability of our solution to work with various LLM models. This flexibility ensures that our tool remains relevant and useful across a wide range of applications and use cases.
What we learned
Throughout the development process, we gained valuable insights into the intricacies of prompt engineering and the unique challenges associated with different LLM models. We learned the importance of iterative testing and refinement in creating effective prompts and the need for a balance between specificity and flexibility in defining desired behaviors.
Moreover, we discovered the power of leveraging semantic similarity algorithms in evaluating the performance of generated prompts, which opened up new possibilities for automating and optimizing the prompting process.
What's next for L'invite parfaite ("the perfect prompt")
Looking ahead, we plan to continue refining and expanding the capabilities of L'invite parfaite. Some of our future goals include:
- Integrating support for a wider range of LLM models and architectures
- Enhancing the semantic similarity algorithms to provide even more accurate and nuanced evaluations
- Introducing a collaborative feature that allows teams to share and build upon each other's test suites and prompts
- Exploring the potential for applying our approach to other areas of AI development, such as fine-tuning models for specific tasks or domains
Ultimately, our vision is to empower engineers and developers to harness the full potential of LLMs by providing them with the tools and resources they need to create effective, efficient, and tailored prompts with ease.