Inspiration

Every time we don't get the answer we want from a Gen AI tool, the advice is always the same: prompt engineering. But look up prompt engineering and you'll find dozens of techniques, most of which involve writing long, carefully structured prompts that take longer to draft than just googling the question. We came up with EZPrompting to remove this pain point.

What it does

EZPrompting lets the user choose a prompt engineering method and fill in only a few key points. These inputs are passed to an LLM, which generates a full prompt using the selected technique. The user can then edit the result before copying and pasting it into the Gen AI tool of their choice.

How we built it

We used Llama on Databricks to build the models that generate the prompts. The front-end is developed in Streamlit. Each prompt engineering type has its own Databricks job, which is triggered whenever the front-end submits a request for that type.
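To illustrate the flow, here is a minimal sketch of how a per-method meta-prompt might be assembled before being handed to a Llama job. This is a hypothetical reconstruction: the `TEMPLATES` wording, the method names, and the `build_meta_prompt` helper are our illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch: one template per supported prompt engineering
# method; {key_points} is filled in from the user's form inputs.
TEMPLATES = {
    "chain-of-thought": (
        "Write a prompt that asks the model to reason step by step "
        "about the following task: {key_points}"
    ),
    "few-shot": (
        "Write a prompt that includes two or three worked examples "
        "for the following task: {key_points}"
    ),
}

def build_meta_prompt(method: str, key_points: str) -> str:
    """Combine the chosen method's template with the user's key points."""
    if method not in TEMPLATES:
        raise ValueError(f"Unsupported prompt engineering method: {method}")
    return TEMPLATES[method].format(key_points=key_points)
```

In this sketch, the front-end would send the resulting string to the matching Databricks job, which queries Llama and returns the generated prompt for the user to edit.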

Challenges we ran into

The most difficult part was getting the LLM to produce output in the shape we wanted.

Accomplishments that we're proud of

We're proud of the entire project and are excited to share it with the world!!!

What's next for EZPrompting

We plan to expand our use cases to cover more prompt engineering types. Our end goal is an experience like Google's autocomplete: as the user types, we suggest a prompt built with the most suitable prompt engineering method for the answer they need.

Built With

  • databricks
  • llama
  • python
  • streamlit