Inspiration

As a frontend engineer, being blocked on the backend brings your progress to a halt. With our tool, you can keep working without interruption, because it simulates the backend's traffic and data.

What it does

Puppeteer looks deeper into the runtime by attaching probes to variables, and it can change their data on the fly. For example, in a frontend application, a database that does not yet exist can appear to, because our puppeteering approach supplies its responses.
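The idea above can be sketched as a stand-in object that answers queries as if a real database existed. This is a minimal illustration, not Puppeteer's actual implementation; `FakeDatabase` and `simulate_row` are hypothetical names, and in the real tool an LLM would play the role of the data generator.

```python
# Sketch of the "puppeteering" idea: a probe object stands in for a
# backend dependency (here, a fake database) and fabricates plausible
# data on the fly. Names are illustrative, not Puppeteer's real API.

class FakeDatabase:
    """Acts as if a real database exists by answering every query."""

    def __init__(self, simulate_row):
        self._simulate_row = simulate_row  # callable that invents one record

    def query(self, table, **filters):
        # Instead of hitting a real backend, fabricate a matching record.
        row = self._simulate_row(table)
        row.update(filters)  # keep the fake data consistent with the query
        return [row]


def simulate_row(table):
    # Trivial stand-in generator; in Puppeteer an LLM would fill this role.
    return {"table": table, "id": 1, "name": "placeholder"}


db = FakeDatabase(simulate_row)
print(db.query("users", id=42))
# → [{'table': 'users', 'id': 42, 'name': 'placeholder'}]
```

Because the frontend only sees query results, it cannot tell the fabricated rows from real ones, which is what lets development continue while the backend is blocked.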

How we built it

We started by building a terminal user interface to process user input. An LLM then decides what to probe by looking at key values. Once probes are attached, the AI runtime is active: it can either send changed values back to the terminal UI as triggers and notify the user in natural language, or forward them to the Martian LLM router, which applies rule-based decisions to direct traffic to different LLMs. We also made the tool multimodal, with both text and image generation.
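The rule-based routing step can be sketched as a small decision function that maps a trigger from the runtime probes to a model tier. The model names and the rules themselves are illustrative assumptions, not the Martian router's real API.

```python
# Hedged sketch of rule-based LLM routing: pick a model tier for each
# trigger coming out of the runtime probes. Model names and thresholds
# are made up for illustration.

def route(trigger):
    """Map a trigger dict to an (assumed) model tier name."""
    if trigger.get("kind") == "image":
        return "image-generation-model"
    # Long or complex prompts go to the smarter, more expensive model.
    if len(trigger.get("prompt", "")) > 200:
        return "smart-text-model"
    # Everything else uses a lighter model to save cost.
    return "light-text-model"


print(route({"kind": "image"}))                      # image-generation-model
print(route({"kind": "text", "prompt": "x" * 500}))  # smart-text-model
print(route({"kind": "text", "prompt": "hi"}))       # light-text-model
```

Keeping the rules in one function like this makes it easy to tune the cost/quality trade-off as the tool iterates.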

Challenges we ran into

Connecting the different LLMs and shaping their environment so that the models return the results we need.

Accomplishments that we are proud of

We built a working product that we truly care about, because we have all experienced being blocked and wishing for a simulated backend or orchestrated traffic to test applications before shipping to users at scale. We are also proud of pushing the boundaries of the Martian LLM router: it works with image generation models, smarter text models, and lighter text models to save costs as the tool iterates and supports the user.

What we learned

We learned how to work with cutting-edge technologies like the Martian LLM router and Cohere.

What's next for Puppeteer

Next, we want to make Puppeteer faster and more capable on its own.
