Inspiration

Our inspiration was that all of us had great difficulty saving and storing the information we were getting from our interactions with various LLMs. We wanted a way to organize, iterate on, and collaborate around those chat interactions.

What it does

CIMI pairs a versatile browser extension, which lets you scrape answers from various LLMs, with a companion website that stores each captured LLM response as its own file in your library.

This can be especially helpful for engineers working in complex production environments. An engineer can save and organize snippets of LLM-supplied code, along with the connections and explanations that relate them, all under one project, instead of relentlessly scrolling through an LLM chat window to find urgently needed information.

On the personal use side, CIMI can serve as a curated library of LLM responses that a user can organize and browse from a bird's-eye view. Think of it as a modular Google Drive for LLM responses: each saved response links back to its original chat, and multiple people can access a shared library of those chats.

How we built it

First, we built a Chrome extension that extracts entire chats, or selected snippets, as raw text and sends them to our API. The raw text is then passed to an LLM, which generates a synthesis and recap stored as a markdown document. That data is embedded and stored in a vector database, providing semantic search over your existing chats. Finally, everything is presented as a visual library on a central dashboard, where user-defined projects act as bins for collections of chats and chat snippets.
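The embed-and-search step above can be sketched in miniature. This is not our production code: it swaps the real embedding model and vector database for a toy bag-of-words embedding and an in-memory list, and all names (`ChatLibrary`, `embed`, `search`) are hypothetical, but the shape of the pipeline is the same: each saved snippet is embedded on insert, and queries are ranked by cosine similarity.

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words embedding; the real pipeline would call an
    embedding model and persist vectors in a vector database."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class ChatLibrary:
    """In-memory stand-in for the dashboard's vector store."""

    def __init__(self):
        self.snippets = []  # list of (text, embedding) pairs

    def add(self, text: str) -> None:
        # Embed once at insert time, as the real pipeline does.
        self.snippets.append((text, embed(text)))

    def search(self, query: str, k: int = 3):
        # Rank every stored snippet against the query embedding.
        q = embed(query)
        ranked = sorted(self.snippets, key=lambda s: cosine(q, s[1]), reverse=True)
        return [text for text, _ in ranked[:k]]


if __name__ == "__main__":
    lib = ChatLibrary()
    lib.add("How to paginate a REST API in Python with requests")
    lib.add("Explanation of CSS grid vs flexbox layout")
    lib.add("Debugging a race condition in Go channels")
    print(lib.search("python api pagination", k=1))
```

In the real system, the project a snippet belongs to would be stored alongside its vector, so a search can be scoped to one project bin or run across the whole library.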
