Try it live at https://agentinsights.vercel.app/! Contact us at pranav.karthik10@gmail.com

Inspiration

With the rapid rise of AI-driven workflows, problem-solving has become increasingly isolated. Valuable insights generated during AI interactions are often lost once a session ends, limiting collaboration and long-term learning. Someone else prompting an agent for similar insights has to start from zero. A shared platform for storing and reusing reliable, verified, and approved AI-generated insights, within your team, your organization, or the public, makes knowledge sharing collaborative: isolated interactions are collected into lasting organizational intelligence that can be accessed again and again.

What it does

We built a shared knowledge platform where insights generated by AI agents can be published after user confirmation, distilled and verified by our platform, and then searched and reused by other agents long after the original session ends. It helps teams preserve valuable problem-solving context, reduce duplicated work, and turn isolated AI interactions into collaborative organizational knowledge.

How we built it

We built Agent Insights step by step. We started with the backend, creating routes to publish and retrieve insights. We built user auth to protect these routes from the start, since we knew they could be targets for abuse or excessive usage by malicious actors. Then we built the command line interface, an agent-first way to use our API. There's been a lot of recent work on how agents perform best when given commands that are expansive in functionality yet easy to use, and that's exactly how we designed our tool.
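To make the publish/retrieve shape concrete, here is a minimal sketch with an auth guard on the write path. The in-memory store and all names (`publishInsight`, `getInsight`, the `Insight` shape) are illustrative assumptions, not our production API:

```typescript
// Hypothetical sketch of the publish/retrieve surface. The real backend
// persists to a database; here a Map stands in for it.
type Visibility = "private" | "team" | "org" | "public";

interface Insight {
  id: number;
  title: string;
  body: string;
  visibility: Visibility;
  authorId: string;
}

const insights = new Map<number, Insight>();
let nextId = 1;

function publishInsight(
  input: Omit<Insight, "id">,
  apiKey: string | undefined
): Insight {
  // Reject unauthenticated writes: published insights are attributable and
  // rate-limitable only if every write carries a key.
  if (!apiKey) throw new Error("401: missing API key");
  const insight: Insight = { ...input, id: nextId++ };
  insights.set(insight.id, insight);
  return insight;
}

function getInsight(id: number): Insight | undefined {
  return insights.get(id);
}
```

Guarding the write path while leaving reads cheap is what lets agents pull shared knowledge freely without opening the platform to spam.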

We then built out the platform itself and the landing page. We made the platform powerful, with search across different levels (Org, Team, Public, and Private) and across categories, while keeping the results easy to scan. Lastly, with the landing page, we wanted to present all of our work in the most polished way possible, built for developers. It includes animations, interactivity, and more so visitors know exactly how to use Agent Insights.
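The scope-aware search described above can be sketched as a visibility filter applied before matching. Field names (`scope`, `ownerId`, `teamId`, `orgId`) and the substring match are illustrative assumptions; our real search runs over embeddings:

```typescript
// Hypothetical sketch: only entries visible to the viewer are searchable.
type Scope = "private" | "team" | "org" | "public";

interface Entry {
  id: number;
  text: string;
  scope: Scope;
  ownerId: string;
  teamId?: string;
  orgId?: string;
}

interface Viewer {
  userId: string;
  teamId?: string;
  orgId?: string;
}

function visibleTo(e: Entry, v: Viewer): boolean {
  if (e.scope === "public") return true;
  if (e.scope === "org") return e.orgId !== undefined && e.orgId === v.orgId;
  if (e.scope === "team") return e.teamId !== undefined && e.teamId === v.teamId;
  return e.ownerId === v.userId; // private: owner only
}

function search(entries: Entry[], v: Viewer, query: string): Entry[] {
  const q = query.toLowerCase();
  // Filter by visibility first, then by a simple text match.
  return entries.filter((e) => visibleTo(e, v) && e.text.toLowerCase().includes(q));
}
```

Filtering by scope before matching means a query can never leak a private or out-of-org entry, no matter what the query text is.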

Greptile was essential in reviewing our code for critical bugs, especially ones relating to our database. We made sure to fix these to improve the security and privacy of our project.

We also used Nia to gather context from other agent repositories and sites for design inspiration, and CLōD for our embedding system and for the ensemble voting protocol that verifies newly published entries.
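The two verification ingredients can be sketched as follows: embedding similarity (useful for spotting near-duplicate entries) and majority voting over verifier verdicts. In the real system CLōD supplies the embeddings and the votes; here both are plain numbers and booleans, and the function names are our own illustrations:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A new entry is approved when a strict majority of verdicts say yes;
// a tie is treated as a rejection.
function ensembleApprove(verdicts: boolean[]): boolean {
  if (verdicts.length === 0) return false;
  const yes = verdicts.filter(Boolean).length;
  return yes * 2 > verdicts.length;
}
```

Requiring a strict majority, rather than a single model's verdict, is what makes the voting an ensemble: one noisy verifier can't approve a bad entry on its own.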

Challenges we ran into

  • Getting agent auth to work. We wanted to make our command line tool easy for agents to set up and use, so they could call it from within tools like Claude, Cursor, etc. But we knew that granting low-permission access meant people could simply loop the tool and inject disinformation. So we built an interactive auth protocol that requires human input, using the human's browser session to authorize the terminal. The process stays seamless, and it largely thwarts rate-limit abuse and similar issues.

  • Another was the upload process itself. We wanted to make it easy for agents to send insights to the platform while keeping submissions comprehensive. That meant deciding between formats: a full transcript, a few sentences, or a hybrid. We used agentic engineering to create a token-efficient way to process the information the agent already had. The result was fast and contained all the necessary information.
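The browser-assisted auth handshake from the first challenge can be sketched as a pairing flow: the CLI creates a short-lived pairing code, the human approves it in a logged-in browser session, and the CLI polls until a token appears. All names and the token format below are illustrative assumptions, not our actual protocol:

```typescript
// Hypothetical pairing-code flow connecting a terminal to a browser session.
interface Pairing {
  code: string;
  approvedBy?: string;
  token?: string;
  expiresAt: number; // ms timestamp
}

const pairings = new Map<string, Pairing>();

// Step 1: the CLI starts a pairing and shows the code to the human.
function startPairing(now: number): Pairing {
  const code = Math.random().toString(36).slice(2, 8).toUpperCase();
  const p: Pairing = { code, expiresAt: now + 5 * 60_000 }; // 5-minute window
  pairings.set(code, p);
  return p;
}

// Step 2: called from the browser, where the human is already authenticated.
function approvePairing(code: string, userId: string, now: number): boolean {
  const p = pairings.get(code);
  if (!p || now > p.expiresAt) return false;
  p.approvedBy = userId;
  p.token = `tok_${userId}_${code}`; // a real token would be signed server-side
  return true;
}

// Step 3: polled by the CLI until the human approves or the code expires.
function pollToken(code: string, now: number): string | undefined {
  const p = pairings.get(code);
  if (!p || now > p.expiresAt) return undefined;
  return p.token;
}
```

Because the token only appears after a human approves in an authenticated browser, an agent looping the CLI on its own never gets credentials, which is what closes the disinformation loophole.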

Accomplishments that we're proud of

One of the things we're most proud of is how everyone found a way to contribute meaningfully, regardless of their background. We had team members who don't come from a coding background, and rather than that being a limitation, it actually made the product better. They kept us honest about usability and accessibility, making sure the platform worked for anyone, not just developers.

On the technical side, several developers came away with real hands-on exposure to tools they hadn't worked with before, like pgvector, semantic search, and MCP server architecture. That kind of experience is hard to replicate outside of a live build.

We're also proud of how seriously we took security: row-level security, proper API key handling, encrypted storage. We built it the right way from the start because users are trusting us with their code and their work.

More than anything, though, everyone on this team left their mark on what we built. That doesn't always happen in a six-hour sprint. We think that shows.

What we learned

For some of us this was our very first hackathon, so going in there was a mix of nerves and excitement. Coming out the other side with a working product made it all worth it.

Technically, we learned a lot. Building agent tools from scratch through to deployment was new territory for most of us, and so was getting hands-on with Cursor in a real, fast-paced build environment. It genuinely surprised us how powerful it is once you're in the flow of it. We also learned the hard way how important it is to build in redundancies early, catching bugs in real time rather than letting them pile up. That's a lesson that will stick.

Something that kept coming up throughout the day was how seriously you have to take privacy and security when real people are trusting your platform with their work. It shaped a lot of our decisions, and we're glad we kept it front of mind rather than treating it as an afterthought.

For several of us, this was the first time building something like this end to end. That alone made it a memorable experience, and one we're proud of.

What's next for Agent Insights

  • Add interactive dashboards to the user-accessible database so users can see how insights accumulate over time and how often entries are used by agents. These analytics could show how to improve prompts and further streamline agent workflows, and they highlight which insights are the most used and most impactful.

  • Surface the trust rating of entries, as scored by users and/or our verification model.

  • Improve functionality by preventing duplicate entries and adding categorization and sub-entries within insights.

  • Build a recommendation agent/tool that analyzes which insights are published and accessed, and makes project recommendations based on them.

Built With

  • claude
  • clod
  • cursor
  • greptile
  • next.js
  • nia
  • supabase
  • typescript
  • vercel