Inspiration
Currently, it is quite difficult to observe the decisions an LLM makes in an MCP server, or the context behind why it made them. There is no insight into which tools it used, and no platform to easily access the logs.
What it does
We built a solution that logs, at every step, the action the LLM takes along with the relevant metadata. Site reliability engineers are going to love our intuitive frontend, where they can view logs in real time, see statistics on particular tool calls, and do advanced filtering.
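At its core, the logging step can be sketched as a wrapper around each tool handler that captures the tool name, arguments, and timing before forwarding the result. This is a minimal, hypothetical sketch (the wrapper name and the weather tool are illustrative, not the actual Observee code); in practice the entry would be shipped to the backend rather than printed:

```python
import json
import time
from functools import wraps

def log_tool_call(tool_fn):
    """Hypothetical wrapper: record each tool invocation with metadata."""
    @wraps(tool_fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = tool_fn(*args, **kwargs)
        entry = {
            "tool": tool_fn.__name__,      # which tool the LLM called
            "args": kwargs,                # the arguments it passed
            "duration_ms": round((time.time() - start) * 1000, 2),
            "timestamp": start,
        }
        # In a real deployment this would be sent to the logging backend,
        # not printed to stdout.
        print(json.dumps(entry))
        return result
    return wrapper

@log_tool_call
def get_weather(city: str) -> str:
    # Stand-in for a real MCP tool handler.
    return f"Sunny in {city}"
```

Because each log entry is structured JSON, the frontend can aggregate per-tool statistics and filter on any field.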
How we built it
A lot of caffeine, plus Python, TypeScript, and Next.js. We used the Claude Desktop app for LLM calls, and deployed on Vercel and Render.
Challenges we ran into
MCP is still an early technology, and tooling around it is lacking.
Accomplishments that we're proud of
We met each other for the first time just a few hours ago and were able to make quite a bit of progress throughout the day.
Future Aspects
In the future, you will be able to take any MCP server, connect it to Observee, and get logs to find issues and debug faster.
Built With
- hono
- next
- python
- typescript