Inspiration
In the new era of agentic AI, the power and utility of AI agents depend critically on the quality of the context they are given. We were inspired to create a tool that could bridge the gap between an individual's real-time online activities and the context available to their AI assistants. "Digital Twin Proxy" was born from the idea of turning day-to-day device usage into a persistent, personal memory for AI agents, enabling a truly personalized and proactive AI experience, all while fiercely protecting user privacy.
Our project, "Digital Twin Proxy," existed previously as a proof-of-concept for logging web traffic. However, it was limited to generating basic, passive summaries. The release of gpt-oss was the direct inspiration to fundamentally transform the project. We saw the opportunity to leverage its superior reasoning to build a truly agentic system—one that doesn't just analyze data, but intelligently decides how to interact with it.
What it does
"Digital Twin Proxy" turns your web browsing into a personal memory for your AI agents. It functions as a local network proxy that logs your network traffic and uses a gpt-oss model to generate intelligent, content-aware analyses of your online activity.
This isn't just a simple traffic logger. The application employs an agentic LLM that creates a rich, structured log of your digital life that can be used as a real-time context source for other AI applications. This "context engineering" allows other agents to personalize responses, anticipate your needs, and improve their tool usage based on your current tasks and interests.
We leverage gpt-oss to first analyze the stream of URLs; then, drawing on its advanced reasoning, the model autonomously decides which pages are interesting enough to warrant a deeper look. It then uses a provided tool to fetch the content of those specific pages for more detailed analysis, producing an insightful, relevant picture of device activity. The final output is a rich, structured context source that other AI agents can use to personalize responses and anticipate your needs.
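The decide-then-fetch loop can be sketched in Rust. This is a minimal illustration, not the project's actual code: the gpt-oss call is mocked with a trivial heuristic, and `ask_model_which_to_fetch` / `agentic_pass` are hypothetical names.

```rust
/// Stand-in for the gpt-oss decision step. In the real system the model
/// inspects the batch and returns the subset of URLs worth a deeper look;
/// here we mock that with a crude heuristic so the sketch is runnable.
fn ask_model_which_to_fetch(urls: &[&str]) -> Vec<String> {
    urls.iter()
        .filter(|u| !u.contains("/static/") && !u.ends_with(".png"))
        .map(|u| u.to_string())
        .collect()
}

/// One pass of the agentic loop over a batch of logged URLs.
fn agentic_pass(batch: &[&str]) -> Vec<String> {
    // 1. Ask the model which pages deserve content-level analysis.
    let interesting = ask_model_which_to_fetch(batch);
    // 2. In the real loop, a fetch tool retrieves each chosen page and the
    //    body is fed back to the model; here we just return the picks.
    interesting
}

fn main() {
    let batch = ["https://example.com/article", "https://example.com/logo.png"];
    println!("{:?}", agentic_pass(&batch));
}
```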
Crucially, "Digital Twin Proxy" is designed to be privacy-first. By running gpt-oss locally via providers like Ollama, LM Studio, or vLLM, users can ensure their entire browsing history remains on their own machine, completely private and secure.
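Because these providers all expose an OpenAI-compatible chat-completions endpoint (Ollama, for example, serves one at `http://localhost:11434/v1`), pointing the app at a local model comes down to sending a standard request body. A dependency-free sketch follows; a real client would use serde_json and an HTTP crate, and `chat_request_body` is an illustrative helper, not the project's API:

```rust
/// Build a minimal OpenAI-style chat-completions JSON body by hand.
/// Only quotes and backslashes are escaped, which is enough for this sketch.
fn chat_request_body(model: &str, prompt: &str) -> String {
    let escaped = prompt.replace('\\', "\\\\").replace('"', "\\\"");
    format!(
        r#"{{"model":"{model}","messages":[{{"role":"user","content":"{escaped}"}}]}}"#
    )
}

fn main() {
    // POST this body to e.g. http://localhost:11434/v1/chat/completions
    let body = chat_request_body("gpt-oss-20b", "Summarize this batch of URLs.");
    println!("{body}");
}
```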
How we built it
"Digital Twin Proxy" is built with a robust and modern tech stack, prioritizing performance and security:
- Backend: The core application is written in Rust, chosen for its performance and type safety.
- Proxy Server: We utilize Squid Cache, a powerful and battle-tested caching proxy, to reliably intercept and log network traffic.
- AI Model: The reasoning engine is powered by OpenAI's gpt-oss-20b model. We use this open-weight model for its strong reasoning capabilities at a size that makes it feasible for users to run locally.
- Local LLM Integration: The application is designed to connect to any OpenAI-compatible API endpoint. This allows seamless integration with local providers like Ollama, LM Studio, and vLLM.
- Agentic Logic: The core agentic loop periodically sends batches of logged URLs to the gpt-oss model, which then decides which pages to fetch for deeper content analysis.
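For illustration, extracting the method and URL from a line of Squid's default "native" access.log format might look like the sketch below. `parse_squid_line` is a hypothetical helper; note also that un-bumped HTTPS traffic logs only `CONNECT host:port` rather than a full URL.

```rust
/// Parse one line of Squid's native access.log format:
/// time elapsed remotehost code/status bytes method URL rfc931 peerstatus/peerhost type
fn parse_squid_line(line: &str) -> Option<(String, String)> {
    let fields: Vec<&str> = line.split_whitespace().collect();
    if fields.len() < 7 {
        return None; // malformed or truncated line
    }
    // fields[5] = request method, fields[6] = URL
    Some((fields[5].to_string(), fields[6].to_string()))
}

fn main() {
    let line = "1714000000.123     45 192.168.1.10 TCP_MISS/200 5120 GET \
                http://example.com/article - HIER_DIRECT/93.184.216.34 text/html";
    println!("{:?}", parse_squid_line(line));
}
```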
Challenges we ran into
One of the main challenges was designing an effective agentic workflow. Simply summarizing every visited URL would be inefficient and noisy. We had to craft targeted prompts and a decision-making framework so the gpt-oss model could intelligently discern routine browsing from significant online activity worthy of deeper analysis. Another challenge was ensuring a seamless setup for a tool that sits at the networking layer, especially given cross-platform complexities like networking configuration under the Windows Subsystem for Linux (WSL).
Accomplishments that we're proud of
We are incredibly proud of creating a tool that puts user privacy at the forefront of the agentic AI revolution. Building a system where the gpt-oss model acts as an autonomous agent to intelligently curate a user's "digital twin" is a significant accomplishment. The flexibility to connect with numerous local LLM providers makes the tool accessible and powerful for a wide range of developers and researchers.
What we learned
Throughout this hackathon, we gained a deep appreciation for the capabilities of open-weight models like gpt-oss-20b. We also learned a great deal about the nuances of network proxying and the importance of creating clear and comprehensive documentation for developer-focused tools.
What's next for Digital Twin Proxy
We have an exciting roadmap ahead. Our immediate next step is to expose the generated context via an MCP server, which will allow other AI agents to easily and securely access a user's digital twin. Following that, we plan to develop in-browser context injection, a feature that will let agentic web applications (e.g., ChatGPT or Perplexity) directly and securely access the user's digital twin, creating a truly interactive and personalized web experience.
Built With
- gpt-oss
- lm-studio
- ollama
- rust
- squid-cache
- vllm
