Inspiration
Social media algorithms shape what we see, often reinforcing the habits we’re trying to break. Just like your environment and the people around you influence your growth, your feed does too. ReFocus was inspired by the idea that recovery and self-improvement start with what surrounds you online. We wanted to build a way to take control back from the algorithm and help users create a cleaner, healthier digital space.
What it does
ReFocus helps users retrain their social media feeds by identifying and skipping suggestive or distracting content. It works by detecting "trigger" categories—content types the user wants to avoid—and automatically acting on them. For example, if a user is trying to avoid suggestive videos, ReFocus helps skip and filter out those posts, gradually teaching the platform’s algorithm to show less of that content over time.
How we built it
We built ReFocus using LiveKit for real-time screen sharing and interaction. When a user shares their screen, the app connects to a Node.js server that creates a new WebRTC room. From there, we used RoomIO for handling communication between the agent and user participants through audio and video tracks.
We enabled live video input via RoomInputOptions(video_enabled=True), allowing the agent to receive frames from the user’s screen and classify them at regular intervals (1 FPS while speaking, 1 frame every 3 seconds otherwise). These frames are resized to 1024×1024 and encoded as JPEG for model processing.
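The sampling cadence described above can be captured in a small helper. This is a sketch of the timing logic only; the `is_speaking` flag would come from the agent's speech state, and the actual resize/encode step is omitted:

```python
def capture_interval(is_speaking: bool) -> float:
    """Seconds between frame grabs: 1 FPS while speaking,
    one frame every 3 seconds otherwise."""
    return 1.0 if is_speaking else 3.0

def should_capture(now: float, last_capture: float, is_speaking: bool) -> bool:
    """Decide whether enough time has passed to grab another frame."""
    return now - last_capture >= capture_interval(is_speaking)
```

Each captured frame is then resized to 1024×1024 and JPEG-encoded before being sent to the classifier.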
The backend uses FastAPI (Python) to facilitate interactions between the AI agent and the client, handling classification requests and trigger detection.
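A hypothetical shape for the trigger-detection step that sits behind that API. The category names, the `skip`/`observe` actions, and the confidence threshold are illustrative, not the actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Classification:
    """One frame's classification result from the model."""
    category: str
    confidence: float

def decide_action(result: Classification, triggers: set[str],
                  threshold: float = 0.8) -> str:
    """Skip a post when its category matches a user-defined trigger
    with high confidence; otherwise keep observing."""
    if result.category in triggers and result.confidence >= threshold:
        return "skip"
    return "observe"
```

The agent calls a function like this on each classified frame and only acts (clicks or skips) when the decision comes back as `skip`.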
Challenges we ran into
- Getting LiveKit to run smoothly under unstable network conditions (pro tip: don’t test with bad Wi-Fi).
- Integrating WebRTC and ensuring real-time responsiveness across browser environments.
- Managing the workflow logic — deciding when and where the agent should click, skip, or observe.
Accomplishments that we're proud of
- Successfully got LiveKit working end-to-end with real-time screen streaming.
- Built a functioning pipeline for detecting visual triggers and responding automatically.
- Established a foundation for behavior-driven feed retraining.
- Demonstrated that feed detoxification can be automated in a way that supports recovery and focus.
What we learned
We learned how powerful agent-driven automation can be when combined with real-time video and audio analysis. We also realized how critical workflow design is for making automation safe, ethical, and responsive. Most importantly, we gained insight into the potential of using technology not just for engagement—but for digital recovery and mindfulness.
What's next for ReFocus
- Expand our content classification model to detect a broader range of triggers.
- Refine agent interactions to improve accuracy and reliability across platforms.
- Add user customization features for defining personal triggers and recovery goals.
- Eventually, create a browser extension that passively retrains your feed as you scroll—helping you build a healthier digital environment, one skip at a time.
Built With
- fastapi
- livekit
- node.js
