Inspiration
We built OmniClaw because student portals like Omnivox are full of useful information, but they are not built for speed. If a student wants to know whether they have unread MIOs, what assignment is due next, when their next class starts, or whether a teacher posted a new LEA update, they usually have to click through multiple pages and interfaces to find it. We wanted to turn that experience into something much more natural: just ask a question and get the answer immediately.
More broadly, we were excited by the idea of applying AI to a very real, everyday problem. Instead of making a generic chatbot, we wanted to build something grounded in real student workflows and connected to live data people actually care about.
What it does
OmniClaw is an AI-powered Omnivox assistant that lets students interact with their school portal in plain language. A user can ask questions like “Do I have any unread messages?”, “What assignments are due this week?”, “When is my next class?”, or “What’s new in LEA?” and OmniClaw fetches the relevant information from their Omnivox account and returns it in a clean, conversational format.
The project supports multiple interfaces, including a web client, a terminal UI, and a Discord bot, all backed by the same orchestration layer. Under the hood, it can pull live data such as MIOs, announcements, calendar events, class information, assignments, grades, and other LEA details.
How we built it
We built OmniClaw as a modular system with three main layers.
The first layer is an MCP server that connects to Omnivox and exposes school data as tools. It handles things like fetching messages, news, calendar events, LEA classes, assignment details, documents, and grades. Because Omnivox does not offer a simple developer-friendly API for this use case, we built custom logic to authenticate and extract structured data from the platform.
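At its core, this layer boils down to registering typed functions that a model can call by name. A stdlib-only sketch of that pattern (the real server is built on fastmcp; the tool names and return shapes here are illustrative, not the actual Omnivox data):

```python
from typing import Any, Callable, Dict

# Minimal tool registry mimicking how an MCP server exposes school
# data as named, callable tools. Names and payloads are illustrative.
TOOLS: Dict[str, Callable[..., Any]] = {}

def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Register a function under its own name, like an @mcp.tool decorator."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_unread_mios() -> list:
    # In the real server this would query the authenticated Omnivox
    # session; here we return canned data.
    return [{"from": "Prof. Tremblay", "subject": "Lab 3 feedback"}]

@tool
def get_next_class() -> dict:
    return {"course": "Calculus II", "room": "B-204", "starts": "10:00"}

def call_tool(name: str, **kwargs: Any) -> Any:
    """Dispatch a model-requested tool call to the registered function."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)

print(call_tool("get_next_class")["room"])  # B-204
```

The registry-plus-dispatch shape is what lets the orchestrator treat every data source uniformly: the model only ever sees tool names and schemas, never scraping logic.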
The second layer is a lightweight orchestrator service built with FastAPI. This service receives a user’s message, decides whether it needs tools, calls the appropriate MCP functions, and sends the results back through the model to produce a final answer. We designed it to work with multiple model providers, including OpenAI-compatible models, Claude, Gemini, and Ollama.
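The orchestration loop itself is simple in outline: ask the model, execute any tool calls it requests, then feed the results back for a final answer. A sketch with a stubbed model standing in for a real provider (the actual service runs behind FastAPI; all names here are illustrative):

```python
import json
from typing import Any, Callable, Dict, List

def fake_model(messages: List[dict]) -> dict:
    """Stand-in for a provider call: first turn requests a tool,
    second turn summarizes the tool result."""
    if messages[-1]["role"] == "tool":
        data = json.loads(messages[-1]["content"])
        return {"content": f"You have {len(data)} assignments due this week."}
    return {"tool_call": {"name": "get_assignments", "args": {"week": "current"}}}

def orchestrate(user_msg: str,
                model: Callable[[List[dict]], dict],
                tools: Dict[str, Callable[..., Any]]) -> str:
    messages = [{"role": "user", "content": user_msg}]
    while True:
        reply = model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # final answer, no more tools needed
        result = tools[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})

tools = {"get_assignments": lambda week: [{"title": "Essay"}, {"title": "Lab 4"}]}
answer = orchestrate("What assignments are due this week?", fake_model, tools)
print(answer)  # You have 2 assignments due this week.
```

Keeping the loop provider-agnostic like this is what makes it possible to swap in OpenAI-compatible models, Claude, Gemini, or Ollama behind the same interface.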
The third layer is the client experience. We built a React web app for the main chat interface, plus a terminal client and a Discord bot so users can interact with OmniClaw wherever they already are. All of these clients talk to the same backend, which made the architecture easier to extend.
Challenges we ran into
One of the biggest challenges was dealing with a legacy-style platform that was never designed to be queried conversationally. Extracting reliable data from Omnivox required careful parsing, session management, and a lot of defensive handling for inconsistent page structures.
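The defensive pattern that made this workable was trying several extraction strategies in order and degrading gracefully instead of crashing. A simplified sketch (the regexes and sample markup are hypothetical, not Omnivox's real page structure):

```python
import re
from typing import Optional

# Pages are not consistent across layouts, so extraction tries
# several patterns in order. Patterns and markup are illustrative.
SUBJECT_PATTERNS = [
    re.compile(r'class="mio-subject"[^>]*>([^<]+)<'),  # newer layout (hypothetical)
    re.compile(r'id="lblSujet"[^>]*>([^<]+)<'),        # older layout (hypothetical)
]

def extract_subject(html: str) -> Optional[str]:
    """Return the first pattern match, stripped, or None if nothing matches."""
    for pattern in SUBJECT_PATTERNS:
        m = pattern.search(html)
        if m:
            return m.group(1).strip()
    return None  # caller treats missing data as "unknown", not an error

new = '<td class="mio-subject"> Exam moved to Friday </td>'
old = '<span id="lblSujet">Lab report due</span>'
print(extract_subject(new))  # Exam moved to Friday
print(extract_subject(old))  # Lab report due
print(extract_subject("<div>unrelated</div>"))  # None
```

Returning None rather than raising lets the orchestrator answer "I couldn't find that" instead of failing the whole request when one page changes shape.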
Another major challenge was getting tool-calling to behave consistently across different model providers. Each provider has slightly different conventions and capabilities, so making the orchestration layer feel unified while still supporting OpenAI, Claude, Gemini, and Ollama took more work than we expected.
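The fix was to normalize each provider's tool-call format into one internal shape as early as possible. A sketch of that adapter, using simplified versions of the provider payloads (treat the field paths as approximations of the real APIs, not a spec):

```python
import json
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class ToolCall:
    """Provider-neutral representation the orchestrator works with."""
    name: str
    args: Dict[str, Any]

def normalize(provider: str, raw: dict) -> ToolCall:
    # Field paths below are simplified from the real provider APIs.
    if provider == "openai":
        fn = raw["function"]
        return ToolCall(fn["name"], json.loads(fn["arguments"]))  # args arrive as a JSON string
    if provider == "anthropic":
        return ToolCall(raw["name"], raw["input"])                # args arrive as a dict
    if provider == "gemini":
        return ToolCall(raw["name"], raw["args"])
    raise ValueError(f"unsupported provider: {provider}")

calls = [
    normalize("openai", {"function": {"name": "get_grades",
                                      "arguments": '{"course": "MATH-201"}'}}),
    normalize("anthropic", {"name": "get_grades", "input": {"course": "MATH-201"}}),
]
assert calls[0] == calls[1]  # same call, regardless of provider shape
```

Once everything downstream only sees ToolCall, adding a new provider means writing one more branch here rather than touching the rest of the system.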
We also had to think seriously about safety and trust. Since OmniClaw works with real student data, we needed clear consent, sensible boundaries, and a product experience that stays useful without pretending to do more than it safely should.
Accomplishments that we're proud of
We’re proud that OmniClaw is not just a concept, but a working end-to-end system connected to real student workflows. It can take a natural-language question, fetch live data from Omnivox, and return a useful answer across multiple interfaces.
We’re also proud of the architecture. By separating the Omnivox data layer, the orchestration layer, and the clients, we built something that is much easier to extend than a single monolithic chatbot. That decision gave us flexibility to support web, terminal, and Discord without rebuilding the core logic each time.
Finally, we’re proud that the project feels practical. This is the kind of tool students could actually use every day to save time and reduce friction.
What we learned
We learned that building AI products around real-world systems is as much an engineering problem as an AI problem. The model is only one piece; reliability depends just as much on data access, tool design, error handling, and clear system boundaries.
We also learned how important interface design is for trust. When a tool is connected to personal academic data, the user experience needs to communicate clearly what the system can do, what it cannot do, and what data it is using.
On the technical side, we learned a lot about orchestration, MCP-based tool design, multi-provider LLM integration, and how to design one backend that can power several different user-facing clients.
What's next for OmniClaw
Our next step is to make OmniClaw more capable and more polished. We want to improve the quality of responses, expand support for more Omnivox actions and data sources, and make the web experience feel even more seamless.
We also want to strengthen safety and personalization, including better session handling, clearer permissions, and smarter context about the user’s academic activity. Longer term, we see OmniClaw becoming a true student copilot: not just answering questions, but helping students stay organized, proactive, and informed without having to fight their portal every day.
Built With
- anthropic-api
- beautiful-soup
- discord.py
- fastapi
- fastmcp
- framer-motion
- google-gemini-api
- httpx
- integration
- javascript
- model-context-protocol-(mcp)
- ollama
- omnivox
- openai-api
- playwright
- pydantic
- python
- python-curses
- react
- tailwind-css
- uvicorn
- vite
- web