Inspiration
Every student we know gets lost within the vast expanses of a History, Math, or CS teacher's Canvas page. At Andover alone, a single course can be split across the Modules tab, the Files tab, the Announcements feed, an embedded Google Drive folder, a syllabus PDF with its own sub-links, and a weekly email the teacher sends through the Canvas Inbox. The information exists; it's just buried under six clicks and a dozen different UI conventions. Worst of all, the layout is different for every teacher, since every teacher uses Canvas in their own way.
We built Canvas AI Copilot to act as a personal AI assistant layer, injected directly on top of the Canvas page. It is trained to read your entire Canvas account, so it can find specific information quickly, plan out your days, explain concepts, and much more. It's also trained with specific guidelines and restrictions so as not to violate academic integrity. The goal is to relieve the pressure and confusion surrounding Canvas and a student's coursework, making every student's life easier.
What it does
Canvas AI Copilot injects a slide-out sidebar into every Canvas page, opened either from the extension's toolbar icon or from a floating launcher in the lower-right of the window. The sidebar presents a header with a status indicator, a contextual chip showing the student's current page, a set of quick-action buttons, a chat region, and an input field.
The quick-action buttons cover tasks such as planning the day's work and listing upcoming tests and assignments. These were chosen because they are the tasks students repeat daily, and because answering them requires a stable combination of course data (assignments, grades, announcements, files) rather than the contents of any one Canvas tab.
Beyond the quick actions, the assistant responds to open-ended chat through the input field. Responses are streamed token by token from the model, with citation chips attached underneath naming the specific course, file, or announcement each claim is drawn from.
Finally, the assistant maintains a persistent context indicator by reading the URL and Canvas's single-page-application state, so the current course, module, or assignment page is always recognizable in the header.
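A rough sketch of that URL-based context detection follows; the function and the shape of its return value are ours for illustration, not the exact code in the extension:

```javascript
// Recover the current Canvas context (course, module, assignment, ...)
// from the page's pathname. Canvas routes follow /courses/{id}/{section}/{item}.
function detectContext(pathname) {
  const m = pathname.match(
    /^\/courses\/(\d+)(?:\/(assignments|modules|pages|announcements)(?:\/([^/]+))?)?/
  );
  if (!m) return { type: 'dashboard' };
  const [, courseId, section, itemId] = m;
  if (section && itemId) {
    // Singularize: "assignments" -> "assignment", etc.
    return { type: section.replace(/s$/, ''), courseId, itemId };
  }
  if (section) return { type: section, courseId };
  return { type: 'course', courseId };
}
```

Because Canvas is a single-page application, this check has to re-run on history navigation (e.g. on `popstate` and after intercepted route changes), not just on page load.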
How we built it
The extension is packaged as a Manifest V3 Chrome extension and loaded as unpacked source. The main tools are the Chrome Extension APIs (chrome.action, chrome.runtime, chrome.tabs, chrome.scripting) and the browser's native DOM. We pull structured course data from Canvas's own authenticated REST endpoints, including /api/v1/courses, /api/v1/courses/{id}/assignments, /api/v1/announcements, /api/v1/courses/{id}/files, /api/v1/courses/{id}/modules, and /api/v1/courses/{id}/pages. The student's existing session cookies handle authorization, so no separate login is required.
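A minimal sketch of how such a fetch looks from the content script (the domain and `per_page` value are illustrative, and the real extension does more error handling and pagination):

```javascript
// Hypothetical Canvas host for illustration; in practice this comes from
// the page the content script is running on.
const CANVAS_ORIGIN = 'https://canvas.andover.edu';

// Build an authenticated Canvas REST URL like
// https://canvas.andover.edu/api/v1/courses/123/assignments?per_page=50
function apiUrl(path, params = {}) {
  const url = new URL(`/api/v1${path}`, CANVAS_ORIGIN);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

async function fetchAssignments(courseId) {
  // credentials: 'include' sends the student's existing Canvas session
  // cookies, so no separate login or API token is needed.
  const res = await fetch(apiUrl(`/courses/${courseId}/assignments`, { per_page: 50 }), {
    credentials: 'include',
    headers: { Accept: 'application/json' },
  });
  if (!res.ok) throw new Error(`Canvas API error: ${res.status}`);
  return res.json();
}
```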
We then render the chat UI ourselves. A restricted subset of Markdown (bold, italic, inline code, fenced code blocks) is parsed in-line; message streaming is implemented as a token-walk with variable delay; and the typing indicator, cursor, and gradient avatar are rendered without any framework. The entire interface is approximately 35 KB of hand-written JavaScript and CSS.
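The inline part of that restricted Markdown subset can be sketched as a few ordered replacements over escaped text (fenced code blocks are handled separately; the function names here are illustrative, not the extension's actual internals):

```javascript
// Escape first, so HTML from the model (or from quoted Canvas content)
// can never be injected into the sidebar's DOM.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

// Restricted inline Markdown: inline code, then bold, then italic.
// Order matters: code spans must be converted before * is interpreted.
function renderInline(text) {
  let html = escapeHtml(text);
  html = html.replace(/`([^`]+)`/g, '<code>$1</code>');
  html = html.replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>');
  html = html.replace(/\*([^*]+)\*/g, '<em>$1</em>');
  return html;
}
```

Keeping the subset this small is what lets the whole renderer stay framework-free and a few kilobytes in size.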
We process the prompts through a custom-tuned, privately run version of GPT-4o (the extension sends requests over the network to our server running the model). The model's response is streamed back to the sidebar, where it appears in real time alongside its citations.
Challenges we ran into
We encountered three main obstacles, each resolved in the course of the project.
First, Chrome Manifest V3 content-script CSS isolation is significantly harder than it first appears. Canvas injects aggressive global styles (wildcard * { box-sizing: ... }, large-scale resets, and universal button selectors), and without proper namespacing the assistant's layout collapsed on roughly one in three pages. To fix this, every class in the sidebar is prefixed with cai-, and a z-index of 2147483600 keeps it above Canvas's own modal dialogs.
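The namespacing pattern looks roughly like this (selectors abbreviated; a sketch, not the extension's actual stylesheet):

```css
/* Every sidebar class carries the cai- prefix so Canvas's global resets
   and universal selectors can't restyle the assistant. */
.cai-sidebar {
  all: initial;            /* drop styles inherited from the Canvas page */
  box-sizing: border-box;
  position: fixed;
  z-index: 2147483600;     /* above Canvas's own modal dialogs */
}
.cai-sidebar button.cai-quick-action {
  /* higher specificity than Canvas's bare `button { ... }` rules */
  font: inherit;
}
```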
Second, Chrome's match-pattern syntax does NOT accept top-level-domain wildcards. The initial manifest specified *://canvas.*/* to catch any institutional subdomain, and the extension refused to load: a wildcard is only allowed as the entire host or as a leading *. prefix, never in the TLD position. Each school's Canvas domain must therefore be added explicitly in content_scripts[].matches, host_permissions, and web_accessible_resources.matches. This is not a major concern when the extension targets a single school, as ours does (Andover), since that school's domain can be hard-coded in the manifest.
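Concretely, the per-school entries look like this (the domain and file names are illustrative):

```json
{
  "content_scripts": [{
    "matches": ["https://canvas.andover.edu/*"],
    "js": ["content.js"]
  }],
  "host_permissions": ["https://canvas.andover.edu/*"],
  "web_accessible_resources": [{
    "resources": ["sidebar.css"],
    "matches": ["https://canvas.andover.edu/*"]
  }]
}
```

Adding another school means duplicating its domain into all three arrays, which is the "one-line manifest edit" mentioned below.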
Third, Canvas's page markup is deeply nested, ARIA-inconsistent, and rendered asynchronously by a single-page-application router. Naive DOM scraping was therefore unreliable for extracting course titles and module structure. We moved to the REST API for structured data and retained the DOM only for lightweight context detection.
Accomplishments that we're proud of
The extension is approximately 35 KB and loads on any Canvas instance with a one-line manifest edit for the school's domain. It uses no third-party frameworks and has no external network dependencies beyond Canvas's own authenticated endpoints and our own model server. The assistant nevertheless performs competitively with production AI products deployed at much larger scale.
We're also proud of the modularity of the interface layer. The rendering layer, the prompt-assembly layer, and the model-client layer were deliberately separated, so that any one of the three can change (a new model, a different prompt strategy, a redesigned sidebar) without touching the others.
What we learned
We learned how to build a Chromium extension from scratch, including learning the Manifest V3 lifecycle (service workers, content scripts, message passing between them). We also learned how to host an LLM on our own Ubuntu server. This includes setting up the GPU drivers and CUDA toolkit, loading the model weights, running the inference process behind a small HTTP endpoint, and keeping the service alive through systemd. Operating our own model rather than calling a third-party API was more work upfront, but it gave us full control over prompt templates, rate limits, and (most importantly) cost.
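As a rough sketch of that last step, the inference process can be kept alive with a unit file along these lines (unit name, user, paths, and port are hypothetical, not our actual configuration):

```
# /etc/systemd/system/canvas-copilot-llm.service
[Unit]
Description=Canvas AI Copilot inference server
After=network-online.target

[Service]
User=llm
WorkingDirectory=/opt/canvas-copilot
ExecStart=/opt/canvas-copilot/venv/bin/python serve.py --port 8080
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

`Restart=always` is what keeps the endpoint up if the inference process crashes mid-generation.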
We also learned how to stream model output across the network. The extension opens a long-lived fetch against our server and reads the body as a stream, appending tokens to the DOM as they arrive. Getting this right required learning about Server-Sent Events, chunked transfer encoding, how to handle partial UTF-8 sequences across chunk boundaries, and how to cleanly cancel a stream when the user closes the sidebar mid-response.
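The core of that reader can be sketched as follows (the function names and endpoint shape are ours for illustration). The key detail is `TextDecoder` with `{ stream: true }`, which buffers a multi-byte UTF-8 sequence split across chunk boundaries instead of emitting replacement characters:

```javascript
// Accumulates byte chunks and emits decoded text, holding back any
// incomplete UTF-8 sequence until the next chunk arrives.
function makeTokenSink(onText) {
  const decoder = new TextDecoder('utf-8');
  return {
    push(chunk) { onText(decoder.decode(chunk, { stream: true })); },
    close() { onText(decoder.decode()); }, // flush any buffered bytes
  };
}

// Long-lived fetch against the model server; `signal` is an AbortSignal
// so closing the sidebar mid-response cancels the stream cleanly.
async function streamChat(url, body, onText, signal) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
    signal,
  });
  const reader = res.body.getReader();
  const sink = makeTokenSink(onText);
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    sink.push(value);
  }
  sink.close();
}
```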
What's next for Canvas AI Copilot
We will extend the assistant from a retrieval-only system to one capable of multi-turn tool use, so it can take light actions on the student's behalf (submitting a draft, marking an announcement as read, RSVPing to a Canvas calendar event) instead of only answering questions about Canvas state.
We will also extend integration to external calendars (Google Calendar, Outlook) and to cross-course scheduling checks, so that the "plan my day" workflow respects the student's real-time constraints and not just Canvas deadlines.
Built With
- css3
- html5
- javascript
- json
- openai