Inspiration
Clarus started from a simple frustration: students spend more time figuring out what to do next than actually doing the work. In Brightspace, deadlines are spread across assignments, quizzes, content, and calendar pages, and it is easy to miss critical details or underestimate how heavy a week will be. We wanted to build an AI control center that turns scattered LMS data into one clear execution plan.
Our core idea was to reduce academic friction by answering three questions in real time:
- What is most important right now?
- How heavy is this week compared to upcoming weeks?
- Where exactly should I go in Brightspace to complete the next step?
That led to Clarus: an AI academic copilot that syncs LMS data, forecasts workload, prioritizes tasks, and gives students a concrete daily plan.
What it does
Clarus connects to D2L/Brightspace and provides:
- Weekly Workload Forecast with severity levels (light, moderate, heavy, critical)
- Timeline Intelligence for upcoming deadlines and events
- Study Plan Optimizer that turns tasks into practical study blocks
- Prioritization Engine that highlights the highest-leverage task
- Copilot Chat for natural-language planning and guidance
- Assignment/Quiz/Content overview links so students can jump directly to the right place
In short, Clarus transforms LMS data into an actionable "do this next" system.
How we built it
Architecture
- Frontend: Next.js + React + TypeScript + Tailwind
- Backend API: Fastify + TypeScript
- Database: PostgreSQL + Prisma
- Connector Service: Playwright-based integration for Brightspace session workflows
- Data flow:
D2L -> connector -> API normalization -> timeline/workload models -> UI + copilot
System design highlights
Data ingestion and normalization
- Synced courses and timeline events from Brightspace.
- Normalized heterogeneous date sources into a shared event model with explicit dateKind values (due, event, start, end).
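The shared event model described above can be sketched as follows. This is a minimal, hypothetical version of the normalization step: the type and function names (`TimelineEvent`, `normalizeItem`) and the raw field names are illustrative, not the actual Clarus schema.

```typescript
// Illustrative sketch of normalizing heterogeneous Brightspace date
// fields into one event model with an explicit dateKind tag.
type DateKind = "due" | "event" | "start" | "end";

interface TimelineEvent {
  courseId: string;
  title: string;
  date: Date;
  dateKind: DateKind;
}

// Different Brightspace tools expose different date fields; pick the
// most deadline-like one available and record which kind it was.
function normalizeItem(raw: {
  courseId: string;
  title: string;
  dueDate?: string;
  endDate?: string;
  startDate?: string;
}): TimelineEvent | null {
  const base = { courseId: raw.courseId, title: raw.title };
  if (raw.dueDate) return { ...base, date: new Date(raw.dueDate), dateKind: "due" };
  if (raw.endDate) return { ...base, date: new Date(raw.endDate), dateKind: "end" };
  if (raw.startDate) return { ...base, date: new Date(raw.startDate), dateKind: "start" };
  return null; // undated items are excluded from the timeline
}
```

Keeping `dateKind` explicit (rather than collapsing everything into one "deadline" field) is what later makes it possible to filter end windows out of the workload forecast.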
Forecasting model
- Built a 4-week workload forecast from event buckets.
- Computed weekly feature vectors:
- assessment count
- total estimated hours
- complexity mix
- deadline clustering
- overlap score
- Derived severity classification using total estimated hours.
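The feature-vector and severity steps above might look roughly like this. The thresholds and hour values here are illustrative placeholders, not the tuned Clarus numbers, and the function names are made up for the sketch.

```typescript
// Illustrative sketch: bucket a week's events into a feature vector,
// then derive severity from total estimated hours.
type Severity = "light" | "moderate" | "heavy" | "critical";

interface WeekFeatures {
  assessmentCount: number;
  totalEstimatedHours: number;
  overlapScore: number; // days with multiple deadlines clustered together
}

// Severity is derived primarily from total estimated hours
// (threshold values here are placeholders).
function classifySeverity(f: WeekFeatures): Severity {
  if (f.totalEstimatedHours >= 20) return "critical";
  if (f.totalEstimatedHours >= 12) return "heavy";
  if (f.totalEstimatedHours >= 5) return "moderate";
  return "light";
}

// Build one week's feature vector from the events falling inside it.
function weeklyFeatures(
  events: { date: Date; estimatedHours: number }[],
  weekStart: Date
): WeekFeatures {
  const weekEnd = new Date(weekStart.getTime() + 7 * 24 * 60 * 60 * 1000);
  const inWeek = events.filter(e => e.date >= weekStart && e.date < weekEnd);
  const byDay = new Map<string, number>();
  for (const e of inWeek) {
    const day = e.date.toISOString().slice(0, 10);
    byDay.set(day, (byDay.get(day) ?? 0) + 1);
  }
  const overlapScore = Array.from(byDay.values()).filter(n => n > 1).length;
  return {
    assessmentCount: inWeek.length,
    totalEstimatedHours: inWeek.reduce((s, e) => s + e.estimatedHours, 0),
    overlapScore,
  };
}
```

Running this per week over the next four weeks yields the 4-week forecast; the feature vector is also what backs the "why is this week heavy" explanations.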
Execution features
- Generated weekly and daily study recommendations.
- Added clear "open task / open resources / dropbox-rubric" actions.
- Implemented copilot thread/message flows for planning Q&A.
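A study-plan optimizer of the kind described above could be sketched as a greedy planner: earliest-due tasks first, split into short blocks, filling days up to a budget. Everything here (`planWeek`, the 2-hour block cap, the 4-hour daily budget) is an assumed heuristic for illustration, not Clarus's actual algorithm.

```typescript
// Illustrative greedy study-block planner. Block length and ordering
// heuristics are assumptions, not the real Clarus implementation.
interface Task { title: string; estimatedHours: number; due: Date; }
interface StudyBlock { title: string; hours: number; day: number; } // day 0..6

function planWeek(tasks: Task[], dailyBudget = 4): StudyBlock[] {
  const blocks: StudyBlock[] = [];
  const used = new Array(7).fill(0);
  // Highest-leverage first: here approximated as earliest due date.
  const sorted = [...tasks].sort((a, b) => a.due.getTime() - b.due.getTime());
  for (const t of sorted) {
    let remaining = t.estimatedHours;
    for (let day = 0; day < 7 && remaining > 0; day++) {
      const free = dailyBudget - used[day];
      if (free <= 0) continue;
      const hours = Math.min(2, free, remaining); // cap blocks at 2h
      blocks.push({ title: t.title, hours, day });
      used[day] += hours;
      remaining -= hours;
    }
  }
  return blocks;
}
```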
Reliability and UX iteration
- Fixed chat auto-scroll so assistant responses stay in view.
- Refined schedule UX from a confusing dual mode to a cleaner daily-first flow.
- Restored workload bar chart visibility by fixing layout constraints.
- Reverted a regression in workload input filtering to restore expected behavior.
Challenges we faced
1. LMS date semantics are inconsistent
Different Brightspace tools expose different date fields, and they do not always mean the same thing. Some items expose meaningful due dates; others have end windows that can look deadline-like.
- We initially widened workload ingestion to include end dates.
- That changed forecast behavior and produced heavier-than-expected weeks in some scenarios.
- We traced and reverted that regression to restore the previous, expected workload behavior.
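The restored behavior amounts to filtering the event stream by its date kind before it reaches the forecast. This is a hypothetical sketch (the `workloadEvents` name and the event shape are illustrative), assuming events carry the explicit dateKind tag from the normalization step:

```typescript
// Illustrative sketch: only genuine deadlines feed the workload forecast.
// Including "end" windows inflated weekly totals, so they are excluded.
type DateKind = "due" | "event" | "start" | "end";
interface TimelineEvent { title: string; dateKind: DateKind; estimatedHours: number; }

function workloadEvents(events: TimelineEvent[]): TimelineEvent[] {
  return events.filter(e => e.dateKind === "due");
}
```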
2. Converting data into trustworthy recommendations
Forecasting is only useful if students trust it. We had to tune:
- estimated-hours defaults by assessment type,
- severity thresholds,
- and the explanations shown in UI (e.g., why a week is marked heavy).
We focused on explainability and practical actions rather than opaque scoring.
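The estimated-hours defaults and explainability points above can be sketched together: a per-type lookup table plus a reason string that travels with every estimate. The hour values and names (`DEFAULT_HOURS`, `estimate`) are illustrative assumptions, not the tuned Clarus defaults.

```typescript
// Illustrative sketch: per-assessment-type hour defaults, with an
// explanation attached so the UI can justify every number it shows.
type AssessmentType = "assignment" | "quiz" | "exam" | "reading";

const DEFAULT_HOURS: Record<AssessmentType, number> = {
  assignment: 3,
  quiz: 1,
  exam: 5,
  reading: 1.5,
};

function estimate(type: AssessmentType, override?: number): { hours: number; reason: string } {
  if (override !== undefined) {
    return { hours: override, reason: "instructor-provided estimate" };
  }
  return { hours: DEFAULT_HOURS[type], reason: `default for ${type}` };
}
```

Pairing every estimate with its reason is what keeps the severity labels from feeling like opaque scoring.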
3. UI density vs clarity
The dashboard has a lot of information. We repeatedly simplified controls:
- removed confusing summary panels,
- removed non-functional start-session actions,
- and made schedule navigation more direct.
The goal was to keep decisions fast for users under time pressure.
What we learned
- Data contracts matter more than visuals. Small backend interpretation changes can dramatically alter user trust in AI outputs.
- Forecasting UX must be explainable. Severity without context feels random; feature-vector details improve confidence.
- Polish is not superficial. Auto-scroll, clear navigation, and visible charts materially change product usability.
- Integration-first products need observability. When models depend on upstream LMS data quality, debugging paths and clear route boundaries are essential.
Impact
Clarus helps students move from "I am overwhelmed" to "I know exactly what to do next."
Instead of manually triangulating assignments, due dates, and course content, students get:
- a workload outlook,
- a prioritized plan,
- and direct links to the right Brightspace destinations.
This is the foundation for a more autonomous academic copilot that can adapt as workload and behavior change over time.
Next steps
- Add smarter effort estimation using richer assignment/rubric signals.
- Improve deduplication and confidence scoring across mixed LMS date types.
- Expand copilot reasoning over historical completion behavior.
- Add proactive alerts for forecast shifts after sync updates.
Built With
- d2l/brightspace-valence-apis
- docker
- fastify
- javascript
- next.js
- node.js
- playwright
- postgresql
- prisma
- react
- rest
- tailwind-css
- typescript