CMU Course Reviews is a hackathon-built frontend demo for a CMU course review platform that combines two core ideas: anonymous but verified student feedback and AI-powered academic integrity violation (AIV) checks. The goal is to simulate what a real production system might feel like while remaining entirely frontend-only, with all data held in memory.

The project was inspired by how CMU students actually choose classes: Discord, GroupMe, friends, and scattered spreadsheets. Existing review sites don’t capture CMU’s specific rigor, and they also raise concerns about students accidentally posting exam questions, homework solutions, or project code. We wanted a space where students could be honest without being exposed, and instructors could trust that academic integrity is respected.

Our design centers on a verified-but-anonymous model. Students register with a CMU email (@andrew.cmu.edu or @cmu.edu), complete a simulated email verification flow, and then post only under the label “Verified CMU Student.” Conceptually, we think of trust as depending on both verification and anonymity, something like Trust ≈ f(Verification, Anonymity), and we try to maximize both at once.
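The domain check at the heart of registration reduces to a small helper. A sketch in TypeScript (the function and constant names here are ours, not necessarily the demo's actual code):

```typescript
// Accepted CMU domains, per the registration flow described above.
const CMU_DOMAINS = ["andrew.cmu.edu", "cmu.edu"];

// Hypothetical helper: true iff the address has a non-empty local part
// and exactly one of the accepted CMU domains.
function isCmuEmail(email: string): boolean {
  const at = email.lastIndexOf("@");
  if (at <= 0) return false; // no "@", or empty local part
  const domain = email.slice(at + 1).toLowerCase();
  return CMU_DOMAINS.includes(domain);
}
```

Matching on the full domain, rather than a substring, avoids accepting look-alikes such as `cmu.edu.evil.com`.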

We built the app as a pure frontend single-page application using React 18 + TypeScript, Vite for fast builds, Tailwind CSS for styling, React Router for navigation, and Lucide React for icons. All data—users, reviews, questions, and upvotes—is stored in React context, so everything resets on refresh, which is ideal for a safe and portable demo.
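Stripped of the React layer, the in-memory model amounts to plain objects plus immutable updates. A minimal sketch (the type and function names are illustrative, not the actual code):

```typescript
// Illustrative shape of one review in the in-memory store.
interface Review {
  id: string;
  courseId: string;
  body: string;
  difficulty: number;  // 1-5 rating
  workload: number;    // 1-5 rating
  usefulness: number;  // 1-5 rating
  upvotes: number;
}

interface AppState {
  reviews: Review[];
}

// A fresh store is empty, which is why a page refresh resets everything.
function createStore(): AppState {
  return { reviews: [] };
}

// Immutable update, as React context/state consumers expect:
// the old state object is left untouched.
function addReview(state: AppState, review: Review): AppState {
  return { ...state, reviews: [...state.reviews, review] };
}
```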

The user flow follows a full product story: landing page → registration with CMU email → simulated email verification → login → course browsing. Once logged in, students can search and filter courses, view details, and see per-course ratings on difficulty, workload, and usefulness.

Each course page shows aggregated metrics along with individual anonymous reviews and a Q&A section where students can ask and answer questions. Upvoting highlights helpful reviews and answers, making the platform feel more like a living, student-driven resource rather than a static catalog.
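The per-course metrics reduce to averaging each rating dimension over that course's reviews. A hypothetical version (names are ours):

```typescript
// Average of a list of numbers; empty lists yield 0 so a course
// with no reviews still renders sane metrics.
function average(values: number[]): number {
  if (values.length === 0) return 0;
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

interface Ratings {
  difficulty: number;
  workload: number;
  usefulness: number;
}

// Aggregate the three rating dimensions shown on a course page.
function courseAverages(reviews: Ratings[]): Ratings {
  return {
    difficulty: average(reviews.map(r => r.difficulty)),
    workload: average(reviews.map(r => r.workload)),
    usefulness: average(reviews.map(r => r.usefulness)),
  };
}
```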

The most important layer is the AI AIV simulation. Every time a user submits a review or answer, the text is passed through a moderation function that estimates a risk score R(s) in [0, 1] for a submission s. If R(s) > τ for some threshold τ, the content is blocked or flagged with a warning as a likely academic integrity violation. In our hackathon demo, R(s) is implemented via rule-based checks (e.g., patterns suggesting exam questions, full solutions, or code snippets), but the interface is designed so a real model can be plugged in later.
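A minimal sketch of what such a rule-based R(s) could look like; the patterns, weights, and threshold below are illustrative stand-ins, not the demo's actual rules:

```typescript
// Each rule contributes a fixed weight to the risk score
// when its pattern matches the submission text.
const RULES: { pattern: RegExp; weight: number }[] = [
  { pattern: /\bexam\b.*\bquestion\b/i, weight: 0.5 },           // looks like a leaked exam question
  { pattern: /\b(full |complete )?solutions?\b/i, weight: 0.3 }, // posted solutions
  { pattern: /\bdef\s|\bfunction\s*\(|#include\b/, weight: 0.4 } // code-snippet hints
];

// R(s): sum of matched rule weights, clamped to [0, 1].
function riskScore(s: string): number {
  const raw = RULES.reduce((acc, r) => acc + (r.pattern.test(s) ? r.weight : 0), 0);
  return Math.min(1, raw);
}

const TAU = 0.5; // threshold above which content is blocked or warned

function isLikelyAIV(s: string): boolean {
  return riskScore(s) > TAU;
}
```

Because `riskScore` and `isLikelyAIV` take a plain string and return plain values, the same interface could later be backed by a real model call without changing any callers.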

One major challenge was making the AI moderation feel realistic without any backend or actual ML model. We had to carefully choose rules and copy so that the system both teaches users what counts as an AIV and feels like a genuine AI gatekeeper, not just a keyword filter. Another challenge was explaining anonymity, verification, and AIV clearly in the UI without overwhelming users.

Through this project, we learned how powerful it is to treat AI as a governance tool instead of just a convenience feature. We also gained experience designing for the tension between anonymity and verification, and architecting a frontend so that future AI services can be cleanly integrated.

In short, CMU Course Reviews shows how AI-powered AIV checks and verified anonymity can coexist in a course review platform, enabling honest student voices while protecting academic integrity and instructor trust.

Built With

  • claude