Project Story

Inspiration

The inspiration for this project came from the frustration of manually keeping track of completed and upcoming courses. After repeatedly navigating through a cluttered academic portal and copying course information by hand, it became clear that there needed to be a faster and more reliable way to gather and analyze this data. Automating the process and connecting it to a backend capable of generating personalized course suggestions felt like the natural next step.

What I Learned

Working on this project exposed me to multiple parts of the Chrome extension ecosystem and how they communicate with each other. I learned how messaging works between content scripts, background scripts, and frontend components, and how asynchronous communication requires careful handling. I also gained experience integrating a backend API and sending structured data through HTTP requests. Along the way, I strengthened my understanding of modern JavaScript, browser APIs, and debugging asynchronous flows.

How the Project Was Built

I started by writing a content script that scrapes course codes directly from the academic platform. Using DOM queries, I extracted relevant course identifiers, filtered out unwanted entries, and cleaned the data.
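A minimal sketch of that scraping step, assuming an illustrative `.course-code` selector and course-code regex (the real portal uses its own markup):

```javascript
// content-script.js — a minimal sketch; the ".course-code" selector and
// COURSE_PATTERN regex are illustrative names, not the portal's actual markup.
const COURSE_PATTERN = /^[A-Z]{2,4}\s?\d{3,4}$/; // e.g. "CS 101"

function scrapeCourseCodes() {
  const nodes = document.querySelectorAll(".course-code");
  const codes = Array.from(nodes)
    .map((node) => node.textContent.trim())
    .filter((text) => COURSE_PATTERN.test(text)); // filter out unwanted entries
  return [...new Set(codes)]; // de-duplicate cleaned codes
}
```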

The content script then sends the course list to the background script, which stores it temporarily. When the user requests suggestions, the background script forwards the stored course data to the backend API using a JSON POST request.
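In sketch form, assuming an illustrative `COURSES_SCRAPED` message type:

```javascript
// content-script.js — hand the scraped list to the background script.
chrome.runtime.sendMessage({
  type: "COURSES_SCRAPED", // illustrative message name
  courses: scrapeCourseCodes(),
});

// background.js — hold the latest course list in memory until requested.
let storedCourses = [];

chrome.runtime.onMessage.addListener((message) => {
  if (message.type === "COURSES_SCRAPED") {
    storedCourses = message.courses;
  }
});
```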

The backend receives the payload (a JSON object of the form { "courses": [...] }), processes it, runs it through a recommendation engine, and returns structured suggestions.
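The backend stack isn't detailed here; as one possible shape of that contract, a minimal Node/Express sketch with a stand-in recommendation function could look like this (the framework, route name, and recommend() stub are all assumptions for illustration):

```javascript
// server.js — illustrative Express sketch of the suggestions endpoint.
const express = require("express");
const app = express();
app.use(express.json());

// Hypothetical stand-in for the real recommendation engine.
function recommend(courses) {
  return courses.map((code) => `Next step after ${code}`);
}

app.post("/suggestions", (req, res) => {
  const { courses } = req.body; // e.g. { "courses": ["CS 101", "MATH 200"] }
  res.json({ suggestions: recommend(courses) });
});

app.listen(3000);
```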

The popup UI then requests these suggestions from the background script, receives the processed result, and displays it to the user.
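A sketch of that popup request, with `GET_SUGGESTIONS` and the `#suggestions` element as illustrative names:

```javascript
// popup.js — ask the background script for suggestions and render them.
chrome.runtime.sendMessage({ type: "GET_SUGGESTIONS" }, (response) => {
  if (!response || response.error) return; // nothing to render yet
  const list = document.getElementById("suggestions");
  for (const suggestion of response.suggestions) {
    const item = document.createElement("li");
    item.textContent = suggestion;
    list.appendChild(item);
  }
});
```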

Overall, the architecture follows a clean flow:

1. Content Script: Scrape and collect data
2. Background Script: Store, relay, and make API calls
3. Backend: Process input and compute recommendations
4. Popup: Display the final results

Challenges Faced

One major challenge was dealing with asynchronous communication inside Chrome extensions. Because background scripts cannot simply return data from an async call, I needed to use sendResponse correctly and signal Chrome to keep the messaging channel open. Forgetting to return true at the right time caused several early failures.
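The fix, in sketch form (with `fetchSuggestions` being the async helper sketched further below):

```javascript
// background.js — the pattern that fixed the early failures: start the async
// work, call sendResponse when it finishes, and return true synchronously
// so Chrome keeps the message channel open.
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message.type === "GET_SUGGESTIONS") {
    fetchSuggestions(storedCourses)
      .then((suggestions) => sendResponse({ suggestions }))
      .catch((error) => sendResponse({ error: error.message }));
    return true; // without this, sendResponse fires after the channel closes
  }
});
```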

Another challenge was ensuring consistent scraping. The target website was slow to load at times, so I had to implement strategies to wait for the DOM to fully render before reading values.
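One such strategy is a MutationObserver that resolves once the course elements appear; this sketch reuses the illustrative `.course-code` selector from above:

```javascript
// content-script.js — wait for slow-loading content before scraping.
function waitForCourses(timeoutMs = 10000) {
  return new Promise((resolve, reject) => {
    if (document.querySelector(".course-code")) {
      return resolve(scrapeCourseCodes()); // already rendered
    }
    const observer = new MutationObserver(() => {
      if (document.querySelector(".course-code")) {
        observer.disconnect();
        resolve(scrapeCourseCodes());
      }
    });
    observer.observe(document.body, { childList: true, subtree: true });
    setTimeout(() => {
      observer.disconnect();
      reject(new Error("Timed out waiting for course elements"));
    }, timeoutMs);
  });
}
```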

Integrating the backend was equally tricky. Ensuring the fetch call had the correct headers, body, and URL required careful testing. CORS issues, local server availability, and handling unexpected responses also contributed to the debugging cycle.
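A sketch of that fetch call with explicit headers and basic error handling; the localhost URL is an assumption standing in for the real endpoint:

```javascript
// background.js — POST the stored courses to the backend as JSON.
async function fetchSuggestions(courses) {
  const response = await fetch("http://localhost:3000/suggestions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ courses }),
  });
  if (!response.ok) {
    throw new Error(`Backend returned ${response.status}`);
  }
  const data = await response.json();
  return data.suggestions;
}
```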
