Inspiration

The motivation behind our project stems from widespread issues we noticed with our school's course registration and class-planning experience. Factors such as the decentralization of course details, the considerations behind when to take particular courses, and the sheer number of courses make it difficult to plan ahead for all four years.

What it does

Our project takes user information, including their department, major, desired number of quarters, technical breadth area, and similar applicable fields, and uses it to generate the list of courses they need to take along with an optimized schedule. We also recommend whether to enroll in each course during first pass or second pass, eliminating the stress of missing a class because demand was too high and you didn't know it.
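
As a rough sketch of the idea (the course names, unit cap, and demand threshold below are illustrative assumptions, not our exact implementation): courses are packed into quarters only after their prerequisites have been placed, and historically high-demand courses get flagged for first pass.

```python
# Hypothetical sketch of quarter packing and first-pass/second-pass flagging.
from dataclasses import dataclass, field

@dataclass
class Course:
    code: str
    units: int
    prereqs: list = field(default_factory=list)
    demand: float = 0.0  # fraction of seats filled after first pass in past years

def plan_quarters(courses: dict[str, Course], per_quarter_units: int = 16) -> list[list[str]]:
    """Greedily pack courses into quarters, only scheduling a course once
    all of its prerequisites have been placed in an earlier quarter."""
    taken: set[str] = set()
    plan: list[list[str]] = []
    remaining = dict(courses)
    while remaining:
        quarter, units = [], 0
        for code, course in list(remaining.items()):
            if set(course.prereqs) <= taken and units + course.units <= per_quarter_units:
                quarter.append(code)
                units += course.units
                del remaining[code]
        if not quarter:        # unsatisfiable prerequisites; stop rather than loop forever
            break
        plan.append(quarter)
        taken.update(quarter)  # these courses can serve as prereqs next quarter
    return plan

def pass_recommendation(course: Course) -> str:
    # Historically high demand -> grab the seat on first pass.
    return "first pass" if course.demand > 0.8 else "second pass"
```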

How we built it

We built our frontend as a Flask web app with an HTML/CSS interface. We used Python scripts with Selenium and BeautifulSoup to scrape course data and enrollment demand from several sites, then processed that data to build a customized schedule for the user.
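
Roughly, the pipeline looks like the sketch below. The URL, CSS selectors, form fields, and template name are placeholders rather than our exact code, and the real scrapers also lean on Selenium for pages that only render with JavaScript.

```python
# Simplified sketch: scrape catalog data with BeautifulSoup, serve a plan with Flask.
import requests
from bs4 import BeautifulSoup
from flask import Flask, render_template, request

app = Flask(__name__)

def scrape_courses(catalog_url: str) -> list[dict]:
    """Pull course code, title, and unit count out of a catalog page."""
    soup = BeautifulSoup(requests.get(catalog_url).text, "html.parser")
    courses = []
    for block in soup.select("div.course"):  # selector is illustrative
        courses.append({
            "code": block.select_one(".course-code").get_text(strip=True),
            "title": block.select_one(".course-title").get_text(strip=True),
            "units": int(block.select_one(".course-units").get_text(strip=True)),
        })
    return courses

def build_schedule(major: str, quarters: int) -> list[list[str]]:
    # Placeholder: the real version combines scraped requirements with the
    # quarter-packing logic sketched earlier.
    return [[] for _ in range(quarters)]

@app.route("/plan", methods=["POST"])
def plan():
    # Form fields mirror the inputs described above (major, number of quarters, etc.).
    major = request.form["major"]
    quarters = int(request.form["quarters"])
    schedule = build_schedule(major, quarters)
    return render_template("plan.html", schedule=schedule)
```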

Challenges we ran into

We faced numerous challenges at every step, in particular with the web-scraping process and with combining our individual programs into one. The data we used came from many sites and was not formatted uniformly across departments, or even across majors within the same department, which made it hard to find reliable patterns for organizing the information. Another major challenge was incorporating each of our programs, each designed to solve a smaller problem, into one larger program, as we had to connect code written in different languages and design the algorithm that generates the course schedule.
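
For a flavor of the kind of normalization this required (the aliases and formats below are illustrative, not an exhaustive list): different pages might write the same course as "CS32" or "Com Sci 32 - Data Structures", so everything gets mapped to one canonical "DEPT NUMBER" form before the tables are merged.

```python
# Hypothetical normalization helper for course codes scraped from different pages.
import re

DEPT_ALIASES = {"CS": "COM SCI"}  # assumed alias table

def normalize_course_code(raw: str) -> str | None:
    match = re.match(r"\s*([A-Za-z &]+?)\s*(\d+[A-Z]?)\b", raw)
    if not match:
        return None
    dept = match.group(1).strip().upper()
    number = match.group(2).upper()
    return f"{DEPT_ALIASES.get(dept, dept)} {number}"

assert normalize_course_code("CS32") == "COM SCI 32"
assert normalize_course_code("Com Sci 32 - Data Structures") == "COM SCI 32"
```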

Integrating all of our code, the first pass/second pass enrollment data scraping, the course and major description data, and the frontend, was especially tough, since this was our first time using any of these web tools (Selenium, BeautifulSoup, Flask).

Accomplishments that we're proud of

We were able to complete the web-scraping process and sort the vast majority of the data we would need: major requirements, course prerequisites, unit counts, and even enrollment data from past years that would inform a user's decision on when to take a specific course. Unifying the data in this way was an arduous process, and its completion is one of the elements of our project we are proudest to present.
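
For illustration, the unified table ends up looking something like this; the column names and numbers here are made up, not our actual dataset.

```python
# Hypothetical unified table: one row per course, joining catalog details
# with past first-pass enrollment demand.
import pandas as pd

catalog = pd.DataFrame({
    "course": ["COM SCI 31", "COM SCI 32", "MATH 31A"],
    "units": [4, 4, 4],
    "prereqs": [[], ["COM SCI 31"], []],
})
enrollment = pd.DataFrame({
    "course": ["COM SCI 31", "COM SCI 32", "MATH 31A"],
    "pct_full_after_first_pass": [0.95, 0.88, 0.60],  # illustrative numbers
})

unified = catalog.merge(enrollment, on="course", how="left")
print(unified)
```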

What we learned

We learned that the web-scraping process, and each of our individual programs, needed to be designed with the larger structure in mind, so that they could more easily be rolled into the single program that ultimately implements the algorithm to create course schedules based on user input.

The actual code design also became very complex when mapping courses to their prerequisites. An upper-division course may map to two or three prereqs, and those prereqs map to more prereqs, all of which we have to follow and track in efficient data structures (pandas DataFrames).
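
A minimal sketch of what following those chains looks like, assuming a DataFrame indexed by course with a list-valued prereqs column (the courses shown are just examples):

```python
# Hypothetical prerequisite-chain traversal over a pandas DataFrame.
import pandas as pd

prereq_df = pd.DataFrame({
    "course":  ["COM SCI 180", "COM SCI 32", "COM SCI 31", "MATH 61"],
    "prereqs": [["COM SCI 32", "MATH 61"], ["COM SCI 31"], [], []],
}).set_index("course")

def transitive_prereqs(course: str, seen: set[str] | None = None) -> set[str]:
    """Collect every course reachable through the prerequisite chain."""
    seen = set() if seen is None else seen
    for prereq in prereq_df.loc[course, "prereqs"]:
        if prereq not in seen:
            seen.add(prereq)
            transitive_prereqs(prereq, seen)
    return seen

print(transitive_prereqs("COM SCI 180"))
# -> {'COM SCI 32', 'MATH 61', 'COM SCI 31'} (set order may vary)
```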

What's next for OptiCourse

The next step for OptiCourse is to expand the scheduling compatibility to other schools, enabling thousands more to utilize OptiCourse to create their own four-year plans stress-free. An immediate next step, for example, might involve drawing data from the various sites in California that map credit transfers from community colleges to one another and to UC schools. Additionally, we will make the frontend aesthetically sleeker.
