The problem

Class registration can be tough for students: choosing the right professor, section, and course depends on which difficulty level fits the student's schedule and capacity for time commitment. The TAMU registrar lets students dig through vast numbers of PDF files for information about classes and professors, but this takes a while and yields little benefit if students don't know where to look.

The idea

Our idea was to build a web application that gives Texas A&M students detailed reports on classes and professors at TAMU. We wanted an easy-to-use interface and a wide range of data for each professor and class, and we wanted the app to work on both iOS and Android so it could run on the vast majority of smartphones. By laying out grade distributions, class sizes, sections taught, number of Q-drops, and average GPA, the app lets students make educated decisions about which classes and professors to sign up for. It would give students a better-prepared registration experience, which ultimately might help reduce the number of Q-drops caused by heavy course loads and unexpectedly difficult classes or professors.

How we built it

We built the Aggie Analytics front end with Ionic and Angular. The back end was developed with Node.js and Express.js and hosted on Amazon Web Services, and the MySQL database was hosted on GoDaddy. We pulled class information from the Texas A&M Registrar's PDFs by scraping them with Python. One challenge was not having direct access to the registrar database for accurate grade distributions (number of A's, B's, etc.) for each professor and course. Using a handful of Python libraries, mainly PyPDF2, a MySQL client library, and regular expressions, we downloaded the PDFs from the registrar website and parsed them ourselves. A set of regular expressions (patterns that match and extract specific pieces of text) let us pull out all of the grade data, course information, and professor info, which we then uploaded to a SQL database for the front end to consume. The front-end interface lets a student search for courses and view the grade distributions for each course and professor.
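As a rough sketch of that pipeline, assuming PyPDF2 and mysql-connector-python (the registrar URL, PDF row format, regular expression, table schema, and credentials below are illustrative assumptions, not the exact ones we used):

# Sketch of the scraping pipeline: download a registrar grade-report PDF,
# extract its text with PyPDF2, parse rows with a regular expression, and
# store the results in MySQL. The URL, row format, regex, schema, and
# credentials are hypothetical placeholders, not the real registrar layout.
import io
import re

import requests
import mysql.connector
from PyPDF2 import PdfReader

PDF_URL = "https://registrar.example.tamu.edu/grade-report.pdf"  # placeholder

# Hypothetical row shape: "CSCE 121 501  45 30 10 5 2  3.41  SMITH J"
ROW_RE = re.compile(
    r"(?P<dept>[A-Z]{3,4})\s+(?P<course>\d{3})\s+(?P<section>\d{3})\s+"
    r"(?P<a>\d+)\s+(?P<b>\d+)\s+(?P<c>\d+)\s+(?P<d>\d+)\s+(?P<f>\d+)\s+"
    r"(?P<gpa>\d\.\d{2})\s+(?P<prof>[A-Z'\- ]+)"
)

def extract_rows(pdf_bytes):
    """Yield one dict per grade-distribution row found in the PDF text."""
    reader = PdfReader(io.BytesIO(pdf_bytes))
    for page in reader.pages:
        text = page.extract_text() or ""
        for match in ROW_RE.finditer(text):
            yield match.groupdict()

def main():
    pdf_bytes = requests.get(PDF_URL, timeout=30).content
    conn = mysql.connector.connect(
        host="localhost", user="aggie", password="secret",
        database="aggie_analytics",
    )
    cur = conn.cursor()
    for row in extract_rows(pdf_bytes):
        cur.execute(
            "INSERT INTO grade_distributions "
            "(dept, course, section, a, b, c, d, f, gpa, professor) "
            "VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
            (row["dept"], row["course"], row["section"], row["a"], row["b"],
             row["c"], row["d"], row["f"], row["gpa"], row["prof"].strip()),
        )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    main()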

Challenges we ran into

The first challenge we ran into was figuring out how to actually get the data out of the registrar PDFs. Another big one was connecting the full-stack architecture of the project, from the back-end server setup to the front-end app interface. While we had done some brainstorming and planning before the hackathon, we had not grasped the full scope of the project, and we had to spend part of the hackathon on additional planning. Getting a local server set up for testing our database operations was easy; migrating the database to a hosting provider, however, proved to be a challenge we will remember and learn from.
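One habit that would have made the migration easier, sketched below under the assumption of mysql-connector-python (the environment variable names and defaults are hypothetical, not our actual configuration), is reading connection settings from the environment so the same code can point at either the local test server or the hosted database without edits:

# Sketch: read MySQL connection settings from environment variables so the
# same code can target a local test database or the hosted one. The variable
# names and defaults are hypothetical, not our actual configuration.
import os

import mysql.connector

def get_connection():
    """Connect to whichever MySQL server the environment points at."""
    return mysql.connector.connect(
        host=os.environ.get("DB_HOST", "localhost"),
        port=int(os.environ.get("DB_PORT", "3306")),
        user=os.environ.get("DB_USER", "aggie"),
        password=os.environ.get("DB_PASSWORD", ""),
        database=os.environ.get("DB_NAME", "aggie_analytics"),
    )

if __name__ == "__main__":
    conn = get_connection()
    print("Connected:", conn.is_connected())
    conn.close()

With something like this, switching from the local setup to the hosted database comes down to changing environment variables rather than editing code.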

Accomplishments that we’re proud of

Working without access to the official registrar database proved to be a challenge, but we found inventive ways around it by scraping the data with Python. Perfecting the PDF scraping took many hours of trial and error while testing regular expressions, but we were eventually able to extract nearly all of the data from the PDF files and make it usable in our project.
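That trial-and-error loop boiled down to keeping a few sample lines copied from the extracted PDF text and re-checking the candidate pattern against them after each tweak; the sample lines and pattern below are hypothetical, matching the earlier sketch rather than the real registrar format:

# Quick regex test harness: check a candidate pattern against sample lines
# copied from the extracted PDF text. The samples and the pattern are
# hypothetical and only illustrate the trial-and-error workflow.
import re

SAMPLES = [
    "CSCE 121 501  45 30 10 5 2  3.41  SMITH J",
    "MATH 151 503  120 80 40 10 5  3.12  DOE JANE",
]

ROW_RE = re.compile(
    r"(?P<dept>[A-Z]{3,4})\s+(?P<course>\d{3})\s+(?P<section>\d{3})\s+"
    r"(?P<a>\d+)\s+(?P<b>\d+)\s+(?P<c>\d+)\s+(?P<d>\d+)\s+(?P<f>\d+)\s+"
    r"(?P<gpa>\d\.\d{2})\s+(?P<prof>[A-Z'\- ]+)"
)

for line in SAMPLES:
    match = ROW_RE.search(line)
    print("OK  " if match else "MISS", line)
    if match:
        print("    ->", match.groupdict())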

What we learned

A big takeaway from this hackathon was that we all needed a refresher on back-end setup and programming. It was certainly our weak point, as we felt more comfortable working on the front-end UX and UI. Going forward, we will focus on finding better solutions for database hosting and on brushing up our database programming skills. Overall, though, we really enjoyed ourselves at this hackathon and appreciated how well organized the event was.
