"The fact is that approximately 53% of college graduates are unemployed or working in a job that doesn't require a bachelor's degree" --- quoted from an article. We realize that there must be something we can do to help the college students grow their professional skills and further improve the student community employment rate after graduation.

Our school always encourages us to explore ourselves and try new things. For a college student, one of the most common of these is a _major transfer_. However, getting a job after graduation is not easy for a student who has transferred majors. Having taken only entry-level courses, without a great GPA or much industry experience, it is quite hard for _transfers_ to find an internship or job over the summer. With little to put on their resumes, they will hardly find a job after graduation.

To search for jobs and internships at UC Davis, we used to have _Aggie Job Link_, which was replaced by _Handshake_ this April. However, due to the platform's high entry bar and their lack of industry experience, many transfer students may not qualify for most of the jobs.

If there is a chance to improve the student community and career education, we want to propose and build a career training platform that connects entry-level students with professional training programs from industry. Students can meet and work with industry professionals, learn practical skills from their mentors, and build up their resumes. Industry organizations can promote their public reputation among student groups, establish leadership programs for employees to practice their training/management skills, discover talent in advance, and, most importantly, help the community grow more talent on campus. It is a win-win for both parties.

What it does

The application is divided into three entry points.

For students: Students can sign up/log in to the application and apply for the training programs they like by providing some basic personal information and a short statement. On the announcement day, they will be notified whether they have been accepted by the program. To impress the interviewers, students should include their past experience, target achievements, and reasons for applying in their statements.

For training organizations: Teams from different organizations can sign up/log in to the application to create posts for their training programs. The description should include the tech stack the team uses, the related major/industry, and extra requirements if any (e.g., return service).

For the platform admin: The admin monitors and manages the student-company connections, suggests well-fitted students to the organizations, and offers the best-fit solution across all companies each season by applying our matching algorithms.

_In this hackathon, we started with the admin platform._

How we built it


We use MongoDB (NoSQL) with five collections to store documents for programs, students, applications, headcounts, and options.

Schema for programs (though NoSQL is schema-less, we still want to structure the data :-): id, application_deadline, application_progress, company_introduction (crawler), company_logo (crawler), company_name (crawler), company_website (crawler), contact_email, contact_number, contact_person, minimum_degree, prefer_majors, program_capacity, program_city, program_description (crawler), program_highlights, program_length, program_location, program_schedule, program_slug (crawler), program_title (crawler), training_content (crawler)

Schema for students: id, birthday, email, first_name, gender, gpa, highest_degree, is_veteran, last_name, majors, phone_number (formatted), race, school

Schema for applications: candidate_email, candidate_name, company, deadline, personal_statement, program_schedule, program_slug, quality_score

Schema for options: companies, degrees, ethics, highlights, locations, majors, schedules, schools

Schema for headcounts: accepted, capacity, pending, program_slug, rejected
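To make the schemas concrete, here is a sketch of what a `programs` document might look like, with a light-weight validation helper. All field values are made up for illustration (the real documents are filled by the crawler and the admin UI), and `validate_program` is a hypothetical helper, not part of the production code.

```python
# A sample document for the `programs` collection. Fields marked (crawler)
# in the schema above are scraped; the rest come from the posting team.
sample_program = {
    "application_deadline": "2020-03-31",
    "application_progress": "open",
    "company_name": "Example Corp",            # hypothetical company
    "company_website": "https://example.com",
    "contact_email": "hr@example.com",
    "minimum_degree": "Bachelor",
    "prefer_majors": ["Computer Science"],
    "program_capacity": 10,
    "program_city": "Davis",
    "program_schedule": "2020-q1",
    "program_slug": "example-corp-2020-q1",
    "program_title": "Backend Training Program",
}

# Fields the matching and headcount endpoints rely on.
REQUIRED_FIELDS = {"program_slug", "program_schedule", "program_capacity"}

def validate_program(doc):
    """Check that a program document carries the fields other endpoints need."""
    return REQUIRED_FIELDS.issubset(doc)
```

Since MongoDB itself will not enforce this shape, a check like this (or a PyMongo/Atlas schema validator) keeps crawler output from breaking the API.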


Programs: GET: /api/programs

  • Fetches all programs with pagination.
  • params: page_size, page_num, company_name, application_progress, minimum_degree, prefer_major, program_city, program_highlights, program_schedule
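The filter-plus-pagination pattern behind this endpoint can be sketched as below. `build_query` and `FILTER_FIELDS` are illustrative names (not the production code); the idea is to map request args onto a MongoDB filter and `skip`/`limit` values.

```python
# Exact-match filters supported by GET /api/programs (from the param list).
FILTER_FIELDS = ["company_name", "application_progress", "minimum_degree",
                 "prefer_majors", "program_city", "program_highlights",
                 "program_schedule"]

def build_query(params):
    """Turn request args (a dict of strings) into (mongo_filter, skip, limit)."""
    page_size = int(params.get("page_size", 20))
    page_num = int(params.get("page_num", 1))
    mongo_filter = {f: params[f] for f in FILTER_FIELDS if f in params}
    skip = (page_num - 1) * page_size          # pages are 1-indexed
    return mongo_filter, skip, page_size

# The handler would then run: collection.find(f).skip(skip).limit(limit)
f, skip, limit = build_query({"program_city": "Davis", "page_num": "2"})
```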


  • Fetches a specific program's details by a given program ID.


  • Fetches programs by keyword.
  • Accepts keyword and column params.

Students: GET: /api/students

  • Returns all student information with pagination.
  • params: page_size, page_num, gender, race, school, is_veteran, major, highest_degree


  • Fetches a student's detailed information by a given student ID.


  • Fetches a student's detailed information by a given email address.


  • Fetches students by keyword.
  • Possible query columns: ['first_name', 'last_name', 'school', 'majors']
  • Accepts keyword and column params.
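One common way to implement such a keyword search in MongoDB is a case-insensitive `$regex` over the searchable columns, combined with `$or` when no column is given. This is a sketch under that assumption; `keyword_query` is a hypothetical helper name.

```python
import re

# Columns the student keyword search may query (from the list above).
SEARCHABLE = ["first_name", "last_name", "school", "majors"]

def keyword_query(keyword, column=None):
    """Build a case-insensitive regex filter; search one column if given,
    otherwise OR across all searchable columns."""
    pattern = {"$regex": re.escape(keyword), "$options": "i"}
    cols = [column] if column in SEARCHABLE else SEARCHABLE
    if len(cols) == 1:
        return {cols[0]: pattern}
    return {"$or": [{c: pattern} for c in cols]}
```

`re.escape` keeps user input from being interpreted as a regex pattern.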

Applications: GET: /api/applications

  • Fetches all applications with pagination.
  • Possible URL params: page_size, page_num, candidate_email, application_status (e.g., accepted), program_schedule (e.g., 2019-q2)


  • Returns all applications for a given program schedule.
  • program_schedule example: 2020-q1, 2020-q3


  • Returns all the schedule options the system has.


  • Fetches student candidates by a given program slug.


  • Fetches all applications associated with the program by a given program slug.


  • Fetches the applied programs for a given candidate email.


  • Fetches all the applications for a given schedule time.


  • Fetches the application detail by a given application ID.

Headcounts: GET: /api/headcount/program/&lt;string:program_slug&gt;

  • Get the headcount information by a given program slug.


  • Get the headcount information by a given headcount ID.
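A headcount document can be recomputed from the applications of one program by tallying statuses. This is a pure-Python sketch of that idea (the status values `accepted`/`pending`/`rejected` are assumed from the headcount schema; in the real service this could also be a MongoDB aggregation).

```python
from collections import Counter

def headcount(applications, program_slug, capacity):
    """Rebuild the headcount document for one program from its applications."""
    statuses = Counter(a["application_status"] for a in applications
                       if a["program_slug"] == program_slug)
    return {
        "program_slug": program_slug,
        "capacity": capacity,
        "accepted": statuses["accepted"],   # Counter defaults to 0
        "pending": statuses["pending"],
        "rejected": statuses["rejected"],
    }

apps = [
    {"program_slug": "acme-2020-q1", "application_status": "accepted"},
    {"program_slug": "acme-2020-q1", "application_status": "pending"},
    {"program_slug": "other", "application_status": "accepted"},
]
doc = headcount(apps, "acme-2020-q1", 10)
```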

Options: GET: /api/options

  • Returns all options


  • Returns the option object by given an options name (i.e. company, gender, degree…)
  • Possible option name: [‘company’, ‘degree’, ‘race’, ‘gender’, ‘highlight’, ‘location’, ‘major’, ‘schedule’, ‘school’]

Match Algorithms: GET: /api/match/best_score/&lt;string:program_schedule&gt;, /api/match/bipartite_max/&lt;string:program_schedule&gt;

  • Possible program_schedule values: '2020-q1', '2019-q4', etc.
  • Returns { 'decisions', 'algorithm', 'timeConsume', 'applicantCount', 'programCount', 'matchCount' }
  • 'decisions' is a list of admission decisions. We try to maximize matchCount so more students get a chance at the seats, and the two endpoints let us compare the two algorithms.
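The core of the bipartite-max idea can be sketched with Kuhn's augmenting-path algorithm: applicants on one side, program seats (programs expanded by capacity) on the other. This is a simplified, unweighted stand-in for our customized-weight version, with quality scores omitted for brevity.

```python
def max_bipartite_match(edges, n_seats):
    """edges[i] = list of seat indices applicant i is eligible for.
    Returns (seat -> applicant assignment, number of matched applicants)."""
    match = [-1] * n_seats  # -1 means the seat is unfilled

    def try_assign(i, seen):
        for seat in edges[i]:
            if seat not in seen:
                seen.add(seat)
                # Take the seat if it is free, or if its current holder
                # can be re-routed to another seat (augmenting path).
                if match[seat] == -1 or try_assign(match[seat], seen):
                    match[seat] = i
                    return True
        return False

    matched = sum(try_assign(i, set()) for i in range(len(edges)))
    return match, matched

# 3 applicants, 2 seats: applicant 0 fits both, 1 fits seat 0, 2 fits seat 1.
assignment, count = max_bipartite_match([[0, 1], [0], [1]], 2)
# Both seats are filled even though the applicants' choices overlap.
```

Greedy assignment could have stranded applicant 1 or 2; the augmenting-path step is what lets matchCount reach the true maximum.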

Seeds: GET: /seed/programs

  • Cleans up the programs collection and re-adds all programs data.

GET: /seed/students

  • Cleans up the students collection and re-adds all students data.

GET: /seed/applications

  • Cleans up the applications collection and re-adds all applications data.

GET: /seed/headcounts

  • Cleans up the headcounts collection and re-adds all headcounts data.

GET: /seed/options

  • Cleans up the options collection and re-adds all options data.

GET: /seed/all

  • Shortcut to re-sync all seed data.
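Each seed endpoint follows the same wipe-then-bulk-insert pattern, sketched below. `_FakeCollection` is a stand-in for a PyMongo collection so the sketch runs without a live database; the real endpoints would pass `db.programs`, `db.students`, and so on.

```python
class _FakeCollection:
    """Mimics the two PyMongo collection methods the pattern needs."""
    def __init__(self):
        self.docs = []

    def delete_many(self, _filter):
        self.docs.clear()

    def insert_many(self, docs):
        self.docs.extend(docs)

def reseed(collection, docs):
    """Idempotent seed: running it twice leaves the same data in place."""
    collection.delete_many({})   # clean up existing documents
    if docs:
        collection.insert_many(docs)
    return len(docs)

col = _FakeCollection()
reseed(col, [{"program_slug": "a"}, {"program_slug": "b"}])
reseed(col, [{"program_slug": "a"}, {"program_slug": "b"}])  # still 2 docs
```

Deleting before inserting is what makes `/seed/all` safe to call repeatedly during demos.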

Backend Algorithm highlights:

  • Apply Levenshtein-distance scoring to auto-calculate a student's overall quality for a program application, based on their personal statement and background match.
  • Design a best-score-based program matching algorithm, which provides plans for program recruiters to get great-fit candidates.
  • Implement a customized-weight bipartite-max algorithm for program matching, which proves very efficient at maximizing overall program enrollment across all candidates.

Front End:

  • Implement the UI with React and Redux.
  • Connect the front end and backend with Axios.
  • Apply Semantic UI to style the web page.
  • Display location info by using the Google Maps API.

Deployment:

  • Google App Engine (backend API server)
  • (DNS and domain service)
  • (for hosting the front-end static page)

Challenges we ran into

Database: It was our first time using MongoDB Atlas. When we followed the tutorial to set it up and connected the API to our server, it worked perfectly fine. However, around midnight access was suddenly blocked for quite a while. We spent a long time researching the documentation and ended up fixing it by updating the user permissions on the cluster.

Backend: One heavy part of our project is providing the seed data. In pursuit of the best seed-data experience, we decided to build a web crawler to fetch some real data. Because of Wi-Fi issues and internet speed limitations, the crawler ran extremely slowly. To bring in as much data as possible, we needed lots of adjustments and workarounds for the schema redesign. It was definitely a heavy and unforgettable memory of all the CSV pre-processing/parsing and regular-expression matching work.

Deployment: It was also our first time deploying a Flask app on Google Cloud Platform. After following the tutorial to set up the platform SDK, we were surprised to find that the demo was written for Python 2.7 while our app was built on Python 3.7. We encountered lots of configuration issues in app.yaml (there are quite a few property differences between the config files for py2 and py3). Then we kept seeing 502 Bad Gateway errors even after we figured out the yaml file. After many trials, we luckily found out that GCP only accepts (by default) the entry file name '' (before, we were putting a '' in the root directory).
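For reference, the Python 3.7 standard-environment config we converged on looks roughly like this minimal sketch (not our exact file). Per the GAE docs, when no `entrypoint` is set the runtime defaults to `gunicorn -b :$PORT main:app`, which is why the entry file name matters so much.

```yaml
# Minimal app.yaml for the GAE Python 3.7 standard environment.
# Unlike python27, there is no `threadsafe` property or script-based
# handlers; the Flask object must be importable as `app` from the
# default entry module, or an explicit entrypoint must be given.
runtime: python37
# entrypoint: gunicorn -b :$PORT main:app   # optional; this is the default
```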

DNS Configuration: We had a hard time setting up the name server configuration on both Google Cloud Platform and the domain registrar. Our first approach was to configure the network DNS on GCP; however, GCP kept rejecting our registered domain name because it could not verify the TXT record on the registrar's side. After four hours of hard work, with help from mentors and team discussion, we found a great combo: use GAE to handle just the backend APIs, and open an FTP client to host a static website that routes to the domain we bought.

Accomplishments that we're proud of

  • We are making an actual contribution to the student community and figured out a way to maximize the benefit between students and the program providers/sponsors.
  • We finished a huge project in a very limited amount of time, with lots of new technologies: MongoDB Atlas, Google App Engine, CORS configuration, PyMongo ODM, React, Redux, Semantic UI, etc.

What we learned

We learned to split big work and achieve it by breaking it into pieces. This hackathon proved that there are lots of possibilities for our future. If we have a goal, no matter how huge, we can always achieve it with a team: with some discussion, even some necessary pushback :-), break things down and keep approaching it.

What's next for Atlantis

On the strategy

  • We want to reach out to as many student groups as possible to understand their needs more deeply and collect their feedback on campus training programs and their job searches.
  • We want to demo our idea and app to potential industrial training program providers, discuss possible forms of partnership, and propose a plan to further develop or even launch our project.

On our dev work

  • Expand the other two entry points and implement authentication/user access management in the app.
  • Keep improving the UI and page styling for a better user experience.

Credit and Citation

  • (for the seed data on the demo app)
  • (for images and icons used on our website)

Built With

  • axios
  • beautiful-soup
  • bipartite-max-algorithm
  • chrome-webdriver
  • cors-handle
  • faker
  • filezilla
  • flask-blueprint
  • fuzzywuzzy-match
  • google-app-engine
  • google-maps
  • levenshtein-distance-score
  • mongodb-atlas
  • pagination-api
  • pandas
  • pymongo(odm)
  • python
  • react
  • react-redux
  • regex
  • selenium
  • semanticui