As students in a technical field, we all write and submit resumes and CVs regularly. We wanted to incorporate multiple unsupervised machine learning algorithms so users can view their resumes through different lenses, while avoiding the bias that labeled training data introduces into supervised machine learning.

What it does

The app accepts a resume in PDF or image format, along with a prompt describing the target job. We judge the resume on two axes: layout and content. Layout covers font, color, and how those features are coordinated across the page. Content covers semantic clustering for relevance to the target job and flagging repeated mentions.

Optimal Experience Selection

Suppose you are applying for a job and want to mention five experiences, but only have room for three. The app compares the experience section of your CV with the job posting's requirements and determines the three most relevant experiences to keep.
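The ranking step can be sketched as follows. This is a minimal illustration, not our production code: the real app embeds text with a TensorFlow sentence encoder, while here a simple bag-of-words vector stands in so the example is self-contained.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in bag-of-words "embedding"; the real app queries the
    # TensorFlow word-encoder model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_experiences(experiences, job_posting, k=3):
    """Rank resume experiences by similarity to the job posting; keep top k."""
    job_vec = embed(job_posting)
    ranked = sorted(experiences, key=lambda e: cosine(embed(e), job_vec),
                    reverse=True)
    return ranked[:k]
```

With a real sentence encoder the same sort-by-similarity logic applies; only `embed` and `cosine` change to operate on dense vectors.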

Text/Space Analysis

Many professionals do not use the space on their resume effectively. Our text/space analysis feature determines the ratio of characters to resume space in each section of your resume and provides insights and suggestions about how you could improve your use of space.
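A rough sketch of the density computation, assuming each section comes with a bounding box (for example, dimensions recovered from Google Cloud Vision's layout output). The input shape and the thresholds here are illustrative assumptions, not tuned values from the app.

```python
def space_usage(sections):
    """Compute character density (chars per unit of area) per resume section.

    `sections` maps a section name to (text, width, height), where width and
    height describe the section's bounding box -- a hypothetical shape for
    data recovered from an OCR/layout service.
    """
    report = {}
    for name, (text, width, height) in sections.items():
        area = width * height
        report[name] = len(text) / area if area else 0.0
    return report

def suggestions(report, low=0.001, high=0.01):
    # Thresholds are placeholders for illustration only.
    notes = {}
    for name, density in report.items():
        if density < low:
            notes[name] = "sparse: add detail or shrink the section"
        elif density > high:
            notes[name] = "dense: trim text or add whitespace"
        else:
            notes[name] = "balanced"
    return notes
```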

Word Analysis

This feature analyzes each bullet point of a section and highlights areas where redundant words can be eliminated, freeing up more resume space and allowing for a cleaner representation of the user.
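One simple way to surface redundancy, shown here as an assumed approach rather than our exact implementation: count how many bullets each non-stopword appears in, and flag words that recur across bullets.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "with", "for", "on"}

def redundant_words(bullets, min_repeats=2):
    """Highlight non-stopwords that recur across a section's bullet points."""
    counts = Counter()
    for bullet in bullets:
        # Count each word once per bullet, so repetition *across* bullets
        # is what gets flagged.
        for word in set(bullet.lower().split()):
            if word not in STOPWORDS:
                counts[word] += 1
    return {w: n for w, n in counts.items() if n >= min_repeats}
```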

How we built it

We used a word-encoder TensorFlow model to provide insights about semantic similarity between words, phrases, or sentences. We created a REST API with Flask for querying the TF model. Our front end uses Angular to deliver a clean, friendly user interface.
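A minimal sketch of the Flask service, with the endpoint name and payload shape as assumptions; the encoder is stubbed with token overlap so the example runs without the TensorFlow model.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def similarity(a, b):
    # Stand-in for the TensorFlow word-encoder model: Jaccard overlap
    # between token sets, just to keep the sketch self-contained.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

@app.route("/similarity", methods=["POST"])
def similarity_endpoint():
    data = request.get_json()
    score = similarity(data["text_a"], data["text_b"])
    return jsonify({"score": score})

# Run with: FLASK_APP=<this file> flask run
```

The Angular front end would POST the resume text and job prompt to endpoints like this one and render the returned scores.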

Challenges we ran into

We are a team of two new hackers and two seasoned hackers. We ran into problems with deploying the TensorFlow model, as it was initially available only in a restricted Colab environment. To resolve this issue, we built a RESTful API that allowed us to process user data through the TensorFlow model.

Accomplishments that we're proud of

We spent a lot of time planning and defining our problem, then working out the layers of abstraction that turned it into concrete processes backed by a real TensorFlow model. That translation from idea to model is arguably the hardest part of building a useful AI application.

What we learned

  • How to deploy a Flask RESTful API to GCP's Kubernetes platform
  • How to use several Google Cloud Vision services

What's next

We plan on adding a few more features and turning this into a real web-based tool that working professionals can use to improve their resumes or CVs. Furthermore, we will extend the application to compare a user's LinkedIn profile against a chosen job posting on LinkedIn.
