Inspiration

We got the inspiration for BenefitU from the LevelsFYI website. We noticed how it shows each company's job titles along with the salaries for those roles, and we realized we could create a similar application that not only shows a company's employee benefits but also incorporates an A.I. that can answer and clarify any questions people have about that company's benefits.

What it does

BenefitU scrapes the LevelsFYI website to gather data on a variety of companies. This data includes each company's name and the benefits it provides to its employees. Using this data, BenefitU can display the benefits different companies offer and also let people ask questions about a particular company's benefits.
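Conceptually, each scraped company becomes one record pairing a name with its list of benefits. A minimal sketch of that shape (the field names and example values here are our illustration, not the exact schema):

```python
# Sketch of the kind of record BenefitU keeps per company.
# Field names and values are illustrative assumptions, not the real schema.
company_record = {
    "company": "ExampleCorp",  # hypothetical company name
    "benefits": [
        "Health insurance",
        "401(k) matching",
        "Remote work stipend",
    ],
}

def list_benefits(record):
    """Return the benefits stored in a company record."""
    return record["benefits"]
```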

How we built it

To build BenefitU, we first split our group in half to work on the front end and back end of the application. For the back end, we wrote a program that used web scraping to gather data from LevelsFYI and stored that data in MongoDB. We then wrote a second program that lets the user type a specific company's name, retrieves that company's information (its benefits) from MongoDB, and sends that information to our A.I. Because the A.I. has access to the company's information, it can respond to any questions people have about that company's benefits. For the front end, we used Figma to design a web app that our program could eventually be incorporated into.
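The back-end flow above can be sketched roughly as follows. This is a simplified stand-in: a plain dict plays the role of the MongoDB collection, and `build_prompt` shows the kind of context we hand to the A.I.; all names here are illustrative, and a real deployment would use a `pymongo` collection and the OpenAI API instead.

```python
# Rough sketch of BenefitU's back-end flow (illustrative names throughout).
# Step 1: the scraper stores one document per company -- simulated here
# with an in-memory dict standing in for the MongoDB collection.
benefits_collection = {
    "ExampleCorp": ["Health insurance", "401(k) matching"],
}

def get_company_benefits(company_name):
    """Look up a company's benefits, as the second program does against MongoDB."""
    return benefits_collection.get(company_name)

def build_prompt(company_name, question):
    """Bundle the company's benefits with the user's question into the
    context string that would be sent to the A.I."""
    benefits = get_company_benefits(company_name)
    if benefits is None:
        return None  # company not found in the store
    context = "; ".join(benefits)
    return (
        f"The company {company_name} offers these benefits: {context}. "
        f"Question: {question}"
    )
```

With this context included in the prompt, the A.I. only has to answer from the retrieved benefits rather than from general knowledge.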

Challenges we ran into

The challenges we ran into while working on BenefitU were primarily on the back-end side. One challenge was scraping large amounts of data from the LevelsFYI database without hitting timeout errors mid-collection. Another came when gathering a particular company's benefits after the user typed in its name: we needed to store that specific set of information in a variable and then pass it through to the A.I.
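One common way to work around the timeout problem described above is to retry failed requests with a short backoff. A minimal sketch of that pattern (the function names, attempt counts, and delays are our illustration, not BenefitU's actual code):

```python
import time

def fetch_with_retries(fetch, url, max_attempts=3, delay=0.01):
    """Retry a flaky fetch call instead of letting one timeout kill a long
    scrape. `fetch` is any callable that may raise TimeoutError; the
    attempt count and delay here are illustrative defaults."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch(url)
        except TimeoutError:
            if attempt == max_attempts:
                raise  # out of retries; surface the error to the caller
            time.sleep(delay * attempt)  # simple linear backoff before retrying
```

In a real scraper, `fetch` would wrap the HTTP request for one page, so a single slow response costs a retry rather than the whole collection run.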

Accomplishments that we're proud of

We were extremely proud of successfully scraping and collecting data on companies and their benefits from the LevelsFYI database without our program crashing or timing out. We were also proud of successfully passing a company's data to our A.I. so it could answer questions directly related to that company's benefits.

What we learned

While working on BenefitU, we learned how to properly web scrape, incorporate the OpenAI API, build our own custom Python API, and save the collected information in a MongoDB database.
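The custom Python API mentioned above could look roughly like this standard-library sketch: one GET endpoint that returns a company's benefits as JSON. The endpoint path, response shape, and in-memory data are our illustration; the real service would read from MongoDB and may use a different framework.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative in-memory data; the real service would query MongoDB.
BENEFITS = {"ExampleCorp": ["Health insurance", "401(k) matching"]}

class BenefitsHandler(BaseHTTPRequestHandler):
    """Minimal sketch of a benefits endpoint: GET /benefits/<company>."""

    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "benefits" and parts[1] in BENEFITS:
            body = json.dumps({"company": parts[1], "benefits": BENEFITS[parts[1]]})
            self.send_response(200)
        else:
            body = json.dumps({"error": "company not found"})
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # silence per-request logging in this sketch

def make_server(port=0):
    """Create the server (port 0 = pick a free port); call serve_forever() to run."""
    return HTTPServer(("127.0.0.1", port), BenefitsHandler)
```

A front end could then call `GET /benefits/ExampleCorp` and render the returned list.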

What's next for BenefitU

The next step for BenefitU is to bring our Figma design to life and create a full web application that is accessible to everyone. We also plan to build on our current program so that it shows not only a company's employee benefits but also trends for specific job titles at that company, including trends in salary and employment rates.
