Since we are a diverse group of computer science & statistics, biomedical engineering, civil engineering, and chemistry majors with limited experience in data science or datathons, we thought the City Search Tools challenge would allow us to strengthen old skills and, importantly, learn new techniques.

What it does

Our goal is to help any user locate a city that matches their criteria or background, whether based on religion, internet speed, or relationship status.

How we built it

We used Jupyter Notebook (Python), scikit-learn, and Plotly Dash to pull data from Google's Places API, reverse-geocode longitude & latitude coordinates, create K-Means clusters, and build a dashboard to find correlations in the data.
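The clustering step can be sketched roughly as follows. This is a minimal illustration, not our exact notebook code: the feature matrix here is random toy data standing in for real city metrics (e.g. internet speed), and the cluster count is an arbitrary choice.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Toy city feature matrix: rows = cities, columns = hypothetical
# metrics (internet speed, cost of living, etc.).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))

# Standardize features so no single metric dominates the
# Euclidean distances K-Means relies on.
X_scaled = StandardScaler().fit_transform(X)

# Group cities into k clusters of similar profiles.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_scaled)

print(labels.shape)  # one cluster label per city
```

The resulting labels can then be joined back onto the city table and explored in the dashboard.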

Challenges we ran into

There were numerous obstacles during this challenge. We spent a large amount of time locating sizable datasets and merging new datasets with our preexisting one. Once we compiled the dataset, we soon realized how difficult it was to display the dashboard with such sparse data.

Accomplishments that we're proud of

Despite the sparse data, we were able to generate a way to rank the cities in order of preference. We computed the Euclidean distance between each city and the user's preferences and sorted the cities by that distance.
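The Euclidean-distance ranking can be sketched as below. The city names, preference vector, and feature values are hypothetical placeholders, and we assume features are already scaled to comparable units so that a smaller distance means a better match.

```python
import numpy as np

# Hypothetical user preference vector and per-city feature vectors,
# assumed to be on comparable scales.
preferences = np.array([0.8, 0.2, 0.5])
cities = {
    "City A": np.array([0.9, 0.1, 0.4]),
    "City B": np.array([0.2, 0.7, 0.9]),
    "City C": np.array([0.7, 0.3, 0.5]),
}

# Rank cities by Euclidean distance to the preference vector
# (ascending: closest city = best match).
ranked = sorted(
    cities,
    key=lambda name: np.linalg.norm(cities[name] - preferences),
)
print(ranked)  # → ['City C', 'City A', 'City B']
```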

What we learned

We were able to learn more about machine learning, K-Means clustering, and formatting dashboards at an introductory level.

What's next?

If more time were available, we could explore the buying power each city has. With this information, we could also tailor our search tool for people interested in starting small businesses or start-ups. Given more variables, it would be easier to understand the relationships and correlations behind business failures and to help reduce profit loss.
