What it does
This project ingests multiple data files, cleans and organizes the data, then analyzes it by computing the mean values of each file and comparing them to find correlations between values.
How we built it
We wrote the code in Python using Jupyter Notebook. We used pandas to read and organize the data as needed, and matplotlib to create graphs representing it.
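The write-up doesn't include the actual code, but the pipeline it describes (read with pandas, compute means, compare for correlations, graph with matplotlib) can be sketched roughly as follows. The column names and values here are hypothetical stand-ins for one cleaned data file:

```python
import pandas as pd
import matplotlib

matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Hypothetical in-memory data standing in for one cleaned file;
# the real project reads its files with pd.read_csv instead.
df = pd.DataFrame({
    "temperature": [20.1, 21.4, 19.8, 22.0],
    "humidity": [55, 60, 52, 63],
})

means = df.mean()   # per-column mean values
corr = df.corr()    # pairwise Pearson correlations between columns

# A simple matplotlib bar chart of the mean values.
fig, ax = plt.subplots()
means.plot.bar(ax=ax)
ax.set_ylabel("mean value")
fig.savefig("means.png")
```

`df.corr()` returns a symmetric matrix of correlation coefficients, which is one straightforward way to "compare mean values to find correlations" as the description puts it.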
Challenges we ran into
This type of project was new to most of our team, so we spent a lot of time researching and formulating a solution for each step. Most of that time went into planning our overall approach and identifying the packages we would need.
Accomplishments that we're proud of
The most difficult challenge was figuring out how to read 200+ data files from a single folder, so it was a huge accomplishment once we figured out how to take in and use all that data. We also spent a good amount of time on visuals to give a clean, readable presentation that would still stand out.
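Reading a whole folder of files into one DataFrame is a common pandas pattern. The project's actual folder and file layout aren't given, so this sketch builds a tiny sample directory first; the key idea is globbing the folder and concatenating the frames:

```python
from pathlib import Path

import pandas as pd

# Hypothetical folder name; the real project has 200+ files here.
data_dir = Path("data")
data_dir.mkdir(exist_ok=True)

# Create two tiny sample files so the sketch is self-contained.
pd.DataFrame({"value": [1, 2]}).to_csv(data_dir / "a.csv", index=False)
pd.DataFrame({"value": [3, 4]}).to_csv(data_dir / "b.csv", index=False)

# Read every CSV in the folder and stack them into one DataFrame,
# tagging each row with the file it came from.
frames = [
    pd.read_csv(path).assign(source=path.name)
    for path in sorted(data_dir.glob("*.csv"))
]
combined = pd.concat(frames, ignore_index=True)

# Per-file means, ready for comparison across files.
per_file_means = combined.groupby("source")["value"].mean()
```

Tagging each row with its source file keeps the per-file structure available after concatenation, which makes per-file means a one-line `groupby`.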
What we learned
For one of our members, it was their first-ever experience with Python and Jupyter. For all of us, it was a great opportunity to learn what developing code on a team looks like. We also came away feeling more confident using Jupyter Notebook and several different Python packages.
Built With
- jupyter
- python

