We were inspired to build ParkPak after seeing how poorly maintained parks can sometimes be. We asked ourselves whether there was a way to make park management easier.

What it does

ParkPak is an Internet of Things application that collects a variety of data and makes it all accessible in a web app. It recognizes speech picked up by microphones placed around the park premises; an algorithm then classifies the speech as normal or distressed and notifies the park ranger if someone sounds like they are in danger. It also lets rangers monitor park traffic by estimating the number of people in a live video feed. Additionally, a mobile hub of sensors carried in a backpack monitors park conditions, including UV, light, sound, and moisture.

How we built it

We built the web app with JavaScript and the user interface with HTML/CSS. All of the data is sent to a Firebase database and retrieved by the web app. Speech recognition was built with the Python SpeechRecognition library and processed with NLTK, training a classifier on normal and distressed words. The number of people is tracked using facial detection with OpenCV. The sensors are hooked up to a Qualcomm DragonBoard 410c and programmed in C++ with Arduino.
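The normal-vs-distressed classification step can be sketched with NLTK's Naive Bayes classifier over bag-of-words features. This is a minimal sketch under assumptions: the tiny training set below is invented for illustration, and the real project's labeled phrases and feature design are unknown.

```python
from nltk.classify import NaiveBayesClassifier

# Tiny illustrative training set; the project's actual labeled
# phrases are not published, so these examples are placeholders.
TRAINING = [
    ("what a lovely day in the park", "normal"),
    ("the trail by the lake is beautiful", "normal"),
    ("we are having a picnic with friends", "normal"),
    ("help someone is hurt over here", "distressed"),
    ("please help me i am lost and scared", "distressed"),
    ("call for help there has been an accident", "distressed"),
]

def features(text):
    """Bag-of-words presence features, the format NLTK's classifier expects."""
    return {word: True for word in text.lower().split()}

classifier = NaiveBayesClassifier.train(
    [(features(text), label) for text, label in TRAINING]
)

def classify_transcript(text):
    """Label a transcribed phrase as 'normal' or 'distressed'."""
    return classifier.classify(features(text))
```

In the full pipeline, `text` would come from the SpeechRecognition library transcribing audio captured by the park microphones, and a "distressed" label would trigger the ranger notification.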

Challenges we ran into

The main challenge was working with the SpeechRecognition library, which none of us had used before. Learning how to process speech with NLTK was also difficult, since it was outside most of our skill sets.

Accomplishments that we're proud of

We're proud of connecting all of these different components into one cohesive app.

What we learned

We learned that implementing basic machine learning is simpler than we thought, and in fact quite practical.

What's next for ParkPak

We will refine the machine learning algorithms for speech classification and facial detection to make them more accurate.
