Inspiration
We all live in Oldfield and often struggle to find a parking space in a convenient location. We also see people abusing disabled parking spaces, meaning those who really need them can't use them. We wanted to build some tech that would promote an inclusive community, and we believe there could be open-source interest, or interest from the local council.
We're fascinated by deep learning, computer vision and artificial intelligence, and never turn down an excuse to build over-the-top infrastructure.
What it does
Using a network of cameras around Oldfield, we collect anonymised (and GDPR-compliant!) images of the parked cars outside, and send them to the cloud for processing.
This information is then delivered to the user through an easy-to-use UI, along with our estimated carbon savings, courtesy of DitchCarbon. To make it a bit more fun and encouraging, we did the maths to convey the quantity of CO2 saved in alternative ways - like the equivalent number of rubber ducks.
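The duck conversion is simple mass arithmetic. A minimal sketch, assuming a rubber duck weighs roughly 100 g (our illustrative figure, not DitchCarbon data):

```python
# Illustrative sketch: express an estimated CO2 saving as "rubber ducks".
# DUCK_MASS_KG (~100 g per duck) is an assumed figure for illustration.
DUCK_MASS_KG = 0.1

def co2_as_rubber_ducks(co2_kg: float) -> int:
    """Express a mass of CO2 saved as an equivalent mass of rubber ducks."""
    return round(co2_kg / DUCK_MASS_KG)

print(co2_as_rubber_ducks(25.0))  # 25 kg of CO2 weighs about as much as 250 ducks
```

Any other comparison (bathtubs, balloons, car journeys) drops in the same way: pick a reference mass and divide.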
How we built it
We developed our own embedded system using a Raspberry Pi and an old webcam (any camera works for Parkn). Due to the webcam's limited FOV, we use a servo and perform panoramic stitching to gain up to 120 degrees of vision.
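Scheduling the servo boils down to spacing shot angles so neighbouring frames overlap enough for the stitcher to match features. A minimal sketch of that maths - the 120-degree target is from our build, while the 60-degree webcam FOV and 20-degree overlap are illustrative assumptions:

```python
import math

def servo_angles(total_fov=120.0, cam_fov=60.0, overlap=20.0):
    """Centre angles (degrees) for each shot in a left-to-right pan.

    Consecutive frames overlap by at least `overlap` degrees so the
    panorama stitcher has shared features to align on.
    """
    step = cam_fov - overlap            # max servo advance per shot
    span = total_fov - cam_fov          # range the frame centre must sweep
    shots = math.ceil(span / step) + 1  # shots needed for full coverage
    if shots == 1:
        return [0.0]                    # one frame already covers everything
    start = -span / 2
    return [start + i * span / (shots - 1) for i in range(shots)]

print(servo_angles())  # → [-30.0, 0.0, 30.0]
```

With the defaults, three frames centred at -30°, 0° and +30° cover the full 120° with 30° of overlap between neighbours.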
In the cloud, we run object recognition, and then use an algorithm we designed ourselves to detect free parking spaces. We also check for disabled spots, and whether they're currently being used.
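One way to sketch the free-space check is to compare the detector's car bounding boxes against hand-annotated space rectangles; a space counts as free if no car covers enough of it. This is an illustrative reconstruction, not our exact algorithm - the boxes, the coordinates, and the 0.3 threshold are all assumptions:

```python
# Hedged sketch of a free-space check: detector car boxes vs annotated bays.
# Boxes are (x1, y1, x2, y2); threshold and coordinates are illustrative.
def overlap_fraction(space, car):
    """Fraction of the parking space covered by a car's bounding box."""
    x1 = max(space[0], car[0]); y1 = max(space[1], car[1])
    x2 = min(space[2], car[2]); y2 = min(space[3], car[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = (space[2] - space[0]) * (space[3] - space[1])
    return inter / area

def free_spaces(spaces, cars, threshold=0.3):
    """Return indices of spaces not sufficiently covered by any detected car."""
    return [i for i, s in enumerate(spaces)
            if all(overlap_fraction(s, c) < threshold for c in cars)]

spaces = [(0, 0, 10, 20), (12, 0, 22, 20)]  # two annotated bays
cars = [(1, 2, 9, 18)]                       # one detection, parked in bay 0
print(free_spaces(spaces, cars))             # → [1]
```

Tagging some bays as disabled spaces then makes the misuse check a lookup: a disabled bay that is occupied but has no registered permit-holder nearby gets flagged.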
Challenges we ran into
Due to the complex infrastructure we built (including 3 chained Lambdas), we ran into problems from having so many moving parts in development simultaneously.
We also spent a LOT of time developing our embedded system, which also had a lot of (literal) moving parts.
Accomplishments that we're proud of
It works! It actually detects car parking spaces reliably, and we snuck to Sainsbury's to buy some Hot Wheels to make a miniature test environment for both our hardware and our object recognition algorithm.
We're also very proud of the breadth of this project - spread across 7 repos. We've written a landing page, frontend, full infrastructure on AWS, and multiple DBs.
If we had the funds and the interest, we could support many people using our platform to check on parking, and to set up their own cameras to contribute to the local community, reducing congestion and pollution.
On top of this, the embedded system also added a lot more work, especially as we wanted to make it as easy as possible for others to replicate our designs, to grow the Parkn network.
We also built a Python-native DitchCarbon API package and released it to PyPI. Don't believe us? Just run pip install ditchcarbon (https://pypi.org/project/DitchCarbon/). We did this to make the API easier for us and others to use, and thought having type hints and documentation within the IDE would reduce some friction.
What we learned
We learned that detecting cars doesn't make it easy to detect where they aren't. We spent a lot of time tuning the system, and we'll have more work to do on it in the future.
What's next for Parkn
We will design a 3D enclosure for our embedded system, as the print that we attempted last night failed :(
We want to further develop how we report inappropriate use of disabled parking spaces.
In the future we would like to perform the face blurring of pedestrians locally on the embedded systems we produce, so that we never store or even transmit people's personal data. We plan to use AI-accelerated hardware to improve the performance of this capability and reduce the strain on AWS.
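The on-device anonymisation step could look something like the sketch below: pixelate each face region before a frame ever leaves the Pi. The face box is assumed to come from a separate detector (e.g. a Haar cascade), and a real build would use OpenCV on accelerated hardware rather than this pure-Python illustration:

```python
# Illustrative on-device anonymisation: pixelate a face region in place.
# The (x, y, w, h) box is assumed to come from a face detector upstream.
def pixelate(image, box, block=2):
    """Replace each block-sized tile inside the box of a grayscale image
    (a list of rows) with that tile's mean value, destroying facial detail."""
    x, y, w, h = box
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            cells = [(r, c) for r in range(by, min(by + block, y + h))
                            for c in range(bx, min(bx + block, x + w))]
            avg = sum(image[r][c] for r, c in cells) // len(cells)
            for r, c in cells:
                image[r][c] = avg  # every pixel in the tile becomes the mean
    return image
```

Because the blur happens before upload, the cloud side only ever sees frames with faces already destroyed, which is the whole point of doing it locally.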