OpenXC vehicle data in the Android client
Finding parking in urban areas can be a nightmare. Our vision is cars that communicate with each other whenever they spot an open space.
What it does
Our system uses computer vision and infrared sensors to detect gaps large enough to fit a car. It then uploads the location and timestamp to the cloud for other users to see. Any user can get directions to a detected open space. No more endless circling the block: the cloud points you to parking right away.
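Routing a driver to a detected space boils down to picking the closest reported open spot. A minimal sketch of that selection step, using great-circle (haversine) distance; the class and method names here are illustrative, not taken from the project's actual codebase:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NearestSpace {
    // One reported open space: where it is and when it was seen.
    static class ParkingSpace {
        final double lat, lng;
        final long timestampMs;
        ParkingSpace(double lat, double lng, long timestampMs) {
            this.lat = lat; this.lng = lng; this.timestampMs = timestampMs;
        }
    }

    // Great-circle distance in meters between two lat/lng points.
    static double haversineMeters(double lat1, double lng1, double lat2, double lng2) {
        double r = 6_371_000.0; // mean Earth radius in meters
        double dLat = Math.toRadians(lat2 - lat1);
        double dLng = Math.toRadians(lng2 - lng1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLng / 2) * Math.sin(dLng / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    // Pick the reported space closest to the driver's current position.
    static ParkingSpace nearest(double lat, double lng, List<ParkingSpace> spaces) {
        return spaces.stream()
                .min(Comparator.comparingDouble(
                        s -> haversineMeters(lat, lng, s.lat, s.lng)))
                .orElse(null);
    }

    public static void main(String[] args) {
        List<ParkingSpace> spaces = Arrays.asList(
                new ParkingSpace(37.7749, -122.4194, 0L),
                new ParkingSpace(37.7790, -122.4170, 0L));
        // Driver is a block away from the first space, several from the second.
        ParkingSpace best = nearest(37.7750, -122.4190, spaces);
        System.out.println(best.lat + "," + best.lng);
    }
}
```

In a real client the chosen coordinates would be handed off to a maps intent for turn-by-turn directions.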
How we built it
We combined Kinect sensors and a camera with the OpenXC vehicle interface and an Android tablet. A front-facing camera uses Google Vision to detect a parking lot. A Kinect mounted on the side of the car then activates its camera and infrared sensor to detect an open parking space. The location is then uploaded to Firebase for other clients to see.
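The write-up says each detection is uploaded as a location plus timestamp. A sketch of what that record might look like; the field names are assumptions, since the project's actual Firebase schema isn't documented:

```java
import java.util.HashMap;
import java.util.Map;

public class SpaceRecord {
    // Build the record a client could write to Firebase for one detected space.
    // Field names ("lat", "lng", "timestampMs", "open") are illustrative.
    static Map<String, Object> openSpaceRecord(double lat, double lng, long timestampMs) {
        Map<String, Object> record = new HashMap<>();
        record.put("lat", lat);
        record.put("lng", lng);
        record.put("timestampMs", timestampMs);
        record.put("open", true); // would be flipped to false once a driver takes the spot
        return record;
    }

    public static void main(String[] args) {
        // With the Firebase Android SDK this map would be pushed to a database
        // reference; here we only build the payload so the sketch is self-contained.
        Map<String, Object> record =
                openSpaceRecord(37.7749, -122.4194, System.currentTimeMillis());
        System.out.println(record.keySet());
    }
}
```

Keeping the record a flat map of primitives makes it trivial for every client, tablet or phone, to deserialize.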
Challenges we ran into
Networking between the different pieces of hardware was a huge challenge that took hours to overcome.
Accomplishments that we're proud of
Our computer vision algorithm can successfully detect an open space in a parking lot.
What we learned
Most of us were new to Android development, as well as to working with vehicle data.
What's next for SmartPark
A natural extension of this project would be to aggregate enough data to estimate the probability of finding parking on any given street. The network effect is this project's greatest potential: the more cars that report spaces, the more useful it becomes, so the real opportunity is deploying at scale. We would also improve the space detection algorithm to handle street parking.
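The aggregation step described above can be sketched very simply: treat each past drive-by of a street as a yes/no observation of whether a space was open, and report the empirical fraction. This is a hypothetical illustration of the idea, not code from the project; a real deployment would at least bucket observations by time of day.

```java
import java.util.Arrays;
import java.util.List;

public class StreetStats {
    // Empirical probability of finding parking on a street: the fraction of
    // recorded drive-bys (true = an open space was seen) that found a space.
    static double openProbability(List<Boolean> observations) {
        if (observations.isEmpty()) return 0.0; // no data: report zero rather than guess
        long open = observations.stream().filter(b -> b).count();
        return (double) open / observations.size();
    }

    public static void main(String[] args) {
        List<Boolean> driveBys = Arrays.asList(true, false, false, true);
        System.out.println(openProbability(driveBys)); // 0.5
    }
}
```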