Rideshare littering is becoming increasingly prevalent. This is evidenced by the growing number of complaints from schools and cities, as well as by YouTube and other social media channels that promote the destruction and vandalism of these shared goods. We can see the impact of rideshare littering at UCSD, where Spin bikes end up in unnatural places and formations, and the school has threatened to shut down a program that greatly benefits the many people who use the service as intended.

What it does

Our system is a hub that tracks vehicles from all types of rideshare services, with handy features such as geotagging on Google Maps and an easy-to-use app/GUI. Additionally, we trained machine learning algorithms (convolutional neural networks) on a hand-collected dataset to recognize different types of rideshare vehicles, which allows us to classify and geotag them. By running these algorithms on mobile platforms such as the Raspberry Pi, we were able to mobilize our project, collect real-time image data using a robot built with the TI robotics kit, and feed this data to our server for display in our app.
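As a rough illustration of the Pi-to-server flow, here is a minimal sketch of the kind of geotagged report a classified sighting could produce. The field names and payload shape are hypothetical, not our exact server schema:

```python
import json
import time

def build_report(vehicle_type, lat, lon, confidence):
    """Bundle one classified sighting into a JSON payload for the server.

    `vehicle_type` is the label produced by the classifier (e.g. a
    Spin bike or Bird scooter class); lat/lon come from the geotagger.
    All field names here are illustrative.
    """
    return json.dumps({
        "vehicle": vehicle_type,
        "lat": lat,
        "lon": lon,
        "confidence": round(confidence, 3),
        "timestamp": int(time.time()),  # when the frame was captured
    })
```

In the full system this payload would be POSTed to the server, which pins the sighting on the map in the app.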

How we built it

For such a data-driven project, a significant portion of our process went into hand-collecting a dataset of rideshare vehicle images and preprocessing the data for our learning algorithm. We then ran InceptionV3 with pretrained weights, fine-tuned on our data, over regions of each input image to detect objects and determine what the camera was looking at. Meanwhile, other team members worked on setting up the Raspberry Pi and interfacing it with the camera and the other elements of the project. Another important piece of the project was an Android application that integrates with both the Spin bike and Bird scooter APIs to mark their locations on a map. We then chose a few high-traffic areas to use as hubs where these rideshare devices can be stored. Finally, we implemented a simple autonomous vehicle using the TI-RSLK to explore the environment in current camera blind spots. Then it was all a matter of putting everything together into a meaningful and cohesive project.
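The "classify regions of each input image" step can be sketched as a sliding window: crop fixed-size patches from the frame, run the classifier on each, and keep confident hits. The window size, stride, and threshold below are illustrative, not our exact values, and `classify` stands in for InceptionV3 inference on a cropped patch:

```python
def sliding_windows(width, height, win=299, stride=150):
    """Return (x, y) top-left corners of win x win patches that fit
    inside a width x height frame, stepping `stride` pixels at a time.
    299 matches InceptionV3's default input size."""
    coords = []
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            coords.append((x, y))
    return coords

def detect(frame_width, frame_height, classify, threshold=0.8):
    """Classify every patch and keep hits above the confidence threshold.

    `classify(x, y)` is a placeholder for cropping the patch at (x, y)
    and running it through the network; it returns (label, confidence).
    """
    hits = []
    for (x, y) in sliding_windows(frame_width, frame_height):
        label, confidence = classify(x, y)
        if confidence > threshold:
            hits.append((label, x, y))
    return hits
```

This trades the speed of a single-shot detector for the simplicity of an off-the-shelf classifier, which is what made it feasible after we dropped Tiny YOLOv3.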

Challenges we ran into

One of the major challenges we ran into was object detection: we initially tried to use Tiny YOLOv3, but after more than 12 hours of work we had to scrap that approach in favor of using InceptionV3 to classify portions of each input image. We also ran into issues with the Raspberry Pi kernel modules and with interfacing the webcam.

Accomplishments that we're proud of

We're super proud of all of our individual components, as each of them was a significant challenge on its own. Bringing them all together into a cohesive project, especially after we had to scrap our main idea with less than 8 hours left, was a really big accomplishment for us.

What we learned

We learned how to work effectively as a team and how to lay out a comprehensive plan, so that we could each work on different parts of the project and bring them together at the end. We also learned to make tough decisions and debug hard problems, in addition to all the technical knowledge we acquired about machine learning, app design, robotics, and more.

What's next for ShareStation-HH

The future of ShareStation is to scale up and expand our technical capabilities. We want to combine our mobile geotagger with a charging hub for rideshare devices, so we can track and localize all of these services. Through this, we can help clean up our streets while providing a centralized beacon for people who use these vehicles daily. We hope to deploy at UCSD and have our campus be the start of a revolution in how rideshare devices are seen and used.
