Most of us have run into problems organizing carpools, especially for long-distance travel. While the idea itself is attractive, there is no reliable way to arrange one. So we decided to come together and solve this common problem that everyone has encountered at some point or another.

What it does

We comb through the various ride-sharing pages available on Facebook, extracting information such as the source, destination, and message text from each post. We then organize this raw data in our database and make it easily accessible to the end user, enabling them to search by their start and end points.
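As a rough illustration (the field names and example data below are our own placeholders, not the actual schema), each post boils down to a small record, and the end-user search is essentially a filter over those records:

```python
# Illustrative records only -- field names are placeholders, not the real schema.
posts = [
    {"source": "campus", "destination": "airport",
     "message": "Looking for a ride from campus going to the airport on Friday",
     "time": "2017-10-20T14:00:00"},
    {"source": "airport", "destination": "campus",
     "message": "Leaving the airport, going to campus Saturday morning",
     "time": "2017-10-21T09:00:00"},
]

def search(posts, start, end):
    """Return posts matching the given start and end points, sorted by time."""
    matches = [p for p in posts
               if p["source"] == start and p["destination"] == end]
    return sorted(matches, key=lambda p: p["time"])

# Usage: find rides from campus to the airport.
results = search(posts, "campus", "airport")
```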

How I built it

We used Facebook's Graph API to run through the various pages. Our backend, written in Python, collected all the raw data into a JSON-formatted dictionary. To extract fields such as the source and destination from this raw data, we implemented our own language processor that keys on popular phrases such as "looking from" and "going to" to ascertain the nature of the travel, since most posts followed a similar format and involved roughly 10-15 fixed locations. We then compiled a list of these locations and their possible aliases to better organize our data and improve the user experience at the front end. All of this organized data was pushed to mLab. Heroku was used to pull the necessary data from the database, which we then rendered on our own website. A cron job refreshes the data in the mLab database every half hour to keep the listings up to date.
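A minimal sketch of the phrase-based parsing idea, assuming a much smaller alias table and simpler phrasing rules than the real implementation (the location names here are examples, not our actual list):

```python
import re

# Simplified alias table: maps the variants people actually type in posts
# to a canonical location name. Entries are illustrative only.
ALIASES = {
    "new york": "New York",
    "nyc": "New York",
    "boston": "Boston",
    "bos": "Boston",
}

def canonical(name):
    """Map a raw location string to its canonical name, or None if unknown."""
    return ALIASES.get(name.strip().lower())

def parse_post(message):
    """Extract (source, destination) from common phrasings like
    'from X to Y' / 'from X going to Y'. Returns None on no match."""
    text = message.lower()
    m = re.search(r"from\s+([a-z ]+?)\s+(?:going\s+)?to\s+([a-z ]+?)(?:[.,]|$)",
                  text)
    if not m:
        return None
    src, dst = canonical(m.group(1)), canonical(m.group(2))
    if src and dst:
        return (src, dst)
    return None

# Usage: parse a typical ride-request post.
route = parse_post("Looking for a ride from NYC going to Boston.")
```

This only handles "from ... to ..." orderings; the real parser has to cover more phrasings, which is where most of our edge-case testing went.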

Challenges I ran into

As a group, we ran into a number of problems. We initially planned on using Google's Natural Language Processing API to decipher the locations in the posts' messages, allowing us to organize their data. However, after discussing with Google's onsite engineers, we discovered a bug in their interface that made the API non-functional for our purposes. When developing our own system to determine the locations instead, we spent a lot of time handling and testing numerous edge cases that we did not initially consider. We also had some problems setting up our database: we could not pull data from mLab to our original web server because it did not let us run a Mongo client. We had to set up and migrate everything to Heroku; this was a tiring and nerve-wracking process, but we eventually succeeded.

Accomplishments that I'm proud of

We're proud of our custom-built parser, which accurately extracts the requested locations from Facebook posts. We're also proud of our website's design and functionality, such as searches by location returning results sorted by time. Last but not least, completing the monumental task of migrating to Heroku was a rewarding experience.

What I learned

Overall, we learned how to create a full-stack application, in the process learning how to set up a database using MongoDB and serve its data to our own website through Heroku. We also learned about the power of the Facebook Graph API and how to build a website using JavaScript, jQuery, HTML, CSS, and PHP.

What's next for SearchMyRide

We plan to port the website to mobile platforms such as iOS and Android. We would also like to integrate Google Maps so that when users enter their start and end points, they can see the locations on the map and the travel time between them. We would like to adopt a natural language processing engine, or perhaps build our own ML models, to parse the location, date, and other details we could not extract in this version. Finally, we are looking to integrate other online groups into our fold so that more and more students can use our service.
