In 2014, City of Boston Public Works Department employees walked 1,600 miles to collect data on the quality of sidewalks in Boston. The process was time- and energy-intensive but contributed to the important realization that the current system of fixing sidewalks, which is prioritized based on 311 calls that citizens make themselves, has left wealthy neighborhoods with sidewalks in much better condition than poorer neighborhoods. Our team developed an algorithm and an app that would help the City of Boston not only collect information on the state of its sidewalks more efficiently but also equalize infrastructure accessibility and empower individuals to support the city's infrastructure programs.

What it does

The first part of our project is an algorithm that scans images from Google Maps and determines the state of a sidewalk based on five categories: in good condition, showing cracks/evidence of deterioration, being dirtied/graffitied, being intruded on by tree roots, or experiencing inhibiting overgrowth. We also taught the algorithm to differentiate between five different attributes: construction, lamp posts, solar trash cans, trash bags, and trash cans. We then fed the algorithm the first 1,000 311-call pictures related to sidewalks, producing an HTML file that shows every picture with at least a 50% chance of containing at least one of the five sidewalk quality categories. Each category has a percentage attached to it that represents the probability that that category is shown in the image. The 50% threshold ensures that Google Maps images with people in them or taken indoors are not included in this HTML file. The City of Boston can now use our algorithm to scan Google Maps images on a yearly basis (or every time Google updates Google Maps) to determine which sidewalks require the most urgent repair by comparing the percentages and prioritizing "cracked," "overgrowth," or tree "root" sidewalks over "dirty" or in "good" condition sidewalks.
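The filtering and prioritization step described above can be sketched as follows. This is a minimal illustration, not our exact code: the category names match our five sidewalk-quality labels, but the weights and function names are hypothetical stand-ins for how the classifier output gets ranked.

```python
# Hypothetical urgency weights: cracked/root/overgrowth sidewalks are
# prioritized over dirty or good-condition ones, per the writeup above.
PRIORITY_WEIGHTS = {
    "cracked": 3,
    "root": 3,
    "overgrowth": 3,
    "dirty": 1,
    "good": 0,
}

THRESHOLD = 0.5  # images with no category at or above 50% are dropped


def filter_and_rank(predictions):
    """predictions: list of (image_name, {category: probability}) pairs.

    Keeps images where at least one category clears the 50% threshold
    (screening out indoor shots and photos of people), then sorts by a
    weighted urgency score so the most repair-worthy sidewalks come first.
    """
    kept = [
        (name, probs)
        for name, probs in predictions
        if any(p >= THRESHOLD for p in probs.values())
    ]

    def urgency(item):
        _, probs = item
        return sum(PRIORITY_WEIGHTS[c] * p for c, p in probs.items())

    return sorted(kept, key=urgency, reverse=True)
```

With this scheme, a sidewalk that is 90% "cracked" outranks one that is 80% "dirty", even though both clear the inclusion threshold.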

The second part of our project is an app whose mission is to engage citizens in the process of maintaining neighborhood infrastructure. While the City of Boston currently has an app with a similar purpose, called "BOS: 311" (link), we believe our app is the logical next step in technological advancement because it incorporates our newly designed algorithm as well. By having the app scan the photo in real time so the user can see the breakdown of the sidewalk's quality, the user not only becomes better informed about when the sidewalk will most likely be fixed but can also judge whether their complaint is as urgent as it seems relative to the other infrastructure issues being reported. The purpose of these features is twofold: citizens in poorer neighborhoods will feel more comfortable reporting issues if they know that the repairs will be addressed in a timely manner, while citizens in wealthier neighborhoods can see whether or not what they're reporting is a truly urgent matter. We developed these features after hearing that the current issue with 311 calls is that wealthier areas report minor repairs more often than poorer areas report urgent ones.

We believe that the City of Boston's combined use of our algorithm with Google Maps and our app will greatly expedite the sidewalk quality evaluation process while maintaining citizens' ability to get involved in the city's infrastructure development.

How we built it

To create our algorithm, we used Microsoft's computer vision service, Custom Vision, and trained the model to recognize the five categories of sidewalk quality by manually labeling over 400 images. We then wrote a Python script to create the HTML file. Lastly, we used Android Studio and Java to create our app's user interface.
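The HTML-generation step can be sketched roughly as below. The function name and input format here are illustrative, not our actual script: it assumes each entry pairs an image URL with the classifier's per-category probabilities and renders one figure per flagged image.

```python
import html


def build_report(results):
    """Render a simple HTML page with one <figure> per flagged image,
    listing each category's probability as a percentage, highest first.

    results: list of (image_url, {category: probability}) pairs.
    """
    rows = []
    for url, probs in results:
        items = "".join(
            f"<li>{html.escape(cat)}: {prob:.0%}</li>"
            for cat, prob in sorted(probs.items(), key=lambda kv: -kv[1])
        )
        rows.append(
            f'<figure><img src="{html.escape(url)}" alt="sidewalk photo">'
            f"<figcaption><ul>{items}</ul></figcaption></figure>"
        )
    return "<!DOCTYPE html><html><body>" + "".join(rows) + "</body></html>"
```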

Challenges we ran into

One of our biggest challenges was training the Custom Vision model to recognize photos at the scale of Google Maps imagery. Our algorithm works very well on close-up images like those taken by users for 311 calls. However, for a street-view image, the algorithm often cannot distinguish the street from the sidewalk, so if the street is in good condition but the sidewalk is not, the probability percentages are less accurate because they take into account the conditions of both the street and the sidewalk.

Our second major challenge was the backend design of the app. We wanted to create a demo app that could take the coordinates of the user's photo, place the image in the Google Maps coordinate range, feed it to our algorithm, and then produce the probability percentages in the same way it does for our HTML file. However, in the time allotted, we found that task to be a bit too difficult.
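The backend flow we had in mind can be sketched like this. Every name here is hypothetical, and `classify_image` in particular is a toy stand-in: in a real build it would wrap a call to our Custom Vision model rather than return fixed values.

```python
def snap_to_segment(lat, lon, segments):
    """Match the photo's coordinates to the nearest known street segment
    (a crude squared-distance lookup, fine for a demo)."""
    return min(
        segments,
        key=lambda s: (s["lat"] - lat) ** 2 + (s["lon"] - lon) ** 2,
    )


def classify_image(image_bytes):
    """Stand-in for the Custom Vision prediction call; returns fixed
    made-up probabilities for illustration only."""
    return {"cracked": 0.85, "root": 0.1, "overgrowth": 0.03,
            "dirty": 0.2, "good": 0.05}


def report_issue(lat, lon, image_bytes, segments):
    """Full intended flow: locate the photo on the street network,
    classify it, and return both pieces to the app for display."""
    segment = snap_to_segment(lat, lon, segments)
    return {
        "segment": segment["name"],
        "probabilities": classify_image(image_bytes),
    }
```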

Accomplishments that we're proud of

Since this was the first hackathon for the majority of our team members, we were very proud of our ability to produce a functioning product, despite our algorithm's flaws and our app's limited interface. We are also very happy that we were able to incorporate a "social good" aspect into our project that we hope will inspire the team behind StreetCaster to continue promoting greater infrastructure equality in Boston and beyond.

What we learned

We certainly learned a lot about the basics of machine learning by teaching a computer to recognize certain objects in images! Using Google Maps, we were able to download images of the sidewalks from 2014, run our model on those images, and compare the results to the data collected manually by the Boston engineers. We also learned how to use Java to create an Android app interface and how to write a Python script that outputs an HTML file.

What's next for Streamlining & Enhancing Boston's StreetCaster Program

Sleep! After that, we'd love to keep thinking about how we can continue to increase infrastructure accessibility and equality through future projects.
