Inspiration

We were inspired by the experience of one of our teammate's friends, who uses a wheelchair. They mentioned that it can often be hard to find out in advance how accessible a place is. Given that roughly 57 million Americans have some form of disability, we thought we could combine images from the internet with computer vision APIs to give people a better idea of how accessible a place might be.

What it does

Unfortunately, we were not able to fully integrate the APIs with our front-end code. The intended flow was: a user enters the name of a place and a city; we retrieve photos of that location through the Yelp API; we run those images through Azure's computer vision service, looking for features like elevators and ramps; and we return, in a formatted way, the likelihood that the place is accessible for wheelchair users.
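The scoring step of that flow could be sketched roughly as below. This is a minimal illustration, not our working implementation: the feature names and weights are made-up assumptions standing in for whatever tags a vision service like Azure's image-analysis API would actually return for a venue's photos.

```python
# Hypothetical scoring sketch: given per-photo tag lists (as a vision API
# might return), estimate how likely a venue is wheelchair accessible.
# Feature names and weights below are illustrative assumptions.
ACCESSIBILITY_FEATURES = {
    "ramp": 0.4,
    "elevator": 0.4,
    "automatic door": 0.2,
    "stairs": -0.3,  # stairs with no other evidence suggest a barrier
}

def accessibility_score(photo_tags: list) -> float:
    """Return a 0-1 likelihood score from a list of per-photo tag lists."""
    score = 0.5  # neutral prior when the photos show no evidence either way
    seen = {tag.lower() for tags in photo_tags for tag in tags}
    for feature, weight in ACCESSIBILITY_FEATURES.items():
        if feature in seen:
            score += weight
    return max(0.0, min(1.0, score))  # clamp into [0, 1]

print(accessibility_score([["ramp", "storefront"], ["elevator"]]))  # → 1.0
print(accessibility_score([]))                                      # → 0.5
```

In practice the tag lists would come from posting each Yelp photo URL to the vision API; the clamped additive score is just one simple way to turn detected features into a single likelihood for display.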

How we built it

We split up the work of learning the APIs, then tried to get everything working together.

Accomplishments that we're proud of

We did get each API working individually and pulling the data we needed, and we built a smooth UI!

What's next for Appcessibility

We hope to get the end-to-end functionality working, expand what the service can be used for, and improve how it sources images.
