Inspiration
We wanted to build a tool that identifies the denomination of a given bill for people with visual impairments.
What it does
The app sends a photo of a bill to our backend, where our CNN classifies the bill's dollar amount and returns it to the user. With the screen reader enabled, the app reads the value aloud.
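As a minimal sketch of the classification-to-speech step, the function below maps a predicted class index to a denomination and formats a screen-reader-friendly sentence. The class-to-value mapping and the confidence threshold are illustrative assumptions, not the actual model's label set:

```python
# Illustrative mapping from model class index to U.S. bill denomination.
# The real YOLOv11 model's label set may be ordered differently.
DENOMINATIONS = {0: 1, 1: 5, 2: 10, 3: 20, 4: 50, 5: 100}

def spoken_result(class_index: int, confidence: float, min_conf: float = 0.5) -> str:
    """Format the detected bill value as a sentence for the screen reader."""
    if confidence < min_conf or class_index not in DENOMINATIONS:
        return "Could not recognize the bill. Please try again."
    value = DENOMINATIONS[class_index]
    return f"This is a {value} dollar bill."
```

For example, `spoken_result(3, 0.92)` yields "This is a 20 dollar bill.", while a low-confidence detection falls back to a retry prompt so the user is never given a guess.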
How we built it
We trained a YOLOv11 model to detect U.S. dollar bills and integrated it into our app through a Flask backend deployed to Render. The frontend, designed in Figma, was built with React Native using Expo and the Expo Camera module.
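A rough sketch of what the Flask backend could look like, assuming a hypothetical `/classify` endpoint that accepts an uploaded image and returns the detected denomination as JSON. The YOLOv11 inference is stubbed out here; in the real service it would run the trained model on the decoded image:

```python
# Hedged sketch of the Flask backend; endpoint name, response shape, and
# the stubbed detection values are assumptions for illustration only.
from flask import Flask, jsonify, request

app = Flask(__name__)

def detect_bill(image_bytes: bytes) -> dict:
    """Placeholder for YOLOv11 inference; returns the top detection.

    The real implementation would decode the image, run the model,
    and pick the highest-confidence bounding box.
    """
    return {"value": 20, "confidence": 0.92}

@app.route("/classify", methods=["POST"])
def classify():
    # Expect the mobile client to upload the photo as multipart form data.
    if "image" not in request.files:
        return jsonify({"error": "no image uploaded"}), 400
    detection = detect_bill(request.files["image"].read())
    return jsonify(detection)
```

The mobile app would POST the captured photo to this endpoint and hand the returned value to the screen reader.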
Challenges we ran into
It was challenging to find a dataset suitable for training a robust detection model. We also ran into some issues integrating the camera on the frontend, but after troubleshooting we overcame them and created a product we're proud of. Additionally, we had to learn how to make the app fully accessible through the screen reader so it truly serves visually impaired users.
Accomplishments that we're proud of
The object detection model performed well given the short duration of the hackathon, and the screen-reader integration worked smoothly. Overall, we achieved the goal we set at the start: creating an accessibility-friendly app.
What we learned
We learned a lot about the accessibility features available on mobile platforms and used them to make our app as accessible and user-friendly as possible.
What's next for VisiBill
In the future, we hope to improve the model by building our own dataset tailored to the specific use cases and conditions of our application.