Inspiration

The theme of inclusivity was the main inspiration for this hack. We wanted to build an application that applies machine learning to a real-world problem and helps improve people's lives. Visually impaired people face many issues in their day-to-day lives, such as difficulty recognizing their surroundings. With SmartVision, we built an application that employs image recognition and machine learning to identify their loved ones.

What it does

The mobile app uses computer vision, powered by Apple's Vision framework, to recognize specific people and objects and then announce the recognized name aloud.
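
A minimal sketch of that recognition step, assuming a Core ML image classifier wrapped in a Vision request (the `FaceClassifier` model name is hypothetical, standing in for a model trained on labeled photos of the user's loved ones):

```swift
import Vision
import CoreML

/// Classifies a captured photo and returns the top label (e.g. a person's name).
/// `FaceClassifier` is a placeholder for a Core ML model trained on labeled
/// photos of the user's friends and family.
func recognizePerson(in image: CGImage, completion: @escaping (String?) -> Void) {
    guard let coreMLModel = try? FaceClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision returns classification observations sorted by confidence;
        // the first one is the best guess.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```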

How we built it

We used Swift and Xcode for the foundation of the iOS application, Apple's Vision framework for the facial recognition, and the AVFoundation framework for the text-to-speech output.
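
The text-to-speech side is only a few lines with AVFoundation's AVSpeechSynthesizer; a sketch of how a recognized name can be spoken aloud (the `announce` helper name is ours):

```swift
import AVFoundation

// Keep a reference to the synthesizer so speech isn't cut off
// if it would otherwise be deallocated mid-utterance.
let synthesizer = AVSpeechSynthesizer()

/// Speaks the recognized person's name aloud.
func announce(_ name: String) {
    let utterance = AVSpeechUtterance(string: "This is \(name)")
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}
```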

Challenges we ran into

No one on our team had any iOS development experience, so we started from square one: configuring the project, learning the Swift syntax, and researching common iOS design patterns and philosophies. In addition, we had only one MacBook among us, which slowed down the whole process.

Accomplishments that we're proud of

Getting the facial recognition working, pairing it with simultaneous text-to-speech output, and coming to grips with iOS design.

What we learned

We learned how to work with Swift, Xcode, and the basics of iOS design, including how to use external APIs and libraries.

What's next for SmartVision

While what we have completed is a big accomplishment, it is a small step toward the full potential of SmartVision. We envision an application that detects nearby objects in real time to assist blind users with navigation. In addition, sighted users could take pictures of objects not included in the data set and label them, making SmartVision a fully crowd-sourced smart application.

Built With

Swift, Xcode, Vision, AVFoundation
