Inspiration

The inspiration behind this app was my curiosity about the objects and animals around me. I wanted not just to identify an object or animal, but to have an in-depth understanding of what makes it tick or function. Instead of asking somebody these questions, I wanted to simply point my phone at the item and have everything I ever wanted to know about it at my fingertips.

What it does

EyeglassML is an application that identifies objects and animals from a quick picture. In addition to identifying what something is, it provides an insightful fact sheet that covers the object or animal in detail: its chemical composition, history, common names, and even mythical stories about it.

How we built it

I built the application in Swift with the UIKit framework. I used CocoaPods to install third-party libraries, as well as Swift Package Manager for libraries that were not available through CocoaPods. For recognition, I used the MobileNetV2FP16 machine learning model in tandem with Apple's Vision and Core ML frameworks. For information about the object or animal, I used Wikipedia's Wikimedia API. The entire UI was built in Storyboard.
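To give a sense of how the Vision and Core ML pieces fit together, here is a minimal sketch of the classification step. It is not my exact code; the generated class name MobileNetV2FP16 and the helper function are illustrative assumptions based on the model I used.

```swift
import UIKit
import CoreML
import Vision

// Minimal sketch: run a bundled MobileNetV2FP16 model through Vision
// and return the top classification label for a UIImage.
// Assumes Xcode generated a Swift class named MobileNetV2FP16 for the model.
func classify(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let ciImage = CIImage(image: image),
          let model = try? VNCoreMLModel(
              for: MobileNetV2FP16(configuration: MLModelConfiguration()).model) else {
        completion(nil)
        return
    }

    // Vision request that runs the Core ML model and returns ranked labels.
    let request = VNCoreMLRequest(model: model) { request, _ in
        let topResult = (request.results as? [VNClassificationObservation])?.first
        completion(topResult?.identifier)   // e.g. "golden retriever"
    }

    // Perform the request off the main thread so the UI stays responsive.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(ciImage: ciImage)
        try? handler.perform([request])
    }
}
```

The label returned here is what gets passed on to the Wikipedia lookup described below.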

Challenges we ran into

Two of the biggest challenges I ran into were choosing the right machine learning model and scraping Wikipedia. I tried numerous different models and eventually settled on MobileNetV2FP16, since that model seemed to work best for both everyday objects and animals. Initially, I attempted to create my own machine learning model, but finding the correct datasets was taking too long and the computational power available to train them was extremely limited. The second challenge was scraping Wikipedia. I was able to scrape it through an external library, but the amount of information the site sent back was overwhelming, and it became apparent that filtering all of that data within the 48-hour hackathon window was not realistic. Instead, I settled on a hacky fix that keeps only the important data and largely ignores the rest.
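For illustration, here is a rough sketch of what that "keep only the important data" idea can look like when pulling just a page summary from the Wikimedia REST API directly. My actual implementation went through an external library, so the endpoint usage and the types here are assumptions, not the code in the app.

```swift
import Foundation

// Illustrative sketch: fetch a short page summary for a classifier label
// (e.g. "Golden retriever") from the Wikimedia REST API and decode only
// the fields the fact sheet needs, ignoring the rest of the payload.
struct WikiSummary: Decodable {
    let title: String
    let extract: String
}

func fetchSummary(for label: String, completion: @escaping (WikiSummary?) -> Void) {
    let title = label.replacingOccurrences(of: " ", with: "_")
    guard let encoded = title.addingPercentEncoding(withAllowedCharacters: .urlPathAllowed),
          let url = URL(string: "https://en.wikipedia.org/api/rest_v1/page/summary/\(encoded)") else {
        completion(nil)
        return
    }

    URLSession.shared.dataTask(with: url) { data, _, _ in
        // Decoding into WikiSummary drops every field we don't care about.
        let summary = data.flatMap { try? JSONDecoder().decode(WikiSummary.self, from: $0) }
        completion(summary)
    }.resume()
}
```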

Accomplishments that we're proud of

I am very proud of being able to set up and code the machine learning aspect of the app with little assistance. In addition, I am happy with how the end product turned out: the app works seamlessly and delivers on the original dream of pointing my phone at an object and learning everything about it.

What we learned

Thanks to this hackathon, I learned a lot more about machine learning and how it works at a fundamental level. Although the final product does not use the custom machine learning models I attempted to create, it was still fun trying to build them.

What's next for EyeglassML

The next step for EyeglassML would be a complete UI rewrite. I mainly chose to write this app in UIKit because of the time constraints, but now that everything is finished, I plan to rewrite the entire app in SwiftUI. I have more experience with SwiftUI and am more confident I can build a better-functioning UI with it.
