As I learned more about iOS development, I stumbled into the world of machine learning and decided it would be a cool challenge to find a machine learning model online and implement it in my own project.
What it does
Pic Match lets you take a picture of anything, and Inception v3, an image-recognition machine learning model made by Google, gives its best guess as to what the main object in the frame is.
How I built it
I built it by reading the framework documentation and researching how to add a machine learning model to an Xcode project in general. From there, it was a matter of taking the model's results and displaying them to the user.
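The general workflow can be sketched roughly like this, assuming the downloaded model file is named Inceptionv3.mlmodel (so Xcode generates an Inceptionv3 class) and that classification runs through Apple's Vision framework; this is a minimal sketch, not the app's actual code:

```swift
import UIKit
import CoreML
import Vision

// Hypothetical sketch: wrap the Xcode-generated Inceptionv3 model
// in a Vision request and hand back the raw classification results.
func classify(_ image: UIImage,
              completion: @escaping ([VNClassificationObservation]) -> Void) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
        completion([])
        return
    }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns classification observations sorted by confidence.
        completion(request.results as? [VNClassificationObservation] ?? [])
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

From there, the completion handler's observations can be shown on screen however the UI calls for.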
Challenges I ran into
Correctly understanding how the machine learning model works and how to extract the specific data I wanted was the biggest challenge. Beyond that, it was just a matter of presenting those results on screen.
Accomplishments that I'm proud of
I'm proud that I was able to make a project using machine learning for the first time. It was definitely a foundational step in building my overall programming experience. It's something cool to show my friends, and I have fun taking pictures of random things and seeing what Inception v3 thinks they are.
What I learned
I learned how to find a machine learning model online, download it and add it to my Xcode project, and use the documentation to parse the model's results for the data I wanted.
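Pulling the top guess out of a classifier's output mostly boils down to sorting label/confidence pairs. A minimal sketch with hypothetical names, independent of the actual Core ML types:

```swift
import Foundation

// Hypothetical helper: given (label, confidence) pairs like the ones a
// classification result reduces to, pick the best guess and format it
// for display. Returns nil if there are no results.
func bestGuess(from results: [(label: String, confidence: Double)]) -> String? {
    guard let top = results.max(by: { $0.confidence < $1.confidence }) else {
        return nil
    }
    let percent = Int((top.confidence * 100).rounded())
    return "\(top.label) (\(percent)%)"
}
```

For example, `bestGuess(from: [("golden retriever", 0.92), ("tennis ball", 0.05)])` returns `"golden retriever (92%)"`.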
What's next for Pic Match
I want to make some UI/UX improvements; I think the app could use some style to really spice it up. I also think it would be cool to train my own machine learning model instead of using a pre-made one.