Inspiration
We wanted to apply the power of AI image processing to mobile devices.
What it does
Our mobile-optimized IRIS network produces a "Super Resolution" version of an input image.
How we built it
Using samples from ImageNet, Max created a script that downscales images, pairing each downscaled input with its original full-resolution image as the training target. He trained a Keras model on Google Colab using the power of Google Cloud's Compute Engine. Michael created an iOS framework that efficiently decomposes an image into overlapping 200x200 sub-patches to pass through the model. After predicting super resolution, the framework restitches the overlapping patches to create a full, seamless, super-res photo.
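The write-up doesn't include code, but the decompose-and-restitch pipeline described above can be sketched in Python/NumPy (the real framework is on iOS; this is just an illustration). The 200x200 patch size comes from the text, while the stride and the averaging blend for overlapping regions are assumptions:

```python
import numpy as np

PATCH = 200   # patch size stated in the write-up
STRIDE = 180  # hypothetical stride giving a 20 px overlap; actual value not stated

def decompose(image, patch=PATCH, stride=STRIDE):
    """Split an H x W x C image (H, W >= patch) into overlapping patch x patch tiles."""
    h, w = image.shape[:2]
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    # Make sure the bottom and right edges are always covered.
    if ys[-1] != h - patch:
        ys.append(h - patch)
    if xs[-1] != w - patch:
        xs.append(w - patch)
    patches, coords = [], []
    for y in ys:
        for x in xs:
            patches.append(image[y:y + patch, x:x + patch])
            coords.append((y, x))
    return patches, coords

def restitch(patches, coords, shape):
    """Blend (per-pixel average) overlapping patches back into one image."""
    out = np.zeros(shape, dtype=np.float64)
    weight = np.zeros(shape[:2] + (1,), dtype=np.float64)
    for tile, (y, x) in zip(patches, coords):
        ph, pw = tile.shape[:2]
        out[y:y + ph, x:x + pw] += tile
        weight[y:y + ph, x:x + pw] += 1.0
    return out / weight  # every pixel is covered by at least one patch
```

In the real app each patch would be run through the model between `decompose` and `restitch`; here a round trip with no model should reproduce the original image exactly, since averaging identical overlapping values changes nothing.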
Challenges we ran into
Training a model during a hackathon takes a long time! Finding an algorithm to restitch image patches efficiently and smoothly was really difficult. Converting Keras models to Core ML models had its own pitfalls. Max is sick, and this submission takes forever!
Accomplishments that we're proud of
We did it! It compiles! On mobile! And it actually increases image quality!
What we learned
Sleep is not important. CNNs are way cool. Managing device memory when computing individual image pixels is a pain. Hackathons are fun!
What's next for IRIS
We want to spend more time training the model. We want to reduce compute time and deliver a first-class user interface.
We also want to try implementing other neural nets to achieve a full 4x zoom and to enhance low-light images as well.