Screenshots: the landing page of the app; the image captured; what ReSight can read in the image; what ReSight thinks the image is.
At a health-focused hackathon, we knew we wanted to make something that could help people in their everyday lives. After a brainstorming session that lasted most of Saturday night, we concluded that a group facing everyday challenges with real potential for innovation is the visually impaired. With today's technology, there's no reason people should struggle with something as essential as reading a menu at a restaurant. This app aims to alleviate that problem by leveraging the clarity of your mobile device's camera and the resourcefulness of the Google Vision API to bring into focus the things these people need to see.
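To give a feel for the core flow, here is a minimal sketch of sending a photo to the Vision API for text and label detection. The endpoint, feature types, and response fields follow the Cloud Vision REST API; the function name and API-key handling are placeholders of our own, not code from the actual app.

```typescript
// Minimal sketch: send a base64-encoded photo to the Cloud Vision REST API
// and pull out the detected text and labels. `apiKey` stands in for however
// the key would actually be supplied (e.g. from app config).
async function analyzeImage(base64Image: string, apiKey: string) {
  const response = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        requests: [{
          image: { content: base64Image },
          features: [
            { type: 'TEXT_DETECTION' },   // what ReSight can read in the image
            { type: 'LABEL_DETECTION' },  // what ReSight thinks the image is
          ],
        }],
      }),
    },
  );
  const { responses } = await response.json();
  return {
    text: responses[0]?.fullTextAnnotation?.text ?? '',
    labels: (responses[0]?.labelAnnotations ?? []).map((l: any) => l.description),
  };
}
```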
What it does
ReSight lets a visually impaired user snap a photo with their phone; the app sends the image to the Google Vision API and reads back the text it finds, along with what it thinks the image shows.
How we built it
We were accustomed to the MEAN stack, so we wanted something that fit that pattern while still offering a challenge. That led us to Ionic for the front-end app framework, and we had to work out all of its quirks compared to the plain Angular we knew better. The backend uses Swagger Codegen to abstract the HTTP calls into simple functions that act as remote procedure calls from the front end's perspective (see the sketch below). All of the styling was done in Sass, and we had hoped to add a way to switch the color scheme in case some of our users are color blind, but we did not get time to do this.
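To show what Swagger Codegen buys the front end, here is a rough sketch of the shape of a generated client. Every name in it (AnalyzeRequest, AnalyzeResponse, DefaultApi, analyzeImage, the backend URL) is an illustrative stand-in for what the generator would actually emit from our spec, not the real generated code.

```typescript
// Illustrative sketch of a Swagger-generated client: HTTP details are
// wrapped behind typed methods, so calling the backend reads like calling
// a local function. All names here are hypothetical stand-ins.
interface AnalyzeRequest { image: string }                 // base64-encoded photo
interface AnalyzeResponse { text: string; labels: string[] }

class DefaultApi {
  constructor(private basePath: string) {}

  // A generated method serializes the request, performs the HTTP call,
  // and deserializes the typed response.
  async analyzeImage(req: AnalyzeRequest): Promise<AnalyzeResponse> {
    const res = await fetch(`${this.basePath}/analyze`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(req),
    });
    return res.json();
  }
}

// From the app's perspective, the backend is just a function call:
const photoBase64 = '...base64 image data...';
const api = new DefaultApi('https://our-backend.example.com');
api.analyzeImage({ image: photoBase64 }).then(r => console.log(r.text));
```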
Challenges we ran into
Ionic's testing tools are fantastic, but they were difficult to use on the CIC WiFi, which does not allow inter-device communication over the network. That meant the only way to test the app was to make a full production build, so every coding iteration from each of us could take up to five minutes. Thankfully, Erin used his phone as a WiFi hotspot, which let our devices connect to each other and sped up development several times over.
What's next for ReSight
Stability fixes and a closer look at the color-blind-friendly mode would be the next items for us. The Google Vision API also returns more data than we currently use; one idea we were throwing around is returning a breakdown of the color composition of the submitted image, again to support our color-blind users.
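For reference, the Vision API's IMAGE_PROPERTIES feature already exposes dominant colors with per-color pixel fractions, so a rough sketch of that idea could look like the following. The request and response field names follow the Cloud Vision REST API; the summary formatting is our own.

```typescript
// Sketch of the color-breakdown idea: request IMAGE_PROPERTIES and turn
// the dominant colors into readable lines for color-blind users.
async function colorBreakdown(base64Image: string, apiKey: string) {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        requests: [{
          image: { content: base64Image },
          features: [{ type: 'IMAGE_PROPERTIES' }],
        }],
      }),
    },
  );
  const { responses } = await res.json();
  const colors =
    responses[0]?.imagePropertiesAnnotation?.dominantColors?.colors ?? [];
  // e.g. "rgb(12, 34, 56): 18% of image"
  return colors.map((c: any) => {
    const { red = 0, green = 0, blue = 0 } = c.color;
    return `rgb(${red}, ${green}, ${blue}): ${(c.pixelFraction * 100).toFixed(0)}% of image`;
  });
}
```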