Inspiration

All our members suffer from bad eyesight. Every time we arrived late to a class or found ourselves at the back of a crowded workshop, we noticed many people around us, ourselves included, pulling out their phones and taking photos of the presentation or whiteboard just to see the content. Then we would all silently squint at our phones, trying to decipher what was on the big screen.

What it does

NoSight lets users take photos on their phone and view them at higher resolution on their computer in classroom settings. Users simply open the app or website on their phone and snap a picture; it then automatically appears in the browser on their computer. NoSight uses machine learning to sharpen the text and increase the resolution of uploaded photos, reducing eye strain and helping nearsighted people make out faraway content.
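Our actual sharpening runs as a trained ML model on Google Cloud, but the basic idea behind text sharpening can be illustrated with a classical unsharp mask on a grayscale pixel array. This is a sketch of the concept only, not our model:

```javascript
// Illustrative only: NoSight's real pipeline uses a trained ML model on
// Google Cloud. An unsharp mask shows the core idea behind sharpening:
// sharpened = original + amount * (original - blurred).

// 3x3 box blur over a grayscale image stored row-major in a flat array.
function boxBlur(pixels, width, height) {
  const out = new Array(pixels.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0, count = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          const ny = y + dy, nx = x + dx;
          if (ny >= 0 && ny < height && nx >= 0 && nx < width) {
            sum += pixels[ny * width + nx];
            count++;
          }
        }
      }
      out[y * width + x] = sum / count;
    }
  }
  return out;
}

// Unsharp mask: boost the difference between the image and its blur,
// clamping results to the valid 0-255 range.
function unsharpMask(pixels, width, height, amount = 1.5) {
  const blurred = boxBlur(pixels, width, height);
  return pixels.map((p, i) =>
    Math.min(255, Math.max(0, p + amount * (p - blurred[i])))
  );
}
```

On a dark pen stroke against a light whiteboard, this pushes the dark pixels darker and the surrounding light pixels lighter, which is exactly the edge-contrast boost that makes small text legible.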

How we built it

We used Firebase and Express for our web API and data storage, Google Cloud to run our ML model, and React and React Native for our web and mobile apps.
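In our app, Firebase's realtime listeners are what push a freshly uploaded photo from the phone to the browser. The handoff can be sketched with a minimal in-memory publish/subscribe channel; `PhotoChannel` and its method names are illustrative, not our actual code:

```javascript
// Minimal sketch of the phone-to-browser handoff. In the real app this
// role is played by Firebase: the phone uploads a photo, and the web
// client's realtime listener fires with the new image. PhotoChannel and
// its method names are illustrative, not the actual NoSight code.
class PhotoChannel {
  constructor() {
    this.photos = [];      // uploaded photos (e.g. storage URLs)
    this.listeners = [];   // browser-side callbacks, like onSnapshot
  }

  // Phone side: upload a photo; every subscribed browser is notified.
  upload(photoUrl) {
    this.photos.push(photoUrl);
    for (const notify of this.listeners) notify(photoUrl);
  }

  // Browser side: subscribe to future uploads; returns an unsubscribe fn.
  subscribe(onPhoto) {
    this.listeners.push(onPhoto);
    return () => {
      this.listeners = this.listeners.filter(fn => fn !== onPhoto);
    };
  }
}
```

The browser subscribes once and re-renders its `<img>` whenever a new URL arrives; in production, Firebase's realtime listeners play the role of this class.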

Challenges we ran into

We ran into difficulties with integration, as our project unites many different pieces. Getting a mobile app and a web app to talk to Firebase, which in turn feeds our ML model, was challenging to put together.

Accomplishments that we're proud of

Live video streams from our web app, whether it runs on a computer or a phone, and the ML model successfully processes images pulled from Firebase.

What we learned

Our team came into this project with varying degrees of experience, and each of us took on an unfamiliar task to build new skills. We collectively learned React Native, Firebase, and how to run ML models on Google Cloud.

What's next for NoSight

There are a couple of routes we want to pursue. One is lightweight authentication so users can store their notes for long-term use. Another is training a better model to clean and sharpen the images. Finally, we want to explore ways to make NoSight usable without a strong Wi-Fi connection.
