Inspiration

In the 21st century, education is widely accessible. However, individuals with low vision cannot get as much out of a class as their peers because visual aid tools are unavailable. Low vision can have one or more causes; some of the most common are age-related macular degeneration and diabetes. We aim to enhance the learning experience for individuals with low vision by developing a tool that improves the visibility of text on whiteboards in a classroom setting.

What it does

The tool live-streams any whiteboard, blackboard, or other text surface and applies a high-contrast filter to it in real time, increasing the contrast of the image. Contrast sensitivity refers to the ability to detect differences between light and dark areas; for individuals with low vision, increasing the contrast between an object and its background makes the object more visible.

Furthermore, the tool provides an option to generate overlaid text from the image, which can be accessed by connecting to the server in real time. A downloadable text file is generated simultaneously, making it possible to take classroom notes without being physically present.

How we built it

First, we used RTS to stream 1080p video from an Android smartphone to the application (server). The application then processes the video frame by frame.
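As a rough illustration, the frame-grabbing loop with OpenCV might look like the sketch below. The stream URL, its protocol scheme, and the `read_frames` helper are placeholders for illustration, not our exact setup.

```python
# Minimal sketch of reading a network video stream frame by frame with OpenCV.
# The URL below is a hypothetical endpoint, not the actual stream address.
import cv2

STREAM_URL = "rtsp://192.168.0.101:8080/stream"  # placeholder

def read_frames(url=STREAM_URL):
    cap = cv2.VideoCapture(url)       # open the network stream
    if not cap.isOpened():
        raise RuntimeError(f"Could not open stream at {url}")
    try:
        while True:
            ok, frame = cap.read()    # grab one 1080p BGR frame
            if not ok:
                break                 # stream dropped or ended
            yield frame
    finally:
        cap.release()
```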

After splitting the video into frames, we performed contrast enhancement using image-processing techniques. This helps people with low vision see the presentation clearly and also reduces eye strain for other students.
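The write-up does not pin down the exact enhancement technique, so the sketch below uses CLAHE (adaptive histogram equalization) on the lightness channel as one plausible way to boost whiteboard contrast with OpenCV; `enhance_contrast` is an illustrative name.

```python
# One possible contrast-enhancement step: CLAHE applied to the L channel of
# the LAB colour space, so colours are preserved while local contrast rises.
import cv2

def enhance_contrast(frame):
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)             # equalise local contrast on lightness
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```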

Then we implemented OCR to detect text in the video. The OCR runs on Google Cloud, with the Google Cloud API used to send data back and forth: the Flask backend server sends each image to Google Cloud and receives the detected text and its coordinates.
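A minimal sketch of that round trip, assuming the `google-cloud-vision` Python client and credentials are already configured on the server; `detect_text` and its return format are illustrative, not our exact code.

```python
# Send one frame to the Google Cloud Vision API and collect each detected
# word with the vertices of its bounding box.
import cv2
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def detect_text(frame):
    ok, buf = cv2.imencode(".jpg", frame)   # encode the frame as JPEG bytes
    response = client.text_detection(image=vision.Image(content=buf.tobytes()))
    results = []
    # Skip the first annotation, which is the full text block; keep per-word hits.
    for ann in response.text_annotations[1:]:
        box = [(v.x, v.y) for v in ann.bounding_poly.vertices]
        results.append({"text": ann.description, "box": box})
    return results
```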

Then, using those coordinates, the text is drawn onto the image as an overlay with OpenCV. Once the algorithm was complete, we started working on the web app. The web app uses HTML, CSS, and jQuery, with Flask as the backend. It displays a real-time stream of the video with a toggle button that switches the view to the contrast-enhanced version. Clicking the OCR button makes the Flask backend call the Google Cloud API and send the result to the HTML page, where the text is displayed on the image as well as in the web app.
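For the overlay step, a hedged sketch using `cv2.putText` to draw each detected word near the top-left corner of its bounding box; `overlay_text` and the detection format follow the sketch above and are assumptions rather than our exact implementation.

```python
# Draw each detected word on top of the (contrast-enhanced) frame, anchored
# near the top-left vertex of its bounding box.
import cv2

def overlay_text(frame, detections):
    out = frame.copy()
    for det in detections:            # output of detect_text() above
        x, y = det["box"][0]          # top-left vertex of the word's box
        cv2.putText(out, det["text"], (x, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2, cv2.LINE_AA)
    return out
```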

The web app also saves the detected text as a text file, which reduces the effort of taking notes and lets students focus on the class instead. The text file preserves spacing as well as new-line and paragraph breaks. We also built a front-end website that explains how the web app works and serves as an introduction to, and documentation for, this project.
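One plausible way to preserve line breaks when writing the notes file is to group detections by vertical position and sort each line left to right, as in the sketch below; `detections_to_text` and the `line_tol` threshold are illustrative assumptions.

```python
# Rebuild readable lines from per-word detections: a large jump in the
# y-coordinate starts a new line; words within a line are joined with spaces.
def detections_to_text(detections, line_tol=15):
    words = sorted(detections, key=lambda d: (d["box"][0][1], d["box"][0][0]))
    lines, current, last_y = [], [], None
    for det in words:
        x, y = det["box"][0]
        if last_y is not None and abs(y - last_y) > line_tol:
            lines.append(" ".join(current))   # big vertical jump -> new line
            current = []
        current.append(det["text"])
        last_y = y
    if current:
        lines.append(" ".join(current))
    return "\n".join(lines)

# The result can then be written out, e.g. open("notes.txt", "w").write(text).
```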

Challenges we ran into

Integration was one of the biggest problems we faced during this hackathon. The short amount of time available to build everything also made this a very challenging project.

Accomplishments that we're proud of

“Developing an end-to-end, deployment-ready visual aid model in 24 hours.” - Srihari

“Developing and integrating a frontend for an application within 24 hours” - Frank

"Developing and integrating a backend for an application within 24 hours" - Hemanth

“Learning about the inner workings of OpenCV and Flask and being able to work with them despite having a Life Sciences background” - Heer

What we learned

“Understanding the impact that smart tools like contrast enhancement and text generation can have in improving the quality of life for individuals with low vision was really a humbling experience.” - Srihari

“I learnt about the extensive applications of OpenCV and Flask in terms of contrast enhancement and text generation” - Heer

“I learnt the importance of the interface in helping people understand the product.” - Frank

“I learnt about and overcame the struggles of building a backend while keeping it compatible with the frontend and the Python algorithm. I integrated the algorithm through API calls and used the front end to render the data from the backend.” - Hemanth

What's next for High Vision

We look forward to deploying this tool by collaborating with non-profits working to improve the quality of life of visually impaired individuals. We will also work on extending the tool's reach to more classroom settings.
